By Guest Writer - January 09, 2020
Mammography is widely used to screen for breast cancer. It is an imaging procedure that transmits x-ray energy through the breast to provide information about tissue density and structure. However, mammography images are complex, have high spatial resolution, and can be difficult to interpret. Consequently, even experienced radiologists sometimes provide incorrect diagnoses.
These errors fall into two major categories: false positives and false negatives. A false-positive diagnosis occurs when a radiologist decides that cancer is present when in fact it is not. This can cause considerable worry and stress for the patient, in addition to costly follow-up procedures. A false-negative diagnosis occurs when a radiologist decides that cancer is absent when in fact it is present. The undetected cancer remains untreated, giving it the opportunity to grow and spread.
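To make the two error types concrete, here is a minimal sketch of how false-positive and false-negative rates are computed from screening outcomes. The function name and all counts are illustrative assumptions, not figures from the study.

```python
# Illustrative only: hypothetical counts, not data from the study.
def error_rates(tp, fp, tn, fn):
    """Return (false-positive rate, false-negative rate) for a screening test.

    tp/fp/tn/fn: counts of true-positive, false-positive, true-negative,
    and false-negative diagnoses.
    """
    fpr = fp / (fp + tn)  # fraction of cancer-free patients incorrectly flagged
    fnr = fn / (fn + tp)  # fraction of actual cancers the reader missed
    return fpr, fnr

# Made-up example: 1,000 screens, 50 of which are true cancers.
fpr, fnr = error_rates(tp=45, fp=90, tn=860, fn=5)
print(f"false-positive rate: {fpr:.1%}")  # 90 / 950 healthy patients recalled
print(f"false-negative rate: {fnr:.1%}")  # 5 / 50 cancers undetected
```

A false positive burdens a healthy patient; a false negative delays treatment for a sick one, which is why both rates matter when evaluating readers or AI systems.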
Advances in artificial intelligence (AI), particularly those related to computer vision, have prompted new methods to automatically detect and classify lesions in medical images. The hope is that this new technology might complement radiologists to improve the accuracy of breast cancer diagnosis.
A recent publication in the scientific journal Nature by a group of British and American scientists, including several from Google, bolsters this hope. The article describes an AI system for diagnosing mammograms that produces fewer incorrect diagnoses than experienced radiologists. The scientists used a collection of nearly 77,000 mammograms acquired in the United Kingdom to train and tune their AI system. They then tested the trained algorithm on a separate, sequestered collection of nearly 26,000 mammograms.
The AI system reduced the number of false-positive and false-negative diagnoses by 1.2% and 2.7%, respectively, compared to the U.K. radiologists.
The team then investigated how well the U.K.-trained algorithm would “generalize,” or perform on a different patient demographic. To accomplish this, they trained and tuned the algorithm once more using the U.K. data complemented with data from over 15,000 United States patients. They then had the algorithm diagnose a separate, sequestered collection of over 3,000 U.S. mammograms. In this case, the AI system reduced the number of false-positive and false-negative diagnoses by 5.7% and 9.4%, respectively, compared to the U.S. radiologists.
Finally, the team simulated the effect of using the AI system to complement the second radiologist in the double-reading process employed for screening mammograms in the U.K. They found that the AI system reduced the workload of the second radiologist by 88%.
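The article does not detail the simulation here, but a minimal sketch of one plausible scheme is an AI system standing in for the second reader, with a human second reader consulted only when the AI and the first reader disagree. The scheme and all names below are illustrative assumptions, not the study's exact protocol.

```python
# Sketch of a double-reading simulation: the AI acts as the second reader,
# and a human second reader is escalated to only on disagreement.
# Illustrative assumption, not the study's exact protocol.

def double_read(first_reader_flags, ai_flags):
    """Simulate double reading for a list of screening cases.

    first_reader_flags, ai_flags: lists of bools (True = recall for follow-up).
    Returns (decisions, fraction of cases needing a human second reader).
    """
    decisions = []
    second_reads = 0
    for human, ai in zip(first_reader_flags, ai_flags):
        if human == ai:
            decisions.append(human)  # readers agree: no human second read needed
        else:
            second_reads += 1        # disagreement: escalate to a human reader
            decisions.append(None)   # placeholder for the arbitrated result
    return decisions, second_reads / len(first_reader_flags)

# Toy example: the human second reader is needed on only 1 of 5 cases.
decisions, workload = double_read(
    [False, False, True, False, True],
    [False, False, True, True, True],
)
print(f"human second-reader workload: {workload:.0%}")
```

Under a scheme like this, the second radiologist's workload shrinks to just the disagreement cases, which is consistent in spirit with the 88% reduction the team reported.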
These results are impressive and exciting. They point to a future where automated tools help radiologists by reducing both the workload of diagnosing mammograms and the number of incorrect diagnoses.
Nevertheless, the study had several limitations. For example, the U.S. dataset was relatively small, and all these mammograms came from a single academic medical center in Chicago. Clinical confidence in the new system would be greatly enhanced if mammograms from a larger, more geographically and ethnically diverse group of patients could be analyzed in a subsequent study.
This is an area where Moffitt Cancer Center might be able to help. Its patient catchment includes much of Florida, a state with broad demographics: residents who have relocated from across the U.S., as well as significant Hispanic and Caribbean populations. Though not working directly with this program, Moffitt is already partnering with Google in support of the cancer center’s digital initiatives. Further validating Google’s AI mammography efforts would complement Moffitt’s goal of leveraging digital technology, and AI in particular, to improve cancer patient outcomes.
This article was written by guest author Ross Mitchell, PhD, Moffitt’s Artificial Intelligence Officer and a senior member of its research staff in the Department of Biostatistics and Bioinformatics.