By John W. Mitchell, Senior Correspondent | June 11, 2018
With machine learning algorithms recently approved by the FDA to diagnose images without physician input, providers, payers, and regulators may need to be on guard for a new kind of fraud.
That’s the conclusion of a Harvard Medical School/MIT study team of biomedical informatics researchers, physicians, and M.D.-Ph.D. candidates, in a paper just published in IEEE Spectrum. The team successfully launched “adversarial attacks” against three common automated AI medical imaging tasks, fooling the programs into misdiagnosis up to 100 percent of the time. Their findings have implications for imaging related to fraud, unnecessary treatments, higher insurance premiums and the possible manipulation of clinical trials.
The team defined adversarial attacks on AI imaging algorithms as: “…inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake."
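To make that definition concrete, the following is a minimal sketch of one common way such adversarial inputs are generated, the fast gradient sign method (FGSM). The model, image, and label here are hypothetical placeholders, and the study team's actual attack techniques may differ.

```python
import torch

# Minimal FGSM-style adversarial perturbation (illustrative only).
# `model` is any differentiable image classifier; `image` is an input tensor
# with pixel values in [0, 1] and `true_label` its correct diagnosis class.
# All three are hypothetical placeholders, not the study's actual models or data.
def fgsm_attack(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    output = model(image)
    loss = torch.nn.functional.cross_entropy(output, true_label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss.
    perturbed = image + epsilon * image.grad.sign()
    # Keep pixel values in a valid range; the change is typically imperceptible to a human.
    return torch.clamp(perturbed, 0, 1).detach()
```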
“Adversarial examples have become a major area of research in the field of computer science, but we were struck by the extent to which our colleagues within healthcare IT were unaware of these vulnerabilities,” Dr. Samuel Finlayson, lead author and M.D.-Ph.D. candidate at Harvard-MIT, told HCB News. “Our goal in writing this paper was to try to bridge the gap between the medical and computer science communities, and to initiate a more complete discussion around both the benefits and risks of using AI in the clinic.”
In the study, the team was able to manipulate the AI programs into indicating positive findings for pneumothorax in chest X-rays, diabetic retinopathy in retinal images, and melanoma in skin images. In the chest X-ray example, the manipulated images fooled the AI into indicating pneumothorax 100 percent of the time.
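As a rough illustration of how a success rate like that 100 percent figure can be quantified, the sketch below counts how often adversarially perturbed images flip a classifier's prediction. The model, data loader, and attack function are hypothetical stand-ins rather than the study's actual pipeline.

```python
import torch

def attack_success_rate(model, data_loader, attack_fn):
    """Fraction of correctly classified images whose prediction flips after the attack.

    `model`, `data_loader`, and `attack_fn` are hypothetical placeholders.
    """
    flipped, total = 0, 0
    model.eval()
    for image, label in data_loader:
        clean_pred = model(image).argmax(dim=1)
        # Only images the model originally classified correctly are meaningful targets.
        correct = clean_pred == label
        if correct.sum() == 0:
            continue
        adv = attack_fn(model, image[correct], label[correct])
        adv_pred = model(adv).argmax(dim=1)
        flipped += (adv_pred != label[correct]).sum().item()
        total += correct.sum().item()
    return flipped / total if total else 0.0
```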
“Our results demonstrate that even state-of-the-art medical AI systems can be manipulated,” said Finlayson. “If the output of machine learning algorithms becomes a determinant of healthcare reimbursement or drug approval, then adversarial examples could be used as a tool to control exactly what the algorithms see.”
He also said that such misuse could cause patients to undergo unnecessary treatments, which would increase medical and insurance costs. Adversarial attacks could also be used to “tip the scales” in medical research to achieve desired outcomes.
Another member of the study team, Dr. Andrew Beam, an instructor in the Department of Biomedical Informatics at Harvard Medical School, believes their findings are a warning to the medical informatics sector. While the team said they are excited about the “bright future” that AI offers for medicine, they advise caution.
"I think our results could be summarized as: 'there is no free lunch'. New forms of artificial intelligence do indeed hold tremendous promise, but as with all technology, it is a double-edged sword,” Beam told HCB News. “Organizations implementing this technology should be aware of the limitations and take active steps to combat potential threats.”