
Where imaging AI is headed, and the best way to get there

by John R. Fischer, Senior Reporter | November 16, 2020
Artificial Intelligence
From the November 2020 issue of HealthCare Business News magazine


“Radiologists are far more than just pattern recognizers. We need to understand the clinical situation, determine what the appropriate imaging exam is, and interpret the results, providing context to our referring physicians as well as our patients,” said Recht. “AI can’t do all of that, so I think AI is going to be an assistant to radiologists and help us to be more efficient by doing some of the tasks we don’t need a radiologist to do, such as quantification.”

A glitch in data and judgment
Back in 2012, a deep learning method won the ImageNet Large Scale Visual Recognition Challenge for image classification, reigniting interest in the field. In the years that followed, however, reports of instabilities in these classification methods began to emerge, leading to an avalanche of studies documenting the instability phenomenon across many other deep learning applications.

“If you look at this from a mathematical point of view, you can simply see why this would happen,” said Anders Hansen, who holds a doctorate in mathematics and is an associate professor at the University of Cambridge. “You will end up getting unstable methods where even small perturbations on your input data can cause severe artifacts and even false positives and false negatives,” he said, adding, “In some sense, there is no way you can just run what you have and pray that you have enough data. It has to be much more sophisticated than this.”
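
The instability Hansen describes is easy to illustrate in code. The following is a minimal sketch in PyTorch, assuming a generic image classifier; the untrained stand-in model and random input are illustrative assumptions, but the perturbation step, the fast gradient sign method from the adversarial-example literature, is the standard way such failures are demonstrated on trained networks.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; in practice this would be a trained imaging model.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),  # two classes, e.g. "finding" vs. "no finding"
)
model.eval()

x = torch.rand(1, 1, 64, 64, requires_grad=True)  # stand-in image
pred = model(x)
label = pred.argmax(dim=1)

# Gradient of the loss with respect to the *input*, not the weights.
loss = nn.functional.cross_entropy(pred, label)
loss.backward()

# Nudge every pixel by a small epsilon in the direction that increases
# the loss (fast gradient sign method). On trained networks this routinely
# flips predictions at perturbation sizes invisible to the human eye;
# with this random stand-in model, whether the label flips depends on
# the initialization.
eps = 0.05
x_adv = (x + eps * x.grad.sign()).detach().clamp(0, 1)

print("original prediction: ", label.item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
print("max pixel change:    ", (x_adv - x).abs().max().item())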

The availability of the right type of data for training an algorithm is not the problem. Accessibility is, with hospitals reluctant to share the data they hold.

“A number of lawsuits related to patient identifier leakage, or access without necessary agreements or patient consent, have resulted in further conservative efforts by hospitals to protect their data by not giving it out,” said Rubin.

How AI products are validated also requires improvement, according to Hansen, who says both a mathematical understanding and standardized testing are needed to recognize the limits of an AI application and to determine whether it is safe for use.
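
One possible shape for the standardized testing Hansen calls for, sketched under the same assumptions as the example above: instead of reporting accuracy alone, probe each test case for the smallest perturbation that changes the model's output. The model interface and the grid of noise levels here are illustrative assumptions, not an established protocol.

import torch

@torch.no_grad()
def stability_radius(model, x, epsilons=(0.01, 0.02, 0.05, 0.1), probes=20):
    """Smallest tested noise level that flips the prediction, or None."""
    base = model(x).argmax(dim=1)
    for eps in epsilons:
        for _ in range(probes):  # several random probes per noise level
            x_noisy = (x + eps * torch.randn_like(x)).clamp(0, 1)
            if (model(x_noisy).argmax(dim=1) != base).any():
                return eps
    return None  # stable at every tested level

# Reusing the stand-in model and image from the earlier sketch:
print(stability_radius(model, x))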

“Having a false positive or false negative is not something that should be evaluated equally. Having a false negative if you have cancer is much worse than a false positive,” he said. “A false negative will give false assurance that you don’t have it, and you may not follow up, and the cancer will develop. If you’re going to do testing on this, we need to have a very sophisticated way of saying, ‘Is this safe, and in what sense?’”
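
Hansen's point about asymmetric errors can also be made concrete. Below is a short Python sketch of a cost-weighted evaluation; the 10:1 cost ratio is an illustrative assumption, not a clinical standard. Two hypothetical algorithms with identical 90% accuracy come apart once false negatives are weighted more heavily.

import numpy as np

def weighted_error(y_true, y_pred, fn_cost=10.0, fp_cost=1.0):
    """Average cost per case, penalizing missed disease more heavily."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    return (fn_cost * fn + fp_cost * fp) / len(y_true)

# Ten hypothetical cases: five with disease (1), five without (0).
y_true      = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
misses_one  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]  # one false negative
false_alarm = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]  # one false positive

print(weighted_error(y_true, misses_one))   # 1.0 -> far worse
print(weighted_error(y_true, false_alarm))  # 0.1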

Further complicating matters, when the limitations of AI development and testing are not fully realized or understood, scientific reports and the media can exaggerate the capabilities of a particular algorithm.
