Imaging deep learning AI successes kick off SIIM 2019

by John W. Mitchell, Senior Correspondent | June 28, 2019
A research team has used an AI program to identify abnormal FLAIR signal in head MR images and distinguish among 19 diseases, at rates comparable to those of experienced academic radiologists.

The findings were presented at the opening of the 2019 Society for Imaging Informatics in Medicine (SIIM) annual meeting, which got underway Wednesday morning in Aurora, Colorado.

The approach is especially adept at identifying rare diseases, according to presenter Dr. Andreas Rauschecker, a neuroradiology fellow at the University of California, San Francisco.

Rauschecker detailed how the machine learning underlying the research team's AI platform excelled at identifying 19 diseases of the cerebral hemispheres through explicit computational modeling of three fundamental steps in image interpretation (a simplified sketch follows the list below). The steps were:

1: Identifying image abnormalities (convolutional neural networks);
2: Characterizing the abnormalities (advanced image processing);
3: Integrating the imaging findings into a differential diagnosis (Bayesian inference).
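To make the three-step structure more concrete, the following is a minimal, hypothetical sketch of how such a pipeline could be wired together. The convolutional network of step one is replaced by a simple intensity threshold, and the diseases, imaging features, priors, and likelihood values in the Bayesian step are invented for illustration; none of this represents the team's actual software.

```python
# Hypothetical sketch of the three-step pipeline described above.
# Step 1's CNN is replaced by a threshold; all diseases, features,
# priors, and likelihoods are invented for illustration only.
import numpy as np
from scipy import ndimage

DISEASES = ["multiple_sclerosis", "HIV_encephalopathy", "metastatic_disease"]
PRIORS = np.array([0.4, 0.2, 0.4])  # illustrative prior probabilities
LIKELIHOODS = {                     # P(feature present | disease), invented values
    "multiple_lesions":    np.array([0.90, 0.40, 0.70]),
    "large_lesion_burden": np.array([0.30, 0.80, 0.60]),
}

def detect_abnormalities(flair_volume, threshold=2.0):
    """Step 1 placeholder: the study used a U-Net to segment abnormal
    FLAIR signal; here we simply threshold voxel intensities."""
    return flair_volume > threshold

def characterize(mask):
    """Step 2 placeholder: derive simple imaging features from the segmentation."""
    _, n_lesions = ndimage.label(mask)          # count connected components
    return {
        "multiple_lesions": n_lesions > 1,
        "large_lesion_burden": mask.mean() > 0.05,
    }

def differential_diagnosis(features):
    """Step 3: combine priors with feature likelihoods via Bayes' rule
    (naive independence assumption) to rank candidate diagnoses."""
    post = PRIORS.copy()
    for name, present in features.items():
        lik = LIKELIHOODS[name]
        post *= lik if present else (1.0 - lik)
    post /= post.sum()                           # normalize to probabilities
    return sorted(zip(DISEASES, post), key=lambda pair: -pair[1])

if __name__ == "__main__":
    fake_flair = np.random.default_rng(0).normal(size=(32, 32, 32))  # stand-in volume
    mask = detect_abnormalities(fake_flair)
    for disease, prob in differential_diagnosis(characterize(mask)):
        print(f"{disease}: {prob:.2f}")
```

The point the sketch preserves is that the network only finds and delineates abnormal signal; the diagnosis itself comes from combining the extracted imaging features with prior probabilities under Bayes' rule.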


The AI “U-Net” platform was able to identify a wide range of disease states at a level comparable to that of experienced academic radiologists, according to Rauschecker. The disorders ranged from metastatic disease and HIV encephalopathy to high- and low-grade gliomas and migraines. He emphasized that rather than training the U-Net to recognize specific diagnoses directly, the team trained it to find the abnormal FLAIR signal evident on brain MR imaging and then reasoned from those findings to a diagnosis. The next step for their findings is twofold.

“We [next] want to evaluate the software prospectively rather than retrospectively and integrate a portion into our workflow,” Rauschecker told HCB News. “We’re pretty confident that FLAIR signature [model] will do really well.”

The team also wants to use the diagnostic model to increase understanding of, and find improved treatments for, rare diseases.

A second presenter at the opening event shared results from an AI study aimed at improving breast cancer detection rates and reducing image reading times for both breast subspecialists and general radiologists. Dr. Jeffery Hoffmeister, VP and medical director at iCAD, a company that develops mammography AI solutions, described a study of a workflow combining digital breast tomosynthesis (DBT) and digital mammography images.

The recent addition of DBT, while giving radiologists more information, has also nearly doubled reading times. In the study, 24 readers, a mix of general radiologists and breast subspecialists, interpreted 260 DBT cases with the iCAD AI platform, cutting reading times roughly in half for both groups. Sensitivity and specificity for identifying cancers improved as well.

In a third presentation, Dr. Alice Yu, a diagnostic radiology resident at Johns Hopkins University, shared results from a study of a deep learning-based app that determines bone age from clinical images uploaded via cell phone.
