by Lisa Chamoff, Contributing Reporter | December 02, 2020
Artificial intelligence has the potential to overcome institutional bias in healthcare, but deep learning algorithms have to be crafted without that bias baked in, experts said during a session at this week’s RSNA virtual meeting.
During the session, titled “Artificial Intelligence and Implications for Health Equity: Will AI Improve Equity or Increase Disparities?” the panelists discussed how algorithms are affected by existing biases in healthcare, and how those biases can affect treatment of and outcomes for Black and Hispanic patients.
“The data that we use are actually collected in a social and technical system that already has social, cultural and institutional biases,” said the moderator, Dr. Judy Gichoya, assistant professor of interventional radiology and informatics and co-director of the Healthcare Innovations and Translational Informatics (HITI) Lab at Emory University, before introducing the panelists. “If we just use this data without understanding the inequalities then this algorithm is going to end up intuiting, if not magnifying, existing disparities.”
Dr. Ziad Obermeyer, an associate professor of health policy and management at the UC Berkeley School of Public Health, spoke about the “pain gap” between Black and white patients when evaluating them as candidates for knee replacement. He noted that the Kellgren-Lawrence (KLG) system for classifying osteoarthritis was developed in 1957 after studying coal miners in England, and that most algorithms are trained to match or exceed human performance.
“This is exactly what we don’t want to do here,” Obermeyer said. “The whole point is that the human may be missing causes of pain that may disproportionately affect disadvantaged patients.”
It’s also harder to find data sets that link images to a patient’s pain experience, but using those data sets instead of the KLG system could more than double the number of Black patients deemed eligible for knee replacement surgery, with fewer patients prescribed addictive painkillers.
“There’s a big difference between predicting what the radiologist says about the knee and predicting what the patient experience is, and that turns out to make a big difference for uncovering some sources of bias in the metrics and the measures that we use to grade arthritis,” Obermeyer said. “I think there’s a lot of reason for optimism that machine learning can open up new ways to fight that bias if we have the right data.”
Obermeyer noted that fighting bias by building AI on imaging data linked to patient outcomes is one of the goals of the Nightingale project, an effort he is part of to “revolutionize the field of computational medicine by connecting world class researchers to forward-thinking medical systems.”
Dr. Connie Lehman, chief of breast imaging at Massachusetts General Hospital, spoke about her work with Regina Barzilay, a professor at the Massachusetts Institute of Technology, on creating race-blind deep learning models to predict future breast cancer risk.
“Stronger risk factors need to be discovered for women of color,” Lehman said.
Their model feeds mammogram images through an image aggregator that combines information across the views and uses the result to predict a patient’s future risk.
The model performed better than traditional models and performed more consistently across racial groups, Lehman noted.
For example, when the Tyrer-Cuzick model was used to estimate the likelihood of a woman developing breast cancer, the area under the curve was 0.64 for white women, 0.58 for African American women and 0.53 for Asian women, versus 0.71 to 0.73 across groups with their model.
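The area under the curve (AUC) that Lehman cites measures how well a model ranks patients who go on to develop cancer above those who do not, and computing it separately for each demographic group is one way to audit a model for the kind of disparity discussed here. A minimal sketch of that per-group check, using a pure-Python AUC and hypothetical scores and outcomes (not the study’s data):

```python
def auc(scores, labels):
    # AUC = probability that a randomly chosen positive case is ranked
    # above a randomly chosen negative case (ties count as half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores and observed outcomes per group,
# purely for illustration.
groups = {
    "group_a": ([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]),
    "group_b": ([0.7, 0.4, 0.6, 0.2], [1, 1, 0, 0]),
}

for name, (scores, labels) in groups.items():
    print(name, auc(scores, labels))  # group_a 1.0, group_b 0.75
```

A large gap between the per-group AUCs, like the 0.64-versus-0.53 spread reported for the Tyrer-Cuzick model, signals that the model ranks risk much less reliably for some groups than for others.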
“We saw ample evidence in our work that AI could reduce biases,” Lehman said. “That really opened our minds to use AI to reduce disparities we’ve seen in diagnosis and treatment.”