AI in Healthcare Part II: A Ghost in the Machine
AI in Healthcare: The Other Edge of the Scalpel is a five-part series examining the unmitigated risks of AI in healthcare.
Concerns about the use of artificial intelligence in healthcare aren’t always black-or-white issues. Sometimes they’re gray, as in a radiology study led by Judy Gichoya, MD, an interventional radiologist, informatician, and assistant professor at Emory University, which examined whether AI could detect a patient’s self-identified race from a chest X-ray.
For the study, completed in 2022, Dr. Gichoya and her team at Emory used two datasets of X-rays that included chest, cervical spine, and hand images. The first dataset contained each patient’s self-reported race; the second contained no race or other demographic information. Even without demographic labels in that second dataset, the machine-learning model was still able to predict patients’ self-reported race with 90% accuracy.
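As a rough illustration of what an experiment like this involves, the sketch below fine-tunes a standard image classifier on X-rays labeled with self-reported race, then measures accuracy on held-out images. The folder layout, backbone, and hyperparameters here are illustrative assumptions, not details of the Emory team’s actual pipeline.

```python
# A minimal sketch of the kind of experiment described above, assuming a
# hypothetical folder layout: xrays/train/<label>/*.png, xrays/val/<label>/*.png
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Grayscale X-rays are replicated to 3 channels to fit an ImageNet backbone.
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("xrays/train", transform=preprocess)
val_set = datasets.ImageFolder("xrays/val", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Accuracy on held-out images whose self-reported labels are known.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in val_loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"validation accuracy: {correct / total:.1%}")
```

The point is only that nothing in a setup like this tells the model what to look at; whatever signal it finds in the images, it finds on its own.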
“When it comes to medical images, they don’t have color – they’re all gray,” said Gichoya, who, two years later, still has no conclusive explanation for how the model made its predictions. “It’s very difficult to say why this image would belong to a Black patient or not.”
Gichoya rattled off a list of things a radiologist can tell from a chest X-ray:
- Sex
- Approximate age
- ICD codes
Based on these, a radiologist could even get a sense of how much a person is likely to incur in healthcare costs over the next several years.
But a gray chest X-ray? There’s no way for a radiologist to determine a patient’s self-identified race from that, which has left Gichoya and her team perplexed and concerned about how the AI model can predict it with such accuracy.
Gichoya and her team went a step further and distorted the images. Even with the distorted images, the models predicted patients’ race with nearly the same accuracy.
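Under the same assumptions as the earlier sketch, a robustness check like the one described here might look as follows: degrade the validation images with blur or aggressive downsampling, then measure accuracy again. The specific distortions are assumptions about the general approach, not the Emory team’s exact protocol.

```python
# Re-run the same evaluation after degrading the images, and compare accuracies.
import torch
from torchvision import transforms

distortions = {
    "blur": transforms.GaussianBlur(kernel_size=21, sigma=5.0),
    "downsample": transforms.Compose([
        transforms.Resize((32, 32)),    # throw away fine detail...
        transforms.Resize((224, 224)),  # ...then upsample back
    ]),
}

def accuracy(model, loader, distort=None, device="cpu"):
    """Top-1 accuracy, optionally applying a distortion before inference."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            if distort is not None:
                images = distort(images)
            preds = model(images.to(device)).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.numel()
    return correct / total

# Assuming `model` and `val_loader` from the earlier sketch:
# print(f"clean: {accuracy(model, val_loader):.1%}")
# for name, d in distortions.items():
#     print(f"{name}: {accuracy(model, val_loader, distort=d):.1%}")
```

If accuracy barely moves under heavy blurring or downsampling, the signal the model relies on is not fine anatomical detail, which is part of what makes the finding so hard to explain.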
Despite its effectiveness in predicting race, the model cannot be used for clinical purposes until Dr. Gichoya and her team determine how it makes its predictions. She said the models encode “hidden signals” as they are trained.
“You can imagine what this could mean for underwriting,” Gichoya said. “It’s very, very dangerous – but also very transformative. Determining when we should use these types of tools is one pillar of the unique work our group is working on.”
Until researchers understand how such a model reaches its conclusions, deploying the technology could open a Pandora’s box of pre-Civil Rights era discrimination.
Stephen Norris is an expert in strategic provider partnerships and management, with a track record of driving growth, profitability, and collaborative partnerships. He has extensive experience building and expanding provider partnerships within the healthcare industry. Norris is skilled in contract negotiation, stakeholder management, and data analysis, with a demonstrated ability to lead and motivate teams to deliver exceptional results. He has a deep understanding of the healthcare landscape and a passion for advancing health equity by improving patient outcomes. He is #OpentoWork.