AI in Healthcare Part IV: Human Health and the AI Arms Race
This is a five-part series examining the unmitigated risks of AI in healthcare.
Part I: When Algorithms Prescribe Prejudice
Part II: A Ghost in the Machine
Part III: Can AI Break Free from Healthcare's History of Bias?
Part IV: Human Health and the AI Arms Race
Researchers and developers have long been confounded by Artificial Intelligence's capacity to learn independently. Yet newer, more powerful, and more human-like versions of AI, such as Generative AI, continue to be released.
Dr. Judy Gichoya’s study is just one of many examples of AI models performing tasks they weren’t programmed to perform, a phenomenon that’s been termed “emergent abilities.” Emergent abilities are distinct from hallucinations (which occur when AI produces inaccurate or misleading information): an emergent ability is a task the model performs correctly despite never having been trained to do it. Examples include:
- In April 2023, Sundar Pichai, CEO of Alphabet (Google's parent company), told CBS’ 60 Minutes that the company's generative AI chatbot – then called Bard and since renamed Gemini – had started providing answers in Bengali despite not having been trained in the language.
- One of the most well-documented examples is AlphaGo Zero's mastery of Go. In 2016, Google DeepMind's AI program AlphaGo beat the (human) world champion at the complex game. Just a year later, DeepMind released AlphaGo Zero. The new model was given no human game data to learn from – it trained entirely through self-play – yet it beat the original version 100 games to 0, developing entirely new and unforeseen tactics that surprised even its developers.
- In January 2024, an AI model created by researchers at Columbia University was able to match fingerprints from different fingers to the correct person with a success rate between 75% and 90%, potentially debunking the long-held belief that each individual finger holds a unique print. The researchers have no definitive answer as to how the model made its predictions.
Given this lack of understanding of how AI performs emergent tasks, the Food and Drug Administration imposes a high bar for the use of AI in clinical decision-making. Despite that high bar, Gichoya still sees too many shortcomings in traditional approaches to ensuring equity.
“I think we're being narrow-minded in thinking AI is going to make things better,” Gichoya said. “Look, this is not just predicting cats and dogs; we need to do more. If you look at the recently formed AI Safety Board, there are no voices from some of the more critical domains. It’s still the big tech, big academic institutions – the same people.”
"When people tell me they're afraid of AI in healthcare, I say, the only reason you're afraid of Machine Learning models killing you tomorrow is because your doctors are killing you today." – Dr. Marzyeh Ghassemi, PhD MIT researcher
There are already real-world examples of AI in healthcare resulting in health disparities, and most don’t involve clinical decision-making.
- In 2023, Cigna was sued over approximately 300,000 claims that its PxDX software denied during a two-month period in 2022.
- Also in 2023, UnitedHealthcare was hit with a class-action lawsuit alleging it used an AI algorithm to deny care to elderly patients.
- In 2017, race was removed as a factor in estimating the likelihood that a pregnant woman would have a successful vaginal birth after a previous C-section (VBAC) after its use was determined to be erroneous. The error had led to a disproportionate number of C-sections being performed on Black and Hispanic/Latina women.
- In 2019, it was reported that a healthcare-risk prediction algorithm used on over 200 million Americans was discriminating against Black patients because each patient's risk was calculated from how much they had previously spent on healthcare. Although Black patients have 26.3% more chronic illnesses than white Americans, they were given lower risk scores because their healthcare spending was in line with that of healthier white Americans (a simplified sketch of this proxy problem follows this list).
- In the spring of 2024, nurses in the California Nurses Association picketed outside a San Francisco Kaiser Permanente medical center to protest the mandated use of AI without their consent. Among their concerns were a chatbot that directs patients to the appropriate representative but relies on medical jargon, and a patient acuity system that assigns each patient a score indicating how ill they are yet cannot account for mental status, language barriers, or declines in health.
- In 2020, an AI tool used to predict no-shows in healthcare providers’ offices led to Black patients being disproportionately double-booked for appointments, because the tool predicted they were more likely to miss appointments.
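The 2019 example turns on a single design choice: using past spending as a proxy for medical need. The short Python sketch below is purely illustrative – the patient names and numbers are invented, not drawn from the study – but it shows how ranking patients by a cost proxy can place a sicker patient below a healthier one who simply had better access to care.

```python
# Minimal, hypothetical sketch of the proxy-label problem described above.
# All names and numbers are invented for illustration only.

# Each record: (patient_id, chronic_conditions, past_healthcare_spending_usd)
patients = [
    ("patient_A", 2, 10000),  # healthier, but spends more (better access to care)
    ("patient_B", 5, 6000),   # sicker, but spends less (barriers to care)
]

def cost_proxy_risk(record):
    """Score 'risk' using past spending as a stand-in for health need."""
    _, _, spending = record
    return spending

def need_based_risk(record):
    """Score risk from a direct measure of illness burden instead."""
    _, conditions, _ = record
    return conditions

# Rank patients for extra care-management resources under each rule.
print("Ranked by cost proxy:    ", [p[0] for p in sorted(patients, key=cost_proxy_risk, reverse=True)])
print("Ranked by illness burden:", [p[0] for p in sorted(patients, key=need_based_risk, reverse=True)])
# The cost proxy ranks patient_A first even though patient_B is sicker:
# lower historical spending is read as lower medical need.
```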
“The thing to think about is, we don't like healthcare as it’s practiced right now,” said Marzyeh Ghassemi, PhD, the MIT AI healthcare researcher. “What I mean is we would not choose to have it be this way forever, and the further back we go, the more we would probably say, we don't like this. When people tell me they're afraid of AI in healthcare, I say, the only reason you're afraid of Machine Learning models killing you tomorrow is because your doctors are killing you today.”
The stakes are raised with Generative AI because humans aren’t feeding it labeled data. When it doesn’t have the data to understand a given context, it falls back on what it has already learned – which isn’t always applicable to the task at hand.
“Generative AI can go off the guardrails because there aren’t well-sampled spaces where we say you have to predict just this using this data or this feature set,” Ghassemi told the Quanta Research Institute earlier this year. “Instead, it looks at its own learned data to assume two things are close to each other, and when (the model is) generating in this space, (it) might start moving over and generating in this (other) space … and that’s where we need to be more careful because it might work in ways that are undefined or unknown.”
The drive to scale is inherently capitalist, yet society is often forced to balance public safety against an emerging market that soon becomes a necessity (consider cell phones, computers, and internet access over the last 20 years). It’s difficult to say whether AI will be markedly different from any other technology humans feared might cause irreparable societal harm, but the most unwise decision would be to leave it completely unchecked.
Stephen Norris is a strategic provider partnerships and management expert with a track record of driving growth and profitability. He has extensive experience building and expanding provider partnerships within the healthcare industry. Norris is skilled in contract negotiation, stakeholder management, and data analysis with a demonstrated ability to lead and motivate teams to deliver exceptional results. He has a deep understanding of the healthcare landscape and a passion for health equity through improving patient outcomes. He is #OpentoWork.