
Equity on Autopilot

Hightower Discusses Responsible AI in Health Care

Dr. Maia Hightower

Artificial intelligence (AI) tools hold great promise for innovating health care. Algorithms can sift through voluminous data to improve diagnostics, drug development and patient care. Such tools can optimize efficiency and reduce administrative burden, which in turn can lessen clinician burnout and potentially lower health care costs. These applications just scratch the surface of what’s possible.

But beware of the darker side of AI. Its tools, methods and the data itself can have unintended consequences, noted Dr. Maia Hightower, a physician and former executive vice president and chief digital transformation officer at the University of Chicago Medicine and former chief medical information officer at University of Utah Health. Earlier this year, she delivered the National Library of Medicine (NLM) Joseph Leiter Lecture, co-sponsored by the Medical Library Association.

“There’s this real concern that AI systems can perpetuate bias based on the datasets they’re trained on, the decisions that are made along the development process all the way to the way an AI system is deployed into a care delivery system workflow,” Hightower said. “AI can perpetuate and amplify bias that already exists.”  

While formulating AI strategy, consider the quality and sources of data and whether the sample is representative of the target population, she advised.

“Real-world data has real-world bias,” said Hightower, who also is CEO and co-founder of Equality AI. “Bias starts in the moment of data creation.”

Who is formulating the questions and deciding what problems to solve? Is data stratified by demographics to ensure accuracy? Is the model fair across subpopulations?
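Her second question can be made concrete with a simple check: compare each group's share of the training data against its share of the target patient population. The short Python sketch below illustrates the idea; the group labels, percentages and the five-point flagging threshold are all hypothetical.

# A minimal sketch of a representativeness check. All group labels,
# shares and the flagging threshold are hypothetical, for illustration.

# Assumed share of each group in the target patient population
population_share = {"Group A": 0.60, "Group B": 0.25, "Group C": 0.15}

# Assumed share of each group in the training dataset
training_share = {"Group A": 0.78, "Group B": 0.12, "Group C": 0.10}

for group, expected in population_share.items():
    observed = training_share[group]
    status = "UNDERREPRESENTED" if observed - expected < -0.05 else "ok"
    print(f"{group}: population {expected:.0%}, training data {observed:.0%} ({status})")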

“You can have the perfectly devised model and then you deploy it into an imperfect world,” Hightower said.

Hightower (l) chats with acting NLM Director Dr. Stephen Sherry, who moderated the lecture.

An ongoing dilemma in AI is “the human in the loop versus autopilot,” she said. AI should complement human expertise, not replace it. “How do we ensure…we’re not becoming overly reliant on an automated system or a system that has not yet earned that trust?”

Setting standards and guidelines can help. Hightower said regulations are moving fast in the AI space, but in some ways not fast enough when it comes to intellectual property, data privacy and algorithmic bias.

Hightower cited a 2019 study (Obermeyer et al.) that found racial bias in an AI model deployed in hundreds of health systems around the country, affecting an estimated 80 million patients. The algorithm predicted health care costs as a proxy for health needs; because less money had historically been spent on Black patients at the same level of illness, the model significantly underestimated their health care needs. As a result, Black patients were 62% less likely to be referred to case management despite being equally sick.
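That proxy effect is easy to reproduce in miniature. The toy simulation below is not the study's actual model; the spending rates and group labels are assumed, illustrative numbers. It shows how ranking patients by predicted cost, rather than by health need, refers far fewer patients from a group on whom less money is spent at the same level of illness.

# Toy simulation of the proxy effect: equal illness, unequal spending.
import random
random.seed(0)

patients = []
for group, spend_rate in [("A", 1.0), ("B", 0.6)]:  # assumed spending rates
    for _ in range(10_000):
        illness = random.uniform(0, 10)  # true need, identical distribution for both groups
        patients.append((group, illness * spend_rate))  # observed cost = need x spending rate

# Refer the top 25% of patients ranked by cost, the proxy label
cutoff = sorted(cost for _, cost in patients)[int(0.75 * len(patients))]

for group in ("A", "B"):
    costs = [c for g, c in patients if g == group]
    rate = sum(c >= cutoff for c in costs) / len(costs)
    print(f"Group {group}: referral rate {rate:.0%}")
# Group A is referred several times as often as Group B,
# though the two groups are equally sick.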

“When that model was released, I was chief population health officer at a large health system,” Hightower said. “This model most likely was deployed in my health system and I didn’t have the tools to measure or mitigate it.” Developers ultimately fixed the model.

In a systematic review of health care algorithms’ impact on racial and ethnic disparities, conducted by the Agency for Healthcare Research and Quality (AHRQ) and NIH and published in 2024, only 63 of more than 13,000 screened citations met the inclusion criteria. To be included, a study had to measure performance or outcomes stratified by demographic group, a prerequisite for assessing an algorithm’s effect on health disparities. While some of these algorithms perpetuated health disparities, a few decreased health inequities by design.

To Hightower, these results revealed that, “when intentful, AI algorithms can decrease health disparities. But the biggest travesty is that most did not even measure stratified performance, let alone the effect on outcomes.”

Nearly 80% of health equity leaders report that they’re not involved in their organization’s AI strategy.

“Often you’ll have your AI strategy on one pillar and your health equity on another pillar, and the two don’t meet,” said Hightower, emphasizing the need for the two to come together.

“When AI investments are not aligned with health equity goals, this limits AI’s transformative potential,” she said.

The solution has its own equation: validate, audit and monitor.

A technical audit evaluates performance and can be stratified by population: race, gender, age and disability, among other variables. If the model is flawed, consider what inputs could build a better one.
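As a concrete illustration, a stratified audit might look something like the Python sketch below. The records, group labels and metrics are hypothetical; a real audit would run on the model's actual validation data, using the stratifying variables above.

# A minimal stratified-audit sketch: compute the same performance metrics
# per subgroup and compare. Records here are hypothetical toy data in the
# form (group, true_label, model_prediction).
records = [
    ("group_1", 1, 1), ("group_1", 1, 0), ("group_1", 0, 0), ("group_1", 1, 1),
    ("group_2", 1, 1), ("group_2", 1, 1), ("group_2", 0, 1), ("group_2", 1, 1),
]

for group in sorted({g for g, _, _ in records}):
    rows = [(y, p) for g, y, p in records if g == group]
    positives = [p for y, p in rows if y == 1]
    tpr = sum(positives) / max(1, len(positives))    # true-positive rate
    selection = sum(p for _, p in rows) / len(rows)  # share flagged by the model
    print(f"{group}: true-positive rate {tpr:.0%}, selection rate {selection:.0%}")
# A wide gap between groups on either metric is the audit's red flag:
# a model with a strong overall score can still fail one subpopulation.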

“There are a lot of different ways of rectifying your model, and then recycling it through this whole process,” Hightower said.

Librarians, NLM contributors and staff can all support responsible AI by ensuring AI research is transparent, reproducible and accessible. Science librarians, and all science communicators, play a critical role in building digital literacy among patients, practitioners and the public.

“We’re going to need that in this new AI world,” Hightower said. “We need to give the broader community the resources to thrive in AI decision-making.”

Institutions must be held accountable for the AI systems they deploy, and patients must demand accountability, she emphasized. “Some of the most interesting ways of driving change have come from patient advocates who made visible vulnerabilities that really put patient privacy and safety at risk, and then advocated for responsible AI.”

When designing machine-learning models, she urged developers to “have subpopulations in mind, test thoroughly, listen to stakeholders, adopt new methods.”

Equity gaps can drive innovation. To health equity advocates, Hightower advised, “when you have that seat at the table, make sure you’re expanding that seat to others, including diverse community voices, and recognizing who is missing.”

Clinicians also should be involved in AI development and speak up about concerns, she said. Doing so ensures that, as the field of AI forges ahead, clinicians and others in health care remain humans in the loop. 
