Responsible use of analytic algorithms and AI to mitigate bias in health care

Harness technology for more equitable health outcomes.


Using advanced analytics responsibly, ethically and equitably


When used responsibly, advanced analytics, including artificial intelligence (AI), has the potential to help achieve the health care Quadruple Aim: better patient experiences, better provider experiences, better health outcomes and lower costs.

Every day, the health care industry realizes more benefits of AI, and the future possibilities are endless. As AI in health care becomes more commonplace, we’re learning about its power to improve care delivery and outcomes. We’re also learning how to mitigate its weaknesses and potential for health care bias. The algorithms that underpin today’s AI reflect the data generated by our health care system. That means that if health equity issues exist in the system today, the AI algorithms can learn these patterns and inadvertently perpetuate biases.


If we base care decisions on a model that doesn’t account for key demographic and psychographic variables, or for geographic health disparities, predictive AI systems built on that model will carry the same blind spots. Ultimately, this can affect an individual’s access to care and overall health. An algorithm can also produce inequities if it is applied inappropriately or used incorrectly.

To overcome these challenges, the health care industry needs to acknowledge health care AI bias and put measures in place to limit it.

Video transcript

The American Medical Association has purposefully chosen the term augmented intelligence to describe its use of AI computational methods. The reasoning is that augmented intelligence implies using AI to enhance and scale human clinical decision-making rather than replace it. I think this is an important distinction from other industries: the premise that care will and should always be led by human clinical decision-making, and that we're not going to hand over the reins entirely to artificial intelligence.

 

The great news is that there are countless opportunities to use AI to enhance and scale health care delivery. From a quality perspective, AI computational methods can support clinical decision-making; machine learning (ML) models can predict the onset of disease and the risk of disease progression; they can improve identification of patients with rare diseases; computers can do a much better job than the human eye of interpreting medical images; and ML models can enhance patient outreach and engagement. From an efficiency perspective, AI holds promise as a means to improve speed to diagnosis, and it can improve identification of fraud, waste and abuse in health care delivery.

 

AI holds tremendous promise, but like all analytic methods, it also poses challenges. As an analytics community, we should all commit to responsible use of these powerful methods. We are always accountable for the integrity of our work and the model results that we unleash into the world. In health care in particular, the bar is set very high because the stakes are high: our models and outputs are potentially informing treatment decisions for patients. We are accountable for the validity and reliability of our analytic results. We must understand and clearly communicate the strengths and limitations of the data and methods we use, and provide transparency around the data and our model outputs. For me and my organization, responsible use is also a commitment to promote health equity and to proactively take steps to prevent the introduction or exacerbation of bias due to socioeconomic status, race, ethnicity, religion, gender, disability or sexual orientation.


The need for human expertise to avoid bias in health care

Computer software is built on binary code, ones and zeros, with little room for interpretation beyond yes or no. Even millions of lines of code can’t fully capture the complexities of how the human brain functions. Without human judgment, including moral and ethical perspectives, AI is an incomplete solution and can inherit bias from existing data.

While this AI bias affects every industry, it can be a big risk in health care. Today, the system can create health equity problems because individuals have varied access to care and different benefits through their employers or insurance carriers. Physicians also may vary in how they deliver care.

Until the health care ecosystem learns how to address those systemic issues and eliminate the resulting biases, it’s critical that technology, including AI, not make matters any worse.

But health care leaders don’t see AI as a replacement for human interaction. Instead, they believe it can be a resource that experts use to make care more efficient and effective. However, those experts must be properly trained in applying AI technology.

To help prevent potential AI bias from affecting health equity, health care leaders are taking a number of important steps. Two of the key approaches are:

  • Helping their teams gain better insight into where and how people live, work and play. This information comes from social determinants of health (SDOH) data. By studying SDOH, health care leaders hope they will better identify the complex factors that affect health outcomes.
  • Adding explainable AI interfaces into their platforms. Explainable AI helps users better understand how an algorithm reaches its results, so they can trust those results and more easily spot any biases. This increased transparency can help human experts ensure the AI model does not inadvertently favor one group or geography over another.
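The fairness idea behind these approaches can be illustrated with a simple check. Below is a minimal sketch of one common fairness test, demographic parity, which compares the rate of positive model predictions across patient groups. All function names and data here are hypothetical, not part of any vendor's actual tooling:

```python
# Minimal demographic parity sketch: compare positive-prediction rates
# across patient groups. Hypothetical data and names for illustration only.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rates
    between any two groups (0.0 means perfectly equal rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical binary predictions for two patient groups:
# group A is flagged 75% of the time, group B never is.
preds  = [1, 0, 1, 1, 0, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(preds, groups)  # 0.75, a large disparity
```

A gap near zero does not prove a model is fair, and demographic parity is only one of several fairness definitions, but a large gap like this is exactly the kind of signal an explainable, transparent workflow is meant to surface for human review.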

How do we pursue responsible use of AI and analytic algorithms?

No solution will eliminate the risk of health care AI bias. However, we can use human experience and insights to minimize bias and be prepared to respond when it happens. 

At Optum, we’re taking steps to address the limitations and risks of AI. We brought together internal and external experts from various disciplines to develop a strategy for avoiding algorithm bias in health care. Known internally as the Committee for Responsible Use of Advanced Analytics, this diverse leadership team works across our enterprise analytics, legal, clinical and technology groups to drive consistency in an approach that includes:

  • Establishing a culture of responsible use. We developed a set of corporate guiding principles that serve as a statement of intention: We will use AI to advance our mission to help people live healthier lives and make the health care system work better for everyone. We also affirm our commitment to be thoughtful, transparent and accountable in our development and use of AI models.
  • Embedding fairness testing in the model development process. We use an open-source bias detection tool to test for fairness in AI model predictions.
  • Monitoring model performance and use following deployment. We are building a culture and related capabilities to assess health care AI models after deployment and determine whether they need to be retrained.
  • Developing a diverse workforce. We are intentionally building an inclusive and diverse workforce that represents all the communities we serve across all parts of our business, including our analytics and technology teams.
  • Researching the root causes of health inequities. Raising awareness and increasing our knowledge of the root causes of health disparities helps us stay attuned to the risks of bias and allows us to be more informed consumers of AI-derived results.
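The post-deployment monitoring step above can be sketched as a simple drift check: compare a model's live performance against the baseline measured at validation time and flag it when the drop exceeds a tolerance. This is an illustrative sketch only, not Optum's actual process; the function name and threshold are assumptions:

```python
# Illustrative post-deployment monitoring check (hypothetical names and
# threshold). Flags a model for review when live accuracy drops more than
# `tolerance` below its validation-time baseline.
def needs_retraining(baseline_accuracy, live_accuracy, tolerance=0.05):
    """Return True when live accuracy has degraded beyond tolerance."""
    return (baseline_accuracy - live_accuracy) > tolerance

needs_retraining(0.90, 0.88)  # small drop: no retraining flagged
needs_retraining(0.90, 0.80)  # 10-point drop exceeds tolerance: flagged
```

In practice such a check would run per subgroup as well as overall, so that a model whose aggregate accuracy holds steady but whose performance degrades for one population is still caught.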

Annual AI in Health Care Survey

For three years, executives from hospitals and health systems, health plans, employers and life sciences organizations have shared their attitudes and expectations related to AI. This past year’s takeaways balance growing confidence in AI’s potential with caution.

Explore more


96%

view AI as important to helping achieve health equity.


73%

indicate concern about the transparency of AI.


3 out of 4

are wary of bias in AI results.


92%

expect their workforce to understand how AI makes its predictions.


Learn more about using AI and technology to support health equity


Artificial intelligence in health care

Advancing the application of AI plays a critical role in making health care work better.

Explore more


AI, responsibility and creating a better health system

View the perspective in the VentureBeat special issue on AI in health care.

Read now


Data science and responsible use for our health

Get a data scientist’s perspective and guidance.

Read now


In pursuit of parity

Tech luminaries from inside and outside health care host a discussion on the ethics of AI and our health.

Watch the broadcast


Innovation for the good of health care

Tune in to conversations from a recent Fast Company event on enabling a healthier world.

View on-demand


A recipe for better health care

AI and human judgment take center stage on the Until It’s Fixed podcast.

Listen now


Data and technology are moving health care forward

Explore innovation news and insights


Sign up for updates

Receive fresh perspectives and expert advice on data, analytics and tech innovation in health care.


When you're ready to talk to an expert, contact us.


Want to connect with Optum?