
The Future of Risk Adjustment

How Advanced Data Science Can Predict Disease


- [Eric] Hello and welcome to our webinar on the future of risk adjustment. My name is Eric Haseman, vice president and general manager of Payer Solutions. And with me is Sanji Fernando, senior vice president of AI Products and Platforms. We're excited to be here with you today to discuss how advanced data science can predict disease. We'll be sharing how these data science methods were applied in recent studies we've done, and also where this is headed. The art of the possible, if you will, to improve the health system's ability to predict disease and engage in an appropriate intervention. We'll start with a review of current AI trends, then we'll shift to studies we've done to improve outcomes by applying AI and data science together with risk adjustment as a prediction tool. We'll talk about a payer-provider collaboration that we did using these likelihood analytics, as well as a study of early identification and treatment of chronic kidney disease (CKD) using these disease prediction models. And then finally, we'll wrap with recommendations and key takeaways. So with that, let's get started. Sanji, why don't you lead us off with a quick grounding on AI and machine learning?

 

- [Sanji] Great, thanks, Eric. So artificial intelligence actually has been around for quite some time. It may feel like a new concept for us, but the term was coined in the mid-1950s. And the idea was to try to replicate human cognition and reasoning. It's gone through a lot of waves of promise, and then almost failed promise, over the years. And through many of these breakthroughs and waves, we've leveraged some of the better approaches, like using rules and expertise to execute some of this artificial intelligence, but that could only go so far. More recently, in the last few years, we've seen some really amazing breakthroughs in machine learning, a type of artificial intelligence that I'll talk about a little bit. These new breakthroughs are really important because we're able to accomplish tasks that we really didn't think were possible in software. One of the basic concepts that you might have heard of and seen is image recognition. You know, every day, if you're using social media, you might get recommendations of pictures that you're in, or other capabilities, in popular social media platforms. And it's really interesting because it's very hard to create a set of rules around what's in a picture. The classic example we use is cats versus dogs. They both have four legs, they both have ears. Those all could be rules to say that if those are present in a picture, it must be a dog or a cat. But we know that cats and dogs are very different. We can recognize that immediately when we see them. We know what they might be and what they might not be. And it's hard to create all the rules that are needed to define that. With some of these new breakthroughs in machine learning, we don't necessarily have to create those rules. We have algorithms that can learn from pictures, voice, and text. And from that learning, they can do what people do, very accurately. And this artificial intelligence is in our lives every day now. If you're using a smart speaker or virtual assistant, Siri, Alexa, and Google Assistant are great examples of these new breakthroughs of artificial intelligence at work today. Other examples are maybe your Netflix recommendations, or even some of the emerging breakthroughs in driverless cars, all of which are not defined by rules, but really by what we learn from the data. So what is machine learning? In the broadest sense, as we think about artificial intelligence, we are essentially learning from subject-matter expertise. Oftentimes, but not always, a common way to think about machine learning is having information as input, as well as, sort of, the answer key from experts or people, people who might've said, there's a dog in this picture, there's a cat in this picture. This, in a very broad and simplistic way, is at the heart of machine learning: we can develop mathematical algorithms that can essentially learn from all these examples. Within machine learning, we have some interesting subcategories: representation learning and, probably most relevant for our discussion today, deep learning. These are complex networks of equations that we refer to as neural networks. And with many layers, or many deep layers, of neural networks, we can accomplish things that, for quite some time, we thought only people could do. And training these deep learning neural networks, as you'll see later in these slides, has been very well suited for specific use cases in health care. So what are these opportunities to leverage machine learning in health care?
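To make the idea of learning from inputs plus an expert "answer key" concrete, here is a minimal sketch in Python using scikit-learn. The data, features, and model choice are entirely hypothetical placeholders; they are not from the webinar or any Optum system.

```python
# Minimal sketch of supervised machine learning: inputs plus expert-provided
# labels (the "answer key") are used to train a model that can score new data.
# All data here is synthetic; in practice the labels would come from skilled reviewers.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                      # inputs, e.g., features derived from claims or labs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # labels: what the expert said was true

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)  # learn from the labeled examples
scores = model.predict_proba(X_test)[:, 1]          # predicted likelihoods for unseen cases
print("held-out AUC:", round(roc_auc_score(y_test, scores), 3))
```

A deep learning model would swap the logistic regression for a many-layered neural network, but the training pattern of examples plus labels is the same.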
Today, we do so much to make sure that there's appropriate reimbursement and appropriate utilization of health care resources every day. Yet, we know those reviews, data evaluation, and data assessment can be extremely complex, and they require very skilled professionals who need to work at the top of their license to get the most complex cases or reviews or analyses done. And so, we see a lot of opportunity to train machine learning models, in many cases deep learning models, to support them in that work, whether it be prior authorization, risk adjustment, quality and closing gaps in care, understanding who might be at risk for a disease, as well as even adjudicating claims. Oftentimes, the burden for much of this work is on these skilled professionals. And what we aim to do is find those opportunities where two things are happening. It's a very clear-cut example where the review is very straightforward for that person working at the top of their license. And more importantly, there's agreement between the two parties, whether it be payer and provider or some other counterparty, where everyone agrees on the answer. And if we could find those examples where everyone could get to yes, those might be the most well suited to apply a machine learning model and move that work off the queue of these really skilled professionals, so they can focus on the most important and complex decisions that are very appropriately done by people, not a machine learning model. So as we think about how to find and use artificial intelligence and machine learning, we started to understand what makes a great use case. And a lot of this was through trial and error. Some of the basics are, first and foremost, what is the opportunity? What's the opportunity to deploy a model? How would we go to market? How would we enable people to use this? Does it make sense from a business standpoint to deliver a solution? And do we understand how exactly it will drive value? The second question goes to the basics of machine learning. While breakthroughs are happening every day, a good rule of thumb right now is that we do need a lot of data to train these machine learning models, much more than a person might need to learn a task or skill. And sometimes, this requires hundreds of thousands, or millions, of examples of data to be presented to the machine learning model. Somewhat related to this, while not a hard-and-fast rule, it really helps for us to have the answers that a person might have drawn upon from seeing the same data. We, in the data science field, call those labels. And so we're always looking for those examples where we have a history or a record of the great decisions that skilled professionals make every day, and we use that as our answer key to train these machine learning models. Finally, we also want to understand how much of a requirement there will be for us to explain the why of the decision. Oftentimes, not always, these machine learning models are not truly mathematically interpretable. And so if we have to explain why the machine learning model came to a decision, we need to think about that burden, think about how we apply it, and decide whether it's appropriate to apply.
With some specific areas, like deciding a clinical treatment or diagnosis path, we want to understand: Is this an appropriate use of machine learning, and can we provide more guidance or causality? What role does a person or a skilled professional like a clinician play in the decision-making, and what expectations do they have in understanding how the model came to a decision? What we've found is, as I mentioned earlier, when we think about prioritizing work, typically in an administrative setting, that burden of explainability may not be as high, especially if both parties agree on what the right answer is. With machine learning models as a tool, we also have a different way to build software. At the heart of it, we are essentially asking the machine learning model to complete a task, and we're giving it lots of information, data input, lots of answers, and we're evaluating how well it does. This drives a very experiment-driven approach to machine learning, and essentially requires a very iterative process. We initially start by establishing some baselines and building some basic models, or retraining existing models that we have. We take a very experiment-driven approach, and in that process, we evaluate how well the model is working in terms of performance, but we also evaluate it for fairness as well. At that point, if we can meet the criteria we've set for the model, both in performance and equity, a champion is selected and we can deploy it into a business solution. At that point, the model can be presented with new information and provide feedback and scoring, its assessment of that data. But that's not the end of the story. We have to constantly monitor the performance of these models and make sure we don't see changes in the data being presented to the model, which could change how the model makes a recommendation or a score. And more importantly, these models get better with more data. So as new information, new data, is presented to the model, it gives us an opportunity to retrain it, to reevaluate it for fairness, and to see if we can do better. In some regards, there's no finish line here. But with some great tools, we're able to make this a very efficient and rigorous process. This slide shows a classic example of how we are leveraging machine learning to predict the likelihood of a disease and its outcome, or disease risk. And what's great about this example is that you can see, with more information, more claims history, more clinical lab values, we can understand how a person's disease risk might change over time. And that could lead us to any number of next steps, all described here, such as helping us understand how we can help this person address that disease risk, or ensuring we have the appropriate information and documentation to reflect the complexity of their condition. What you're seeing here is, with each dot, how the likelihood of this person being diagnosed with vascular disease increases or decreases as the data changes.
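The experiment-driven lifecycle described above, train candidate models, require both a performance and an equity bar, select a champion, then monitor and retrain, can be outlined roughly as follows. This is an illustrative sketch with synthetic data and arbitrary thresholds, not the actual production pipeline discussed in the webinar.

```python
# Illustrative outline of an experiment-driven model lifecycle: train candidates,
# require both a performance and an equity bar, pick a champion, then watch for drift.
# Synthetic data and arbitrary thresholds; not a production pipeline.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] - X[:, 2] > 0).astype(int)              # hypothetical disease label
group = rng.integers(0, 2, size=2000)                # hypothetical demographic flag
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=0)

def equity_gap(model, X, group):
    """Simple fairness check: gap in mean predicted risk between two groups."""
    p = model.predict_proba(X)[:, 1]
    return abs(p[group == 0].mean() - p[group == 1].mean())

champion, best_auc = None, 0.0
for candidate in (LogisticRegression(), GradientBoostingClassifier()):
    candidate.fit(X_tr, y_tr)                        # experiment: train a candidate model
    auc = roc_auc_score(y_te, candidate.predict_proba(X_te)[:, 1])
    if auc > best_auc and equity_gap(candidate, X_te, g_te) < 0.05:
        champion, best_auc = candidate, auc          # must clear performance AND equity bars

# Ongoing monitoring: flag retraining if incoming data drifts from the training data.
X_new = rng.normal(loc=0.3, size=(500, 6))
drift = float(np.abs(X_new.mean(axis=0) - X_tr.mean(axis=0)).max())
print(type(champion).__name__, "AUC:", round(best_auc, 3), "retrain?", drift > 0.2)
```

In a real deployment the equity check, drift statistic, and thresholds would be chosen with clinical and compliance input rather than the placeholder values shown here.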

 

- [Eric] Thanks, Sanji. I think with that, let's transition into some of the studies that we wanted to highlight here. I'll start with chart review and how we're using AI with medical charts, and cover a couple of the studies together with you. So, many of you are familiar with how artificial intelligence is used to review medical records. It's very common today. When I talk to others about how we're using AI in this space, I really talk about it in three areas of focus: AI targeting, AI retrieval, and AI chart review. So first, let's start with targeting. What is AI targeting? We're using it to predict and prioritize the records most likely to support specific unreported diagnosis codes and gaps in care. What is AI retrieval? It's helping identify the right modality most likely to be successful based on practice patterns, propensity to complete the request, et cetera. And then lastly, we have AI chart review. And even within there, there are multiple use cases. We start by analyzing charts for potential unreported conditions or gaps in care. We use that to route reviews based on coder expertise. Some coders have demonstrated their proficiency with certain conditions more than others. And so we have the ability to determine which medical coder makes the most sense to review a particular chart. We also use AI to improve the automation of coder reviews, providing coders with the support they need to do the work accurately and completely. And then lastly, we can also use AI to detect if a member's health history may still indicate unreported diagnosis codes, and then route that medical record for additional completeness reviews. We've also seen these advanced data science models support physicians in monitoring the health of their patients. Many EMRs today can identify risk-adjustable conditions within the current problem list of a member. What we were able to do was take that a step further by helping providers identify potential conditions early in the disease process so they could be assessed. We look at a much broader dataset than is available within the EMR itself. This specific study used a disease likelihood analytic model that was trained on years of clinical history. The model scored the likelihood of a condition and delivered it for use at the point of care, together with a prediction score, essentially a means to quantify the likelihood. And what we learned is that the prediction model was more effective at capturing unreported conditions, and even better, the providers in the study reported higher engagement with how the data and the tools were applied. In particular, because we were able to quantify it, display it, and integrate it into their workflow, it made it much easier for them to take action on the next steps relative to that information. We saw an 8% increase in unreported conditions documented as part of the study, and an 11% increase in recaptured conditions, compared to existing, more rules-based analytic models. And then, given the success of this study, this approach was expanded and we're continuing to innovate on similar applications such as this. For another example, Sanji, why don't you cover our study on the early detection of chronic kidney disease? I think this is really exciting work. It's really quite remarkable in terms of where this is headed. Sanji?
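As a purely illustrative sketch of two of the ideas above, AI targeting (prioritizing the charts most likely to support an unreported condition) and expertise-based routing (sending each chart to the coder with the strongest track record for that condition type), consider the following. The chart data, condition categories, and proficiency scores are invented for the example and do not reflect any actual workflow or scoring method.

```python
# Hypothetical sketch: rank charts by model-predicted likelihood of supporting an
# unreported condition, then route each one to the coder most proficient with it.
from dataclasses import dataclass

@dataclass
class Chart:
    chart_id: str
    condition: str      # suspected condition category (placeholder values)
    likelihood: float   # model-predicted probability the chart supports it

# Hypothetical coder proficiency by condition, e.g., derived from historical accuracy.
coder_proficiency = {
    "diabetes": {"coder_a": 0.95, "coder_b": 0.80},
    "vascular": {"coder_a": 0.70, "coder_b": 0.92},
}

charts = [
    Chart("c1", "diabetes", 0.91),
    Chart("c2", "vascular", 0.35),
    Chart("c3", "vascular", 0.78),
]

# AI targeting: review the highest-likelihood charts first.
for chart in sorted(charts, key=lambda c: c.likelihood, reverse=True):
    # Routing: pick the coder most proficient with this condition category.
    by_coder = coder_proficiency[chart.condition]
    coder = max(by_coder, key=by_coder.get)
    print(f"{chart.chart_id}: likelihood={chart.likelihood:.2f} -> route to {coder}")
```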

 

- [Sanji] Absolutely, Eric. As described by many academic and clinical organizations, as many as nine in 10 adults who have chronic kidney disease may not even know it. The causes are mainly diabetes and high blood pressure, but other factors like age, family history, and race all play a role in this debilitating disease. And our hope is that by understanding and detecting the progression of this disease early, we can intervene both clinically and through other mechanisms, like lifestyle and diet change, to help change the trajectory for folks who are at risk for this disease. And what's exciting is that by applying the sets of machine learning approaches described earlier, we're exploring how early we can identify these folks and drive, hopefully, helpful interventions and outcomes to change that trajectory. I'll go to the next slide. At a very high level, what we want to try to do is identify members with a likelihood of CKD very early. At that point, we're exploring how we can test them to see if that risk is really present. With a home screening test, it becomes possible to test across a wide population. And once members are identified, we then have a number of programs designed by our clinical leaders to engage them in the changes that hopefully can alter that trajectory. It's a pretty exciting approach. And by both identifying early with machine learning and testing early based on that predicted risk, we hope to see if we can really change the trajectory of the disease.

 

- [Eric] Thanks, Sanji. As we pull together the key takeaways that both Sanji and I have spoken about here today, I'd summarize them as four key things. First, advanced machine learning models and tools are showing promise in predicting disease, from undocumented conditions to, really at the forefront, emerging risk for conditions not previously diagnosed. Number two, augmenting clinical decision-making with machine learning models can illustrate the likelihood of illness or health events over time, such as, say, an inpatient admission risk. Third, the foundation of AI is the ability to pull actionable insights from data. Organizations need resources who can do strong data analysis, understand the patterns, and determine which data sources would best inform these models. And then lastly, understanding your data, as well as the answers that we described, helps train the models to predict the outcomes that we are looking to test against. And so, prioritize things such as data governance, data management, and data best practices, so that you're curating the data that exists in your organization. Have teams that are focused on ensuring the data is applied, and given the context that you need, in order to feed these models and make the output most actionable and useful. So with that, thank you so much for your time today. We appreciate it. We've got our contact information there for both Sanji and myself. If you'd like more information, please visit us at optum.com/risk. With that, thank you for your time.

 

- [Sanji] Thanks for your time.


Advance your data science strategy

  • See how advanced data science combined with risk adjustment is leading to early disease detection and prediction
  • Hear recommendations for how you can advance your data science strategy