The Cedars-Sinai AI story, from primary care to dataset training

Editor’s Note: This is part two of our interview with Craig Kwiatkowski. To read part one, click here.

Cedars-Sinai, the prominent California health system, has a variety of artificial intelligence programs either deployed or in the works. It’s ahead of the game in the field of healthcare AI.

Craig Kwiatkowski is chief information officer at Cedars-Sinai, leading the teams putting together the AI that is designed to improve care and help patients and providers.

In today’s interview, we talk with Kwiatkowski, who holds a pharmacy doctorate, about some of the AI tools being used at the health system. He describes how he measures the success of AI-enabled initiatives, and how AI can help advance health equity. Specifically, he shows how Cedars-Sinai Connect, an AI-powered primary care app, is addressing AI biases and training on datasets that reflect diverse populations.

Q. What AI tools are you using or deploying at Cedars-Sinai that seem particularly promising?

A. Our focus is really on tools that can help reduce friction, improve efficiency and simplify things, frankly, to help our caregivers and clinicians and patients. There’s no shortage of opportunities in the generative AI category.

One thing I’m excited about is ambient documentation, sometimes called ambient scribe or virtual scribe. That technology seems very promising and I feel pretty bullish about it. We’ve been piloting these tools for a bit. The feedback’s been solid so far.

Many physicians are finding the ambient tools help with the cognitive load and the administrivia of writing notes, and we’ve begun to see that as we push these tools out. We’ve also noticed it doesn’t save time in every case, but it makes note-writing easier for them and combats the burnout factor. And it allows them to focus more on the patient and less on the computer, which obviously is important.

One of the physicians I spoke with described ambient as a really good medical student. It doesn’t get everything perfect, but it’s pretty darn good, and it’s almost like having a scribe at their side, so to speak.

But we’ve also begun to appreciate that it’s not for everyone. Some physicians have a very efficient workflow using current tools, templates, phrases and a lot of muscle memory to click through their notes and gather the information they need. For them, that’s more efficient than having to read through the prose and all of the language that might exist in an AI-generated note.

We’re seeing those themes around some of the other tools we’ve been piloting. Like the draft in-basket capabilities. The AI-generated content is really good and comprehensive, but it does tend to be a bit more verbose.

The other technology we’re excited about and beginning to lean into is virtual sitters and virtual nursing, using some AI and visualization capabilities to provide alerting and more proactive management as those ratios start to change. And that seems to have really great potential to improve efficiency and care and help with staffing.

Quite frankly, we’re also excited about work planned and in progress around patient access and expanding our virtual tools further, again asking ourselves, how can we make it easier, not just for caregivers and staff but also for patients to be able to schedule themselves and receive care more easily?

Q. How do you measure the success of AI-enabled initiatives?

A. We’re handling it very similarly to, and consistently with, how we measure any technology or new solution. Maybe it’s good to remind ourselves that we can continue to lean on many of the time-tested ways we’ve deployed and measured technology through the years.

And that is we look to develop KPIs and metrics, and then we measure the performance of the initiative against those criteria. And those criteria are typically tied to the problem we’re trying to solve, as well as hopefully the ROI we expect to achieve from the solution.

And so those outcome metrics should be pretty clear. In the access example I mentioned, we’d probably be looking at next available appointment, or if we’re looking to expand digital scheduling capabilities, it’s a simple numerator and denominator and a percentage of where we are versus where we want to be. So those things are usually pretty clear.
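
To make that numerator-denominator idea concrete, here is a minimal sketch in Python of how such a KPI might be tracked, assuming a hypothetical digital self-scheduling rate with made-up counts and a made-up target; the names and figures are illustrative, not Cedars-Sinai numbers.

```python
# Minimal sketch of a numerator/denominator KPI, e.g. a digital self-scheduling rate.
# All names and numbers below are hypothetical illustrations, not Cedars-Sinai data.

def kpi_percentage(numerator: int, denominator: int) -> float:
    """Return the KPI expressed as a percentage of the denominator."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return 100.0 * numerator / denominator

digitally_scheduled = 4200   # visits booked through digital self-scheduling (hypothetical)
total_scheduled = 12000      # all scheduled visits in the same period (hypothetical)
target_pct = 50.0            # where we want to be (hypothetical)

current_pct = kpi_percentage(digitally_scheduled, total_scheduled)
print(f"Current: {current_pct:.1f}%  Target: {target_pct:.1f}%  "
      f"Gap: {target_pct - current_pct:.1f} points")
```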

What sometimes becomes a little more challenging is when we don’t have baseline metrics, or when it’s something that’s a little more difficult to measure. In those cases, where we can, we’ll look to quickly gather those baselines or make some educated guesses or extrapolations as a baseline for measuring the new tool.

In the case of ambient documentation, it’s not always easy to quantify or measure physician wellness or burnout. Turnover is certainly one way, but there’s a sliding scale of burnout that may never get reported or lead to turnover. So it’s about trying to measure something you’re not already measuring.

Surveys are one way to do that – happiness scales, intention to stay, and so on. But there are also surrogate measures around note-writing we can look at – pajama time, time outside of work, total time in documentation. So, there are ways to get at the information and measure that value, but it requires a bit more intentionality in some cases, and maybe some creativity we haven’t always been proactive about.

Q. How can AI help advance health equity?

A. There are a number of ways AI can help. It can analyze vast amounts of health data to identify disparities in access and outcomes, and it can help with personalizing care. AI automation can make systems more efficient to hopefully improve access and availability.

A good example of that is something we’ve done here at Cedars-Sinai called CS Connect, which is a virtual healthcare option that has physicians available 24/7 for urgent care, same-day care and routine primary care. That helps alleviate capacity challenges within our brick-and-mortar locations. And it extends access to people whenever and wherever they need care.

There’s a guided intake that responds dynamically to the patient’s answers as they work through the Q&A. They can see information about what their potential diagnosis might be, and then they have a choice of whether or not to have a visit with a physician.

We’ve recently expanded that offering to children, from age three and up, and to Spanish speakers, again, broadening the pool of folks who can use these tools to receive care.

Q. How is Cedars-Sinai Connect addressing AI biases and training AI on data sets that reflect diverse populations?

A. We know the effectiveness of these large language models and AI tools is heavily dependent on the quality and the diversity of the data on which they were trained. We know the more variety of demographics and geographies we include, the more we’ll be able to control for certain biases. If populations are underrepresented, we can end up with biased predictions.

So, we know that’s important, as is the volume of data that goes into training and monitoring these tools for CS Connect. The AI technology was developed by a company called K Health out of Israel, and we sort of co-built the app experience with them. Again, back to the build-versus-buy question.

We saw a gap in the market and decided to build. But the AI was initially trained on patient populations in Israel, and those populations are obviously very different than the people within our community here in L.A., and then throughout California where the tool is available.

So, while there are mathematical methods and approaches to adjust the datasets and the training so our populations are accounted for and those sorts of biases are controlled for, there’s also a growing appreciation that data and training are local, and they have to be.
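
As a rough illustration of the kind of mathematical adjustment he describes, one common approach is to reweight training examples so the demographic mix of the training data matches the local population. The sketch below assumes made-up group labels and proportions; it is a generic example of reweighting, not a description of K Health’s or Cedars-Sinai’s actual method.

```python
# Illustrative sketch of demographic reweighting: each training example gets a weight
# proportional to (local population share) / (training data share) for its group.
# Group labels and proportions are hypothetical; this is not K Health's actual pipeline.
from collections import Counter

def reweight(examples, local_share):
    """examples: list of (features, label, group); local_share: group -> population fraction."""
    counts = Counter(group for _, _, group in examples)
    n = len(examples)
    train_share = {g: c / n for g, c in counts.items()}
    # Weight reflects how under- or over-represented the group is versus the local population.
    return [local_share[g] / train_share[g] for _, _, g in examples]

# Hypothetical example: group B is 40% of the local population but only 10% of training data.
data = [("x1", 1, "A"), ("x2", 0, "A"), ("x3", 1, "A"), ("x4", 0, "A"),
        ("x5", 1, "A"), ("x6", 0, "A"), ("x7", 1, "A"), ("x8", 0, "A"),
        ("x9", 1, "A"), ("x10", 1, "B")]
weights = reweight(data, {"A": 0.6, "B": 0.4})
print(weights)  # group B example gets weight 4.0, group A examples get ~0.67
```

In practice, weights like these would typically be passed to a model’s sample-weighting mechanism during training, alongside ongoing monitoring of performance by subgroup.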

And we need to account for that as we build these tools, along with ongoing training and monitoring of the models as they’re deployed. As we’ve deployed CS Connect, we’ve had roughly 10,000 patients who have gone through the tool and about 15,000 visits. All of those patients and visits are going to help with ongoing training and enhancement of the models, which hopefully will continue to improve the accuracy and maintain the safety and soundness of the solution over time.

Editor’s Note: This is the eighth in a series of features on top voices in health IT discussing the use of artificial intelligence in healthcare. To read the first feature, on Dr. John Halamka at the Mayo Clinic, click here. To read the second interview, with Dr. Aalpen Patel at Geisinger, click here. To read the third, with Helen Waters of Meditech, click here. To read the fourth, with Sumit Rana of Epic, click here. To read the fifth, with Dr. Rebecca G. Mishuris of Mass General Brigham, click here. To read the sixth, with Dr. Melek Somai of the Froedtert & Medical College of Wisconsin Health Network, click here. And to read the seventh, with Dr. Brian Hasselfeld of Johns Hopkins Medicine, click here.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
