There’s a lot of buzz around the potential of artificial intelligence to transform care delivery. New technologies offer advanced capabilities for ensuring the right patients receive the right care at the right time – while lifting clinicians’ administrative burden.
But what do physicians and clinicians think about the power of AI to change healthcare and health outcomes now? And what are the risks associated with rapid adoption of AI systems?
We interviewed Dr. Carrie Nelson, chief medical officer at telemedicine technology and services company Amwell, to get her insight on where AI is making a difference now in healthcare – for patients and clinicians – as well as the risks of accelerated adoption of AI in care and the keys to taking a value-based approach.
Q. What do you see as some of the most exciting AI tools to emerge for clinicians?
A. Certainly, there is valid reason for excitement.
For instance, one recent survey of nurses found documentation takes up 15% of every 12-hour nursing shift. Tech-enabled activities – including the use of AI tools – hold strong potential to reduce that burden by 35%, according to the analysis.
By leveraging AI to help clinicians make the best use of their skill sets, we could reduce burnout and allow clinicians to focus more fully on those we serve. In the conversations I’ve had with physicians, many are very excited about the potential for AI to help improve work-life balance.
We’re also seeing a wave of AI innovations that empower health systems to deliver hybrid care at scale to enhance access and improve outcomes. At Spectrum Health, which sees more than 200,000 emergency department patients a year, automated, chat-based check-ins with patients after discharge help catch a change in condition earlier, so care teams can intervene quickly.
The program has achieved a 5% reduction in ED visits at a savings of $1 million, with a 90% patient satisfaction rate. Nurses are also finding their work more satisfying, since they are reaching out to the right patients at the right time.
At St. Luke’s University Health Network, use of automated digital behavioral health tools for employees with anxiety and depression has helped 71% of participants achieve clinically significant improvement.
In addition, efficiencies can be gained from AI-powered referral management, prior authorization requests and other tasks to lift the administrative burden from clinicians and staff. Automating these tasks gives clinicians more time to care for patients, allows enhanced focus on what matters most and mitigates burnout.
Q. What is the potential for AI to improve quality of care?
A. As the healthcare community gains more experience with AI-powered automated chats – especially for vulnerable populations – new use cases for improving quality of care are being identified and leveraged.
Take maternal health. Women in our country today are twice as likely to die of pregnancy complications as their mothers were, particularly if they're low-income or live in a rural area. At Northwell Health, virtual, automated companions for pregnant women have helped identify high-risk patients in 16% of interactions.
Many of these risks would not have surfaced through in-person interactions alone. This has empowered the organization to escalate cases to clinicians' attention between visits, allowing timely, specialized support for these women and families.
Meanwhile, across more than 35 specialties, Northwell has found that 69% of AI-powered interactions help close gaps in care. These successes, demonstrated across populations and conditions, are fueling rapid adoption. The more we learn about how to improve quality of care and health outcomes through automation, the more we can see the value of this approach to both common and complex conditions.
There also are certain aspects of care that could be automated within the context of a live patient visit for better short-term and long-term results. For example, it doesn’t take a physician to know that a woman over the age of 50 with an average risk of breast cancer needs a mammogram order.
Such things can be automated. Even in more complicated care scenarios, automation could be used to gather information regarding a patient’s complex family medical history or other risk factors for disease. AI could synthesize the relevant information from the patient’s medical record to help physicians bridge gaps and ensure accurate medical documentation.
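A rule like the one described above can be expressed as a small, deterministic check that runs before or during a visit. This is an illustrative sketch only – the `Patient` fields and the age-50 cutoff mirror the example in the text, not any particular EHR's data model:

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    sex: str
    breast_cancer_risk: str  # hypothetical field: "average" or "elevated"

def needs_screening_mammogram(p: Patient) -> bool:
    """Flag routine screening-mammogram eligibility for average-risk women 50+.

    Deliberately conservative: elevated-risk patients are NOT auto-flagged
    here, on the assumption they should be routed to a clinician instead.
    """
    return (
        p.sex == "female"
        and p.age >= 50
        and p.breast_cancer_risk == "average"
    )
```

In practice such a rule would draw its inputs from the medical record and queue an order for clinician sign-off rather than placing it autonomously.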
Q. What are the risks associated with accelerated adoption of AI tools in healthcare?
A. Innovation around AI is happening very rapidly, and there is absolutely a risk in moving too quickly. For instance, there’s talk that ChatGPT could be used to help physicians respond to messages from patients that are received via patient portals – but is that the right use of AI in healthcare? As challenging as that inbox is, we must pause and assess the risks before charging ahead.
We also know that longstanding healthcare system inequities and biases can be embedded in – and potentially magnified by – AI algorithms. This bias, including the type of data that has and hasn't been collected and documented in our medical records, limits the potential for AI to improve quality of care today, especially for vulnerable populations.
It’s essential we identify those gaps and work to strengthen those data sets if AI is to live up to its potential to support healthcare workers in improving care quality.
It will take more experience with AI-supported care models to discover what’s possible, what isn’t, and how to establish the right guardrails. Any margin of error in healthcare is unacceptable. While I’m optimistic, recent data shows that we still have a long way to go.
In fact, 60% of consumers say they would feel uncomfortable if a provider were to rely on AI for their care, according to a recent Pew Research Center survey.
Q. What are the keys to taking a value-based approach to AI adoption in healthcare?
A. Just as one Boston hospital is hiring an AI prompt engineer to design and develop prompts for large language models like ChatGPT, it will take a combination of intelligent discovery and human guardrails for AI innovation in healthcare to deliver on value.
The progression of medical knowledge has far outpaced our ability as clinicians to bring it all to bear in the context of caring for patients. AI can help distill knowledge from that vast literature to inform a complex diagnosis or prepare a treatment plan. I would love to see us get to a point where physicians are using AI in efficient ways to enhance diagnostic accuracy.
Diagnostic error is a major patient safety issue. Clinicians could leverage AI tools, applied against the background of their knowledge of the patient and the patient’s wishes, to optimally and accurately tailor care to the individual.
To realize the vision of what's possible, we need to take a structured approach to generative AI – one that does not rely on a lot of free-form text input. We've seen the strange answers AI can provide when we engage in free-flowing conversations with a bot and ask it to extrapolate meaning from those encounters.
A better approach at this time is to establish guardrails for AI support in the management of complex conditions by ensuring chatbots ask specific questions that generate yes-or-no answers or prompt patients to respond with a discrete data point, such as their glucose level or weight. AI algorithms can then detect when to prompt clinicians to respond based on the data that is input.
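The guardrailed pattern described above – discrete questions in, a deterministic escalation decision out – can be sketched as a simple rules layer sitting between the chatbot and the care team. The field names and glucose thresholds here are hypothetical illustrations, not Amwell's actual implementation:

```python
# Hypothetical escalation thresholds for a diabetes check-in chat.
GLUCOSE_LOW_MG_DL = 70
GLUCOSE_HIGH_MG_DL = 250

def escalate_to_clinician(responses: dict) -> bool:
    """Decide whether a structured check-in should be flagged for a clinician.

    `responses` holds only discrete answers the chatbot collected:
    yes/no strings and numeric data points – no free-form text is parsed.
    """
    # A "yes" to the new-symptoms question always escalates.
    if responses.get("new_symptoms") == "yes":
        return True

    # An out-of-range glucose reading escalates; a missing reading does not.
    glucose = responses.get("glucose_mg_dl")
    if glucose is not None and not (GLUCOSE_LOW_MG_DL <= glucose <= GLUCOSE_HIGH_MG_DL):
        return True

    return False
```

Because every input is a yes/no answer or a discrete measurement, the escalation logic stays auditable – there is no model output to second-guess between the patient's answer and the clinician's pager.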
At Amwell, we’re applying this approach across a number of disease states and patient populations, and it’s making a difference for quality of care and population health.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.