Most hospitals and health systems are still evaluating whether artificial intelligence is right for their organizations. Many others, though, are seeking to expand existing AI and machine learning implementations.
Healthcare provider organizations in either of these camps are the target audience for an educational session at an upcoming HIMSS forum, one that offers a strategic guide to AI adoption in healthcare.
Tom Hallisey is digital health strategy lead at the Healthcare Association of New York and a board member at Columbia Memorial Health. He will be speaking about the health system’s AI work at the 2023 HIMSS AI in Healthcare Forum, December 14-15 in San Diego. The panel session he will moderate is titled “A Strategic Guide to Incorporating AI into Your Healthcare Roadmap.”
Panelists include Albert Marinez, chief analytics officer at the Cleveland Clinic; Tatyana Fedotova, director, global data, platforms and partnerships, at Johnson & Johnson; and Christopher Larkin, chief technology officer at Concord Technologies.
Panelists will reveal the most critical questions to ask and decisions to be made at each phase of the AI journey, from build versus buy and tool selection to ensuring AI investments are targeted for maximum impact, and much more.
We sat down with Hallisey to get a sneak preview of the session and a better idea of what it takes to jump into healthcare AI.
Q. What is an example of a key question to ask during the beginning phase of a healthcare AI journey? And why is it important?
A. As is usually the case, the most important question to ask is: What problem are we trying to solve? Generative AI tools are capable of producing so many new and potentially valuable results, but we must have a specific, measurable goal in mind in order to show value and scale the work.
Once we have a pilot idea, then comes the AI-specific question of whether to build, buy or tweak an existing large language model tool with internal data and rules. This decision will be based on internal capabilities, privacy and security concerns, scalability and data/bias considerations.
The projects most likely to fail are those looking to find a use for really cool new tools, and there are no tools cooler than AI right now. A careful, measured approach is needed to determine what type of value we are looking for, which tools we can trust, and whether we can provide the resources to implement them successfully.
Q. What is one way of ensuring AI investments are targeted for maximum impact?
A. To ensure the best impact from AI investments, start a committee to collect and prioritize ideas, guide resource selections, review the results of pilots and assist with scaling. Be sure to include a diverse group on the committee.
The business units and end users know best where problems and inefficiencies lie and can guide planning for best impact; their buy-in will be essential in achieving success. If a project is deemed too risky for a given area, as this technology is still very new and not well understood, it is not likely to succeed. Better to start elsewhere while educating staff on the abilities and potential problems with AI tools.
However, it’s also important to have top leaders involved in the selection process to ensure decisions are based on current organizational strategies and the most significant concerns. There are many use cases for AI tools that might add some value but could divert important resources from attacking the most pressing issues of the day.
We also have to select projects and tools that are mature enough for proper integration into existing or updated workflows. One-off or niche projects aren’t going to bring the big results. Look at ChatGPT web use: even that has declined from its peak. The tools have to be integrated into operations to change the workflow and bring real value.
Q. What is one tip to ensure long-term success with a healthcare AI investment?
A. AI tools are often so new that long-term success may be difficult to accomplish. As LLMs or clinical algorithms continue to be used, data is updated, demographics change, and results can vary.
A recent study even pointed to how algorithms can build in their own obsolescence, as the interventions they prompt change the underlying data they were built on, and therefore their ability to predict.
Plans must be put in place to continually measure the results of each AI tool and intervention. What works at one site or in one population may not work in another, and as I noted, what works today may not work next year. New AI regulations stemming from the White House executive order are looking to address these concerns, as are the ONC's recently proposed rules on algorithms and EHR-integrated clinical decision support.
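To make the idea of continual measurement concrete, here is a minimal, hypothetical sketch of the kind of ongoing check Hallisey describes: recomputing a performance metric on each new batch of data and flagging when it drifts below the level established at validation. The synthetic data, baseline figure and alert threshold are illustrative assumptions only, not a reference to any specific tool or health system.

```python
# Illustrative sketch only: ongoing performance monitoring for a deployed
# clinical prediction model. All values below are hypothetical placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

BASELINE_AUC = 0.80   # performance measured at validation time (assumed)
ALERT_DROP = 0.05     # flag for review if AUC falls this far below baseline

def monthly_batches(n_months=6, n_patients=500):
    """Yield synthetic (month, outcomes, risk scores), with the predictive
    signal weakening over time to mimic drift in the underlying population."""
    for month in range(n_months):
        y = rng.integers(0, 2, size=n_patients)
        signal = max(0.0, 1.5 - 0.3 * month)  # signal decays each month
        scores = 1 / (1 + np.exp(-(signal * (y - 0.5)
                                   + rng.normal(0, 1, n_patients))))
        yield month + 1, y, scores

for month, y_true, y_score in monthly_batches():
    auc = roc_auc_score(y_true, y_score)
    status = "OK" if auc >= BASELINE_AUC - ALERT_DROP else "REVIEW: possible drift"
    print(f"Month {month}: AUC={auc:.3f}  [{status}]")
```

In practice the metric, review cadence and thresholds would be set per tool and per site, since, as noted above, what works in one population may not work in another.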
Attend this session at the HIMSS AI in Healthcare Forum taking place on December 14-15, 2023, in San Diego, California. Learn more and register.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.