Organisational readiness, prevailing regulations, and user understanding are among the factors that shape the success of AI deployment in health systems.
During the HIMSS24 APAC panel session, “AI Horizons: Exploring the Future of Innovations,” Professor In-Young Choi, vice CIO of Catholic Medical Centre (CMC), Dr Shankar Sridharan, chief clinical information officer at Great Ormond Street Hospital, and Dr Ngai Tseung Cheung, head of Information Technology and Health Informatics at the Hospital Authority, Hong Kong (HAHK), shared outcomes, lessons, and expectations for AI implementation in healthcare. Professor Kee Yuan Ngiam, head of the AI office at the National University Health System, moderated the panel.
The realities of AI implementation
AI in healthcare has been touted as an all-in-one solution for digital needs, but Dr Cheung thinks otherwise.
“AI is an amazing technology and sometimes appears magical, but it really isn’t. AI is a tool no different from any other technology we can put in play.”
He shared questions to consider when applying AI: “How does AI get put into a workflow at scale so that it can deliver positive outcomes? What is the impact [of AI on the organisation]? How does it affect the users? Does the output give you something actionable?”
Dr Shankar of Great Ormond Street Hospital shared his own preliminary perspective. “The medicinal asset [for implementing AI] is enthusiasm, which can be quite infectious. But we need to address its purpose… When we do a benefits assessment on safety, security and technology, is there a clinical, operational or patient experience benefit?”
Prof Choi, however, held a different view, shaped by the strict regulatory environment she works in.
“We do not have government-based control for EMR contracts. Each hospital has its own [exclusive] data. It is not easy to share data with other hospitals, and it is not easy for AI companies to access hospital data. If the [AI] algorithm uses cloud computing, it will be outside Korea, and the law does not allow our medical data to enter foreign countries,” she elaborated.
Overcoming barriers to adoption
Given the multi-layered nature of AI deployment, health systems may need additional reassurance before putting tools into use. Dr Cheung shared HAHK’s proactive stance of conducting internal validation of AI tools.
In contrast, Dr Shankar offered reassurance on data safety through vetted partnerships. “We have a trusted research department and commercial relationships with pharma and industry that airlock [data]. Then, we communicate to our CEO that our data is safe and that the [AI] tool is useful and valuable without losing data due to risks.”
Dr Cheung also noted that regular demonstrations can ease pessimism among health staff. “Good thing is we have 43 hospitals. If it (an AI test) does not work out, it is fine. Then we can show that we tried… If we can show them that it works and the hesitant [staff and doctors] switch [to optimism], that will be good… We have to show them that the AI is okay [for use].”
Realising future AI potential
Despite adoption barriers, CMC plans to develop software that could help pathologists guide treatment for lung cancer.
“One patient can have many subtypes of cancer. For example, an individual can have a 20% [papillary] subtype. Humans can’t count that [figure], but AI can. It can help clinicians provide better treatment to the patients,” Prof Choi explained.
“We focus on one specific disease for now. But after generative AI comes, maybe AI can see [relevant health] details in a more comprehensive way.”
Meanwhile, Dr Cheung reflected on the human impact of future AI.
“Today’s AI is not sentient. It is not at the level of a human being. However, there is a concept of artificial general intelligence that is touted to be superhuman. If that [technology] is better than humans in every way, there will be a massive change to medicine and humanity.”
Dr Shankar believes that there is a long way to go in realising AI’s full potential. “We are a bit like cavemen [with AI]. Our limitations are our own, as are our use cases.”