AI can be deployed effectively in healthcare settings without overstepping human boundaries.
“When it comes to AI, I still apply principles of the physician’s Hippocratic Oath but repurposed – AI shall do no harm, we will maintain equity and confidentiality, and above all, a human must be kept in the loop,” shared Dr Shankar Sridharan, the chief clinical information officer at Great Ormond Street Hospital, United Kingdom. He spoke in the plenary session, “Deploying Generative AI in Clinical Environments,” at HIMSS24 APAC.
“Perfection [in AI] is not necessarily great; it abdicates the responsibility of the user. AI is already faster, but it can be kinder and safer – kinder to pay attention to the patients that we [doctors] check, and safer when a human is always kept in the loop to verify the AI.”
Dr Shankar gave a live demonstration of using Ambient AI to auto-generate clinical notes, conversing with a mock patient about his symptoms and medical history.
“The transcripts come immediately,” he remarked as he displayed the AI solution’s output. The platform automatically segmented the transcript into clinical notes that included the patient’s profile, allergies, a symptom summary, observations, and plans for follow-up with the doctor.
Despite the benefits, Dr Shankar stressed that hospitals should focus more on governance and digital infrastructure for optimal AI deployment.
“AI protects cognitive loads, but it needs to be done [within a governance framework]. We need to look at word error rates and hallucination rates. The responsibility falls on hospitals to have systems in place. This can include AI contracts that have an evolving technology test. Such systems should not just be for Ambient AI, but for any AI technology.”
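Word error rate, one of the metrics Dr Shankar cited, is conventionally computed as the word-level edit distance between a reference transcript and the AI-generated transcript, divided by the number of words in the reference. A minimal sketch of that calculation (the function name and sample phrases are illustrative, not drawn from any vendor's tooling):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via word-level Levenshtein distance."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four: WER = 0.25
print(word_error_rate("patient denies chest pain",
                      "patient denies chest pains"))
```

Hallucination rate has no comparably standard formula; in practice it is usually measured by human review of how often the output asserts content absent from the source conversation.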
Dr Shankar also expressed interest in extending AI-assisted documentation beyond clinicians to nurses and other healthcare staff.