
How to apply responsible artificial intelligence in healthcare


Responsible artificial intelligence is essential for any hospital or health system implementing AI technology. It is crucial that AI, as complex and consequential as it is, be trustworthy.

Anand Rao is a service professor at Carnegie Mellon University’s Heinz College. He is an expert in responsible AI, economics of AI and generative AI. He has focused on innovation, business and societal adoption of data, analytics and artificial intelligence over his 35-year consulting and academic career.

Previously, Rao was the global artificial intelligence leader at consulting giant PwC, a partner in its data, analytics and AI practice, and the innovation lead for AI in PwC’s products and technology segment.

We interviewed Rao to discuss responsible AI, how responsible AI should be applied in healthcare, how to combine responsible AI specifically with generative AI, and what society must understand about adopting responsible AI.

Q. Please define what responsible AI is, from your point of view.

A. Responsible AI is the research, design, development and deployment of AI that is safe, secure, privacy-preserving or privacy-enhancing, transparent, accountable, interpretable, explainable, bias-aware and fair. It can be thought of as three successive levels of AI:

  1. Safe and secure AI. This is the minimum bar, where “AI does no harm.” It includes not causing physical or emotional harm, being factually accurate where needed, and being secure against adversarial attacks.
  2. Trustworthy AI. This is the next level, where “AI does good.” It includes AI that is accountable, interpretable and explainable, and it covers both building AI systems and governing them.
  3. Beneficial AI. This is the highest level, where “AI does good for all.” It includes AI that is bias-aware and built to be fair across one or more dimensions of fairness; a minimal example of checking one such dimension is sketched after this list.
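To make the third level concrete, here is a minimal sketch, not from the interview, of how one common dimension of fairness, demographic parity, might be checked for a binary classifier. The group labels, data and the 0.1 tolerance are illustrative assumptions, not a clinical or regulatory standard.

```python
# Minimal sketch (illustrative): measuring one dimension of fairness,
# demographic parity, for a binary classifier's outputs.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (largest gap in positive-prediction rates across groups, rates per group)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical predictions for patients tagged with a hypothetical group attribute.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates)                            # positive-prediction rate per group
print("within tolerance:", gap <= 0.1)  # 0.1 is an assumed, agreed-upon tolerance
```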

Q. How should responsible AI be applied in healthcare? Healthcare is a very different industry compared with others. Lives are constantly at stake.

A. Given the high stakes in healthcare, responsible AI must be applied primarily to augment human decision making, rather than replace human tasks or decisions. “Human-in-the-loop” must be an essential characteristic of most, if not all, AI healthcare deployments.
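As an illustration of the human-in-the-loop idea, here is a minimal sketch, an editorial example rather than a real clinical system, in which AI output is treated only as a suggestion and anything below a confidence threshold is escalated for full clinician review. The threshold, names and example diagnoses are hypothetical.

```python
# Minimal human-in-the-loop sketch (illustrative, not a real clinical system):
# AI output is a suggestion; a clinician signs off before anything is acted on.
from dataclasses import dataclass

@dataclass
class Recommendation:
    diagnosis: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

REVIEW_THRESHOLD = 0.90  # hypothetical policy: anything below this is escalated

def route(rec: Recommendation) -> str:
    # Even high-confidence output is only suggested; a human confirms either way.
    if rec.confidence >= REVIEW_THRESHOLD:
        return f"Suggest '{rec.diagnosis}' to the clinician for confirmation"
    return f"Escalate '{rec.diagnosis}' for full clinician review"

print(route(Recommendation("community-acquired pneumonia", 0.95)))
print(route(Recommendation("pulmonary embolism", 0.62)))
```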

In addition, AI healthcare systems must comply with existing privacy laws and be thoroughly tested, evaluated, verified and validated using the latest techniques before being deployed at scale.
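For a sense of what the simplest slice of pre-deployment evaluation can look like, here is a hedged sketch that computes sensitivity and specificity on a hypothetical held-out set. Real verification and validation would go far beyond this single check.

```python
# Minimal pre-deployment evaluation sketch (illustrative data and thresholds).
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity (true-negative rate)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical held-out labels vs. model predictions
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # gate deployment on these
```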

Q. Generative AI is one of your specialties. How do you combine responsible AI specifically with generative AI?

A. Generative AI brings more powerful and complex technology that can potentially cause more harm than traditional AI. It can produce wrong results in a confident tone.

It can also produce harmful and toxic language, and its outputs are more complex to explain and reason about. As a result, responsible AI for generative AI must include more extensive governance and oversight, as well as rigorous testing under different contexts.
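Here is a minimal sketch of the kind of context-varied testing described above: the same clinical question is posed in different contexts and each answer is screened for overconfident phrasing. The ask_model function is a hypothetical stand-in for a real model call, and the red-flag list is illustrative, not a vetted safety lexicon.

```python
# Minimal sketch of context-varied testing for a generative model
# (ask_model is a hypothetical stand-in for a real generative-model API call).
CONTEXTS = [
    "As a patient with no medical training, {q}",
    "As an emergency physician, {q}",
    "My child is 4 years old. {q}",
]
QUESTION = "what dose of ibuprofen is safe?"
RED_FLAGS = ["definitely", "guaranteed", "no need to see a doctor"]

def ask_model(prompt: str) -> str:
    # Stand-in: always returns a canned answer here.
    return "Dosing depends on age and weight; consult a clinician or pharmacist."

def screen(answer: str) -> list:
    """Return any overconfident phrases found in the answer."""
    return [flag for flag in RED_FLAGS if flag in answer.lower()]

for template in CONTEXTS:
    prompt = template.format(q=QUESTION)
    flags = screen(ask_model(prompt))
    print(f"{prompt!r} -> flagged: {flags or 'none'}")
```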

Q. One of your areas of focus is societal adoption of artificial intelligence. What must society understand about adopting responsible AI, especially when people go to see a doctor?

A. With the widespread use of generative AI, the public is increasingly turning to it for medical advice. Because it is difficult to ascertain when generative AI is correct and when it is wrong, the consequences could be disastrous for patients or caregivers who do not check with their clinicians.

Educating the public and caregivers about the potential negative consequences of generative AI is essential to ensuring its responsible use.

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
