
Responsible AI governance is needed now, says UNC Health chief analytics officer


In its December 2022 AI & Analytics Study of 250 health system leaders, the Advisory Board described how artificial intelligence had shifted from transformational in nature (as reported in its 2018 study) to merely adding incremental value to healthcare.

The incremental value in healthcare came in the form of narrow AI focused on solving individual problems: early detection of sepsis, falls risk, clinical documentation improvement and identifying the best candidates for care, among others. Healthcare historically has lagged other industries in technology adoption, and as a result, the Advisory Board concluded, the 2018 AI hype fizzled into incremental, rather than transformational, value.

By January 2023, however, the buzz around AI had been reinvigorated. OpenAI released ChatGPT to the world in November 2022, and the ease of access to and application of so-called large language models and natural language processing has brought AI to the point of revolutionizing how everything, including healthcare, operates.

“However, as AI becomes more computationally complex and integrated into new and existing technology, the need for thoughtful and responsible AI governance becomes an imperative to ensure human-centered AI adoption that is ethical and trustworthy,” said Rachini Ahmadi-Moosavi, chief analytics officer at UNC Health, a health system based in Chapel Hill, North Carolina.

Healthcare IT News sat down with Ahmadi-Moosavi to get her expert advice on what health IT leaders need to know – today – about responsible AI governance.

Q. How would you define responsible AI governance? Generally speaking, what is it and what problems or potential problems is it trying to solve?

A. UNC Health has been developing homegrown AI capabilities since 2016. In the early stages of building this capability, we self-governed AI within the solution teams and the analytics organization.

However, as artificial intelligence pushes our organization to move faster and further into AI technology development and partnerships, the need to fully vet AI capabilities grows. Microsoft CEO Satya Nadella was recently quoted as saying of the application of AI, “what we should do is maximize the benefits and mitigate the challenges.” Responsible AI governance aims to accomplish both goals.

By leveraging the expertise of leaders across our health system and outside partners, like Microsoft, UNC Health developed its own Responsible AI Framework. The framework defines four pillars of AI governance: fairness, transparency, accountability and trustworthiness.

Responsible AI governance ensures both built and bought AI technologies meet certain expectations: evaluations for bias, moral and ethical appropriateness, visibility beyond “the black box,” accountability for continued investment in models, model reliability and tuning, education about safe and effective adoption of the technology, and more.
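To make one of these expectations concrete, here is a minimal sketch (not UNC Health's actual tooling) of the kind of subgroup bias check a governance review might run against a binary risk model's output. The column names, groups and threshold are hypothetical.

```python
# A minimal, hypothetical subgroup bias check for a binary risk model.
# Assumed columns: "group" (demographic attribute under review),
# "y_true" (observed outcome), "y_pred" (model's binary recommendation).
import pandas as pd

def subgroup_report(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group flag rate and sensitivity."""
    rows = {}
    for name, g in df.groupby("group"):
        flagged = g["y_pred"] == 1
        positives = g["y_true"] == 1
        rows[name] = {
            "n": len(g),
            "flag_rate": float(flagged.mean()),
            # Sensitivity: share of true positives the model actually flags.
            "sensitivity": float((flagged & positives).sum() / max(int(positives.sum()), 1)),
        }
    return pd.DataFrame.from_dict(rows, orient="index")

def parity_gap(report: pd.DataFrame) -> float:
    """Spread of flag rates across groups (demographic-parity gap)."""
    return float(report["flag_rate"].max() - report["flag_rate"].min())

df = pd.DataFrame({
    "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "y_true": [1, 0, 1, 0, 1, 1, 0, 0],
    "y_pred": [1, 0, 1, 0, 0, 1, 0, 0],
})
report = subgroup_report(df)
print(report)
if parity_gap(report) > 0.1:  # the 0.1 threshold is an illustrative policy choice
    print("Parity gap exceeds threshold; flag model for governance review")
```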

In addition to a well-defined, human-centered framework, UNC Health is standing up an AI & Automation Advisory (AAA) workgroup to ensure that a multidisciplinary group of experts, clinicians and leaders across the health system will continue to vet these transformational capabilities.

Mitigating the risks and maximizing the benefits of AI adoption requires scrutinizing the technology against common metrics before implementation and throughout the lifecycle of its deployment. Developing AI governance KPIs to track these key risks and benefits enables the organization and the AAA workgroup to continuously learn, monitor and improve the technology.
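As an illustration of what one such lifecycle KPI could look like, the sketch below computes the Population Stability Index (PSI), a common input-drift metric, for a single model feature. The bin count, sample data and alert threshold are illustrative assumptions, not UNC Health's standards.

```python
# Minimal sketch of one lifecycle KPI a governance workgroup might track:
# the Population Stability Index (PSI), which measures how far a model
# input's current distribution has drifted from the distribution the
# model was validated on.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a current sample of one feature."""
    # Bin edges come from the baseline so both periods are compared
    # on the same scale.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)
    eps = 1e-6  # avoids log(0) when a bin is empty
    base_pct, curr_pct = base_pct + eps, curr_pct + eps
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
validated = rng.normal(50, 10, 5_000)  # feature at validation time
observed = rng.normal(54, 12, 5_000)   # same feature in production
score = psi(validated, observed)
print(f"PSI = {score:.3f}")
if score > 0.2:  # a conventional "significant shift" rule of thumb
    print("Input drift detected; route model for re-review")
```

Fixing the bin edges from the validation-time sample means every later measurement is compared against the same baseline, which is what allows the KPI to be tracked consistently across the deployment lifecycle.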

Q. More specifically, what kind of responsible AI governance is needed in the operational arena of healthcare?

A. AI can be leveraged in healthcare operations to improve patient call center experiences, automate appointment scheduling, drive efficiency in hospital throughput and deliver effective patient communication.

Responsible AI governance holds the technology accountable for safe, enhanced healthcare operations by ensuring bias is not introduced in risk models and generative AI solutions. This further enables fairness and consistency in recommendations impacting patients and teammates, and it builds trust among the people who monitor or interact with the technology.

Q. What kind of responsible AI governance is needed in the clinical area of healthcare?

A. AI may be used in clinical care to summarize patient encounters in pre-visit preparations, alert clinicians to patient risk, draft nursing and provider documentation based on ambient listening devices and information from the electronic health record, spot anomalies in imaging scans, and connect risk of disease to testing, treatment and outcomes.

The risk is much greater in the clinical space due to the direct impact on patient health and wellbeing. Adoption of AI in this space must avoid previous pitfalls related to clinical bias (racial bias in transplant recommendations, gender bias in screening for liver disease, skin color bias in diagnosis, etc.); ensure that ethics are a cornerstone of care delivery; vet model inputs and reliability; and thoroughly test recommendations.
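As one illustration of the vetting described above, the following sketch checks whether a risk model remains calibrated within each patient subgroup rather than only in aggregate, which is one way the clinical bias pitfalls listed earlier can slip past overall metrics. The column names, tolerance and data are hypothetical.

```python
# Minimal sketch of one pre-deployment test a clinical AI review might
# run: checking that a risk model is calibrated within each patient
# subgroup, not just overall. Columns and tolerance are placeholders.
import pandas as pd

def calibration_by_group(df: pd.DataFrame, tol: float = 0.05) -> pd.DataFrame:
    """Compare mean predicted risk to observed event rate per subgroup."""
    out = []
    for name, g in df.groupby("subgroup"):
        predicted = g["risk_score"].mean()  # model's average predicted risk
        observed = g["event"].mean()        # actual outcome rate
        out.append({
            "subgroup": name,
            "n": len(g),
            "mean_predicted": round(float(predicted), 3),
            "observed_rate": round(float(observed), 3),
            "within_tolerance": abs(predicted - observed) <= tol,
        })
    return pd.DataFrame(out)

df = pd.DataFrame({
    "subgroup":   ["A"] * 4 + ["B"] * 4,
    "risk_score": [0.2, 0.3, 0.25, 0.25, 0.6, 0.55, 0.5, 0.75],
    "event":      [0, 0, 1, 0, 1, 1, 0, 1],
})
# Subgroup B's observed rate exceeds its predicted risk, so the model
# would be flagged for review even though it looks reasonable overall.
print(calibration_by_group(df))
```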

Clinicians will need to be engaged in the governance of clinical AI to earn their trust in the technology. Additionally, when AI is aimed at improving health outcomes, reducing burnout and easing staffing challenges, monitoring and evaluating its impact on those desired outcomes will be key to effective long-term adoption.

Q. And what kind of responsible AI governance is needed in the financial/administrative sphere of healthcare?

A. In the financial and administrative realm, AI may help bend the cost curve, increase timely revenue attainment and improve efficiencies.

Many processes in revenue cycle management – streamlining prior authorizations, connecting self-pay patients to philanthropic aid services, fine-tuning registration, providing patient estimates, preventing and adjudicating denials, and processing payment posting and write-offs – are touted as standing to benefit greatly from AI and automation. Similarly, supply chain, finance, human resources and other administrative areas likely can benefit from AI.

The accuracy of information surfaced and used in these processes is critical. Ensuring AI is well understood, in terms of what data it consumes and how its recommendations are presented, also is necessary in this space. Given the sensitive data in this sphere, tracking anomalies and reporting findings through the right channels is essential.
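As a rough illustration of such anomaly tracking, this sketch applies a robust z-score to a hypothetical daily write-off total to decide whether it should be routed for human review. The 30-day window, threshold and dollar figures are illustrative, not drawn from any actual system.

```python
# Minimal sketch of the kind of anomaly tracking described above,
# applied to a hypothetical daily write-off total. A robust z-score
# against recent history flags days that warrant human review.
import numpy as np

def flag_anomaly(history: np.ndarray, today: float, threshold: float = 3.5) -> bool:
    """Flag today's value if it deviates sharply from the recent median."""
    median = np.median(history)
    mad = np.median(np.abs(history - median))  # median absolute deviation
    if mad == 0:
        return today != median
    robust_z = 0.6745 * (today - median) / mad
    return abs(robust_z) > threshold

rng = np.random.default_rng(1)
last_30_days = rng.normal(100_000, 8_000, 30)  # typical daily write-offs
print(flag_anomaly(last_30_days, today=104_000))  # ordinary day -> False
print(flag_anomaly(last_30_days, today=210_000))  # spike -> route for review
```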

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
