Editor’s note: This is part two of a two-part interview. To read part one, click here.
Dennis Chornenky, chief artificial intelligence adviser at UC Davis Health, knows what it takes to be a chief AI officer in healthcare – he’s been one twice.
That’s why we’ve sat down with him for this two-part interview – to share lessons he’s learned about this new C-suite role in healthcare.
Today Chornenky, who has two decades of IT leadership experience and also serves as CEO of Domelabs AI, discusses where and how UC Davis Health is making most use of artificial intelligence.
He describes some of the many AI projects he’s working on at the California health system – and offers tips for other executives who might be looking to become a chief AI officer for a hospital or health system.
Q. Can you talk at a high level about where and how UC Davis Health is using artificial intelligence?
A. I am fortunate to have the opportunity to work with UC Davis Health and the great leadership there. I think there’s great vision, very innovative, amazing clinicians and staff, just a great team all around.
We’re tracking more than 80 AI applications across the health system, and it’s quite a diverse range. A lot of this is also coming from individual research grants from the NIH and others that some of our researchers and clinicians have been engaged with, some really interesting applications.
And it’s a variety of applications across care delivery, patient engagement, patient management, operations and administration. We’ve been looking a lot more at the administrative side recently, as well. We recently held a UC-wide conference at UCLA focused on how we can think about using AI more on the administrative side of all the different UC campuses and the academic medical centers across UC.
I don’t really want to get into any particular vendors, but it’s been great to see a fairly rapid adoption of AI. There’s still, I think, a long way to go.
There are so many capabilities. As I mentioned in part one of our interview, AI is evolving really quickly. A lot of the role now is thinking about how we position for things that are going to be really relevant, really powerful, in just the next one or two years.
Sam Altman, CEO of OpenAI, which makes ChatGPT, recently said he thinks we may have something AGI-like or resembling AGI [artificial general intelligence] within a thousand days. So, I think to the extent that something can mimic those capabilities, whether we want to think of it as AGI or not, it’s going to be very powerful. [Editor’s note: AGI is software with intelligence similar to that of a human being and the ability to self-teach.]
We’re talking about cognition that’s orders of magnitude more powerful than what we have, even in the most advanced models that have been released so far. So how organizations think about positioning for that is a really important dimension, on both the governance and adoption sides.
Q. More specifically, please describe and discuss just one particular AI project you are proud of that is working well for UC Davis Health and some outcomes you are seeing. How did you oversee this project?
A. I don’t individually oversee AI projects. I’m a couple of steps outside of that, really looking more at the strategic governance levels, ensuring safety and broader directionality for innovation and adoption. But we certainly, as I mentioned, track different projects and encourage them and help support them with various resources in various ways.
One that I can mention that’s a really good one is the adoption of a technology we’ve been using to help us identify stroke patients and prioritize their care. This has been really helpful. The vendor we’re working with also helps to share some of that information across other academic medical centers and health systems, creating more efficiencies and better patient journeys for patients who may also receive care at other organizations.
And it’s really improved patient outcomes in the space. The ability to identify stroke more quickly makes a huge difference in what that patient outcome is. So that’s a project we’re very proud of.
Q. What are some tips you would offer to other IT executives looking to become a chief AI officer for a hospital or health system?
A. That’s a really interesting question, and I get it a lot from colleagues and folks who have watched my journey and are interested in doing something similar. A lot of folks have really great backgrounds, and so they’re thinking about how to potentially advance into that space. I’ll say again, at a high level, what I mentioned in part one: I think you really have to think about the different dimensions of skill sets that are going to be required to be successful in this role in the future.
So, understanding policy, business strategy, technology, what it can and can’t do, and having domain expertise for whatever domain it is you’re going into. If you feel like you’ve got a couple of those, but are maybe lacking a little bit in some of the other areas, I would definitely encourage folks to go deeper into those other areas and broaden their capabilities overall.
Because, again, AI is a multidimensional technology, and a multidimensional capability requires, I think, multidimensional leadership. And AI is evolving so quickly that governance, even though it is evolving too, is lagging way behind. It’s very complex.
And this is what I call the AI governance gap – where you have technologies that are evolving much more quickly than governance can catch up with.
And you have very limited internal expertise, particularly in regulated sectors like healthcare and government. It becomes really challenging for those organizations to adopt AI quickly as it comes out, especially if they don’t have guardrails in place. So, we’ve seen a lot of memos across academic medical centers and other organizations coming out over the last year saying, please don’t use ChatGPT until there is a clear, established policy for how you can use it.
Now, some folks go ahead and use it anyway. It’s not something that organizations can always control. It’s certainly better to have those policies in place ahead of time, and to understand what types of applications and activities, and what potential risk impacts or threat vectors, you’re likely to see.
I think cybersecurity is probably another one I should mention for folks interested in this role. Cybersecurity is becoming really critical, especially in healthcare. A lot of threat actors view healthcare as somewhat of a soft target that’s rich with very valuable data that can be exploited in a lot of different ways, anything from ransomware to amplifying insider threats with additional data.
So, I think understanding the intersection of AI and cybersecurity is very important, as well.
I recommend folks get educated about these different dimensions, develop as many skills and as much understanding as they can in those areas, read the news, do their best to keep up, and partner with good people.
It’s difficult for anyone to be a deep expert in every one of these areas. So it’s really good to partner and to have good communities with peer-to-peer collaboration among executives. That way, if you do end up in a leadership role in AI in your organization, you have the skill sets and the broad perspective necessary to help the organization bridge that AI governance gap.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.