It will probably not surprise you that Chief AI Officers are fast becoming a common sight in C-suites across healthcare. Their job descriptions and the skills they must possess to succeed in this new role are just as complex as the artificial intelligence and machine learning technologies they oversee.
Some provider organizations are taking people with deep experience in machine learning and data science and making them CAIOs – or bolting those letters onto their existing IT titles. But that might not be the best approach. Do these technology professionals know healthcare? Regulations? Business strategy? Governance?
This is the first article in Healthcare IT News’ new series: Chief AI Officers in Healthcare.
Today we’re speaking with Dennis Chornenky, chief artificial intelligence adviser at UC Davis Health and CEO of Domelabs AI, which provides AI governance advisory and systems to healthcare and national security sectors and manages the Valid AI program.
Chornenky is an executive with more than 20 years of leadership and business strategy experience at the intersection of healthcare and advanced technology, with a focus on AI strategy and governance. He has held senior roles at the White House, UnitedHealth Group and Morgan Stanley.
Chief AI adviser at UC Davis Health is the health system's current equivalent of the chief AI officer. Chornenky carries the "adviser" title because he did not wish to be in the role full time, even though he is fully onboarded as an executive. He works primarily with the CEO, CIO and chief strategy officer.
Here, Chornenky discusses what UC Davis Health was looking for in its first chief AI officer, who that executive would report to, what in his background made him a good fit for the role, what his daily work looks like – and the skills other executives looking to become a chief AI officer should aim to possess.
Q. How did UC Davis Health approach you to become its chief AI officer? What were they looking for and who would you report to?
A. Domelabs AI had a good relationship with some of the folks across UC and with UC Davis Health. And they were coming to a point where their understanding of AI governance and the need for it was maturing.
They had put together a pretty good analytics oversight process as part of a broader health data oversight committee that had previously been mandated by the UC Office of the President – so they had expanded that committee's work into the analytics realm. And they were looking to accelerate their ability to adopt AI technologies more efficiently, perhaps more quickly, but ensuring safety and not sacrificing on those kinds of issues while continuing to expand their governance process.
There also was some interest in potentially building out a collaborative with health systems and academic medical centers designed to help advance the responsible adoption of generative AI technologies and AI governance best practices. And that is what eventually turned into what is now called Valid AI, launched last year.
The idea was they wanted a full-time chief AI officer to help them with those initiatives. I had just come out of a full-time role with a larger organization and was looking to do something a bit more independently and to start building out a team and a business to provide some of these kinds of services – to help meet the kinds of needs I just described, perhaps more broadly for more health systems and other organizations.
So eventually, the University of California ended up putting out an RFP that Domelabs AI applied for, and thankfully was able to get through. It was a blind review, and a number of others applied. So, we've been supporting UC Davis Health, and some initiatives across UC broadly as well, since then.
Q. This is your second post as a chief AI officer. What in your background makes you a good fit to be a chief AI officer? And what skills should anyone looking to become a chief AI officer have?
A. It’s a great question, and one that we see being discussed quite a bit today. More and more, organizations are thinking about this role, bringing people into it, and resourcing these roles and offices. For me, it’s a combination of things.
My background and my interests happen to be a really strong fit for this role, just organically as it developed. I’ve had a strong interest for a long time in technology policy, AI policy and regulation – some of the more complex issues around AI fairness, bias and mathematical tradeoffs, and how to communicate these things to business leaders and policy leaders in ways they can understand.
I’ve also had a strong interest in advanced technologies, machine learning, AI, data science. I have spent a lot of time in academic environments, doing work and research and intersecting with industry on a lot of projects in the space. And I’ve also spent a lot of time around business strategy. I’ve had a previous career in finance. I was an asset manager and an investment banker in various roles with some of the large investment banks.
I’ve had a couple of startups, so I’ve been around the innovation and business space. I feel like all of these areas of experience are really important for a chief AI officer, combined with one additional one: domain expertise. I spent a lot of my time and work in the healthcare space around healthcare information technology.
Having that domain expertise in healthcare made a big difference.
So, I was coming out of the White House, completing my work as a senior adviser and presidential innovation fellow working on AI policy and also pandemic response. I was trained as an epidemiologist and had some background in telehealth as well. As I was coming out of that role, there was an opportunity to work with UnitedHealth Group, and they had never had an AI officer before.
So, this was the first time the role had been created there. We initially had some conversations about what would be a good fit for some of the work I could do, and it organically came about that they understood this was an important role.
I started doing that work and helped stand up a large governance structure and manage a portfolio of AI serving a lot of patients across their clinical and business environments. That combination of skill sets in AI is probably relatively rare right now.
A lot of organizations are taking people with a lot of experience in machine learning and data science, maybe PhDs in those spaces, and making them the chief AI officers. That can be a bit of a mistake because artificial intelligence is a multidimensional problem that really covers so many other areas, including this rapidly expanding regulatory environment.
It’s really important for chief AI officers to have a strong sense of what the AI policy and regulatory environment looks like, how that’s evolving, and what the implications are for their organizations. If I could summarize the skill set, AI policy and regulation is one really important area. Business strategy is another important one.
So, you can translate a lot of these more complex concepts into an organizational strategy, making sure AI investments are aligned with the broader organizational mission and strategy. An understanding of technology is important. It doesn’t have to be a PhD in data science, but a strong sense of what these technologies can do and what they can’t do is just as important to help ensure an organization is correctly thinking about the capabilities they want to pursue.
The fourth area, as I mentioned, is domain expertise. Really knowing your domain and how AI intersects with all the different aspects of that domain. Whether your domain is healthcare or government or finance, especially in regulated sectors, I think it’s critical to try to get folks with as many of those capabilities as possible.
As an example of things in the regulatory environment, we had the AI executive order come out last year in October, signed by President Biden, and then some additional guidance from the Office of Management and Budget came out, as usually happens after an executive order, that now requires all federal agencies to maintain an AI inventory, to stand up an AI governance board, and to have a chief AI officer.
So, what a lot of federal agencies have done – if it’s been difficult for them to really wrap their minds around what this new role could look like – is take existing folks in senior technology roles, maybe a chief data officer, chief technology officer or chief information officer, and add the AI title to them.
So now someone becomes the chief technology and AI officer. I think it’s a good step, at least in the interim, because what a number of these agencies have also done is then open up roles for full-time standalone chief AI officers they’re currently in the process of interviewing for. I think it’s an evolving role in how it’s defined and how organizations are thinking about it. But it is a very multidimensional role, and it’s very important for organizations to keep that in mind.
Q. Please describe the AI part of your job at UC Davis Health. In broad terms, what is expected of you? And in more specific terms, what is a typical day for you like?
A. Organizations have been approaching this role a bit differently here and there. A lot of it depends on what is already in place at an organization. As I mentioned earlier, at UC Davis Health there already was a really great foundation for AI oversight and some really smart folks working on those topics. At organizations with less maturity in that area, a chief AI officer may end up doing a lot more to build out even the most basic foundations for AI governance and AI oversight.
At UC Davis Health, there was a great process already that was fairly robust. I ended up focusing more on some of the strategic aspects. We built out an AI strategy.
We expanded on an AI roadmap, which helps an organization identify areas it wants to target for investment and what types of AI capability it wants to build out over some phased sequence of time, be it 12 months, 18 months, 36 months, whatever time period an organization wants to think about. And I also think a lot about education for different areas of the organization.
I’ve had a lot of inbound requests since I started in the role from different groups, some in the clinical space, from the emergency department or cardiology or oncology or different areas where folks want to learn more about AI.
So, I spent a good deal of time providing presentations and doing calls with leaders in those organizations to help make them aware of our enterprise-wide efforts, but also to help provide educational perspective and perhaps some suggestions and some guidance on how they can build out their own mini AI adoption roadmaps that are more specific to their own departments.
We talk about how they can think about developing those capabilities – whether that’s building some things in-house on their own, if they have the capability and the resources, or what types of vendors they might consider – the same way we’re thinking about it at the larger level.
I also talk with a lot of folks on the legal, compliance and data side who are very interested in better understanding the intersection of AI and compliance, and the intersection of AI and data, data stewardship and data governance.
How can we evolve our processes around data to make sure we have more AI readiness in our data and also that we have appropriate diverse, equitable, representative datasets for us to use with our AI applications, which are really important toward ensuring safety and equity in how we deliver healthcare?
For bonus content not found in this article, watch the HIMSS TV video of this interview. Part two of this interview will appear tomorrow.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.