Humberto Farias has been watching the explosion of generative AI very closely.
Farias is cofounder and chairman of Concepta Technologies, a technology company specializing in software development and programming in the areas of mobile, web, digital transformation and artificial intelligence.
For example, he noticed that Apple is putting generative AI at the very center of the lives of hundreds of millions of iPhone-toting people. But with recent data leaks, patient privacy problems and other IT issues, he says he’s worried health IT teams will become prone to seeing AI as a threat rather than a tool.
The question becomes: How can health systems protect valuable patient data while still reaping the benefits of generative AI?
Farias has debuted the Concepta Machine Advancement and General Intelligence Center, or MAGIC, a collaborative research program, virtual incubator and service center for artificial intelligence and advanced technologies.
Healthcare IT News spoke recently with Farias to learn more about MAGIC and understand concerns he has heard from healthcare CTOs about implementing artificial intelligence. He offered tips and real-world examples to securely deploy AI and learning and described what he believes should be the primary focus for CIOs, CISOs and other security leaders at hospitals and health systems as AI and machine learning continue to transform healthcare.
Q. Please describe your new organization, MAGIC. What are your goals?
A. Our mission is to push the boundaries of AI research and development while providing practical applications and services that address real-world problems. At MAGIC, we aim to foster cutting-edge research for both fundamental technologies and applied solutions, support and nurture early-stage AI ventures, educate and train professionals in AI skills, provide consulting services, and build a network of collaboration.
Some of our initial partnerships include healthcare companies dedicated to improving healthcare for patients, hospitals and clinical teams. They combine assessments, analytics and education, and then measure it all to improve healthcare for everyone. Through our partnership, we are implementing AI to make programs run even more efficiently and cost-effectively for their teams.
We’re open to working with large health systems on some of the key issues they’re facing when it comes to AI implementation. We’ve worked with health systems like Advent Health on other software technology and are well-equipped to handle the unique regulatory and patient security issues healthcare faces.
Q. What are some of the concerns you have heard firsthand from healthcare CTOs about implementing AI into their business structures?
A. I’ve heard from healthcare CTOs that their main concern regarding the implementation of AI into their business structures remains data privacy and security. Health executives want to ensure the privacy and security of sensitive patient data are a top priority, given the stringent regulations from HIPAA and other mandates.
There also is hesitation around whether AI solutions can integrate with legacy systems and remain compatible, as well as around navigating the complex regulatory landscape to ensure AI solutions comply with all relevant laws and guidelines.
There also is a cost to implement AI, and many healthcare CTOs are uncertain about the return on investment this technology can provide. I’m always looking for ways to cut these costs by collaborating with peers and ensuring we don’t operate in a silo – learning from mistakes and building upon successes from other leaders in the industry.
Paired with that, there is also a lack of skilled personnel to develop, implement and manage AI systems. Health systems already are on tight budgets and experiencing cutbacks, so working with an AI research program can fill this need and help advance the use of AI throughout their institutions.
We’re working to educate health systems on how AI can be used for simple things like minimizing repetitive admin tasks and large-scale projects that can improve workflows for providers and care with real patients.
Finally, there always are ethical concerns when it comes to AI. Healthcare CTOs want to ensure AI is used ethically, particularly in decisions that directly affect patient care. The top concerns in this area are informed consent and data bias.
Patients must be made aware that AI is part of their care, and the data used to train AI algorithms must not result in biased healthcare decisions that exacerbate disparities in healthcare outcomes among different demographic groups.
Q. What are some tips and real-world examples you can offer to safely and securely deploy AI, especially considering sensitive medical data?
A. There are several ways healthcare executives can deploy AI safely and securely. One of those is through data encryption. It’s important always to encrypt sensitive medical data both in transit between networks and at rest in records systems to protect against unauthorized access.
Another tip is to implement robust access control mechanisms to ensure only authorized personnel can access sensitive data. Large healthcare centers should employ multi-factor authentication, role-based access controls and a 24/7 monitoring system. Conducting regular security audits is another way to ensure security and safety by continuous monitoring to detect and respond to potential threats promptly.
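The access-control tips above – role-based permissions, multi-factor authentication and continuous audit logging – can be sketched in a few lines. This is a minimal illustrative sketch, not a production design: the role names, permission sets and in-memory audit log are all hypothetical, and a real deployment would integrate with an identity provider and feed a monitored security system.

```python
from datetime import datetime, timezone

# Hypothetical role-to-permission mapping (names are illustrative only).
ROLE_PERMISSIONS = {
    "physician": {"read_record", "write_record"},
    "billing": {"read_record"},
    "researcher": set(),  # de-identified data only; no direct record access
}

# In a real deployment, every decision would stream to a monitored SIEM
# rather than an in-memory list.
AUDIT_LOG = []

def can_access(role: str, action: str, mfa_verified: bool) -> bool:
    """Allow an action only for an authorized role that has completed
    multi-factor authentication, and record every decision so regular
    security audits can review it."""
    allowed = mfa_verified and action in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "action": action,
        "allowed": allowed,
    })
    return allowed

print(can_access("physician", "read_record", mfa_verified=True))   # True
print(can_access("billing", "write_record", mfa_verified=True))    # False: not permitted
print(can_access("physician", "read_record", mfa_verified=False))  # False: MFA missing
```

The key design point is that the denial paths (wrong role, missing MFA) are logged just like the approvals, which is what makes the 24/7 monitoring and regular audits Farias describes possible.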
Regulatory compliance is another way to ensure trust; you would do this by aligning AI deployments with regulatory frameworks such as HIPAA and GDPR. Making it a priority to develop and adhere to ethical guidelines for AI usage is another tip, with a focus on fairness, transparency and accountability.
For instance, Stanford Health Care has an ethics board that reviews AI projects for potential ethical issues.
Q. What would you say is the primary focus CIOs, CISOs and other security leaders at hospitals and health systems should have as AI continues to explode in healthcare?
A. The use of AI is inevitable in healthcare, so the primary focus for CIOs, CISOs and other security leaders should be to continue to ensure data privacy and security and to protect patient data from breaches. The top priority should be making sure programs comply with regulations.
Healthcare leaders also should focus on developing a scalable and secure IT infrastructure that can support AI applications without compromising performance or security. Then, to support this system, provide ongoing training for personnel at every level – from frontline staff to providers to the C-suite – on the latest AI technologies and security practices to mitigate risks associated with human error.
To ensure there’s a fail-safe plan, healthcare leaders should develop and maintain a comprehensive risk management strategy that includes regular assessments, incident response plans and continuous improvement.
Collaboration is key to building a team ready to handle today’s challenges: encourage collaboration among IT, security and clinical teams to ensure AI solutions meet the needs of all stakeholders while maintaining security and compliance standards.
The HIMSS AI in Healthcare Forum is scheduled to take place September 5-6 in Boston. Learn more and register.
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.