HOPPR demonstrating AI-powered multimodal foundation model for medical imaging

HOPPR is a technology company developing a multimodal foundation model for medical imaging. The company is backed by Health2047, the Silicon Valley venture studio powered by the American Medical Association.

Today at HIMSS24, HOPPR will be demonstrating this artificial intelligence-fueled model – which can provide diagnostic, clinical and operational insights from medical imaging data – in AWS’s Booth 1561 in the South Hall.

We interviewed HOPPR CEO Dr. Khan Siddiqui to get a better understanding of this foundation model and what it could mean for healthcare.

Q. You’re at HIMSS24 discussing your development of a multimodal foundation model for medical imaging. Please tell conference attendees what that is.

A. HOPPR is developing a comprehensive generative model that spans all medical imaging modalities. Let’s break down what that means.

Available via an API service, the HOPPR foundation model can be used by developers, radiology picture archiving and communications systems (PACS) vendors, and AI companies to develop their own applications.

Rather than spend a year or more developing AI algorithms from scratch, application developers can fine-tune HOPPR’s pretrained Large Vision model to meet their exact specifications and compress the development process to about a month. In this way, the HOPPR model significantly shortens the time to market for sorely needed AI-powered applications for medical imaging.
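As a rough illustration of what that developer workflow might look like, here is a minimal sketch of registering a dataset and launching a fine-tuning job over a REST API. The base URL, endpoint paths, model identifier and field names are assumptions made for illustration only – they are not HOPPR’s published API.

```python
# Hypothetical sketch of fine-tuning a pretrained vision model via a REST API.
# The base URL, endpoints, model name and field names below are illustrative
# assumptions, not HOPPR's actual API.
import requests

API_BASE = "https://api.hoppr.example/v1"   # placeholder base URL
API_KEY = "YOUR_API_KEY"
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1. Register a labeled dataset of de-identified imaging studies.
dataset = requests.post(
    f"{API_BASE}/datasets",
    headers=headers,
    json={"name": "chest-xray-triage", "modality": "XR"},
).json()

# 2. Launch a fine-tuning job against the pretrained foundation model.
job = requests.post(
    f"{API_BASE}/fine-tune",
    headers=headers,
    json={
        "base_model": "hoppr-vision-base",   # assumed model identifier
        "dataset_id": dataset["id"],
        "task": "classification",
    },
).json()

print("Fine-tuning job started:", job["id"])
```

The point of the sketch is the division of labor: the heavy pretraining has already been done, so the application developer’s work reduces to supplying a task-specific dataset and configuration rather than building a model from scratch.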

The HOPPR foundation model is multimodal, built on troves of high-quality medical data from different sources – including images from computed tomography and magnetic resonance imaging scans, X-rays, and other medical imaging scans. The model can be used to create products that dramatically improve the way radiologists, technicians and support staff interact with medical images.

Applications built on the HOPPR foundation model promise to improve both patient and clinician experience. Imagine, for example, you are a neurosurgeon planning an aneurysm coiling procedure. Treatment planning for these procedures depends heavily on medical imaging.

Applications powered by HOPPR’s API service can help fully characterize the aneurysm and predict which vessel approach, catheter type and coil selection will be most conducive to success. Statistics, prior procedures with similar anatomy, and published research can be summoned as needed to further support decision making.

But the benefits don’t end with treatment planning. Once a plan has been finalized, scheduling and allocating sufficient OR time, staff and materials can be automated to help maximize outcomes and cost efficiency.

AI holds tremendous promise to advance medical imaging and improve patient outcomes. HOPPR aims to unlock that potential by empowering developers to more quickly bring AI applications to market.

Q. You’re backed by Health2047 and UHG. Please talk about these organizations and why their backing is important to this development.

A. In November 2023, HOPPR announced a milestone investment from Health2047, the Silicon Valley venture studio powered by the American Medical Association and created to overcome systemic dysfunction in U.S. healthcare. Their goal is to make a meaningful and measurable impact on healthcare by the American Medical Association’s 200th anniversary in 2047.

Health2047 is transforming healthcare at the system level, seeking powerful ideas, industry partners and entrepreneurs to address systemic transformation in the areas of data, chronic disease and productivity.

Health2047’s deep relationships with both the AMA and its network of strategic partners create a unique force multiplier that helps drive informed, large-scale change in healthcare.

Q. You’re talking with attendees about how you say HOPPR enables end users to unlock diagnostic, clinical and operational value from medical imaging data. Please explain how this happens and the hoped-for outcomes.

A. The model can provide diagnostic, clinical and operational insights from medical imaging data.

Diagnostic value: At scale, HOPPR’s foundation model will be trained on over a petabyte of medical imaging study data, able to discern nuances and detect abnormalities that transcend the capability of the human eye. Whereas many current AI tools were developed on images down-sampled to 256 shades of grey (8 bits), HOPPR works at full bit depth – roughly 65,000 shades of grey for a 16-bit image. HOPPR developed proprietary vision transformers to train its model on full-resolution, full bit-depth images, ensuring no loss of useful data contained within the images.
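To make the bit-depth point concrete, the sketch below contrasts a full 16-bit pixel array from a DICOM file with an 8-bit down-sampled copy, using the open-source pydicom and NumPy libraries. The file path is a placeholder, and the code is a minimal illustration of the general idea, not HOPPR’s training pipeline.

```python
# Illustration of full bit-depth vs. 8-bit down-sampled image data.
# Uses open-source pydicom/NumPy; the file path is a placeholder and this is
# not HOPPR's actual pipeline.
import numpy as np
import pydicom

ds = pydicom.dcmread("study/slice_001.dcm")   # placeholder path
full = ds.pixel_array.astype(np.uint16)       # native data: up to 65,536 grey levels

# Typical down-sampling step used by many 8-bit pipelines: rescale to 256 levels.
lo, hi = full.min(), full.max()
eight_bit = ((full - lo) / max(hi - lo, 1) * 255).astype(np.uint8)

print("distinct grey levels, full depth :", np.unique(full).size)
print("distinct grey levels, 8-bit copy :", np.unique(eight_bit).size)  # capped at 256
```

Subtle low-contrast findings can live entirely within grey levels that the 8-bit rescaling collapses together, which is why training on full bit depth matters.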

Clinical value: The model can be fine-tuned for use in applications that allow users to converse with medical imaging studies about findings, alternative imaging views, suggested surgical interventions, and treatment protocols. In mammography, a patient could be notified at the point of care – before leaving the imaging center – whether additional diagnostic imaging is needed, reducing patient stress, streamlining care delivery, and improving clinical outcomes.

Operational value: From an operational standpoint, one potential application of the technology is to automatically fill in radiology reports. Much of what we dictate in radiology is repetitive, and research has shown that doing manual documentation work on top of analyzing images adds to clinicians’ cognitive load. Using AI to automate some elements of the job – prepopulating a preliminary report that the clinician can finalize after review, for example – would free up more time for image review and improve the clinician experience.
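As a simple illustration of that idea, the sketch below fills a preliminary report template from structured findings that a model might return. The findings dictionary and the template are invented for illustration and do not reflect HOPPR’s output format; the key point is that the draft is explicitly preliminary and still requires radiologist review and sign-off.

```python
# Minimal sketch of prepopulating a preliminary radiology report from
# structured findings. The findings dict and template are illustrative
# assumptions, not HOPPR's actual output format.
findings = {
    "modality": "Chest X-ray",
    "lungs": "No focal consolidation, effusion, or pneumothorax.",
    "heart": "Cardiac silhouette within normal limits.",
    "impression": "No acute cardiopulmonary abnormality.",
}

preliminary_report = (
    f"EXAM: {findings['modality']}\n"
    f"FINDINGS:\n"
    f"  Lungs: {findings['lungs']}\n"
    f"  Heart: {findings['heart']}\n"
    f"IMPRESSION (preliminary, pending radiologist review):\n"
    f"  {findings['impression']}\n"
)

print(preliminary_report)  # clinician reviews, edits, and signs off
```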

Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.
