Evaluating vendor-supplied AI technologies within a governance framework is essential in healthcare: vendor tools must be safe, effective and ethical to produce the best patient outcomes.
A governance structure should include a clear method for assessing the risks of AI systems used in clinical settings. One such approach categorizes risks into four areas, allowing for more systematic evaluation.
The four risk categories are:
- Correctness and transparency
- Fairness and equity
- Integrated workflow
- Safety and privacy
By focusing on these four risk areas, healthcare systems can make better-informed decisions about which tools to adopt, and technology teams can develop appropriate strategies to manage the risks they identify.
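To make the four categories concrete, they could be operationalized as a simple scoring rubric. The following is a hypothetical sketch: the category names come from the article, but the 1–5 scale, the risk tiers, and the "worst category drives the overall tier" rule are illustrative assumptions, not part of any published framework.

```python
# Hypothetical vendor-risk rubric built on the article's four categories.
# The 1-5 scale and the worst-score aggregation are illustrative
# assumptions for this sketch, not an established methodology.

RISK_CATEGORIES = (
    "correctness_and_transparency",
    "fairness_and_equity",
    "integrated_workflow",
    "safety_and_privacy",
)

def assess_vendor(scores: dict) -> str:
    """Return an overall risk tier from per-category scores (1=low, 5=high).

    A tool is arguably only as safe as its weakest category, so this
    sketch derives the tier from the worst score rather than the average.
    """
    missing = set(RISK_CATEGORIES) - set(scores)
    if missing:
        raise ValueError(f"unscored categories: {sorted(missing)}")
    worst = max(scores[c] for c in RISK_CATEGORIES)
    if worst <= 2:
        return "low"
    if worst <= 3:
        return "moderate"
    return "high"
```

Under this rule, a tool scoring well everywhere except, say, safety and privacy would still land in the "high" tier, which mirrors the article's point that each risk area must be evaluated on its own.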
A strong governance framework for engaging with vendor products enables healthcare organizations to enjoy the advantages of commercial AI while reducing risks and preserving public trust, says Glenn Wasson of UVA Health, who holds a doctorate in computer science.
Wasson will delve into this topic at a HIMSS25 educational session titled “Dear AI Vendors: This Is What We Need.”
At UVA Health, Wasson manages how the health system collects and analyzes data for patient care and research, covering areas like data operations and analytics.
His work impacts various tasks, from bedside predictive care to understanding hospital ranking algorithms. He is passionate about fostering problem-solving cultures.
In a recent interview, Wasson shared insights about his HIMSS25 session.
Q. What should hospitals know about AI tools versus older software?
A. AI is becoming a key part of modern healthcare, improving everything from diagnoses and treatment plans to billing and research. However, it also introduces unique risks that older software did not have. Organizations need to recognize these risks and how to manage them for effective governance of AI.
Understanding the risks and rewards of AI systems isn’t easy. Many healthcare organizations lack the resources and expertise to analyze vendor code thoroughly. Instead, the focus of this session will be on fostering dialogue between providers and vendors to pinpoint risks. This requires vendors to be open about their data, algorithms, and workflows, which can strengthen trust in their solutions.
Q. What will your HIMSS25 session focus on regarding AI?
A. The session will explore various applications of AI, including the latest in generative AI. AI can analyze past human experiences to improve diagnosis predictions, treatment choices, personalized medicine, staff scheduling, and more.
We’ll discuss specific AI use cases and how humans and AI can work together effectively. Understanding the outcomes of AI decisions and their associated risks is key to ensuring that these systems deliver safe and effective support.
Q. What key takeaway do you hope attendees will gain from your session?
A. Attendees will learn to evaluate AI systems in ways that haven’t been part of traditional assessments. We’ll provide a framework for dialogue between providers and vendors, which includes questions about risk sources like data sets and workflows. This approach isn’t just for data professionals but should involve leaders who understand the workflow and environment for deployment.
We’ll also emphasize the importance of ongoing conversations due to the evolving nature of AI technology.
Wasson’s session, “Dear AI Vendors: This Is What We Need,” will take place on Tuesday, March 4, from 10:15-11:15 a.m. at HIMSS25 in Las Vegas.