
Six keys to safely bringing AI to biomeds and the HTM department

by John R. Fischer, Senior Reporter | April 26, 2023
In the last few years, artificial intelligence has attracted rapid interest in the healthcare sphere as it has become increasingly integrated in clinical and administrative operations, from prioritizing and interpreting radiology reports, to providing guidance to nonspecialists using point-of-care ultrasound systems.

Despite that undeniable upside, there are valid concerns about AI's safe and ethical use: that carelessness, or a lack of understanding of how the technology makes decisions, will result in inaccurate diagnoses, lead to medical errors, or expose health information to malicious cyberattacks.

Even algorithm engineers who develop instructions for understanding AI-derived data may be unsure of how these solutions process information to produce these results, thereby creating a so-called “black box conundrum.” Additionally, AI-based technologies can make mistakes that humans would never make.

In its recently published report, “Artificial Intelligence, the Trust Issue,” the first in a series of Medical Device Safety in Focus reports, the Association for the Advancement of Medical Instrumentation (AAMI) delves into strategies for addressing these challenges.

Here are six of its recommendations:

Broader cooperation is essential
Tech companies, data scientists and algorithm engineers generally are not familiar with the quality, safety, and effectiveness standards that hospitals and other healthcare organizations follow, let alone their clinical environments, workflows, and practices.

It is therefore essential to include these professionals in meetings and events where they can communicate with healthcare stakeholders, including standards and regulatory communities; patients and patient advocacy groups; clinicians and professional associations; experts on social determinants of care and underrepresented patient populations; researchers; private and public insurers; industry associations; health technology management professionals and risk managers; experts on the social, moral, legal, and ethical issues and policy implications of AI; and cybersecurity experts.

“The more manufacturers, AI developers, and organizations that have experience with AI products communicate the risks and benefits of AI technology, the more informed additional parties can be to make informed decisions,” said Mike Powers, system director of healthcare technology management at Intermountain Healthcare, and a member of AAMI’s Artificial Intelligence Committee, in the report.
