AI technique helps medical devices avoid abiding by cyberattack commands

September 02, 2020
by John R. Fischer, Senior Reporter
A new AI method may better protect medical devices from dangerous and misleading instructions generated by a cyberattack, or by other human and system errors.

Researchers at Ben-Gurion University of the Negev have developed a dual-layer architecture that utilizes AI to analyze instructions and detect any anomalous commands sent from a host PC to the physical components of devices.

"Common defenses employed today mostly focus on securing the hospital network and not the device itself. The host control PC has very limited defenses, such as whitelisting [i.e., a list of approved software to run], which modern malware can easily bypass [e.g., a rootkit attacks the underlying operating system and can thus often bypass the whitelisting]," BGU researcher and Ph.D. candidate Tom Mahler told HCB News. "Furthermore, we found that the host PC often uses out-of-date software and operating systems, since installing an update usually requires the manufacturers to perform rigorous (and expensive) validation tests to make sure that the host PC still complies with regulations after being updated."

Medical devices like CT, MR and ultrasound scanners function through instructions sent from a host PC. Cyberattacks, host PC software bugs and human errors, such as a technician's configuration mistake, can create anomalous instructions that can potentially harm patients in a number of ways, including radiation overexposure, manipulation of device components, and functional manipulation of medical images.

The architecture is designed to detect context-free (CF) anomalous instructions, which are unlikely values or instructions in themselves, such as delivering 100 times more radiation than typical, and context-sensitive (CS) anomalous instructions, which are normal values or combinations of values and instruction parameters that are nonetheless anomalous relative to a particular context. Examples of CS anomalous instructions include a mismatch with the intended scan type, or with the patient's age, weight, or potential diagnosis.
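To make the two-layer idea concrete, here is a minimal stand-in sketch in Python. The field names (`dose_mgy`, `scan_type`), the dose values, and the simple mean/standard-deviation and range rules are all invented for illustration; the actual architecture uses trained unsupervised detectors for the CF layer and supervised classifiers for the CS layer, not these hand-coded checks.

```python
# Illustrative dual-layer check on a CT instruction. All names,
# thresholds, and rules below are assumptions for this sketch,
# not the BGU architecture's actual algorithms.

def cf_anomaly(instruction, history, k=3.0):
    """Context-free layer: flag a value far outside everything seen
    in historical instructions (here, a simple mean +/- k*std rule)."""
    doses = [h["dose_mgy"] for h in history]
    mean = sum(doses) / len(doses)
    std = (sum((d - mean) ** 2 for d in doses) / len(doses)) ** 0.5
    return abs(instruction["dose_mgy"] - mean) > k * std

def cs_anomaly(instruction, typical_dose_by_scan):
    """Context-sensitive layer: the value may be plausible in general,
    but not for this clinical context (the scan type)."""
    lo, hi = typical_dose_by_scan[instruction["scan_type"]]
    return not (lo <= instruction["dose_mgy"] <= hi)

# Toy history mixing low-dose head scans and higher-dose abdomen scans.
history = [{"dose_mgy": d} for d in (2.0, 2.2, 1.9, 2.1, 55.0, 60.0)]
typical = {"head": (1.5, 3.0), "abdomen": (45.0, 70.0)}

# 200 mGy is extreme regardless of context -> caught by the CF layer.
cmd1 = {"scan_type": "head", "dose_mgy": 200.0}
# 55 mGy is a normal abdomen dose, so the CF layer passes it,
# but it is far too high for a head scan -> caught by the CS layer.
cmd2 = {"scan_type": "head", "dose_mgy": 55.0}
```

The second command illustrates why the CF layer alone is not enough: its dose value is unremarkable across all scans, and only the clinical context reveals it as anomalous.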

The research team tested the architecture using 8,277 recorded CT instructions. They evaluated the CF layer using 14 different unsupervised anomaly detection algorithms and assessed the CS layer for four different types of clinical objective contexts with five supervised classification algorithms for each context.

Overall anomaly detection performance improved from an F1 score of 71.6%, using only the CF layer, to between 82% and 99%, depending on the clinical objective or body part, when the second CS layer was added to the architecture. The CS layer also enabled detection of CS anomalies, using the semantics of the device's procedure, an anomaly type that cannot be detected with the CF layer alone.
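The F1 score cited above is the harmonic mean of precision and recall. The toy example below, with made-up labels rather than the paper's data, shows how the metric is computed and why adding a CS layer can raise it: anomalies that look normal to the CF layer become detectable, improving recall.

```python
# F1 = harmonic mean of precision and recall. The labels and
# predictions here are invented to illustrate the effect of a
# second detection layer; they are not the study's results.

def f1(y_true, y_pred):
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

y_true   = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = anomalous instruction
cf_only  = [1, 1, 0, 0, 0, 0, 0, 0]   # CF layer misses the two CS anomalies
combined = [1, 1, 1, 1, 0, 1, 0, 0]   # CF OR CS: catches both, one false alarm

print(round(f1(y_true, cf_only), 2))   # 0.67
print(round(f1(y_true, combined), 2))  # 0.89
```

Even with one extra false alarm, the combined detector scores higher because F1 rewards the large gain in recall, which mirrors the improvement the researchers report.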

"The problem of medical device security is not easy to solve," said Mahler. "Attackers are becoming more and more sophisticated, organized, and dangerous, and in order to protect patients from them, we must always be one step ahead of them. This cat-and-mouse chase is not naturally part of the healthcare DNA. Although AI is not a “magic cure” for any problem, its improvement in recent years makes it a very strong tool for the detection of anomalies. However, more research and development must still be done, and I expect (and hope) that we will see progress in the next few years."

Mahler presented his research, "A Dual-Layer Architecture for the Protection of Medical Devices from Anomalous Instructions," this month at the 2020 International Conference on Artificial Intelligence in Medicine (AIME 2020).