
AI tool for Alexa and smart devices detects cardiac arrest in sleeping patients

by John R. Fischer, Senior Reporter | June 25, 2019
Artificial Intelligence Cardiology
The AI skill would enable smart devices like Alexa to monitor for and alert help to cardiac arrest events (Photo credit: Sarah McQuate/University of Washington)
Out-of-hospital cardiac arrest often takes place in a patient’s bedroom when they are alone, asleep and have no one around to respond. In such situations, the delivery of CPR from a bystander could double, even triple, a person’s chance of survival.

Researchers at the University of Washington are trying to remedy this problem by developing a new AI-based system that enables smart speakers like Amazon Alexa or Google Home to monitor for and alert paramedics to instances of cardiac arrest, based on how patients breathe while asleep — all without touching them.

"We are excited about this project since this opens up the use of 'smart speakers' for patient monitoring. So far, researchers and the industry have been focused on using smartphones to monitor and detect medical conditions," co-corresponding author Shyam Gollakota, an associate professor in the UW's Paul G. Allen School of Computer Science & Engineering, told HCB News. "However, smart speakers like Google home and Amazon Alexa are increasingly becoming common in households, and are very convenient, since they are plugged in, and so can passively monitor patients."

The tool enables smart speakers and smartphones to detect gasps for air known as agonal breathing, which, according to 911 call data, is present in about 50 percent of people who experience cardiac arrest and indicates a better chance of survival. If such breathing is detected, the smart device then calls for assistance.

The proof-of-concept algorithm was trained using real instances of agonal breathing captured in 162 calls made to Seattle’s Emergency Medical Services via 911 between 2009 and 2017. The sounds were recorded by bystanders who placed their phones up to the patient’s mouth during the calls so the dispatcher could determine whether immediate CPR was required.

The team extracted 2.5 seconds of audio at the start of each agonal breath, collecting 236 clips in total, captured on an Amazon Alexa, an iPhone 5s and a Samsung Galaxy S4. It then used various machine learning techniques to boost the dataset to 7,316 positive clips. For the negative data set of typical sounds people make while asleep, it used 83 hours of audio collected from sleep studies, yielding 7,305 sound samples.
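The jump from 236 recorded clips to more than 7,000 positive examples comes from augmenting the same audio in many slightly different ways. The sketch below illustrates the general idea of that kind of expansion; the function names, parameter values and specific perturbations (gain changes, time shifts, added background noise) are illustrative assumptions, not the UW team's published pipeline.

```python
# Illustrative audio-augmentation sketch (assumed details, not the study's exact method).
import numpy as np

SAMPLE_RATE = 16_000      # assumed sampling rate
CLIP_SECONDS = 2.5        # 2.5 s window taken at the start of each agonal breath

def augment_clip(clip: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Return one randomly perturbed copy of a short audio clip."""
    out = clip.copy()
    # Random gain roughly emulates different distances from the microphone.
    out *= rng.uniform(0.3, 1.5)
    # Small circular time shift so the gasp does not always start at t = 0.
    out = np.roll(out, rng.integers(-SAMPLE_RATE // 4, SAMPLE_RATE // 4))
    # Low-level Gaussian noise stands in for household background sound.
    out += rng.normal(0.0, 0.005, size=out.shape)
    return np.clip(out, -1.0, 1.0)

def expand_dataset(clips: list[np.ndarray], copies: int = 30, seed: int = 0) -> list[np.ndarray]:
    """Grow a few hundred positive clips into thousands of training examples."""
    rng = np.random.default_rng(seed)
    return [augment_clip(c, rng) for c in clips for _ in range(copies)]
```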

When tested, the tool detected agonal breathing 97 percent of the time, with the smart device placed up to six meters away from a speaker generating the sounds. It incorrectly categorized a breathing sound as agonal breathing 0.14 percent of the time, and produced a false positive rate of 0.22 percent on separate audio clips that volunteers recorded of themselves while asleep. This rate dropped to zero percent, however, when the team had the tool classify something as agonal breathing only when it detected two distinct events at least 10 seconds apart.
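That false-positive filter is essentially a confirmation rule: a single suspicious sound is not enough, and an alert fires only once a second detection arrives at least 10 seconds after an earlier one. The sketch below shows one plausible way to implement such a rule; the class, window length and method names are assumptions for illustration, not the published system's internals.

```python
# Illustrative two-event confirmation filter (assumed structure, not the study's code).
from collections import deque

class AgonalBreathingAlert:
    def __init__(self, min_gap_s: float = 10.0, window_s: float = 60.0):
        self.min_gap_s = min_gap_s            # required spacing between two detections
        self.window_s = window_s              # how long past detections are remembered
        self.detections: deque[float] = deque()

    def on_detection(self, timestamp_s: float) -> bool:
        """Record one positive classification; return True if help should be summoned."""
        # Forget detections too old to pair with the new one.
        while self.detections and timestamp_s - self.detections[0] > self.window_s:
            self.detections.popleft()
        # Alert only if some earlier detection is at least min_gap_s before this one.
        should_alert = any(timestamp_s - t >= self.min_gap_s for t in self.detections)
        self.detections.append(timestamp_s)
        return should_alert
```

Requiring two well-separated events trades a small amount of detection latency for a sharp drop in spurious alerts from one-off sounds such as snores or coughs.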

The team believes the tools could function as an app or skill for Alexa that runs passively on smart speakers or smartphones. It plans to commercialize the technology through a UW spinout called Sound Life Sciences.

"Right now the algorithm has been trained on agonal breathing sounds from 911 calls in the Seattle area from 2009-2017," said Gollakota. "Getting more 9-1-1 call data across the country will help generalize the performance."

The findings were published in the journal npj Digital Medicine.
