- Multiple AI labs are working on algorithms that they believe could help diagnose COVID-19 simply by listening to the sound of people speaking.
- Researchers at universities including Carnegie Mellon University, Harvard, and MIT are collecting voice data to train algorithms.
- If proven successful, the AI tools could be used to screen workers at businesses attempting to reopen amid COVID-19.
- But research has hit some early roadblocks as labs that specialise in AI venture into epidemiology.
In the fight against COVID-19, several artificial intelligence labs are turning to an unexpected piece of evidence that might help diagnose the illness: people’s voices.
A team of researchers from Harvard and MIT is using machine learning to comb through voice recordings from COVID-19 patients and healthy people in an attempt to identify specific vocal signatures that could indicate someone is carrying the virus. A similar project is underway at Carnegie Mellon University’s CyLab.
Research is still in early stages, but the teams aim to develop AI tools that could tell people whether they have coronavirus based on an audio recording of their voice. If proven successful, the tools could allow more people to choose to self-isolate even if they don’t have access to a COVID-19 test.
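The basic pipeline the researchers describe — convert a voice recording into acoustic features, then train a classifier on labeled samples — can be illustrated with a toy sketch. Everything below is hypothetical: the features (energy and zero-crossing rate), the synthetic "recordings," and the nearest-centroid classifier are stand-ins chosen for brevity, far simpler than anything the labs actually use.

```python
import math
import random

def extract_features(samples):
    """Toy acoustic features: average energy and zero-crossing rate.
    Real systems use much richer representations (e.g. spectrograms)."""
    energy = sum(s * s for s in samples) / len(samples)
    zcr = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0) / len(samples)
    return (energy, zcr)

def train_centroids(labeled_recordings):
    """Average the feature vectors per label (a nearest-centroid 'model')."""
    sums, counts = {}, {}
    for samples, label in labeled_recordings:
        feats = extract_features(samples)
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in acc) for lab, acc in sums.items()}

def classify(samples, centroids):
    """Assign the label whose centroid is closest in feature space."""
    feats = extract_features(samples)
    return min(
        centroids,
        key=lambda lab: sum((a - b) ** 2 for a, b in zip(feats, centroids[lab])),
    )

# Synthetic stand-ins: "healthy" voices as clean low-frequency tones,
# "positive" voices as noisier, higher-frequency tones (pure invention).
random.seed(0)
def tone(freq, noise, rate=8000, n=800):
    return [math.sin(2 * math.pi * freq * t / rate) + random.gauss(0, noise)
            for t in range(n)]

data = ([(tone(120, 0.1), "healthy") for _ in range(20)]
        + [(tone(400, 0.5), "positive") for _ in range(20)])
model = train_centroids(data)
print(classify(tone(120, 0.1), model))  # prints "healthy"
```

The real research question is whether COVID-19 leaves a signature in feature space that separates patients from healthy speakers at all — the classifier itself is the easy part.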
“If we can prove that it works, this would be a very easy-to-use tool for businesses when they open up. People can just talk into a machine and the machine could alert them if there’s something wrong,” Carnegie Mellon researcher Rita Singh told Business Insider. “It would be a powerful technology that could ease testing all across the world.”
The team of Harvard and MIT researchers is using speech audio data provided by Voca.ai, an Israeli startup that sells AI-powered customer service tools. Voca.ai cofounder and CTO Alan Bekker told Business Insider the company has set up a web portal for voice recording donations, and has collected more than 100 samples from COVID-19 patients and several thousand samples from healthy people.
Analysing people’s speech, coughing, and breathing patterns as a diagnostic tool isn’t new – tussiphonography, or the study of cough sounds, has been around for decades. Now, AI researchers are emboldened by early reports from doctors that COVID-19 appears to have unique effects on patients’ coughing and speech.
While the research shows promise, it has also hit some early roadblocks. AI researchers eager to help the worldwide fight against COVID-19 have faced difficulties due to their relatively limited experience with epidemiology.
To give people an incentive to donate voice audio, Singh’s lab initially published a rough AI tool online that would predict whether people have a higher chance of being COVID-19 positive based on voice samples, along with a disclaimer that the tool wasn’t giving real medical advice. But within 48 hours, Carnegie Mellon forced the lab to take down the online test, which could have run afoul of FDA guidelines and been misinterpreted by people regardless of the disclaimer.
“It’s a perfectly valid concern, and my whole team had not thought of that ethical side of things,” Singh said. “The other side is that hopefully the COVID pandemic will pass, and once it passes, hopefully it will never come back. So if we don’t get the data now, we’re never going to have data for research.”
Satrajit Ghosh, a professor at MIT and Harvard overseeing the schools’ research on COVID-19 AI voice tools, and Daniel Low, a PhD student in the program, echoed the need for more data in a statement to Business Insider.
“Screening for COVID-19 using voice recordings is promising especially given how easy it is to acquire samples while maintaining social distancing,” they said. “[But] we simply do not yet have enough data to understand the diversity of symptoms and the changes that occur when an individual is infected.”