Google AI could soon use a cough to determine disease type


A team of Google scientists has reportedly created a machine-learning tool that can help diagnose diseases by evaluating noises such as coughing and breathing.

Last week, Google showcased new healthcare products and research, saying its AI-powered tools and generative models are designed to help users better understand their own health data.

Well, how about a cough? According to the researchers, who described the tool earlier in March in a preprint (PDF) that has not yet been peer-reviewed, the newly developed AI system was trained on millions of audio clips of human sounds.

The system, called Health Acoustic Representations (HeAR), could be used to diagnose diseases, including COVID-19 and tuberculosis, and to assess how well a person’s lungs are functioning.

For now, the new tool is aimed only at other researchers, though it may reach physicians one day. Its promise for the general population seems huge: imagine suffering from a constant cough and immediately finding out which treatment would help you get better.

“Health acoustic sounds such as coughs and breaths are known to contain useful health signals with significant potential for monitoring health and disease, yet are underexplored in the medical machine learning community,” say the researchers.

They used self-supervised learning, which relies on unlabelled data. Through an automated process, the scientists extracted more than 300 million short sound clips of coughing, breathing, throat clearing, and other human sounds from publicly available YouTube videos.

Each clip was then converted into a spectrogram, a visual representation of sound. The researchers then masked segments of the spectrograms and trained the model to predict the missing portions.

This is actually quite similar to how the large language model that powers the ChatGPT bot was taught to predict the next word in a sentence.
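The masking step described above can be sketched in a few lines of NumPy. Note that the patch width, mask fraction, and spectrogram dimensions below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectrogram": 128 frequency bins x 200 time frames
spectrogram = rng.random((128, 200)).astype(np.float32)

def mask_spectrogram(spec, patch_width=20, mask_fraction=0.3, rng=rng):
    """Zero out a fraction of fixed-width time patches, returning the
    masked spectrogram and a boolean mask marking the hidden frames."""
    n_frames = spec.shape[1]
    n_patches = n_frames // patch_width
    n_masked = int(n_patches * mask_fraction)
    chosen = rng.choice(n_patches, size=n_masked, replace=False)
    mask = np.zeros(n_frames, dtype=bool)
    for p in chosen:
        mask[p * patch_width:(p + 1) * patch_width] = True
    masked = spec.copy()
    masked[:, mask] = 0.0
    return masked, mask

masked, mask = mask_spectrogram(spectrogram)
# The self-supervised objective: reconstruct spectrogram[:, mask]
# given only the visible (unmasked) frames in `masked`.
print(int(mask.sum()))  # → 60 hidden time frames
```

Because the labels come from the audio itself, no human annotation is needed, which is what makes a 300-million-clip training set feasible.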

Finally, using this method, the researchers created what they call a foundation model, which they say can be adapted for many tasks.
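Adapting such a foundation model to a new task is often done by keeping the pretrained model frozen and training only a small "probe" on the embeddings it produces. The sketch below uses random vectors as stand-ins for those embeddings and an invented binary label; the dimensions and task are hypothetical, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for frozen foundation-model embeddings:
# 200 cough clips, each mapped to a 512-dim vector.
X = rng.normal(size=(200, 512))

# Synthetic binary labels that are linearly readable from the
# embeddings, so the probe has a real signal to learn.
w_true = rng.normal(size=512)
y = (X @ w_true > 0).astype(float)

# Train a linear probe via ridge-regularized least squares;
# the foundation model itself is never updated.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(512), X.T @ (2 * y - 1))
accuracy = ((X @ w > 0).astype(float) == y).mean()
print(accuracy > 0.9)  # → True on this training set
```

The same frozen embeddings could feed many such probes, one per downstream task, which is the sense in which one pretrained model "can be adapted for many tasks."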

Scientists have explored using sound as a biomarker for disease previously. For example, during the COVID-19 pandemic, they discovered that it was possible to detect a respiratory disease through a person’s cough.

HeAR’s advantage is the massive dataset it was trained on. In addition, the system can be fine-tuned to perform multiple tasks. However, the researchers admit that it’s too early to tell when, or even whether, HeAR will become a commercial product.

The plan for now is to give interested researchers access to the model and to spur further innovation in the field, Sujay Kakarmath, a product manager at Google who worked on the project, told Nature.

