Game-changing AI breaks the silence for the hard-of-hearing


Communication gaps for the deaf community could soon be a thing of the past, with a new AI-powered real-time ASL interpreter that promises 98% accuracy and no need for expensive hardware.

For over 11 million deaf and hard-of-hearing Americans, communicating effectively in everyday life can be a burden. It often requires an interpreter, which can be costly and not always effective.

Human interpreters are also not always available in spontaneous scenarios, such as asking for help in public, visiting a doctor, or interviewing for a job.


Existing American Sign Language (ASL) recognition systems often prove inaccurate in dynamic conditions, such as dim lighting or cluttered backgrounds.

Additionally, some letters are difficult to distinguish: the handshape for an “A” closely resembles that for a “T,” and “M” is easily confused with “N.”

Luckily, AI has now made the process much easier, with an impressive accuracy of 98%.

Researchers at Florida Atlantic University’s College of Engineering and Computer Science have created an innovative real-time ASL interpretation system that can precisely interpret all standard ASL letters.

Real-time accessibility, simplified

The software works with a simple, built-in webcam – no wearables or expensive cameras are required. It excels at mapping hand structure and classifying spontaneous gestures, whereas older software struggled with shaky camera output, lighting inconsistencies, and natural motion blur.
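
The article doesn’t detail FAU’s exact software stack, but a minimal sketch of a webcam-based fingerspelling pipeline might look like the following, assuming OpenCV for video capture and Google’s MediaPipe library for hand-landmark extraction. The `classify_letter` function here is a hypothetical stand-in for a trained landmark-to-letter model, not the researchers’ actual code.

```python
# Minimal sketch of a webcam-based ASL fingerspelling pipeline.
# Assumptions: OpenCV (cv2) reads the built-in webcam, MediaPipe extracts
# 21 3D hand landmarks per frame, and classify_letter() is a hypothetical
# placeholder for a trained landmark-to-letter model.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands


def classify_letter(features):
    """Hypothetical stand-in: a real system would run a trained
    classifier over the 63 landmark values to pick one of 26 letters."""
    return "?"


cap = cv2.VideoCapture(0)  # 0 = default built-in webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV delivers BGR frames.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            hand = results.multi_hand_landmarks[0]
            # Flatten the 21 (x, y, z) landmarks into a 63-value feature vector.
            features = [v for p in hand.landmark for v in (p.x, p.y, p.z)]
            letter = classify_letter(features)
            # Overlay the predicted letter on the live video feed.
            cv2.putText(frame, letter, (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("ASL sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```

Classifying skeletal landmark geometry rather than raw pixels is one plausible way such a system could stay robust to dim lighting, clutter, and motion blur, since the landmarks abstract away most of the scene.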

Mohammad Ilyas, co-author of the study and professor at FAU’s Department of Electrical Engineering and Computer Science, said:

“The significance of this research lies in its potential to transform communication… across education, workplaces, health care, and social settings.”


This software could, in practice, be used in classrooms, at customer service desks, on telehealth calls, and in ATMs and kiosks. It could help immensely with logistical hurdles such as spelling out names or explaining complex situations – describing medical ailments, for example – and may eventually be embedded in mobile devices, browsers, or public service systems.

Additionally, the software isn’t only for the deaf community – it can also assist those who are temporarily unable to speak, such as after surgery or due to vocal disorders.

This represents a technological breakthrough in which AI works alongside humans, seeing and interpreting with nuance and accuracy.

Having accessibility baked into the system is much more advantageous for the deaf community than constantly adding bolt-on solutions.

And by establishing a foundational layer for new computer-to-human interaction norms, technology like this can move the hard-of-hearing from the margins toward greater inclusion.
