
New MIT tool can trick natural-language-processing systems


For many of us growing up, the concept of a digital assistant was dominated by HAL, arguably the lead character in Stanley Kubrick's sci-fi masterpiece 2001: A Space Odyssey. The notion that you could have a realistic conversation with a computer, and that it could provide a seemingly endless supply of wisdom, was fantastical.

With the launch of devices such as the Amazon Echo and Google Home, such technology is increasingly available in the modern home. Indeed, so popular have the devices become that the sector generated over $11 billion in revenue in 2018, a figure predicted to grow to over $27 billion by 2022.

Just as with HAL, however, the modern smart speaker has not been without controversy, not least when revelations emerged last year that Amazon employees were listening to voice recordings from the Echo and other Alexa-enabled speakers. The company said the recordings were used by employees to improve the devices' speech recognition, and insisted it had robust systems in place to prevent abuse, but the news was nonetheless enough to spook customers.

Fears around the manipulation of smart speakers are only going to be inflamed by the release of a new system from MIT, called TextFooler, which its makers claim can fool natural language processing systems, such as those behind the Amazon Echo, and thus manipulate their behavior.

Adversarial attacks

Concern around so-called adversarial attacks has been growing in AI circles for some time, not least after Google's AI was famously fooled into thinking a turtle looked like a rifle. The team, from MIT's famous Computer Science and Artificial Intelligence Laboratory (CSAIL), developed the tool to perform similar attacks on natural language processing (NLP) systems, such as those that power Alexa and Siri.

It should perhaps go without saying that the tool is not so much designed to exploit vulnerabilities in Alexa and its peers as to help shore them up. The developers believe the tool could have a wide range of applications in the text classification field, from hate speech identification to smarter spam filtering.

The tool has two parts. The first alters a given text; the second then uses the altered text to test two different language tasks and attempt to fool machine learning systems. It begins by identifying the most important words in a given text, in terms of their power to influence the system's predictions. It then searches for synonyms to replace those words with, all while maintaining the grammar rules and original meaning of the text, so that the result still reads naturally to a human while tricking the system. A simplified sketch of this loop is shown below.
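
To make that two-step loop concrete, here is a minimal, self-contained Python sketch of a TextFooler-style attack. It is not the authors' implementation: the classifier is a toy stand-in, the hand-written synonym table replaces the embedding-based synonym search the real tool uses, and the grammar and semantic-similarity checks are omitted.

```python
# Illustrative sketch of a TextFooler-style attack loop (NOT the authors' code).
# toy_predict_proba stands in for any black-box classifier; SYNONYMS stands in
# for a proper synonym search.

SYNONYMS = {
    "contrived": ["engineered", "artificial"],
    "totally": ["fully", "entirely"],
    "situations": ["circumstances"],
}

def toy_predict_proba(text):
    """Toy 'confidence that the review is negative', driven by a few cue words."""
    cues = {"contrived": 0.25, "totally": 0.25, "estranged": 0.25}
    return 0.2 + sum(w for cue, w in cues.items() if cue in text)

def word_importance(words, predict_proba):
    """Score each word by how much the model's confidence drops when it is removed."""
    base = predict_proba(" ".join(words))
    scores = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        scores.append((base - predict_proba(" ".join(reduced)), i))
    return sorted(scores, reverse=True)  # most influential words first

def attack(text, predict_proba, flip_threshold=0.5):
    """Greedily swap influential words for synonyms until the prediction weakens enough."""
    words = text.split()
    for _, i in word_importance(words, predict_proba):
        best, best_conf = words[i], predict_proba(" ".join(words))
        for candidate in SYNONYMS.get(words[i], []):
            trial = words[:i] + [candidate] + words[i + 1:]
            conf = predict_proba(" ".join(trial))
            if conf < best_conf:          # candidate weakens the original prediction
                best, best_conf = candidate, conf
        words[i] = best
        if best_conf < flip_threshold:    # confidence has dropped enough; stop editing
            break
    return " ".join(words)

review = "characters cast in impossibly contrived situations are totally estranged from reality"
print(attack(review, toy_predict_proba))
# -> characters cast in impossibly engineered situations are fully estranged from reality
```

The greedy structure mirrors the description above: rank words by influence, then substitute synonyms one at a time until the model's prediction gives way, leaving most of the text untouched.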

The system targets two tasks: text classification and entailment, the latter referring to the relationship between fragments of text in a sentence pair. In both cases, the goal is either to manipulate the way text is classified or to invalidate the entailment judgment a system makes about that text.

For instance, TextFooler was able to manipulate a text that originally said that “characters, cast in impossibly contrived situations, are totally estranged from reality,” into text that said “characters, cast in impossibly engineered circumstances, are fully estranged from reality.”

The system successfully attacked a number of widely used NLP models, including the open-source BERT model developed by Google in 2018. TextFooler fooled these systems so effectively that their accuracy rates fell from around 90% to below 20%, a deterioration achieved by manipulating just 10% of the words in a given text.
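
For readers curious how such figures are typically computed, the following is a hypothetical evaluation helper, not taken from the MIT paper: it compares a model's accuracy before and after an attack and tracks the fraction of words changed, the two quantities quoted above. The names `classify` and `attack_fn` are placeholders for any black-box model and attack routine.

```python
# Hypothetical evaluation loop (not from the paper). `dataset` is a list of
# (text, label) pairs, `classify` returns a predicted label, and `attack_fn`
# maps a text to its adversarially perturbed version.

def evaluate_attack(dataset, classify, attack_fn):
    clean_correct = adv_correct = changed_words = total_words = 0
    for text, label in dataset:
        clean_correct += classify(text) == label
        adv_text = attack_fn(text)
        adv_correct += classify(adv_text) == label
        orig, adv = text.split(), adv_text.split()
        changed_words += sum(a != b for a, b in zip(orig, adv))
        total_words += len(orig)
    n = len(dataset)
    return {
        "clean_accuracy": clean_correct / n,               # e.g. around 90% before the attack
        "adversarial_accuracy": adv_correct / n,           # e.g. below 20% after the attack
        "perturbation_rate": changed_words / total_words,  # e.g. roughly 10% of words changed
    }
```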

Losing control

As voice assistant systems have grown in popularity, so too have concerns around their security. The MIT work joins Japanese research published last year that used ultrasound to hack into voice-assisted devices.

The approach, which the researchers refer to as an Audio Hotspot Attack, exploits the directional way ultrasound waves travel to hijack smart speakers. The researchers describe how emitting ultrasound at the right angle, and from the right distance, allows them to manipulate the target device just as though a human being were communicating with it.

The researchers were able to perform the feat on both the Google Home and Amazon Echo, using a smartphone paired with parametric loudspeakers to trick the devices; the attackers could both activate the devices and issue actionable commands to them. They transmitted a range of legitimate voice clips via ultrasound to fool the devices into behaving as though a real human were instructing them.

While both approaches are a little way off the kind of subversion displayed by HAL in 2001, they do nonetheless highlight the security risks presented by devices that are increasingly widespread, and increasingly integrated into our homes.
