The growing threat of adversarial attacks

by Adi Gaskell
8 April 2020
in Security

Recently I attended the launch of Dark Data, the latest book by Imperial College’s emeritus professor of mathematics, David Hand, in which he outlines the many ways in which the data of our big data era may be insufficient to support the decisions we hope it will enable. He explores how easily we can be blind to missing data, and how that blindness can lead us to conclusions and actions that are mistaken, dangerous, or even disastrous.

The book is undoubtedly fascinating, especially for anyone interested in data and statistics, but for me, the most interesting aspect was, appropriately, something omitted from the book. Wrapping up the event was Professor Sir Adrian Smith, the head of the Turing Institute, and one of the architects of the recently published report into AI and ethics for the UK government. He raised the increasingly pertinent issue of adversarial data, or the deliberate attempt to manipulate the data upon which our AI systems depend.

As artificial intelligence has blossomed in recent years, securing the data on which AI lives and breathes has grown in importance, or at least it should have. Data from the Capgemini Research Institute last year showed that just one in five businesses was implementing any form of AI cybersecurity, and while the same survey found that around two-thirds were planning to do so by the end of this year, you do wonder just how seriously we’re taking the problem.

Trusting the data

There have already been numerous examples of AI-based systems going astray because they were trained on poor, often biased, data, which frequently results in discriminatory outcomes. It’s likely that a greater number of systems produce poor-quality outcomes for the same reason: a lack of quality in the data they’re built upon.

In these kinds of examples, the lack of quality is something the vendors themselves are complicit in, but adversarial attacks involve the deliberate manipulation of data to distort the performance of AI systems. There are typically two main kinds of adversarial attack: targeted and untargeted.

A targeted attack aims to create a specific distortion within the AI system, setting out to ensure that a given input X is classified as Y. An untargeted attack has no such specific aim, and merely wishes to distort the system’s outputs so that they’re misleading. While untargeted attacks are understandably less powerful, they’re somewhat easier to implement.

Adversarial attacks

Ordinarily, the training stage of machine learning strives to minimize the loss between the target label and the predicted label. The trained system is then tested to check how closely its predictions match the target labels, with the error rate calculated as the difference between the two. Adversarial attackers instead change the query input so that the prediction outcome changes.
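
To make the mechanics concrete, the sketch below shows a fast gradient sign method (FGSM) style perturbation in PyTorch, covering both the untargeted and targeted cases described above. The toy model, the epsilon budget and the class indices are purely illustrative assumptions rather than anything from the article; a real attack would be aimed at a trained production model.

# Minimal FGSM-style sketch (PyTorch). The model, epsilon and class indices
# are illustrative placeholders, not taken from the article.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

def fgsm(x, y_true, epsilon=0.1, y_target=None):
    """Perturb input x within an L-infinity budget of epsilon.

    Untargeted: step *up* the loss gradient, so any misclassification will do.
    Targeted: step *down* the loss towards y_target, pushing the prediction
    to a chosen class (X classified as Y).
    """
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    if y_target is None:                           # untargeted attack
        loss = F.cross_entropy(logits, y_true)
        grad_sign = torch.autograd.grad(loss, x_adv)[0].sign()
        return (x_adv + epsilon * grad_sign).detach()
    else:                                          # targeted attack
        loss = F.cross_entropy(logits, y_target)
        grad_sign = torch.autograd.grad(loss, x_adv)[0].sign()
        return (x_adv - epsilon * grad_sign).detach()

x = torch.rand(1, 1, 28, 28)                       # a stand-in "image"
y_true = torch.tensor([3])
y_target = torch.tensor([7])

print(model(fgsm(x, y_true)).argmax(dim=1))                       # untargeted
print(model(fgsm(x, y_true, y_target=y_target)).argmax(dim=1))    # targeted

With an untrained toy model the flipped labels are arbitrary, but against a trained classifier these few lines are often enough to change its answer while the perturbed input looks unchanged to a human.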

It perhaps goes without saying that in many instances, attackers will have no idea which machine learning model the AI system is using, which you might imagine would make distorting it very difficult. The reality, however, is that even when the model is unknown, adversarial attacks remain highly effective, in large part because there is a degree of transferability between models. This means an attacker can craft and refine adversarial examples against one model before attacking a second, confident that they will still prove disruptive.
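
A rough sketch of how that transferability gets exploited, again in PyTorch: the adversarial examples are crafted entirely against a surrogate model the attacker controls, and only the finished inputs are ever shown to the victim. Both toy models here are placeholder assumptions for illustration; with properly trained models, a meaningful fraction of the examples would typically transfer.

# Transfer attack sketch: craft adversarial examples on a surrogate model,
# then feed them to a separate "victim" model whose internals are never seen.
import torch
import torch.nn as nn
import torch.nn.functional as F

surrogate = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # attacker's copy
victim = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))      # unknown target

def craft_on_surrogate(x, y_true, epsilon=0.1):
    """Untargeted FGSM step computed only from the surrogate's gradients."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x_adv), y_true)
    grad_sign = torch.autograd.grad(loss, x_adv)[0].sign()
    return (x_adv + epsilon * grad_sign).detach()

x = torch.rand(8, 1, 28, 28)
y_true = torch.randint(0, 10, (8,))
x_adv = craft_on_surrogate(x, y_true)

# The attacker only ever queries the victim with the finished examples.
clean_preds = victim(x).argmax(dim=1)
adv_preds = victim(x_adv).argmax(dim=1)
print("victim predictions changed:", (clean_preds != adv_preds).sum().item(), "of", len(x))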

The question is, can we still trust machine learning? Research suggests that a good way to protect against adversarial attacks is to train systems to detect manipulated inputs automatically and repair them at the same time. One approach is known as denoising, and requires methods to remove noise from the data. Ordinarily this might simply be Gaussian noise, but by using an ensemble of denoisers it’s possible to strip out each distinct type of noise. The aim is to return the data to something as close to the original, uncorrupted version as possible, and thus allow the AI to continue functioning properly. The next step is to use a verification ensemble that reviews the denoised data and re-classifies it, acting as a simple check that the denoising has worked well.
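
A minimal sketch of that denoise-then-verify idea, assuming a small ensemble of two generic denoisers (a smoothing filter and a median filter) and simple agreement-based verification; the specific ensembles used in the research will differ, so this is a shape of the approach rather than a faithful implementation.

# Denoise-then-verify sketch. The two denoisers and the agreement check are
# illustrative assumptions, not the ensembles any particular paper uses.
import torch
import torch.nn as nn

classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy model

def smooth_denoise(x, strength=0.5):
    # Crude stand-in for Gaussian denoising: blend with a 3x3 box-blurred copy.
    kernel = torch.ones(1, 1, 3, 3) / 9.0
    smoothed = nn.functional.conv2d(x, kernel, padding=1)
    return (1 - strength) * x + strength * smoothed

def median_denoise(x):
    # 3x3 median filter via unfold; targets salt-and-pepper style noise.
    patches = nn.functional.unfold(x, kernel_size=3, padding=1)   # (N, 9, H*W)
    return patches.median(dim=1).values.view_as(x)

denoisers = [smooth_denoise, median_denoise]

def verified_predict(x):
    """Run each denoiser, classify each cleaned copy, and only trust the
    result when the ensemble agrees; otherwise flag the input as suspect."""
    preds = [classifier(d(x)).argmax(dim=1) for d in denoisers]
    agree = (preds[0] == preds[1])
    return preds[0], agree    # prediction plus a per-sample agreement flag

x = torch.rand(4, 1, 28, 28)
labels, trusted = verified_predict(x)
print(labels, trusted)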

Suffice it to say, these defensive tactics are still at an experimental stage, and it’s clear that more needs to be done to ensure that, as AI becomes a growing part of everyday life, we can rely on it to provide reliable and effective outcomes free from the distorting effects of hackers. There’s a strong sense that early biased outputs have already eroded any inclination to blindly trust these systems to deliver excellent results, but there is more work to be done to truly convince vendors to tackle adversarial attacks properly, and indeed for regulators to ensure such measures are in place.
