The growing threat of adversarial attacks

by Adi Gaskell
8 April 2020
in Security

Recently I attended the launch of Dark Data, the latest book by Imperial College’s emeritus professor of mathematics, David Hand, in which he outlines the many ways our big data era may fall short of supporting the kinds of decisions we hope it will. He explores how easily we can be blind to missing data, and how that blindness can lead us to conclusions and actions that are mistaken, dangerous, or even disastrous.

The book is undoubtedly fascinating, especially for anyone interested in data and statistics, but for me the most interesting aspect was, appropriately, something omitted from it. Wrapping up the event was Professor Sir Adrian Smith, head of the Turing Institute and one of the architects of the recently published report into AI and ethics for the UK government. He raised the increasingly pertinent issue of adversarial data: the deliberate attempt to manipulate the data upon which our AI systems depend.

As artificial intelligence has blossomed in recent years, securing the data on which AI lives and breathes has become ever more important, or at least it should have. Data from the Capgemini Research Institute last year showed that just one in five businesses had implemented any form of AI cybersecurity, and while the same survey found that around two-thirds planned to do so by the end of this year, you do wonder just how seriously we’re taking the problem.

Trusting the data

There have already been numerous examples of AI-based systems going astray because they were trained on poor, often biased, data, which frequently results in discriminatory outcomes. It’s likely that an even greater number of systems produce poor-quality outcomes for the same reason: a lack of quality in the data they’re built upon.

In those examples the vendors are complicit in the poor quality of the data, but adversarial attacks involve the deliberate manipulation of data to distort the performance of AI systems. There are typically two main kinds of adversarial attack: targeted and untargeted.

A targeted attack has a specific distortion it wants to create within the AI system, and sets out to ensure that X is classified as Y. An untargeted attack has no such specific aim, and merely seeks to distort the outputs of the system so they’re misleading. While untargeted attacks are understandably less powerful, they’re somewhat easier to implement, as the sketch below illustrates.
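
To make the distinction concrete, here is a minimal sketch of both attack styles using the fast gradient sign method (FGSM), a standard technique the article doesn’t name explicitly; the PyTorch model and data below are toy placeholders, not a real system.

```python
# Minimal FGSM-style sketch of targeted vs. untargeted attacks (PyTorch).
# The classifier and inputs are toys; only the procedure is the point.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03, targeted=False):
    """Return a perturbed copy of x.

    Untargeted: step *up* the loss gradient, away from the true label.
    Targeted:   step *down* the loss gradient, toward the attacker's label.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    direction = -1.0 if targeted else 1.0
    return (x_adv + direction * epsilon * x_adv.grad.sign()).detach()

model = nn.Linear(10, 3)                 # toy 3-class classifier
x = torch.randn(1, 10)
true_label = torch.tensor([0])
target_label = torch.tensor([2])

x_untargeted = fgsm_perturb(model, x, true_label)                  # just be wrong
x_targeted = fgsm_perturb(model, x, target_label, targeted=True)   # be wrong *as* class 2
```

The only difference between the two is the sign of the gradient step, which hints at why untargeted attacks are the easier of the two to mount: any direction that raises the loss will do.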

Adversarial attacks

Ordinarily, the training stage of machine learning strives to minimize the loss between the target label and the predicted label. The system is then tested to ensure it accurately predicts the correct label on unseen data, with an error rate calculated from the difference between the two. Adversarial attackers change the query input so that the prediction outcome changes.
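
As a point of reference, this is roughly what that ordinary objective looks like in code, a hedged sketch with a toy PyTorch model: training shrinks the loss between predicted and target labels, and the error rate is measured afterwards.

```python
# A toy sketch of the ordinary objective: minimize the loss between
# predicted and target labels, then measure the error rate at test time.
import torch
import torch.nn as nn

model = nn.Linear(10, 3)                          # placeholder classifier
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(32, 10)                           # a batch of query inputs
y = torch.randint(0, 3, (32,))                    # their target labels

loss = nn.functional.cross_entropy(model(x), y)   # gap between prediction and target
opt.zero_grad()
loss.backward()
opt.step()                                        # one training step shrinks that gap

with torch.no_grad():                             # test time: error rate is the
    errors = (model(x).argmax(dim=1) != y)        # fraction of mispredicted labels
    print(errors.float().mean().item())
```

An adversarial attacker works against exactly this machinery: rather than changing the model, they nudge the query input until the prediction no longer matches the label the model would otherwise assign.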

It perhaps goes without saying that in many instances attackers will have no idea which machine learning model the AI system is using, which you might imagine would make distorting it very difficult. The reality, however, is that even when the model is unknown, adversarial attacks are still highly effective, in large part because there is a degree of transferability between models. This means attackers can practice on one model before attacking a second, confident that the attack will still prove disruptive.
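
A hedged sketch of that transfer scenario: the attacker crafts perturbations on a surrogate model they control, then replays them against a separate target model they have never seen. Both models below are untrained toys, so the printed numbers are meaningless; the procedure is what matters.

```python
# Transferability sketch: practice on a surrogate, attack an unseen target.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.1):
    x = x.clone().detach().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

surrogate = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
target = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))  # unknown to the attacker

x = torch.randn(64, 10)
y = torch.randint(0, 3, (64,))

x_adv = fgsm(surrogate, x, y)     # craft the attack on the surrogate...

with torch.no_grad():             # ...then measure its effect on the target
    clean_acc = (target(x).argmax(1) == y).float().mean().item()
    adv_acc = (target(x_adv).argmax(1) == y).float().mean().item()
print(f"target accuracy, clean: {clean_acc:.2f}  adversarial: {adv_acc:.2f}")
```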

The question is, can we still trust machine learning? Research suggests that a good way to protect against adversarial attacks is to train systems to detect them and repair the damage at the same time. One approach is known as denoising, which involves developing methods to remove noise from the data. Ordinarily this might simply be Gaussian noise, but by using an ensemble of denoisers, each tuned to a distinct type of noise, it’s possible to strip out whatever corruption is present. The aim is to return the data to as close to the original, uncorrupted version as possible, and thus allow the AI to continue functioning properly. The next step is a verification ensemble that reviews the denoised data and re-classifies it: a simple verification layer to ensure the denoising has worked well.
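
Below is a rough sketch of that denoise-then-verify pipeline, assuming nothing beyond what the paragraph describes: an ensemble of denoisers, one per noise type, feeding a verification ensemble that re-classifies the cleaned input. Every component here is an untrained placeholder.

```python
# Denoise-then-verify sketch: an ensemble of denoisers, each a stand-in for
# a model trained on one noise type, plus a verification ensemble that
# re-classifies the cleaned data and votes on the final label.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Placeholder for a denoiser trained against one noise type (e.g. Gaussian)."""
    def __init__(self, dim=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return self.net(x)

denoisers = [Denoiser() for _ in range(3)]        # one denoiser per noise type
verifiers = [nn.Linear(10, 3) for _ in range(3)]  # verification ensemble

def defend_and_classify(x):
    votes = []
    for d in denoisers:
        cleaned = d(x)                                                 # strip one noise type
        logits = torch.stack([v(cleaned) for v in verifiers]).mean(0)  # re-classify
        votes.append(logits.argmax(dim=1))
    return torch.stack(votes).mode(dim=0).values                       # majority vote

print(defend_and_classify(torch.randn(4, 10)))
```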

Suffice to say, these defensive tactics are still at an experimental stage, and it’s clear that more needs to be done to ensure that, as AI becomes a growing part of everyday life, we can rely on it to provide accurate and effective outcomes free from the distorting effects of hackers. There’s a strong sense that early biased outputs have already eroded any inclination to blindly trust these systems, but there is more work to be done to truly convince vendors to tackle adversarial attacks properly, and indeed for regulators to ensure such measures are in place.
