
Researchers have demonstrated that adversarial attacks, in which someone manipulates the data being fed into an AI system in order to control what the system sees, or doesn’t see, in an image, are certainly possible.
New peer-reviewed research from North Carolina State University shows that a new technique, called RisingAttacK, is effective at manipulating all of the most widely used AI computer vision systems.
The paper’s title, “Adversarial Perturbations Are Formed by Iteratively Learning Linear Combinations of the Right Singular Vectors of the Adversarial Jacobian,” is certainly a mouthful. But the idea behind it is less complicated than it sounds.
For instance, someone might manipulate an AI’s ability to detect traffic signals, pedestrians, or other cars, which would cause problems for autonomous vehicles. Or, a hacker could install code on an X-ray machine and cause an AI system to make inaccurate diagnoses.
“We wanted to find an effective way of hacking AI vision systems because these vision systems are often used in contexts that can affect human health and safety – from autonomous vehicles to health technologies to security applications,” said Tianfu Wu, co-corresponding author of a paper on the work and an associate professor of electrical and computer engineering at North Carolina State University.

“That means it is very important for these AI systems to be secure. Identifying vulnerabilities is an important step in making these systems secure, since you must identify a vulnerability in order to defend against it.”
According to the study, RisingAttacK consistently outperforms prior state-of-the-art attacks across four major models and ranking depths, achieving higher success rates and lower perturbation norms.
The attack is subtle. The technique consists of a series of operations aimed at making the fewest possible changes to an image while still fooling the AI.
First, RisingAttacK identifies all of the visual features in the image. It then runs an operation to determine which of those features are most important to achieving the attack’s goal.
“For example,” says Wu, “if the goal of the attack is to stop the AI from identifying a car, what features in the image are most important for the AI to be able to identify a car in the image?”
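To make that first step concrete, here is a rough sketch, written in PyTorch, of one common way to rank which parts of an image matter most to a particular prediction: simple gradient saliency. The model, the random stand-in image, and the class index are placeholders for illustration, and the published method’s actual ranking procedure may differ.

import torch
import torchvision.models as models

# Placeholder setup: a stock ResNet-50 and a random stand-in for a real photo.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224, requires_grad=True)
car_class = 817  # illustrative ImageNet class index for a car-like target

score = model(image)[0, car_class]  # the score the attacker wants to suppress
score.backward()                    # gradient of that score w.r.t. every pixel

# Pixels with large gradient magnitude matter most to the "car" decision.
saliency = image.grad.abs().sum(dim=1)               # collapse color channels
top_pixels = saliency.flatten().topk(1000).indices   # most influential locations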
RisingAttacK then calculates how sensitive the AI system is to changes in the data, and specifically how sensitive it is to changes in the data that make up those key features.
“This requires some computational power, but allows us to make very small, targeted changes to the key features that make the attack successful,” Wu says.
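The paper’s title points at what that sensitivity calculation involves: the Jacobian of the model’s key output scores with respect to the image, and the right singular vectors of that Jacobian. The sketch below, again in PyTorch, illustrates the general idea only; the step size, number of iterations, sign of the update, and choice of target outputs are invented for illustration and are not the published algorithm.

import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
image = torch.rand(1, 3, 224, 224)              # stand-in for a real photo
key_classes = torch.tensor([817, 919, 920])     # placeholder target outputs

def key_scores(flat_image):
    # Scores for the handful of outputs the attack cares about.
    logits = model(flat_image.view(1, 3, 224, 224))
    return logits[0, key_classes]

adv = image.clone()
for _ in range(10):  # small, iterative updates
    # Jacobian of the key scores with respect to every pixel: shape (3, 150528).
    J = torch.autograd.functional.jacobian(key_scores, adv.flatten())
    # The right singular vectors are the pixel-space directions the scores are
    # most sensitive to; a tiny step along them changes the outputs the most.
    _, _, Vh = torch.linalg.svd(J, full_matrices=False)
    step = Vh[0].view_as(adv)                   # leading direction as an image
    # Sign and scale are illustrative; the real attack learns the combination.
    adv = (adv - 1e-3 * step).clamp(0.0, 1.0)

In this toy version, the result is an image that differs from the original by tiny amounts per pixel yet, to a first approximation, shifts the model’s chosen outputs more than random noise of the same size would.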
“The end result is that two images may look identical to human eyes, and we might clearly see a car in both images. But due to RisingAttacK, the AI would see a car in the first image but would not see a car in the second image.
“And the nature of RisingAttacK means we can influence the AI’s ability to see any of the top 20 or 30 targets it was trained to identify. That might be a car, a pedestrian, a bicycle, a stop sign, and so on.”