
A new academic study has revealed that popular art protection tools designed to ward off unauthorized artificial intelligence (AI) training, like Glaze and NightShade, can be effortlessly bypassed.
These tools protect art uploaded online through image-poisoning techniques: they insert subtle distortions into the image that disrupt how AI models interpret it, preventing the models from learning from it.
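Conceptually, the protection amounts to adding a small, bounded perturbation to the pixel values: large enough to confuse a model's feature extraction, small enough to be nearly invisible to a person. The snippet below is a deliberately simplified sketch of that idea using bounded random noise; the real tools compute their perturbations through optimization against a surrogate model, and the file names here are placeholders.

```python
import numpy as np
from PIL import Image

def add_bounded_perturbation(path_in: str, path_out: str, epsilon: float = 4.0) -> None:
    """Toy illustration: add a small, bounded perturbation to an image.

    Tools like Glaze and NightShade optimize their perturbations against a
    surrogate feature extractor; this sketch only uses bounded random noise
    to show what an "imperceptible distortion" means in pixel terms.
    """
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.float32)

    # Perturbation bounded to +/- epsilon per channel (on a 0-255 scale),
    # so the change stays below the threshold of casual human perception.
    delta = np.random.uniform(-epsilon, epsilon, size=img.shape).astype(np.float32)

    poisoned = np.clip(img + delta, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

# Example usage (file names are placeholders):
# add_bounded_perturbation("artwork.png", "artwork_protected.png")
```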
However, these defenses crumble against a new method called LightShed, developed by researchers from the University of Cambridge, TU Darmstadt, and UT San Antonio. LightShed can detect and remove the protective distortions with 99.98% accuracy, leaving artists vulnerable even when they use state-of-the-art defenses.
Tools like Glaze and NightShade aim to protect human creatives by confusing AI models during training. However, LightShed reverses the “poison,” neutralizing the tools’ effectiveness and enabling unscrupulous actors (or AI developers) to reuse protected artwork in training data sets.
Both tools have been downloaded nearly nine million times and gained popularity as creative defenses in a space where consent is rarely prioritized. But according to the researchers, LightShed can reverse-engineer and remove those embedded distortions in three steps: detect the poison, learn its pattern from known examples, and restore the image to its “clean,” unprotected form.
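The researchers' three-step description maps onto a familiar machine-learning pipeline, even though the sketch below is not their published implementation. It is a hypothetical, heavily simplified illustration in PyTorch: a small detector flags poisoned images, a learned model estimates the embedded perturbation from examples, and subtracting that estimate yields the "clean" image. All module names, shapes, and thresholds are assumptions for illustration, not LightShed's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of a detect -> estimate -> restore pipeline.
# This is NOT LightShed's published architecture; it only illustrates
# the three steps described by the researchers.

class PoisonDetector(nn.Module):
    """Step 1: classify whether an image carries a protective perturbation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1)
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # probability the image is poisoned

class PerturbationEstimator(nn.Module):
    """Step 2: predict the perturbation, trained on known poisoned/clean pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1)
        )

    def forward(self, x):
        return self.net(x)  # estimated per-pixel perturbation

def restore(image: torch.Tensor, detector: PoisonDetector,
            estimator: PerturbationEstimator, threshold: float = 0.5) -> torch.Tensor:
    """Step 3: if a perturbation is detected, subtract the estimate."""
    if detector(image).item() < threshold:
        return image  # looks clean, leave untouched
    estimated_delta = estimator(image)
    return (image - estimated_delta).clamp(0.0, 1.0)

# Smoke test on a random "image" scaled to [0, 1]; real use would load artwork
# and models trained on examples of Glaze/NightShade perturbations.
img = torch.rand(1, 3, 64, 64)
cleaned = restore(img, PoisonDetector(), PerturbationEstimator())
print(cleaned.shape)  # torch.Size([1, 3, 64, 64])
```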
“This shows that even when using tools like NightShade, artists are still at risk of their work being used for training AI models without their consent,” said lead author Hanna Foerster from Cambridge's Department of Computer Science and Technology.
Although LightShed exposes significant vulnerabilities in current protections, the researchers emphasize their work is meant as a call to action, not sabotage. “We see this as a chance to co-evolve defenses,” said co-author Prof. Ahmad-Reza Sadeghi, calling for collaboration with artists and developers to build more resilient, adaptive tools.
Their stated goal is to expose the flaws in current defenses and to spur the development of stronger tools.
Artists, writers, and journalists are taking matters into their own hands
As AI companies train their models on intellectual property created by people, creative communities, including book authors, journalists, and visual artists, are taking these disputes to court.
The BBC is suing startup Perplexity for using its content without authorization.

The UK has also raised eyebrows by proposing to permit unlicensed AI training on copyrighted material. Artists such as Paul McCartney and Elton John have publicly called the plan “criminal”.
Multiple lawsuits are currently underway, including major cases involving OpenAI, Meta, Stability AI, and Midjourney, alleging unauthorized training on copyrighted text, images, and even music.