Microsoft drops legal hammer on AI jailbreaks


Individuals attempting to jailbreak Microsoft's AI services will face legal action, the tech behemoth's Digital Crimes Unit (DCU) has said, pointing to foreign threat actors abusing and weaponizing the company's services.

Microsoft's DCU wants to set a precedent that would allow legal action against malicious actors who develop AI jailbreaks that enable the creation of harmful content. Attackers have devised various methods to circumvent built-in safety measures, letting them use the tools in ways their makers never intended.

The company's move signals that even the wealthiest tech companies are unable to put a lid on human ingenuity in weaponizing new technology, especially when the effort is coordinated at the highest levels. As Microsoft itself put it: “Cybercriminals remain persistent and relentlessly innovate.”

For example, Microsoft points to a “foreign-based threat actor group” that developed “sophisticated” software to scrape credentials from public websites and use them to gain unauthorized access to accounts “with certain generative AI services.”

Once inside, the malicious actors would modify the AI services' capabilities and sell access to the compromised accounts. With such tools in hand, Microsoft said, cybercrooks were able to generate harmful and illicit content. While the Windows maker eventually revoked the hackers' access to the modified service, legal action allowed the company to seize a website the criminals used to discuss and organize their operation.

“The court order has enabled us to seize a website instrumental to the criminal operation that will allow us to gather crucial evidence about the individuals behind these operations, to decipher how these services are monetized, and to disrupt additional technical infrastructure we find,” Microsoft said.

While the company did not specify what type of malicious activity was carried out with its tools, earlier this year Microsoft-backed OpenAI said it had disrupted five covert influence operations that sought to use its AI models for “deceptive activity” across the internet.