Microsoft sues cybercriminal operation that developed tools to bypass AI safety guardrails
Microsoft Corp.’s Digital Crimes Unit has taken legal action to disrupt a cybercriminal operation that developed tools specifically designed to bypass the safety guardrails of generative artificial intelligence services.
The complaint, filed in the Eastern District of Virginia in December, claims that the unnamed cybercriminals violated U.S. law and the Acceptable Use Policy and Code of Conduct for Microsoft services. The complaint alleges that “Does 1-10 Operating an Azure Abuse Network” breached laws including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and the Racketeer Influenced and Corrupt Organizations Act, as well as committing trespass to chattels and tortious interference under Virginia state law.
Microsoft alleges that the defendants used stolen customer credentials and custom software to bypass security measures and, in doing so, generated harmful content through Microsoft’s platform. The defendants are also alleged to have used tools such as de3u and a reverse proxy service to manipulate Microsoft’s generative AI systems. De3u is a client-side tool designed to facilitate the generation of AI-created images using DALL-E 3, an image-generating AI model developed by OpenAI that is also accessible through Microsoft.
Having gained access to Microsoft AI services, the defendants are then alleged to have resold access to other malicious actors with detailed instructions on how to use these custom tools to generate harmful and illicit content.
“Every day, individuals leverage generative AI tools to enhance their creative expression and productivity,” Steven Masada, assistant general counsel of Microsoft’s Digital Crimes Unit, said in a blog post. “Unfortunately, and as we have seen with the emergence of other technologies, the benefits of these tools attract bad actors who seek to exploit and abuse technology and innovation for malicious purposes.”
Exactly what the attackers were using the bypass of DALL-E 3 to create is not entirely clear – “harmful and illicit content” could range from AI-generated abuse material through to something Microsoft simply doesn’t like. Also unclear is why the attackers would go to such effort to bypass safety guardrails on DALL-E 3 when open-source tools that produce superior images are readily available. But we do know how they went about it.
“Unlike in other API attacks, where an attacker often targets business-critical data and running services, in this situation, we have the attackers setting up a shadow AI,” Katie Paxton-Fear, principal security researcher at application programming interface security company Traceable AI, told SiliconANGLE via email. “This worked by providing a DALL-E-like front end, which then sent users’ prompts to OpenAI via Azure.”
“The attackers would then check if it had been censored, to enable users to bypass the safety checks in the DALL-E front end on OpenAI’s website,” Paxton-Fear added. “By using legitimate OpenAI credentials of other users and businesses, stolen in other attacks, they were able to go unnoticed, moving their operations between many legitimate accounts.”
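To make the “shadow AI” architecture Paxton-Fear describes concrete, here is a minimal, purely illustrative sketch of the credential-swapping step at the heart of such a reverse proxy: the front end accepts a client request, strips the client’s own authorization, and substitutes a credential drawn round-robin from a pool of stolen keys before forwarding upstream. This is not the attackers’ actual code; the function name, the placeholder endpoint and all parameters are hypothetical.

```python
def build_upstream_request(client_request: dict, credential_pool: list, turn: int) -> dict:
    """Rebuild a client's image-generation request for the upstream API,
    swapping its auth header for a pooled credential (round-robin).

    Illustrative only -- endpoint and field names are made up for this sketch.
    """
    # Drop whatever authorization the client supplied to the shadow front end.
    headers = {k: v for k, v in client_request.get("headers", {}).items()
               if k.lower() != "authorization"}

    # Rotate through the credential pool so traffic spreads across many
    # legitimate-looking accounts, as described in the article.
    headers["Authorization"] = "Bearer " + credential_pool[turn % len(credential_pool)]

    return {
        "url": "https://example-upstream.invalid/v1/images/generations",  # placeholder
        "headers": headers,
        "body": client_request.get("body", {}),  # prompt passed through unchanged
    }
```

The key property, from a defender’s perspective, is that each forwarded request looks like ordinary traffic from a legitimate account; only the rotation pattern across accounts distinguishes it.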
That Microsoft is taking legal action in the case has also raised eyebrows. Cybersecurity expert Ophir Dror, co-founder of generative AI security company Lasso Security Inc., told SiliconANGLE that “the fact that Microsoft is taking this case to court seems exceptional” and that “it’s not always the case with such scenarios and might indicate a change in behavior from tech giants.”
Image: SiliconANGLE/Ideogram