Microsoft Takes Legal Action Against AI “Hacking as a Service” Scheme

Not sure this will matter in the end, but it’s a positive move:

Microsoft is accusing three individuals of running a “hacking-as-a-service” scheme that was designed to allow the creation of harmful and illicit content using the company’s platform for AI-generated content.

The foreign-based defendants developed tools specifically designed to bypass safety guardrails Microsoft has erected to prevent the creation of harmful content through its generative AI services, said Steven Masada, the assistant general counsel for Microsoft’s Digital Crimes Unit. They then compromised the legitimate accounts of paying customers. They combined those two things to create a fee-based platform people could use.

It was a sophisticated scheme:

The service contained a proxy server that relayed traffic between its customers and the servers providing Microsoft’s AI services, the suit alleged. Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company’s Azure computers. The resulting requests were designed to mimic legitimate Azure OpenAI Service API requests and used compromised API keys to authenticate them.
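The core of the architecture described — a proxy that accepts customers’ requests and re-authenticates them upstream with compromised credentials so they look like legitimate traffic — can be sketched in outline. This is a hypothetical illustration, not taken from the complaint; the header names, endpoint, and key values are all invented:

```python
# Illustrative sketch of the relay pattern alleged in the suit: the proxy
# strips the illicit service's own customer token and re-authenticates the
# request upstream with a stolen API key, so the upstream service sees what
# looks like a legitimate, paying customer's request.

def rewrite_for_upstream(headers: dict, compromised_key: str) -> dict:
    """Return a copy of the request headers re-authenticated with a stolen key."""
    # Drop any credentials the proxy's customer supplied, plus the Host header.
    upstream = {k: v for k, v in headers.items()
                if k.lower() not in ("authorization", "api-key", "host")}
    upstream["api-key"] = compromised_key  # Azure-style key header (hypothetical)
    upstream["Host"] = "victim-resource.openai.azure.example"  # invented endpoint
    return upstream

# A request as it arrives at the illicit proxy from one of its customers:
incoming = {
    "Host": "illicit-proxy.example",
    "api-key": "customer-token",
    "Content-Type": "application/json",
}
forwarded = rewrite_for_upstream(incoming, "STOLEN-KEY")
# forwarded now authenticates with the compromised key, not the customer's.
```

The point of the sketch is only that the upstream service cannot distinguish this traffic from the legitimate key owner’s, which is why compromised API keys were central to the scheme.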

Slashdot thread.

Posted on January 13, 2025 at 7:01 AM • 9 Comments

Comments

Kukumix January 13, 2025 7:17 AM

Is Microsoft masking its own mistakes here?

“They then compromised the legitimate accounts of paying customers” — we have seen multiple deep security holes in Azure in the past. Did they exploit those?

“Among other things, the proxy service used undocumented Microsoft network application programming interfaces (APIs) to communicate with the company’s Azure computers” — this sounds like another security hole in Microsoft.

Clive Robinson January 13, 2025 9:29 AM

@ Bruce, ALL,

In another “news source”,

https://arstechnica.com/security/2025/01/microsoft-sues-service-for-creating-illicit-content-with-its-ai-platform/

Dan Goodin says,

“All 10 defendants were named John Doe because Microsoft doesn’t know their identity.”

And goes on to say seven of them were customers of the illicit service.

And more importantly, it appears access credentials were compromised; Microsoft is not saying how, or on whose servers the credentials were stored.

Another article indicates Microsoft has been busy in the courts getting permission to raid and seize a server…

Steve January 13, 2025 5:55 PM

From the story:

Microsoft is also suing seven individuals it says were customers of the service. All 10 defendants were named John Doe because Microsoft doesn’t know their identity.

In other words, lawsuit as performance art.

ResearcherZero January 14, 2025 1:44 AM

@Kukumix

Microsoft forbids doing things with its products that it hoped would not happen, that it never imagined would happen, or that are completely possible but should not be done.

Microsoft would never make a mistake by bringing products to market that are full of gaping security flaws. It’s the users who were non-compliant with the Microsoft law. If, for example, a foreign government were to break into Microsoft’s network and steal tokens for its APIs, or compromise the credentials for accounts, it would totally be the users’ fault.

ResearcherZero January 14, 2025 2:00 AM

Generative AI and LLMs are like a small child, which you have placed in a forest, given a box of matches (in the middle of summer), then kindly asked to please not ignite the forest. Clearly the child’s fault, and not a decision a responsible adult would make.

Silicon Valley is fortunately situated a long way from any side-effects of disruption.

Anonymous January 14, 2025 8:01 AM

What sort of “illicit” content did they create exactly?

I can’t see how any AI generated content could be more “harmful” than, say, photoshopped content, animated content, content researched normally, etc.

The only thing they did wrong that I can see was compromising the accounts of existing uninvolved customers.

Who? January 15, 2025 12:40 PM

AI systems need to be trained, just as young children do.

The difference is that children go to school, while AI systems are given access to anything available on the Internet. Go figure… and then we complain that LLMs hallucinate…

Who? January 15, 2025 1:09 PM

“All 10 defendants were named John Doe because Microsoft doesn’t know their identity.”

So the defendants must be the group “Anonymous.” On the bright side, at least Microsoft is not accusing the real owners of the compromised API keys. Undocumented APIs are not the problem here; we cannot depend on these APIs remaining hidden forever. The real issue is that these services do not use appropriate digital keys to authenticate both ends.

We do not need undocumented APIs to securely flash the firmware on a workstation; we need to rely on cryptography and the fact that these firmware updates are signed with a private key whose public counterpart is shipped with the workstation.
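The asymmetric model described here — vendor signs with a private key, device ships only the public key and verifies before flashing — can be sketched as follows. This is a minimal illustration using the third-party `cryptography` package with Ed25519; the firmware names are invented:

```python
# Sketch of asymmetric firmware verification: the private key never leaves
# the vendor; the device holds only the public key, so stealing what is on
# the device does not let an attacker forge updates.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Vendor side: sign the firmware image with the private key.
vendor_key = Ed25519PrivateKey.generate()
firmware = b"firmware-image-v1.2"  # hypothetical image
signature = vendor_key.sign(firmware)

# Device side: only the public key is present; verify before flashing.
public_key = vendor_key.public_key()
public_key.verify(signature, firmware)  # raises InvalidSignature on failure

# A tampered image must be rejected:
try:
    public_key.verify(signature, b"tampered-image")
    tamper_detected = False
except InvalidSignature:
    tamper_detected = True
```

By contrast, a symmetric scheme would require the same secret key on both ends, so any compromise of the verifying side also compromises signing — which is the weakness the comment above is pointing at.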

It looks like MSFT has chosen some sort of symmetric keys, or something really weak.

A proxy would just be “an alternative communication path” if encryption is done in the right way. I doubt something like this can be done using an SSH proxy.
