Adversarial Machine Learning and the CFAA

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial machine learning is booming, with ML researchers increasingly targeting commercial ML systems, such as those used by Facebook, Tesla, Microsoft, IBM, and Google, to demonstrate vulnerabilities. In this paper, we ask, “What are the potential legal risks to adversarial ML researchers when they attack ML systems?” Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA’s application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.

Medium post on the paper. News article, which uses our graphic without attribution.
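For readers unfamiliar with the attack classes the abstract names, here is a minimal, hypothetical sketch of one of them: the loss-threshold form of membership inference, run against a deliberately overfit toy model. Nothing here is from the paper; the model, data sizes, and threshold are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 30 "member" points the model trains on, and 30 "non-member"
# points it never sees. With 100 features and random labels, the model
# can (and will) memorize its training set.
d = 100
X_train = rng.normal(size=(30, d))
y_train = rng.integers(0, 2, size=30).astype(float)
X_out = rng.normal(size=(30, d))
y_out = rng.integers(0, 2, size=30).astype(float)

# Deliberately overfit an unregularized logistic regression by gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X_train @ w))
    w -= 0.1 * X_train.T @ (p - y_train) / len(y_train)

def per_example_loss(X, y):
    """Cross-entropy loss of the trained model on each example."""
    p = 1 / (1 + np.exp(-X @ w))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

# The attack: guess "member" whenever the model's loss on a point is low,
# since memorized training points get unusually confident predictions.
threshold = 0.35
guess_member = per_example_loss(X_train, y_train) < threshold  # mostly True
guess_out = per_example_loss(X_out, y_out) < threshold         # mixed
accuracy = (guess_member.mean() + (1 - guess_out.mean())) / 2
print(f"membership-inference attack accuracy: {accuracy:.2f}")
```

Because the toy model memorizes its thirty training points, its per-example loss on them is far lower than on unseen points, and that gap is exactly the signal this class of attack exploits. Against a real commercial API, mounting the same probe is what raises the CFAA questions the paper analyzes.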

Posted on July 23, 2020 at 6:03 AM • 5 Comments


Clive Robinson July 23, 2020 11:56 AM

@ Bruce,

We argue that the court is likely to adopt a narrow construction of the CFAA

One can but hope…

As I’ve noted in the past, ICT-related legislation tends to be considerably over-broad in scope at the best of times, and prosecutors have tried very hard to open it up further with case law.

Whilst some judges do pull things in a bit, too many allow prosecutorial overreach to go too far.

A rule of thumb should be to test any proposed ICT legislation against the equivalent legislation that exists for non-ICT situations; any ICT legislation should be similarly restrained in scope.

After all, it is not illegal to walk up to somebody’s door and knock politely, and if you’ve made a nuisance of yourself there are civil remedies. However, ICT legislation makes the equivalent online activity criminal from the get-go, and it’s frequently treated as something worse than armed robbery.

Then there is the DMCA and the way it destroys the doctrine of “first sale” or as others put it “the right to tinker” and “The right to sell on”.

These “legal gains” are so lucrative for some that it is going to be difficult to get them changed in line with other legislation. Worse, as we know, with legislators these days it’s not a case of right or wrong but of who pays most for their time and who writes the legislation for them to rubber-stamp… Thus the citizens do not really get a say, and if they try, well, take a look around the streets of some US cities, where people hiding in ambiguous uniforms and face coverings grab people off the street for apparently no legal reason.

I guess people need to be reacquainted with the origins of the word “Terrorist”… Originally it described an ineffectual leader sending out forces/guard labour to “terrorise” the population and put them in fear.

George H.H. Mitchell July 23, 2020 1:03 PM

Is not “… predicting how the US Supreme Court may resolve some present inconsistencies …” an ideal potential application for machine learning?

MarkH July 23, 2020 1:06 PM

ICT means “Information and Communication Technology”, for the benefit of anyone (like me) not familiar with this initialism.

If it’s widely used in the U.S., I missed it. But I don’t get around much any more …

“ICT” seems to have first gained wide exposure in the U.K.

echo July 23, 2020 6:57 PM

I think the problem with Bruce’s paper is “it depends”. There will be systems which need protection not just because of computer misuse acts but also because of human rights and equality legislation and, yes, fraud legislation, which Bruce does mention. Just because someone can mount an attack doesn’t mean they should, so if Bruce is going down a typical American path and supporting “unqualified freedom”, not far removed from “unqualified free speech”, then Bruce simply does not get the law or the purpose of the law. Nor does Bruce get the issue of American monopoly legislation and regulation versus the rest of the world, especially the European Union and the UK. Nor does Bruce get the difference of emphasis between legal systems built on risk management versus prevention of harm. Until Bruce, America, and Americans get this in general, it will always cause friction.

In the UK at least there is no “the computer said so” defence. A human being is ultimately responsible and accountable regardless of whether a computer is running on autofire or a manual trigger.

Never faff with a live system or system intended to go live.

Why is it security is always about the boys’ toys? Why always the technical stuff? Why always an adversarial relationship? This is not to say these elements don’t have their place in the overall scheme of things, but some problems, or aspects of the problem, simply do not fit this model. Thus, the responsibility falls on security to make security practice better by understanding this. Perhaps before pulling the trigger?…

If I were Bruce I would withdraw this paper because it’s a career landmine waiting to go off.



