Model Extraction Attack on Neural Networks

Adi Shamir et al. have a new model extraction attack on neural networks:

Polynomial Time Cryptanalytic Extraction of Neural Network Models

Abstract: Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto’20 by Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext attack on a cryptosystem, which has a secret key embedded in its black-box implementation and requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons).

In this paper, we improve this attack by developing several new techniques that enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based DNN using a polynomial number of queries and a polynomial amount of time. We demonstrate its practical efficiency by applying it to a full-sized neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8 hidden layers with 256 neurons each, and about 1.2 million neuronal parameters. An attack following the approach by Carlini et al. requires an exhaustive search over 2^256 possibilities. Our attack replaces this with our new techniques, which require only 30 minutes on a 256-core computer.

Posted on October 10, 2023 at 7:09 AM

Comments

Winter October 10, 2023 8:41 AM

An attack following the approach by Carlini et al. requires an exhaustive search over 2^256 possibilities. Our attack replaces this with our new techniques, which require only 30 minutes on a 256-core computer.

Impressive speedup!

Now, when will we be able to do this with human brains? 😉

DavidH October 10, 2023 10:37 AM

@I feel like a 5 year old:

I am not an expert in this particular field, but have enough experience in related fields to do an ELI5 (I think).

When you get a neural network, it is usually a “black box”. That is, you put something in, and get something else out. How those two are actually connected is a mystery to the user. We are talking about “reverse engineering” the black box, and figuring out how the internals actually work. This is generally considered so “hard” that you can let people freely use your software without them being able to reverse engineer the “secret sauce” that makes it work just by observing the input and output.

This paper is saying they have figured out a way to make it significantly easier. On the test neural network, they figured out the secret sauce in 30 minutes. How well that translates to other, more complex neural networks might be a non-trivial story, but it might be a lot easier to reverse engineer these than we thought.

Emoya October 10, 2023 1:25 PM

There is a big push for AI transparency in an effort to provide explanations of how AIs reach their conclusions. I’m guessing that this attack is not quite the same thing?

From both product improvement and potential regulatory perspectives, it seems that developers of an AI product may benefit from knowing how the AI arrives at its results, but they would also want to maintain the secrecy of their “sauce”.

Clive Robinson October 10, 2023 2:55 PM

@ Winter, ALL,

Re : New model extraction attack on ReLU DNNs

“Impressive speedup!”

Yes but is it general to all “Deep Neural Networks”(DNNs) or specific to just one type or range of types?

The answer to which is important to your second humorous question,

“Now, when will we be able to do this with human brains?”

To which the answer –almost certainly never– hinges on being able to cast light into the linear part of the black box hidden behind the “activation functions” of each neuron.

However for DNNs it becomes quite interesting, and it boils down to,

Getting behind the neuron “activation function”

Which differs across the many and varied types of artificial neural networks, and is also seen as a significant part of “the magic sauce” that differentiates not just neural networks but the ways they work.

The question thus becomes,

How much of a “One Way Function”(OWF) is the activation function?

In part that depends on how linear the translation is –not very– or whether some other attack, say similar to a “dictionary attack”, is feasible.

To see why the “activation function” is so important in non-biological Neural Networks, you first have to understand what a neuron is from a computing aspect. As I’ve noted in the past, DNNs are little more than an outgrowth of “Digital Signal Processing”(DSP) networks going back to the 1970s, when general CPUs aided by early Maths Co-Processors became powerful enough to do basic low frequency audio in something approaching “real time”.

The base primitive for DSPs and thus even modern DNNs is usually called a “Multiply and ADd”(MAD) or “Multiply and Accumulate”(MAC) instruction. All it does is take a discrete input, multiply it by a constant, and add the result to a running total that eventually becomes the output of the DSP or DNN sub circuit (which we call a neuron in DNNs).

In a DNN those MAD constants are the “weights”. However it quickly becomes clear that a quite significant problem arises with the “running total”: summing N discrete inputs of p bits of precision each needs roughly p plus log2(N) extra bits to hold exactly. In modern DNNs a thousand or more inputs to a neuron is quite desirable. But these then become the inputs to following neurons, so unless some form of compression or normalisation is used at the neuron output things quickly become impossible to deal with.
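To make that concrete, here is a minimal Python sketch –my own illustration, nothing to do with real DSP silicon– of a neuron as a multiply-and-accumulate loop, plus the accumulator width arithmetic:

    import math

    # One "neuron" as a chain of Multiply-and-ACcumulate (MAC) steps.
    def neuron(inputs, weights, bias=0.0):
        acc = bias
        for x, w in zip(inputs, weights):
            acc += x * w              # the MAC primitive: multiply, then add
        return acc                    # pre-activation output of the "neuron"

    # Accumulator growth: summing N values of p-bit precision each needs
    # about p + ceil(log2(N)) bits to hold the total exactly.
    p, N = 16, 1024
    print(p + math.ceil(math.log2(N)))   # 26 bits for a 1024-input neuron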

But there is another problem, which is the type and bias, as well as the precision, of the output. In organic neurons this is effectively solved by the input-to-output mapping being effectively logarithmic, which is why telephone networks developed A-Law companders.

“Rectified Linear Units”(ReLU)[1] are a form of activation function on the output of a neuron and are now quite commonly used in deep learning models. But they are far from the only type. For instance a form of “log” system has been used as a variation of an A-Law compressor, and this gave rise to the Sigmoid and tanh() functions.

In practice these input-to-output functions are often implemented by a “look-up table”(LUT), as that is about as fast as you can reasonably expect. However there is a penalty for this speed, and that is the size of the ROM needed for the LUT. So other, not quite as fast, methods are considered; these are generally a “poor man’s approximation” that uses a few simple integer comparisons[2] to implement a form of piecewise linear approximation.
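As a rough illustration –again my own sketch in Python, with an input range and breakpoints I picked for the example, not how any particular chip does it– here is a LUT based sigmoid next to a “poor man’s” piecewise linear one built from a few comparisons:

    import math

    # ROM/LUT approach: precompute sigmoid over [-8, 8) in 256 steps.
    LUT_SIZE, LO, HI = 256, -8.0, 8.0
    SIGMOID_LUT = [1.0 / (1.0 + math.exp(-(LO + (HI - LO) * i / LUT_SIZE)))
                   for i in range(LUT_SIZE)]

    def sigmoid_lut(x):
        # Fast, but costs ROM proportional to LUT_SIZE.
        i = int((x - LO) / (HI - LO) * LUT_SIZE)
        return SIGMOID_LUT[max(0, min(LUT_SIZE - 1, i))]

    def sigmoid_piecewise(x):
        # A few comparisons and one straight line segment: no ROM at all.
        if x <= -4.0:
            return 0.0
        if x >= 4.0:
            return 1.0
        return 0.5 + x / 8.0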

It’s known that some functions are effectively “One Way Functions”(OWFs) that are resource-inexpensive in the forward direction but extremely expensive in the reverse direction.

The speed of this “attack” is directly governed by the effort and resources required to reverse the OWF and “look behind the ‘activation function’”.

Thus for some activation functions this is going to be both fast and inexpensive, for others slow and expensive.

From the point of view of the maker of a DNN system, it would appear at first sight to be in their interests to stop the black box having light cast into it. Thus this paper will probably be seen as a reason to go with near-OWF activation functions…

But that runs into a more interesting problem: to train a DNN to find the weights, a process called “backpropagation” is used.

Worse, whilst organic neurons are effectively only “close neighbour” dependent, and thus can “learn asynchronously” or independently of each other, this is very far from the case with current DNNs, where all neurons “learn synchronously” in a near fully dependent way, thus all DNN neurons have to be switched on to some extent. That is not just a significant failing of DNNs, it is also the reason the initial training process of a DNN is so expensive in resources.

Whilst the idea of an activation function in DNN neurons was directly inspired by the “action potential” in an “Organic Neural Network”(ONN) half a century ago, there are few similarities between the ONN and DNN mechanisms these days.

ONN action potentials do not return continuous values, and unlike in a DNN they do not need to. An organic neuron is either triggered or it is not, much like a threshold step function (think of an on/off switch).

However for DNN training, backpropagation has to be possible, so a neuron can never be fully “switched off”: every neuron is required for the gradients to flow backwards and thus for the DNN to learn. Hence it is a highly inefficient process requiring much power for the silicon to switch quickly, and much water to remove the excess heat due to I^2R effects.
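A toy Python example –illustrative only– of that backpropagation chain through a single ReLU neuron; note the weight only updates while the gradient can flow through the activation:

    def relu(z):
        return max(0.0, z)

    # One gradient-descent step for y = relu(w * x), loss = (y - t)^2.
    def backprop_step(w, x, t, lr=0.1):
        z = w * x
        y = relu(z)
        dL_dy = 2.0 * (y - t)            # gradient of the squared error
        dy_dz = 1.0 if z > 0 else 0.0    # ReLU passes gradient only when "on"
        dL_dw = dL_dy * dy_dz * x        # chain rule, output back to weight
        return w - lr * dL_dw

    w = 0.5
    for _ in range(50):
        w = backprop_step(w, x=1.0, t=2.0)
    print(round(w, 3))                   # converges towards 2.0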

[1] A Rectified Linear Unit, or ReLU, is a bit of a mouthful to say let alone remember 😉 However understanding a ReLU by looking at its input-to-output mapping makes it appear almost stupidly simple, once you get behind the usual “f(x)=max(0,x)” you see given in texts.

Simply, for any input in range, the ReLU function acts as a “perfect positive halfwave rectifier” and only outputs what is above the zero line (ie positive or zero),

https://commons.wikimedia.org/w/index.php?curid=63474513

In effect, the function returns 0 if the input is negative, and if the input is positive, the function returns the same positive value.

Simplistically it works by “examining the sign bit”, and thus is very fast as a macro in assembler (two lines of code excluding data movement instructions).

1, Load value from stack or register
2, Branch if positive to (4)
3, XOR value with value
4, Save value to stack or register.

It also has other advantages in that it can implement “sparsity”, in that a neuron’s output is turned off when it is not usefully being used. Thus as an “activation function” it is quite useful, and thus popular.

[2] As I’ve explained before, a simple two-line approximation to a sinewave is a triangle wave. The difference between the two can then be approximated by a very much reduced ROM in the LUT that is identical for each quadrant, but needs its inputs and outputs complemented –bit inverted by XOR– in the correct way for each quadrant. Thus the size of the ROM in the LUT can be reduced by 2^10 quite easily, more if a differential or further “straight line approximation” mode can be used.
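For the curious, a little Python sketch –mine, purely illustrative, showing just the quadrant folding part of the trick, not the triangle-wave difference– of storing only one quarter of the sine wave in “ROM” and recovering the other three quadrants by complementing inputs and outputs:

    import math

    # "ROM": just the first quadrant of a sine wave, 64 entries.
    QUARTER = [math.sin(math.pi / 2 * i / 64) for i in range(64)]

    def sin_lut(phase):                  # phase 0..255 is one full cycle
        quadrant, idx = divmod(phase % 256, 64)
        if quadrant in (1, 3):
            idx = 63 - idx               # complement the input (time mirror)
        val = QUARTER[idx]
        return -val if quadrant >= 2 else val   # complement the output

    print(sin_lut(64))    # ~1.0, top of the wave
    print(sin_lut(192))   # ~-1.0, bottom of the wave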

Winter October 11, 2023 2:10 AM

@Clive

Yes but is it general to all “Deep Neural Networks”(DNNs) or specific to just one type or range of types?

As the abstract says, it covers only networks using the ReLU activation function [1]. That function is important as it is simple enough –a cut-off and a line– to train the networks fast. It is probably the simplest, computationally cheapest non-linear function that works with current deep learning procedures.

To which the answer –almost certainly never– hinges on being able to cast light into the linear part of the black box hidden behind the “activation functions” of each neuron.

That, and the fact that mammal neural systems are non-linear on so many scales. Most importantly, these attacks work on feed-forward networks without closed loops/feedback (see definition 1 and Figure 1) and without memory. Without loops and memory, the network has no state, and the output is a stateless function of the input in the current session (which can be loooong).

If anything, mammal brains are the opposite of feed-forward. “Brains” are one big collection of closed loops, with occasional nodes that connect with the outer world, and lots of memory. Which means that brains have hysteresis: the result of an input depends on all previous inputs (and the time of day, day of the week, date, year and weather). There is never a “reset” to a clean state.

It is “difficult” to analyze such a non-linear system with loops and memory from input-output relations alone.

[1] ‘https://en.wikipedia.org/wiki/Rectifier_(neural_networks)

ResearcherZero October 11, 2023 2:22 AM

It will take time to see if it is a jumping off point, or where it might lead.

Secrecy is harder to maintain when you pop something out in the field. But it still remains a difficult proposition, given the complexities of analyzing such ‘black box’ systems, which may be deployed in a range of configurations.

Attacking one specific implementation is much easier than a whole cart load.

Even widely useful solutions have their limitations. There are however a number of ways to analyse and reduce noise (filtering for example), or build specific circuits that improve processing with lower signal corruption.

bl5q sw5N October 11, 2023 3:25 AM

@ Winter

on feed-forward networks without closed loops/feedback (see definition 1 and Figure 1) and without memory.

These deep NN can be said to have loops since the training process uses feedback; and they have memory, namely the weights that the training produces. It’s just that the feedback loop is not present as part of the network connectivity layout.

one big collection of closed loops with occasional nodes that connect with the outer world, and lots of memory.

ANN designs incorporating loops and feedback in the network connections follow natural networks more closely and in learning can avoid the offline ad hoc (from the point of view of connectionism) external looping. S. Grossberg’s research systematically works from this viewpoint.

Winter October 11, 2023 4:54 AM

@bl5q sw5N

These deep NN can be said to have loops since the training process uses feedback

But the crucial part is that the feedback is not present during the attack phase. Then the network is frozen and differential input selection tries to simulate feedback.

ANN designs incorporating loops and feedback in the network connections follow natural networks more closely and in learning can avoid the offline ad hoc

These currently cannot be efficiently trained. I have not followed developments closely, but those of Grossberg’s networks I have seen described have at most one hidden layer, never tens of layers. Feed-forward networks are chosen because very large and deep networks can be trained in a human lifetime. And to produce GPT and Dall-E you need huge networks.

bl5q sw5N October 11, 2023 9:28 AM

@ Winter

These currently cannot be efficiently trained. I have not followed developments closely, but those of Grossberg’s networks I have seen described have at most one hidden layer, never tens of layers.

Perhaps there is still no insight into whether it’s possible or feasible to train a completely “general” recursive ANN. However Grossberg uses recursive networks modeled on natural recursive networks as building blocks and these have excellent training and learning characteristics.

Winter October 11, 2023 10:11 AM

@bl5q sw5N

However Grossberg uses recursive networks modeled on natural recursive networks as building blocks and these have excellent training and learning characteristics.

But we do not know how these natural networks learn. The development of neural networks is very complex and neural connections are not at all like ReLU activation. Some things we would like to use are:

  • Temporal spike coding (is very energy efficient)
  • Positional and time integrating coding on the input of neurons
  • Temporal sensitivity of synapses (connections)

But even thinking about how to program the updating of these connections during training blows the mind. On the other hand, maybe nature uses an extremely simple and obvious way to update neural connection strengths during learning [1].

[1] An example is Hebbian learning, which is so simple it is embarrassing.
‘https://en.wikipedia.org/wiki/Hebbian_Learning
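To show just how simple, a one-function Python sketch of the basic Hebbian rule (illustrative; real models add decay or normalisation terms so the weights cannot grow without bound):

    # Hebbian learning: "neurons that fire together, wire together".
    # The weight grows in proportion to the product of the two activities.
    def hebbian_update(w, pre, post, eta=0.01):
        return w + eta * pre * post

    w = 0.0
    for pre, post in [(1.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
        w = hebbian_update(w, pre, post)
    print(w)   # 0.02: strengthened only when both units were active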

Clive Robinson October 11, 2023 11:35 AM

@ Bruce, ALL,

Re : Faux-AI bubble building and profiteering.

People should read,

https://www.theregister.com/2023/10/11/github_ai_copilot_microsoft/

“Microsoft is reportedly losing up to $80 a month per user on its GitHub Copilot services.

According to a Wall Street Journal report citing a “person familiar with the figures,” while Microsoft charges $10 a month for the service, the software giant is losing $20 a month per user on average and heavier users are costing the company as much as $80 every 30 days.”

People should ask why, especially as Microsoft is up to its neck in litigation over forcing people from the relative freedom of one-time “Purchase Licencing” into the oppression of death-by-a-thousand-cuts perpetual “Subscription Licencing”, which some call “death by ‘salami slicing’ licencing”[1].

The answer is “Knoweth a dog by its vomit”, especially when it keeps re-eating its own dog food or worse. As most readers here know, Microsoft has for nearly half a century had the reputation of being the sort of mutt that plans to do more than “hump your leg”.

Well the bad news is AI is power hungry, thus expensive to run, not just in ever faster hardware but in ongoing costs like energy…

It appears Microsoft has decided that you must have part of its AI on your personal system. The problem is Win 11 has been deliberately eviscerated by Microsoft to force you into hardware upgrades. Even before its “MS AI Inside” dreams, Win 11 needed lots of new hardware, and it appears from recent whisperings it will require more for MS AI Inside, specifically AI hardware that will almost certainly require subscriptions to work…

Read down the article to see what is perhaps “The Second Sign Post to Disaster”,

“One way that Big Tech has tried to control the cost [Profit from users] of AI is with custom accelerators, such as Google’s Tensor Processing Unit and Amazon’s Trainium and Inferentia silicon. Now, if a report last week is to be believed, Microsoft may be about to reveal [to the public] its own custom AI accelerator.”

As with much in the increasingly “surveillance economy”, where personal data is king and seen as gold in your treasury, getting your tech in by making others pay for it, thus “locking them in” at their own expense, is one of those “High Five” moments usually only major Governments get via abusive legislation and regulation, as guard labour and night sticks are so last century.

However this has been Microsoft’s proto-business model since before the IBM PC, and most reading here had nappy rash the first time around (look back at the Fritz Chip –named by Prof Ross J. Anderson– and similar, which got watered down to the Trusted Platform Module (TPM), ceding more and more control thus power to Microsoft in a “Think of the DRM” dog whistle move).

What better way to get you onto Microsoft AI, not Google or others, than by controlling the door?

That is, put not “general” purpose but MS-Optimized/Only hardware onto people’s systems (remember the fun of the FTDI serial driver issue?). It gives you almost unimaginable control, especially when your own files are effectively “held hostage”, much as Ransomware does.

So, whilst called “Large”, LLMs are a little peculiar in the size department. Yes, to get a base DNN built requires almost unimaginable resources, however as I’ve previously noted it is a little like geography. That is, the initial build is the foundational “Earthquake and Mountain raising” stage of the landscape; you then move into erosion, which smooths things and builds natural pathways; and then man with tools in his hands shuffles in to shape the landscape by adding roads, buildings etc. Thus the final user front end to an AI system can be put onto a laptop or personal computing device that then connects back to the big-end “subscription service” of Microsoft’s Cloud…

Then a little feature rearrangement down the road and you are tied in and trussed up like a turkey ready for the slow roast to hell.

But read the last part of the article,


“Announced in 2022, GitHub’s Copilot employs OpenAI’s large language models (LLMs) to assist programmers as they write and debug code in IDEs including Microsoft’s own Visual Studio Code. Copilot essentially suggests source to drop into your projects as you type out comments, function definitions and code, and other lines. In March 2023 the platform got an upgrade to OpenAI’s GPT-3.5 and GPT-4 models.

We’ve asked Microsoft for comment on the cost of running these AI models; we’ll let you know if we hear back.

Running products at a loss is a common tactic across the technology industry, with the aim of building a dedicated user base and increasing prices once users are hooked. Microsoft sells its Xbox games console line below cost and recoups that loss as players spend on software and other content.

The same logic could apply to AI — a market Microsoft is investing heavily in to secure a first-mover advantage.

It’s no secret that the hardware used to train and run most LLMs is expensive. Nvidia’s H100 accelerators sell for around $30,000 apiece, and we’ve seen them priced at $40,000 on eBay.

Microsoft employs tens of thousands of Nvidia A100s and H100s. This AI hardware and the servers it lives in guzzle electricity, too.

It’s hard to calculate the cost of Copilot’s ongoing operations, though OpenAI CEO Sam Altman has stated GPT-4 — the most advanced version of the company’s LLM — cost more than $100 million to train.

One way that Big Tech has tried to control the cost of AI is with custom accelerators, such as Google’s Tensor Processing Unit and Amazon’s Trainium and Inferentia silicon. Now, if a report last week is to be believed, Microsoft may be about to reveal its own custom AI accelerator.

OpenAI is rumored to be considering working on its own custom processor for its ML workloads, too.

Microsoft’s current generative AI workloads are running on GPUs, largely down to the latency and bandwidth requirements of these models, which has made running them on CPUs impractical, Cambrian AI analyst Karl Freund told The Register.

As a result, these models benefit most from large quantities of high-bandwidth memory to hold all the model’s parameters. For really large systems, such as OpenAI’s 175 billion parameter GPT-3 model, multiple GPUs may be required per instance.

But it’s worth noting GitHub Copilot isn’t a general purpose chatbot like ChatGPT or Bing Chat: it does code. As we understand it, more specialized models usually can get away with fewer parameters and thus require less memory and therefore fewer GPUs.

As AI providers get a better grasp on the economies of scale involved, we could see higher prices for these features. Otter.AI, an AI-powered audio transcription service beloved by journalists and others, has raised prices and implemented new consumption limits on several occasions over the past few years.

Meanwhile, Microsoft and Google plan to charge a $30 premium on top of the regular Office 365 subscription and G-Suite plans to unlock gen-AI functionality.

It would not be surprising if Microsoft raised the price of GitHub Copilot once it has demonstrated its value to enough customers. After all, it’s not a charity.”

Or in any way charitable, so you’ve seen a hint of the potential “lock-in and off-loading” they have planned for their alleged customers…

[1] You will find both “death by a thousand cuts” and the more succinct “salami slicing” used as terms for emphasising “scams” where an opponent that has a monopoly or other power position takes everything they think they can from you “one cut/slice at a time”, repeatedly, like an incessant drum beat. In effect it’s a form of “blackmail on a plan” or “easy instalments”, only it’s not easy and the only plan is the “drug dealer’s business model” of “get you hooked, then keep upping the cost”.

You will find various references to “Salami Slicing Scams” up on the internet, though few tend to mention it in terms of anything other than criminal behaviour… For a historical perspective,

https://www.nofreelunch.co.uk/blog/what-is-salami-slicing-scam/

Clive Robinson October 11, 2023 2:19 PM

@ Bruce, ALL,

Re : LLM power consumption, its costs, and “global warming” effects.

It would appear it is becoming –if you will pardon the joke– “rather a hot topic” all of a sudden.

https://www.theregister.com/2023/10/11/ai_processing_electricity/

Whilst LLMs sucking the juice out of the grid should not exactly be surprising, seeing as an “LLM Rig” is not that different to some “Crypto Coin Rigs”, as I’ve mentioned before… It appears that you need a “Uni Paper” to get people to listen…

So the paper by Alex de Vries, a researcher at the Vrije Universiteit Amsterdam, has got some journos doing “the hot foot dance” all of a sudden.

For those that prefer their papers unfiltered by a journo’s multi-coloured glasses,

https://doi.org/10.1016/j.joule.2023.09.004

Reading it will show there is a heck of a lot of energy going into a not particularly efficient process, which means a lot of heat boiling off the cooling water. And as that water may well be “potable”, that is another heck of a load of energy going into that process…

Winter October 12, 2023 3:03 AM

@bl5q sw5N

Grossberg’s ansatz is ART, adaptive resonance theory

That is not the level I was concerned with. I was thinking about how complex individual neural connections, eg, synapses, are updated based on local information (membrane potentials, spike firings, neurotransmitters). Hebbian Learning is at the level of synapses and individual neurons.

ART works at a higher level. It updates the individual connections more like current back-propagating network learning. In back propagation, updates percolate from the “output” back to the “input”. This means the input layers have to wait for all the later layers to finish processing before they can update. That sounds very inefficient. ART also smells too much like how the eye’s retina works. I do not expect that to easily translate to a general mechanism for a complete brain.

Winter October 12, 2023 3:06 AM

@Clive

It would appear it is becoming –if you will pardon the joke– “rather a hot topic” all of a sudden.

We had some items in very popular programs that showed how big data centers were gobbling up all our wind and solar energy. The result was that plans for a big Meta data center were scrapped by the locals, kicking out the politicians behind it, as well as changing the rules at the national level.
