Indistinguishability Obfuscation

Quanta magazine recently published a breathless article on indistinguishability obfuscation — calling it the “‘crown jewel’ of cryptography” — and saying that it had finally been achieved, based on a recently published paper. I want to add some caveats to the discussion.

Basically, obfuscation makes a computer program “unintelligible” while preserving its functionality. Indistinguishability obfuscation is more relaxed. It just means that two different programs that perform the same functionality can’t be distinguished from each other. A good definition is in this paper.
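The definition can be illustrated with a toy example (purely illustrative; a real iO scheme operates on circuits, and `obfuscate` here is a hypothetical placeholder, not an actual construction):

```python
# Two syntactically different programs that compute the same function.
# An indistinguishability obfuscator guarantees that, for functionally
# equivalent programs A and B of the same size, obfuscate(A) and
# obfuscate(B) are computationally indistinguishable -- an observer
# can't tell which original program the obfuscated output came from.

def prog_a(x: int) -> int:
    return x * 2

def prog_b(x: int) -> int:
    return x + x

# Functional equivalence on a sample of inputs (the precondition iO needs):
assert all(prog_a(x) == prog_b(x) for x in range(-1000, 1000))
```

Note that this is a weaker-sounding guarantee than making a single program “unintelligible,” which is why it was surprising that iO suffices to build so many other cryptographic primitives.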

This is a pretty amazing theoretical result, and one to be excited about. We can now do obfuscation, and we can do it using assumptions that make real-world sense. The proofs are kind of ugly, but that’s okay — it’s a start. What it means in theory is that we have a fundamental theoretical result that we can use to derive a whole bunch of other cryptographic primitives.

But — and this is a big one — this result is not even remotely close to being practical. We’re talking multiple days to perform pretty simple calculations, using massively large blocks of computer code. And this is likely to remain true for a very long time. Unless researchers increase performance by many orders of magnitude, nothing in the real world will make use of this work anytime soon.

But but, consider fully homomorphic encryption. It, too, was initially theoretically interesting and completely impractical. And now, after decades of work, it seems to be almost just-barely maybe approaching practically useful. This could very well be on the same trajectory, and perhaps in twenty to thirty years we will be celebrating this early theoretical result as the beginning of a new theory of cryptography.

Posted on November 23, 2020 at 6:04 AM • 7 Comments

Comments

Some Crypto Guy November 23, 2020 6:34 AM

Hi Bruce, maybe I’m wrong, but at a first skim of the paper it looks to me that the result has some limitations. Am I mistaken that they “only” achieve subexponential security? That might be OK-ish for a cryptosystem such as RSA that trades it off against decent performance, but for a behemoth of complexity and slowness like iO it is clearly a big limitation (as you also point out). My question is rather whether it can be considered a theoretical breakthrough at all: would you put it on the same level as Gentry’s FHE?

Also, are the hardness assumptions in the paper really standard? I think they mention “LWE with subexponential modulus-to-noise ratio” and the “Symmetric eXternal Diffie-Hellman (SXDH) assumption on asymmetric bilinear groups” (in addition to two other, more standard assumptions).

Chris Peikert November 23, 2020 7:56 AM

A quibble: it has only been one decade (not decades) since the first work showing theoretical feasibility of fully homomorphic encryption. As you say, during that time we have seen many orders of magnitude improvement. We can probably expect something like that for obfuscation as well.

bcs November 23, 2020 10:16 AM

I wonder if this result has anything new to say about compiler optimizations. That is to say, if those two programs being very large and slow is critical to them being indistinguishable, then an optimizing compiler able to substantially reduce their size and/or runtime would constitute a break. Going in the other direction, a proof that the programs can’t be distinguished within a given amount of work would also constitute a work bound on the problem of optimizing away the properties that proof depends on.

Clive Robinson November 23, 2020 6:14 PM

@ Sofakinbd,

From the article you link to,

“400,000 AI-optimized, no-cache, no-overhead, compute cores and 18 gigabytes of local, distributed, superfast SRAM memory as the one and only level of the memory hierarchy.”

If you look far enough back on this blog, it’s the sort of architecture (not the AI-optimized bit) I was talking about a decade or more back, and that I actually built research prototypes for.

Only I was looking at a million 8-bit cores (it is only a hundred by a hundred by a hundred cores) with very local SRAM that got rid of the use of register files.

As I said back then two things to remember,

1, You cannot beat the speed of light and its physical limits.

2, Because of (1), the future will be massively parallel at all levels, from tiny cores up to globe-spanning systems (and beyond).

Which gives rise to an inhibiting observation,

3, Most humans, including most programmers, can only think in terms of “sequence”; it’s why we call it “sequential programming”, and it’s a significant limitation.

Back in the early 1990’s I was looking around for a PhD research project to do. For various reasons I had strong interests in not just communications but the myriad of disjoint databases that supported them. Thus I wanted to do research on the base requirements for truly distributed, parallel, secure databases. As you can probably appreciate, it turned out to be impossible to find “readers/supervisors” who knew anything sufficient in all the areas required… Now, soon to be four decades later, the same issues I wanted to investigate are still mainly “open to debate”, with people just picking at little bits, all separate from each other in different knowledge domains and sometimes coming up with almost opposite findings…

We need to be training the few who can naturally think in terms of parallel operation to be better at it, as well as training rather more people who do not do it naturally to be able to do it at some level. Otherwise we are just going to drop into a black hole of inefficient behaviours with only highly specialised applications taking efficient advantage of what parallel systems offer (and nature does fairly easily as a matter of course).

Ismar November 23, 2020 6:41 PM

” After all, if it didn’t, then if you put both programs through the obfuscator, you’d be able to tell which obfuscated version came from your original program.”

How about the other side of the coin here: when we do need to trace software back to its origins, is this going to obfuscate that too?

MrC November 23, 2020 7:40 PM

I occasionally get clickbaited into reading Quanta magazine articles and every single time I find myself disappointed by a very overhyped nothingburger. Sounds like this is just about the same.
