## Indistinguishability Obfuscation

Quanta magazine recently published a breathless article on indistinguishability obfuscation—calling it the “‘crown jewel’ of cryptography”—and saying that it had finally been achieved, based on a recently published paper. I want to add some caveats to the discussion.

Basically, obfuscation makes a computer program “unintelligible” while preserving its functionality. Indistinguishability obfuscation is more relaxed. It just means that the obfuscations of two different programs that perform the same functionality can’t be distinguished from each other. A good definition is in this paper.
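To make the definition concrete, here is a minimal sketch of the *setting* only (iO itself is nowhere near implementable, and the obfuscator `iO` named in the comments is hypothetical): two syntactically different programs that compute the same function, which is exactly the pair an indistinguishability obfuscator would render indistinguishable.

```python
# Two different-looking programs with identical input/output behavior.
# An indistinguishability obfuscator iO (hypothetical here) would guarantee
# that iO(prog_a) and iO(prog_b) are computationally indistinguishable,
# precisely because the underlying programs compute the same function.

def prog_a(x: int) -> int:
    """Doubles x by addition."""
    return x + x

def prog_b(x: int) -> int:
    """Doubles x by a bit shift -- different code, same function."""
    return x << 1

# The two implementations agree on every input we check:
assert all(prog_a(x) == prog_b(x) for x in range(-1000, 1000))
```

Note that iO says nothing about hiding *which function* is computed, only about hiding *which of the equivalent programs* was obfuscated; the power of the notion comes from how it composes with other primitives.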

This is a pretty amazing theoretical result, and one to be excited about. We can now do obfuscation, and we can do it using assumptions that make real-world sense. The proofs are kind of ugly, but that’s okay—it’s a start. What it means in theory is that we have a fundamental theoretical result that we can use to derive a whole bunch of other cryptographic primitives.

But—and this is a big one—this result is not even remotely close to being practical. We’re talking multiple days to perform pretty simple calculations, using massively large blocks of computer code. And this is likely to remain true for a very long time. Unless researchers increase performance by many orders of magnitude, nothing in the real world will make use of this work anytime soon.

But but, consider fully homomorphic encryption. It, too, was initially theoretically interesting and completely impractical. And now, after decades of work, it seems to be almost just-barely maybe approaching practically useful. This could very well be on the same trajectory, and perhaps in twenty to thirty years we will be celebrating this early theoretical result as the beginning of a new theory of cryptography.

Some Crypto Guy • November 23, 2020 6:34 AM

Hi Bruce, maybe I’m wrong, but at a first skim of the paper it looks to me like the result has some limitations. Am I mistaken that they “only” achieve subexponential security? That might be OK-ish for a cryptosystem such as RSA that trades it off for decent performance, but for a behemoth of complexity and slowness like iO it is clearly a big limitation (as you also point out). My question is rather whether it can be considered a theoretical breakthrough at all: would you put it at the same level as Gentry’s FHE?

Also, are the hardness assumptions in the paper really that standard? I think they mention “LWE with subexponential modulus-to-noise ratio” and the “Symmetric eXternal Diffie-Hellman (SXDH) assumption on asymmetric bilinear groups” (in addition to two other, more standard assumptions).