Quantum Computing and Cryptography

Quantum computing is a new way of computing—one that could allow humankind to perform computations that are simply impossible using today’s computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.

This is why cryptographers are hard at work designing and analyzing “quantum-resistant” public-key algorithms. Currently, quantum computing is too nascent for cryptographers to be sure of what is secure and what isn’t. But even assuming aliens have developed the technology to its full potential, quantum computing doesn’t spell the end of the world for cryptography. Symmetric cryptography is easy to make quantum-resistant, and we’re working on quantum-resistant public-key algorithms. If public-key cryptography ends up being a temporary anomaly based on our mathematical knowledge and computational ability, we’ll still survive. And if some inconceivable alien technology can break all of cryptography, we still can have secrecy based on information theory—albeit with significant loss of capability.

At its core, cryptography relies on the mathematical quirk that some things are easier to do than to undo. Just as it’s easier to smash a plate than to glue all the pieces back together, it’s much easier to multiply two prime numbers together to obtain one large number than it is to factor that large number back into two prime numbers. Asymmetries of this kind—one-way functions and trap-door one-way functions—underlie all of cryptography.
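As a toy illustration of that asymmetry (hypothetical small primes; real RSA primes are hundreds of digits long), multiplying is one machine operation while even naive factoring takes about half a million trial divisions:

```python
import time

def trial_factor(n):
    """Naive trial division: return the smallest odd prime factor of n."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    return n  # n itself is prime

# Small illustrative primes -- real RSA primes are hundreds of digits long.
p, q = 1_000_003, 1_000_033

t0 = time.perf_counter()
n = p * q                      # the "easy" direction: one multiplication
multiply_time = time.perf_counter() - t0

t0 = time.perf_counter()
found = trial_factor(n)        # the "hard" direction: ~500,000 divisions
factor_time = time.perf_counter() - t0

assert found == p
print(f"multiply: {multiply_time:.2e}s, factor: {factor_time:.2e}s")
```

Even at this toy size the gap is several orders of magnitude, and for trial division it grows exponentially with the number of digits.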

To encrypt a message, we combine it with a key to form ciphertext. Without the key, reversing the process is more difficult. Not just a little more difficult, but astronomically more difficult. Modern encryption algorithms are so fast that they can secure your entire hard drive without any noticeable slowdown, but that encryption can’t be broken before the heat death of the universe.

With symmetric cryptography—the kind used to encrypt messages, files, and drives—that imbalance is exponential, and is amplified as the keys get larger. Adding one bit of key increases the complexity of encryption by less than a percent (I’m hand-waving here) but doubles the cost to break. So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.
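That large number is just 2^128, the factor by which the extra 128 key bits multiply a brute-force attacker's work; a one-line sanity check:

```python
# Each extra key bit doubles the brute-force search space, so going from
# a 128-bit key to a 256-bit key multiplies the attacker's work by 2**128.
extra_work = 2 ** (256 - 128)
print(extra_work)  # → 340282366920938463463374607431768211456
```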

Public-key encryption (used primarily for key exchange) and digital signatures are more complicated. Because they rely on hard mathematical problems like factoring, there are more potential tricks to reverse them. So you’ll see key lengths of 2,048 bits for RSA, and 384 bits for algorithms based on elliptic curves. Here again, though, the costs to reverse the algorithms with these key lengths are beyond the current reach of humankind.

This one-wayness is based on our mathematical knowledge. When you hear about a cryptographer “breaking” an algorithm, what happened is that they’ve found a new trick that makes reversing easier. Cryptographers discover new tricks all the time, which is why we tend to use key lengths that are longer than strictly necessary. This is true for both symmetric and public-key algorithms; we’re trying to future-proof them.

Quantum computers promise to upend a lot of this. Because of the way they work, they excel at the sorts of computations necessary to reverse these one-way functions. For symmetric cryptography, this isn’t too bad. Grover’s algorithm shows that a quantum computer speeds up these attacks to effectively halve the key length. This would mean that a 256-bit key is as strong against a quantum computer as a 128-bit key is against a conventional computer; both are secure for the foreseeable future.
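Grover's square-root speedup can be sketched in a few lines (this is a cost model, not an attack):

```python
import math

def effective_bits_vs_quantum(key_bits: int) -> int:
    """Grover's algorithm searches N = 2**k keys in about sqrt(N) steps,
    so a k-bit key offers roughly k/2 bits of security against it."""
    return key_bits // 2

classical_work = 2 ** 256                  # brute force on a classical machine
grover_work = math.isqrt(classical_work)   # ~2**128 quantum evaluations

assert grover_work == 2 ** 128
print(effective_bits_vs_quantum(256))      # → 128
```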

For public-key cryptography, the results are more dire. Shor’s algorithm can easily break all of the commonly used public-key algorithms based on both factoring and the discrete logarithm problem. Doubling the key length increases the difficulty to break by a factor of eight. That’s not enough of a sustainable edge.
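The factor of eight follows from Shor's roughly cubic scaling in the key length (a common back-of-the-envelope model, not an exact gate count):

```python
def shor_cost(key_bits: int) -> int:
    """Rough cost model: Shor's algorithm scales about cubically
    in the number of key bits (a textbook approximation)."""
    return key_bits ** 3

# Doubling an RSA key from 2048 to 4096 bits only multiplies the
# quantum attack cost by (2n)**3 / n**3 = 8.
print(shor_cost(4096) // shor_cost(2048))  # → 8
```

Compare that with symmetric keys, where each added bit doubles the attacker's work; a constant factor of eight per doubling is no defense at all.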

There are a lot of caveats to those two paragraphs, the biggest of which is that quantum computers capable of doing anything like this don’t currently exist, and no one knows when, or even if, we’ll be able to build one. We also don’t know what sorts of practical difficulties will arise when we try to implement Grover’s or Shor’s algorithms for anything but toy key sizes. (Error correction on a quantum computer could easily be an unsurmountable problem.) On the other hand, we don’t know what other techniques will be discovered once people start working with actual quantum computers. My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques, but they’re going to take time to discover and invent. Just as it took decades for us to get supercomputers in our pockets, it will take decades to work through all the engineering problems necessary to build large-enough quantum computers.


In the short term, cryptographers are putting considerable effort into designing and analyzing quantum-resistant algorithms, and those are likely to remain secure for decades. This is a necessarily slow process, as both good cryptanalysis and transitioning to new standards take time. Luckily, we have time. Practical quantum computing seems to always remain “ten years in the future,” which means no one has any idea.

After that, though, there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.

If that happens, we will face a world without strong public-key cryptography. That would be a huge blow to security and would break a lot of stuff we currently do, but we could adapt. In the 1980s, Kerberos was an all-symmetric authentication and encryption system. More recently, the GSM cellular standard does both authentication and key distribution—at scale—with only symmetric cryptography. Yes, those systems have centralized points of trust and failure, but it’s possible to design other systems that use both secret splitting and secret sharing to minimize that risk. (Imagine that a pair of communicants get a piece of their session key from each of five different key servers.) The ubiquity of communications also makes things easier today. We can use out-of-band protocols where, for example, your phone helps you create a key for your computer. We can use in-person registration for added security, maybe at the store where you buy your smartphone or initialize your Internet service. Advances in hardware may also help to secure keys in this world. I’m not trying to design anything here, only to point out that there are many design possibilities. We know that cryptography is all about trust, and we have a lot more techniques to manage trust than we did in the early years of the Internet. Some important properties like forward secrecy will be blunted and far more complex, but as long as symmetric cryptography still works, we’ll still have security.
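The five-key-server idea can be sketched with XOR secret splitting (a minimal sketch: here all five shares are required, whereas a threshold scheme such as Shamir's would tolerate missing servers):

```python
import secrets
from functools import reduce

def bytes_xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n_servers: int) -> list[bytes]:
    """XOR secret splitting: any n-1 shares reveal nothing about the key;
    XORing all n shares together recovers it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_servers - 1)]
    shares.append(reduce(bytes_xor, shares, key))
    return shares

session_key = secrets.token_bytes(32)   # 256-bit symmetric session key
shares = split_key(session_key, 5)      # one share per key server
assert reduce(bytes_xor, shares) == session_key
```

Note that the whole construction is symmetric-only: no number theory, just random bytes and XOR.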

It’s a weird future. Maybe the whole idea of number-theory-based encryption, which is what our modern public-key systems are, is a temporary detour based on our incomplete model of computing. Now that our model has expanded to include quantum computing, we might end up back to where we were in the late 1970s and early 1980s: symmetric cryptography, code-based cryptography, Merkle hash signatures. That would be both amusing and ironic.

Yes, I know that quantum key distribution is a potential replacement for public-key cryptography. But come on—does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications? The future is mobile, always-on, embedded computing devices. Any security for those will necessarily be software only.

There’s one more future scenario to consider, one that doesn’t require a quantum computer. While there are several mathematical theories that underpin the one-wayness we use in cryptography, proving the validity of those theories is in fact one of the great open problems in computer science. Just as it is possible for a smart cryptographer to find a new trick that makes it easier to break a particular algorithm, we might imagine aliens with sufficient mathematical theory to break all encryption algorithms. To us, today, this is ridiculous. Public-key cryptography is all number theory, and potentially vulnerable to more mathematically inclined aliens. Symmetric cryptography is so much nonlinear muddle, so easy to make more complex, and so easy to increase key length, that this future is unimaginable. Consider an AES variant with a 512-bit block and key size, and 128 rounds. Unless mathematics is fundamentally different than our current understanding, that’ll be secure until computers are made of something other than matter and occupy something other than space.

But if the unimaginable happens, that would leave us with cryptography based solely on information theory: one-time pads and their variants. This would be a huge blow to security. One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications. Today, only crackpots try to build general-use systems based on one-time pads—and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.
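The one-time pad algorithm itself is trivial, which is exactly the point: all the difficulty lives in generating, distributing, and never reusing the pad. A minimal sketch using Python's `secrets` module:

```python
import secrets

def otp(message: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR with a truly random pad at least as long as the
    message. Encryption and decryption are the same operation."""
    assert len(pad) >= len(message), "pad must cover the whole message"
    return bytes(m ^ p for m, p in zip(message, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # the hard part: managing this pad
ciphertext = otp(message, pad)
assert otp(ciphertext, pad) == message   # XOR is its own inverse
```

Ten lines of code, and everything hard about it (secure pad generation, distribution, storage, and destruction) happens outside the program.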

Against these godlike aliens, cryptography will be the only technology we can be sure of. Our nukes might refuse to detonate and our fighter jets might fall out of the sky, but we will still be able to communicate securely using one-time pads. There’s an optimism in that.

This essay originally appeared in IEEE Security and Privacy.

Posted on September 14, 2018 at 6:15 AM


me September 14, 2018 8:12 AM

lmao for aliens

if i remember correctly zrtp (linux encrypted voice calls) made by Philip Zimmermann is quantum resistant.

Clive Robinson September 14, 2018 11:56 AM

@ Bruce and the usual suspects,

You look away blink twice and somebody tugs at your coat tails and says,

    Hey Mister, you want a 128-qubit quantum computer chip? We get some in real soon.

Yup, you read that right: 128 qubits on a chip,


Though what you can do with it is far from clear for various reasons. @Bruce mentions one of them above with,

    Error correction on a quantum computer could easily be an unsurmountable problem

But also says,

    My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques­but they’re going to take time to discover and invent.

One aspect of the errors is something called “noise”. One of the reasons we can have 128-bit conventional computers is that each bit is either a one or a zero, and thus designing each bit to reject noise, whilst not trivial, is comparatively easy. This is not the same with “quantum bits” or “qubits”: for various reasons each qubit can hold multiple values, thus they have a much, much greater susceptibility to noise. This problem is also multiplied by the way that the qubits interact with each other, so getting the noise problem down is not just a much more difficult problem than putting more qubits on a chip, it’s one that rises exponentially with the number of qubits under various circumstances. One of which is temperature. Thus as far as we can see, for times to come quantum computing is going to also involve cryogenics of some form, which is also an awkward niche in both science and technology.

But one that there may be a solution for, based on a hundred-and-fifty-year-old “thought experiment” by James Clerk Maxwell in 1867[1]. Put simply, he imagined two containers with a common barrier. In this common barrier he put a “demon” whose job it is to operate a trap door. If a fast-moving molecule or atom comes towards the trap from the left chamber it’s let through into the right. Likewise if a slow-moving molecule or atom in the right chamber comes towards the trap it’s let through to the left chamber. As the temperature of each chamber depends on the average speed of the molecules or atoms, it can be seen that, contrary to what the second law of thermodynamics implies, the left chamber will get colder whilst the right will get hotter.

Well, a few years ago it was proved that the “Maxwell’s Demon” thought experiment was flawed and that the second law of thermodynamics did not apply in the way originally thought. So people started practical experiments to implement Maxwell’s Demon, with not particularly encouraging results…

That is until last year, when Prof. David Weiss at Penn State appears to have cracked it right open,



Which whilst it is good news for the Prof and his team, it’s not necessarily good news for the rest of us.

As I’ve mentioned in the past “technology is agnostic to use” and that it is “the directing mind that decides how to use it”. Further that the decision as to if “the use is for good or bad” depends very much on “The point of view of the observer”.

It’s clear to most technologists that the Penn State results will be applicable to reducing noise in qubits, but how much improvement they could bring will be down entirely to practical experiments.

But there is another problem with Quantum Computers: you can not get the results out of them without a lot of complications. Not only do these affect the “error” issues, they also require the use of specialised conventional computing hardware.

Put –over simply– Quantum Computers need conventional computers to get out the results, which tends to slow things down a lot.

We already know of various ways of reducing “random noise” from certain types of signals via various DSP techniques. One such is to align a synthetic signal with the noisy signal in an adjustable matched filter. I won’t go into the mathematics, but it is used in many people’s homes right now in their Internet modems.

Another way is to get the same signal multiple times and average them together. Provided they are correctly aligned, the signal adds linearly whilst the uncorrelated noise only adds as the square root of the number of samples, so the signal-to-noise ratio improves by the square root of the number of samples.
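A rough simulation of that averaging effect (assuming independent Gaussian noise on each aligned sample; the numbers here are arbitrary):

```python
import random
import statistics

random.seed(42)
TRUE_SIGNAL = 1.0
SIGMA = 0.5          # per-sample noise standard deviation

def averaged_reading(n_samples: int) -> float:
    """Average n noisy readings of the same aligned signal."""
    return statistics.fmean(
        TRUE_SIGNAL + random.gauss(0, SIGMA) for _ in range(n_samples)
    )

# Residual error of the average shrinks like sigma / sqrt(n):
errors_1 = statistics.pstdev(averaged_reading(1) for _ in range(2000))
errors_100 = statistics.pstdev(averaged_reading(100) for _ in range(2000))
print(errors_1 / errors_100)  # roughly 10, i.e. sqrt(100)
```

So squeezing each extra bit of precision out of the readout costs quadratically more repetitions.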

This gives the interesting proposition that the Quantum Computer would have to repeat its search many, many times to actually get the results out beyond a small degree of precision, which kind of brings it back to being limited to what classical computing can do…

This ultimately may be the fate of Quantum Computing, which would relegate it to being “just another one of those theories that don’t pan out in practice”.

Something we should kind of be used to by now in cryptography. Our algorithms “are fine in theory”, but the practical systems can not support the theory, because the theory is not practical or is based on invalid or inapplicable assumptions.

[1] https://en.m.wikipedia.org/wiki/Maxwell's_demon

TheInformedOne September 14, 2018 12:01 PM

In order to instantly crack a 256 bit key with a quantum computer don’t they have to first generate and maintain 256 stable (useable) qubits? From what I understand, they are a few years off still. I’m sure whoever gets there first (probably the Chinese) will not tell anyone for years to come. Kind of like what happened at Bletchley Park w/Enigma.

Fred P September 14, 2018 12:11 PM

This paper: https://cr.yp.to/papers/pqrsa-20170419.pdf appears to claim that one can create 1 TB RSA keys that are very resistant to Shor’s algorithm: “initial implementation results for RSA parameters large enough to push all known quantum attacks above 2^100 qubit operations.”

Not particularly practical, but I’m not certain that “And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.” is accurate.

tz September 14, 2018 12:52 PM

Quantum computing may giveth as well. Imagine a one-time pad transmitted by coherent states. There is already quantum cryptography.

Lawrence D’Oliveiro September 14, 2018 4:11 PM

Quantum computers have so far proven useless for number-theoretic problems. The only kinds of problems they have been able to tackle have been ones involving physical simulations, where they have proven faster than digital computers, but less accurate.

In other words, “quantum” computers are just the old analog computers reborn. Back in the day, they, too, could compute physical-simulation problems faster than the new-fangled digital machines, but with limited accuracy. As digital machines got faster and cheaper, the analog machines faded into obscurity.

With the demise of Moore’s Law, that may not happen this time. But they are not going to take over the computing arena by any means.

Impossibly Stupid September 14, 2018 4:37 PM

Today, only crackpots try to build general-use systems based on one-time pads — and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.

Well, then call me a crackpot. I think it’s a good idea to act sooner rather than later to better solve the problems with OTP use so that we’re more prepared to mitigate the risks associated with quantum computing should it come to pass that we do need that level of security. Let’s not shy away from solving the hard problems.

rustyal September 14, 2018 8:33 PM

quantum key distribution is a potential replacement for public-key cryptography. But come on — does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications?

“Specialized communications hardware” like an array of tiny beamforming antennas, a chip that compares 5 clock signals from space, or some other magic that’s in every portable phone? Wireless QKD over 25 km was already demonstrated. I’m not expecting that to be ubiquitous—but as was stated, it’s a weird future, and I wouldn’t be so quick to dismiss it.

Gweihir September 14, 2018 10:53 PM

With the consistent and decades-long failure of quantum computing to make any meaningful progress towards a product that can be used for more than tiny toy demonstrations, my take is more that those that work on “quantum resistant” crypto have more likely just run out of actually useful work and are now riding the hype-express. Makes some sense with regards to getting research grants (but is morally problematic, as that money did not grow on trees), but otherwise, this seems to not be more than a pretty persistent shared delusion.

echo September 15, 2018 12:58 AM


I find most discussion about quantum computing to be very boring lately. The discussion is more “boys toys” often badly explained and neglects to engage with the variety of possible discussion.

I get the sense most security discussion is like this too. I have seen it before in other industries.

John Doe September 15, 2018 1:23 AM

@Clive Robinson says: “would have to repeat it’s search many many times to actually get the results out beyond a small degree of precision, which kind of brings it back to being limited to what classical computing can do…”

I’m not sure who thought infinite precision would ever work in the real world… If a “qbit” is stored in something physical, it can’t have (input/output with) infinite precision without taking up at least one infinite dimension of space or time or something as well… Only in the theoretical can infinity exist (that is to say, that’s only where it can be “measured” so to speak, because you can just declare it a finished concept/idea and move on as if it were real).

Concern of the mental health variety September 15, 2018 1:30 AM


“The discussion is more “boys toys” often badly explained and neglects to engage with the variety of possible discussion.

I get the sense most security discussion is like this too. I have seen it before in other industries.”

Nothing about this is anti-female and there’s no reason to bring unrelated issues into this.
Get a grip, no offense.

echo September 15, 2018 2:03 AM

@Concern of the mental health variety

Thanks for the gaslighting. Organisations and individuals have their biases and habits and stereotyped thinking. My observations are on the whole fairly level and generally fairly well documented and even admitted in some quarters. I have found people who aren’t familiar with this or who aren’t being professional tend to go into denial and become aggressive, and in some cases personally insulting as your nom de plume suggests, which rather proves the point.

I know the seasons are changing and the mating season is arriving but could you direct your energies somewhere else and more fruitfully please?

echo September 15, 2018 4:49 AM

@Concern of the mental health variety

I addressed the industry issue sensibly in the appropriate topic. I perceive no need to continue discussing anything with you because judging by your responses you’re not the intended audience anyway.

yet another Bruce September 15, 2018 8:58 AM

Firstly, very nice writing. Kudos.

Secondly, anyone know anything about memcomputing? See for example, http://advances.sciencemag.org/content/1/6/e1500031.full

My understanding of this is that the authors claim to be able to run digital logic backwards and provide inputs consistent with defined outputs. Seems too good to be true but probably would have implications for cryptography.

Anders September 15, 2018 9:27 AM

Superb essay!

I once had almost a quantum computer – at least hard disk was
a quantum disk – Quantum Fireball.
It was full of qubits – sectors of unknown state, for normal
computer understandable as a “bad sectors”. So no, i’m not
a fan of quantum computing any more 🙂

Curious September 15, 2018 1:06 PM

I wonder how well understood ‘quantum entanglement’ is by scientists; maybe it entails more than what is commonly thought about quantum entanglement today (not that I would personally be able to list such things). Presumably, anything “quantum mechanical” is relevant for any construction of quantum computers with their qubits and related hardware.

Somehow, and I am certainly not very knowledgeable about quantum technology, so take what I write with the proverbial grain of salt, I can at least imagine how it might be problematic should quantum entanglement imply some kind of guaranteed backdoor structure. Should entanglement also work as a trapped energy potential due to time reversal, however minuscule the effect, it might allow tiny but perhaps critical kinds of locks, switches, or gates that could act as a backdoor, and that presumably can’t realistically be found out if they rely on a single tiny switch somewhere in a piece of hardware, perhaps a small cluster of matter with some quantum spin values hidden in time-reversal physics, lying latent until sprung, so to speak, by the precise manipulation of some system of qubits, whether really small or perhaps even a larger system.

Clive Robinson September 15, 2018 5:11 PM

@ Gweihir,

With the consistent and decades-long failure of quantum computing to make any meaningful progress towards a product that can be used for more than tiny toy demonstrations

Whilst true, it’s more likely to be a resources and political imperative issue.

Quantum Crypto as Quantum Key Distribution (QKD) being somewhat less costly and having not just commercial but military purposes is moving forwards.

As @Bruce has observed above[1] and I’ve repeated over and over, QKD in optical fibers is not really much of a prospect, as it’s “point to point” only. That is, it suffers from the same problems monorail railways suffer from: switching is a mechanical issue that is mostly impractical due to, amongst other things, reliability. It is also known to suffer from a classical range-rate limitation.

However, now we have state of the art timing and genuine single-photon emitters, the range has improved greatly, and thus narrow-beamwidth signals transmitted from satellites have overcome the P2P fiber impediment for what are currently “fixed locations” and some “mobile platforms” such as aircraft and ships. There is other ongoing work since around 2005 by Quintique and BBN to push this into wireless. Other developments are things such as techniques to get around the classical QKD range limitations[2], that also have “cross over” with Quantum Computing (QC).

The point being that QKD has resources available and a political will thus progress is being made.

But there are other things to consider. Both nuclear weapons and liquid-fuel rockets would not have come to fruition if it were not for the political imperative of WWII making otherwise unavailable resources available. Likewise man almost certainly would not have gone to the moon if not for the significant failings of US missile development causing grave political embarrassment, thus the setting of a goal way beyond simple manned space flight and the considerable resources made available politically to make it happen.

Thus the question arises as to “what might give rise to a political imperative to divert resources to QC?”. Rather than some other military or Intelligence Community projects or other “big budget” line item.


[1] Quoting @Bruce above: “But come on—does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications?”

[2] https://arxiv.org/pdf/1705.02174

Clive Robinson September 15, 2018 6:43 PM

@ yet another Bruce,

Secondly, anyone know anything about memcomputing?

Not under that name.

However there has been “neural network” research into the implications of shared content-addressable memory for some time, originally for explaining how biological networks learn (the Hopfield model) as well as, more recently, digital neural networks.

The diagrams and text are very suggestive of being an “analog neural network with shared content addressable memory”. Which would be a halfway house between biological and digital neural networks.

RonK September 16, 2018 1:58 AM

Interesting post, but I personally find it hard to believe in an opponent who is advanced enough to be able to break all possible symmetric encryption algorithms yet unable to gain access to our one-time pads (for instance, by remotely hacking everything, or monitoring everything using autonomous microdrones, or lots of other things I cannot imagine, just as I cannot imagine being able to break all possible symmetric encryption algorithms).

Clive Robinson September 16, 2018 7:10 AM

@ RonK,

I personally find it hard to believe in an opponent who is advanced enough to be able to break all possible symmetric encryption algorithms yet unable to gain access to our one-time pads

Oh believe it, because it’s fairly easily seen when you consider the difference between “covert mass surveillance” and “covert targeted surveillance”.

Covert activity is best done at a distance, as it reduces your visible image. Ideally you want to be as fully out of sight as possible to your target(s) of interest.

Mass surveillance or “collect it all” is best done at the center of a “star configured switched communications network”, not out at the leaf nodes. Whereas targeted surveillance is best done out at the individual leaf nodes, as “up close and personal” as remaining covert will allow.

If you as a potential target take even moderate precautions with your security and communications end points, mass surveillance fails for the attacker. Which means that the target either has the privacy/secrecy they desire and should legally have, or they force an “up close and personal” targeted surveillance operation.

Such “up close and personal” is very resource intensive[1] at the best of times, which alone would act as a limit on any attacking agency’s power. Secondly, the human brain is very, very good at pattern recognition; once you’ve seen one supposedly covert operation you can get the feel of any others that might be around you, which makes the attacker’s ability to remain covert significantly harder by a very long way.

Another issue is people love to talk. Whilst it is moderately easy for an attacker to remain covert to a single isolated target, they are effectively “hiding in plain sight”, and covert behaviour seen from anywhere other than the target side looks darn suspicious in many ways, and thus people will see it and gossip. More than one covert operation has failed because the target chatted to the janitor or maintenance man on a regular and friendly basis, or the old lady who lives across the road, etc.

Another issue for an attacker is the “You can not see what you are going into” problem. It’s relatively easy for a prepared target to have “instrumented their environment” in ways that can only be seen by an attacker after they have been recorded etc. Likewise there are so many random “tells” a target can set up that an attacker can not catch them all. In essence the attacker is not just fighting Locard’s exchange principle these days but also the “all seeing eye” principle.

The work the attacker has to do to spot/avoid what the target can put in place is so damn suspicious in its own right that the attacker would be lucky not to have the police called on them and neighbours talking, or worse, avoiding the target; nothing smells worse to an alert target than changes of routine or behaviour in those around them.

Oh and technology is not the attacker’s friend many mistakenly think it is, because even at the best of times it’s not passive and the attacker has to get it in and out of the target zone. To understand why it’s an issue: firstly, it will use energy, and it can not be efficient enough to stop heat being put into the environment. Secondly, it needs to communicate, and in most cases that can also be detected by emissions or the “red eye” principle. Thirdly, it’s not made from the same materials as its immediate environment, which means it’s detectable in a number of ways. But for a target it’s an issue that can be dealt with “without looking”. Put simply, the better something is hidden the more static it is, which means that individually it’s of little use against an agile target. Simple probability then works against the attacker, not just in “not seeing” but also in “being seen”.

In the main, targeted surveillance fails more often than it succeeds, especially against an alert or, worse, prepared target. It’s only in the movies that master villains talk about their evil plans close to the microphone in a concealed object the attacker put in place five minutes beforehand. It’s why the likes of “spike mics” or the equivalent acoustic waveguides are more likely to be used (which can be detected with Differential Time Domain Reflectometry, D-TDR).

Even fairly rank amateurs can survive years of intensive targeted surveillance by not just LEOs but also SigInt agencies working in close cooperation. Look up someone called Kenneth Noye from the UK, who was involved with disposing of the Brink’s-Mat gold, for which he is allegedly currently on a “hit list”.

You can get a bare bones start with,


As you can imagine, after killing a police officer, even though he was acquitted, he was a very high surveillance priority for the police. The parallel construction about his time in Spain after his second killing is fairly obvious, as is the cover-up in court over what actually happened on the slipway; the why is now fairly common knowledge (a drugs deal gone bad).

I suspect various people are hoping that the underworld “offs him” now he is in open prison.

[1] Even in the best of circumstances, targeted surveillance comes at an eye-wateringly high cost. To get an idea, see the leaked figures for the fairly low-key Ecuadorian "Operation Hotel" surveillance of the very static target Julian Assange, which is a fraction of the acknowledged UK costs just for LEOs. What the UK and US SIGINT costs are is not publicly known, but it will be several orders of magnitude more than the "Operation Hotel" costs.

Suresh Ponnusamy September 16, 2018 11:45 AM

Well-written article without the technical details. No one can put it better than this. The last hope is the OTP, but without a proper key exchange and agreement, it's going to be difficult, isn't it?

Hmm September 16, 2018 7:40 PM

Seconded on the well-written part, it filled in a lot of ground there on expansive topics.

A lot to think on and resist making unfounded sweeping prognosticating statements about.

Weather September 16, 2018 10:18 PM

Just would like to bounce a cipher off you lot.

Say you start with encrypted data.
The first key could be:
Door-AAAA = something
Say the first key equals:
Htre-AAAA = you then can use a second key
Rinse and repeat with as many keys as you want to find the plaintext (door), the number of keys you used, and maybe what they were. Sorry, it would be easy to write a C program, but I can't explain it.

Albert September 16, 2018 10:30 PM

@ yet another Bruce & Clive Robinson

About memcomputing: I also discovered that the authors recently founded a company (http://memcpu.com/).

Moreover, they have published many papers explaining the tech; however, they mainly use frameworks from physics, so they are not easy to read.

From what I understood, they started from some abstract non-Turing machine that in principle can solve NP-complete problems efficiently. And they claim that this can be realized with some sort of self-organizing circuit. It looks more like a hardware solution; however, they say that they can simulate it efficiently. Which I do not get, since if you simulate it, you are using a Turing machine… I need to read more. Has anyone further information about this?

Nevertheless, I am skeptical about it; it looks too good to be true and very complicated to understand. However, since it looks like they are trying to commercialize it, I will definitely look into it and follow its evolution. You never know, maybe they really have found something serious.

Chris mobley September 17, 2018 4:15 AM

Excellent article, as always.

Niche applications, like industrial control systems in the critical national infrastructure? They only talk to a very small number of controlled devices, they are installed and commissioned by trusted individuals, and they are left alone to do their work for tens of years. No need for PKI: symmetric cryptography using PPKs (OTPs) is a reasonable answer. They can be built with high-quality key management and physical security built in. See Bruce's articles on physically unclonable functions.

There are quite a lot of niche applications I can think of with one-to-one or few-to-few predetermined comms channels that need protecting. Symmetric cryptography and PPKs used as an OTP are a perfectly reasonable solution.

Weather September 17, 2018 4:17 PM

This explains it better:

Encrypted data: jfxh

Input: jfxh
Key: llll
Output (plain): door
Output (chain): wkyt

Input: wkyt
Key: wvxd
Output (plain): door
Output (chain): nnxz

Input: nnxz
Key: poll
Output (plain): door
Output (chain): tyhj

Plaintext: door, plus other info depending on which key holder you ask.
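Weather's notation is hard to follow, but the last line hints at a real property of pad-style mixing: for any fixed ciphertext block, a key can be derived that "decrypts" it to any plaintext you choose, so what a given key holder reveals depends entirely on which key they hold. A toy sketch of that idea (letters mixed mod 26; the second plaintext "exit" is just an illustrative choice, not from the comment):

```python
# Toy mod-26 mixer: for any ciphertext block, a key can be derived that
# "decrypts" it to any chosen plaintext -- so which plaintext a key holder
# reveals depends entirely on which key you ask them for.
A = ord('a')

def sub(cipher: str, key: str) -> str:
    """'Decrypt' by letter-wise subtraction mod 26."""
    return ''.join(chr((ord(c) - ord(k)) % 26 + A) for c, k in zip(cipher, key))

def key_for(cipher: str, plain: str) -> str:
    """Derive the key that maps this ciphertext to this plaintext."""
    return ''.join(chr((ord(c) - ord(p)) % 26 + A) for c, p in zip(cipher, plain))

cipher = "jfxh"                  # Weather's example ciphertext
k1 = key_for(cipher, "door")     # one key holder's key
k2 = key_for(cipher, "exit")     # another key holder's key
print(sub(cipher, k1))           # door
print(sub(cipher, k2))           # exit
```

This is the same deniability property a one-time pad has: the ciphertext alone pins down nothing.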

Gweihir September 17, 2018 8:14 PM


Quantum entanglement is actually not well understood at all. It is pretty well described for simple set-ups with low numbers of entangled particles, short distances, and simple, short chains of manipulations. It is, however, entirely possible that the whole thing behaves even worse when scaling up than is already observable in practice (my take is it definitely scales sub-linearly at this time), and it may have a hard limit too, where entanglement cannot go over a certain, not very high, number of entangled particles at all, or at least not with what is physically possible in this universe.

The thing is that physical theory is still badly incomplete at this time (no GUT), and what we have is inconsistent, i.e. it is known that either quantum theory or gravity or both are wrong (to simplify things grossly), because as they are modeled today they do not go together. Yet both are exceptionally well supported by measurements. This just means there are some not-yet-known effects at play, and they may well kill quantum computing entirely, because they have to get pretty strong under circumstances we do not know at this time.

Making grand predictions from theories known to be incomplete is a rather unscientific thing to do, but classic "hype" behavior. We see this all the time when people confuse "technology" and "magic", and the quantum computing hype is no different. Now, we cannot rule out that quantum computing, strong AI, flying cars, etc. will eventually manifest in a meaningful way, but it seems rather unlikely, and it will certainly not happen anytime soon.

@Clive Robinson:

Well, I certainly agree on the political aspect, and the hype aspect makes it very hard for experts to stay out of it, because staying out means less visibility, fewer opportunities, and less grant money. Something similar is currently happening with "AI" (where actually meaningful "strong AI" is decidedly more than 50 years in the future and may well never materialize). But if you want a research grant, or to sell your product, you had better write "AI" in the description, not "fuzzy logic" or "statistical classifier" or "automation".

Hence I am not actually blaming Bruce for writing on this, and I notice that he does not actually make any predictions about when or whether this will become really relevant as a threat.

Weather September 17, 2018 11:17 PM

The numbers aren't limited, just the distance.
Imagine a flat surface; above that you have things like motion, gravity, and EM waves. All those things poke a hole through the surface; they go down away from the surface, then decay to the left and right. Any place within that decay gets that information; the distance is limited by the strength at a point. Having multiple qubits that are in sync can transfer information below at a strong enough value to reach a second layer, which can transfer the information even further. More qubits make a stronger signal at distance, but the distance is limited, so you lose focus or detail. But it makes it clearer, because the information about the qubit states swamps out the gravity, EM, and motion stuff. And a cool thing: you could, say, increase the temperature of the external qubits, or any other value, and make it an RF transmitter, etc.

Clive Robinson September 18, 2018 10:26 AM

@ Suresh Ponnusamy,

The last hope is the OTP, but without a proper key exchange and agreement, it's going to be difficult, isn't it?

I’m far from sure the last hope is the OTP.

In effect, an OTP is a stream cipher with a keystream generator that is "unpredictable to an observer" of either the keystream alone or the keystream and plaintext mixed together as a ciphertext stream (with in-band signalling after authentication and synchronization).

Its entire strength rests on this "unpredictability" to any observer, but importantly to any third party observing the first- and second-party communications. It is effectively a random oracle that only two parties can directly access as they choose. A third party only gets access to what the first or second party allows them to, which, if the first and second parties follow the rules, gives the third party insufficient information to get access to the plaintext.
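A minimal sketch of that "two-party random oracle" view, with os.urandom standing in for a proper true-random pad source (illustrative only, not a vetted implementation):

```python
import os

def make_pad(n: int) -> bytes:
    """Shared pad: both parties hold it; a third party sees only ciphertext."""
    return os.urandom(n)

def xor(data: bytes, pad: bytes) -> bytes:
    """OTP mix: the same XOR operation both encrypts and decrypts."""
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = make_pad(len(msg))
ct = xor(msg, pad)
assert xor(ct, pad) == msg   # the second party recovers the plaintext
# An observer of ct alone learns nothing: every plaintext of the same
# length corresponds to exactly one possible pad.
```

The "rules" Clive mentions are what the sketch glosses over: the pad must never be reused, and it must reach both parties without the observer seeing it.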

Interestingly, in military and diplomatic systems, except under rare circumstances, the OTP is only used for "superencipherment": the plaintext is first encrypted with a high-strength cipher before being put through the OTP system. There are a number of reasons for this, which I've mentioned in the past.

Also, the OTP system is generally only used between an out-station and a central hub home-station, with all OTP traffic going through the home-station, not out-station to out-station. That is, it is generally only used in a star-configuration communications system. Again, there are several reasons for this, not least that it minimizes KeyGen and KeyMan issues with their complex auditing. But more importantly, it puts control of information at the home-station, which for Mil/Dip traffic is desirable.

This importantly makes the home-station a single point of failure, and in the case of the Internet a single point of attack for various criminal entities, including government agencies.

Thus the general use model for OTP systems is a very bad fit for the Internet.

Thus to make use of it for the Internet three basic problems have to be solved,

1, The authentication of parties that are unknown to each other.

2, Giving only the first and second parties access to the Stochastic process that is the Keystream generator / random oracle.

3, Removing the issues of a central hub.

Apart from the first, which is currently susceptible to quantum computers and needs some kind of central hub (CA etc.), the other two are currently open issues under the Internet model, where nearly all communications are "one time" between parties unknown to each other, with no "trust" between them.

I’m not saying that the OTP is unworkable on the Internet, but under the current Internet model it is.

But then so is the use of symmetric cryptography under the current Internet model, because you still have both the first and third problems.

My honest feelings are we need to look at methods to replace the current Internet model with one that has two major changes,

1, No centralized or hierarchical security required.

2, No Quantum Computer sensitive methods.

Neither of which is a problem if the two parties have a secure side channel prior to first communications over the Internet.

It is this first problem we need to solve, but to most this looks like a "turtles all the way down" or "lesser fleas" problem. But nobody has proved this is the case under all considerations…

Alain September 20, 2018 7:04 AM

“[…] but we will still be able to communicate securely using one-time pads. There’s an optimism in that.”

Yes, optimism, because there is no way to prove that any one-time pad is really random. There are just too many possibilities for generating apparently random white noise to exclude them all, not even with quantum computers. In other words, nature may not generate true random data, but data shaped by yet-unknown new physics, of which aliens might well be aware… 😉

Dr. I. Needtob Athe September 20, 2018 8:18 AM

“So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.”

This is incorrect. It’s 340,282,366,920,938,463,463,374,607,431,768,211,456 times AS HARD to break, but only 170,141,183,460,469,231,731,687,303,715,884,105,728 HARDER to break.
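Whichever phrasing one prefers, the figures themselves are quick to check (Python's arbitrary-precision integers make this exact):

```python
# Ratio of 256-bit to 128-bit brute-force work factors.
as_hard = 2**256 // 2**128
print(as_hard)   # 340282366920938463463374607431768211456, i.e. 2**128

assert as_hard == 2**128
assert as_hard == 340282366920938463463374607431768211456
# The commenter's second figure is exactly half of that, i.e. 2**127:
assert 170141183460469231731687303715884105728 == 2**127
```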

Clive Robinson September 20, 2018 12:08 PM

@ Alain,

Yes, optimism, because there is no way to prove that any one-time pad is really random.

There is, as I've pointed out, no useful definition of "random", let alone "really random", and many of the arguments are in effect circular.

So in effect the word means whatever you want it to mean.

As an observer of the output of an alleged "random generator", there are three things you can tell "within the bounds" of your observation:

1, It appears deterministic.
2, It has deterministic characteristics.
3, It appears non-deterministic.

In other words, the best you can hope for is to show that it is deterministic in some way within the bounds of what you can measure, if your measurement assumptions are correct.

It is very easy for me to design and build two generators, each using a long-sequence deterministic process and a thermal noise generator, mixed together by an analog switch.

The switch has two inputs and one output. One input is the signal input, the other is the control input. The control input opens or closes the switch, so the input signal either appears (closed) at the output or does not (open). Thus it acts as a modulator or half mixer (or an AND gate with digital signals).

This means I have two basic options: in one generator the long sequence drives the control input; in the other, the thermal noise generator drives the control input.

However, what you see at the output also depends on the frequency range of the two input signals. If the control input switches faster than the input signal, then the output will reflect the control input short term (high-frequency component) but the signal input long term (low-frequency component).

From this you should be able to see how the way you measure affects your perception of the generator.

Depending on how you "band" your generators into N frequency segments, you have N^2 output combinations to consider. Which, with bounded measurements, covers all three observable conclusions.

Have a look at "Gold codes" and "JPL ranging codes" to see how even very short deterministic generators can be coupled together such that the resulting output has a sequence too long to measure in a human lifetime, and will not show any real autocorrelation over any measurement period a human might make. It will, however, fail other tests if you know which ones to use; so your determination is based not just on measurement time but on the type of measurement.
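A minimal sketch of the Gold-code idea: two short maximal-length LFSRs XORed together. The degree-5 tap sets below are primitive polynomials chosen for illustration, not a verified "preferred pair":

```python
def lfsr_bits(taps, seed, nbits, width=5):
    """Fibonacci LFSR. taps are polynomial exponents, e.g. (5, 3) for x^5+x^3+1."""
    s, out = seed, []
    for _ in range(nbits):
        out.append(s & 1)
        fb = 0
        for t in taps:
            fb ^= (s >> (width - t)) & 1   # XOR the tapped register bits
        s = (s >> 1) | (fb << (width - 1)) # shift, feed back at the top
    return out

a = lfsr_bits((5, 3), seed=1, nbits=62)    # x^5 + x^3 + 1, period 31
b = lfsr_bits((5, 2), seed=1, nbits=62)    # x^5 + x^2 + 1, period 31
gold = [x ^ y for x, y in zip(a, b)]       # Gold-style combination

assert a[:31] == a[31:] and sum(a[:31]) == 16   # maximal: period 31, near-balanced
# With component registers of coprime periods (as in JPL ranging codes),
# the combined period is the product of the component periods -- which is
# how very short registers yield sequences far too long to observe fully.
```

The combined stream passes naive balance checks, yet it is entirely deterministic given the taps and seeds, which is exactly Clive's point about tests only catching the sequences they were designed for.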

You can go on to show that each measurement type you use will only detect a very, very limited subset of deterministic sequences. It's why we have so many tests, and so many algorithms that pass all of the tests…

An argument can be made (via the Cantor diagonal argument) that there will always be deterministic sequences for which no test exists unless you know the exact algorithm and state.

So actually I'm not overly fussed about "aliens", because they are unlikely to have the time to find the algorithm in use, or the method to detect it, except by the "blind luck" of probability.

As our host @Bruce should know from his work on random generators, you can mix many generators together such that the aliens will never get the time…

Unless of course they have not quantum computers but tachyon computers 😉 The problem being that such aliens would not be part of our universe, as no entity in our universe could see the results… So we are never going to be able to interact with them, and they have no relevance for us…

If you want me to take the tongue-in-cheek further, I can B-)

But… it would get to the point where, like the joke about the cat with too little mu, it would not be funny to anyone but a physicist, and thus would create friction :-S

Cincinnatus__SPQR October 17, 2018 9:56 PM

@ Bruce Schneier

Bravo! That was the most positive statement you have ever made about the “meme” of the one-time pad, as you once called it.

For us crackpots who are absorbed by true random number generators and cherish the STASI Tapir, we will take your comments as good news. We knew you would come around!

As Whitfield Diffie said, if you can generate truly random numbers, then you can have private communication. Yes, that is true. But the problem is that it will perhaps not be allowed in the future, just as we do not see a fully robust form of PGP publicly available today.

Ludovic F. Rembert May 6, 2019 3:10 AM

What a fun read. 🙂

“One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications.”

Reminiscent of reading Cryptonomicon on my first speaking tour of Manila 4 years ago. My how times have changed since then!

“Unless mathematics is fundamentally different than our current understanding, that’ll be secure until computers are made of something other than matter and occupy something other than space.”

A fun thought experiment: which will change first, mathematics (i.e. our understanding of it) or the substance of which computers are made? I would guess the latter, but it's a total guess. Until then let's just assume that AES will remain secure.

Dear Bruce, more posts like this please.

Thank you kindly,

1&1~=Umm May 6, 2019 6:58 PM

@Ludovic F. Rembert:

“Until then let’s just assume that AES will remain secure.”

In theory or in practice?

Whilst 'AES the algorithm' is assumed by many to be secure, even the NSA only rates it for 'Secret' with 'data at rest'.

Because in practice, in practical implementations of the AES algorithm, there have been time-based side channels through which more than sufficient information leaks to easily enable the key to be worked out.
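The AES leaks in question are cache- and table-lookup timing channels; the general shape of a data-dependent timing leak can be illustrated with a much simpler case, an early-exit byte comparison (an illustration of the principle, not an AES attack):

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit compare: running time depends on where the first
    mismatch falls, so repeated timing measurements can reveal a secret
    byte by byte."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:      # bails out early -> timing leaks the matching prefix length
            return False
    return True

def ct_equal(a: bytes, b: bytes) -> bool:
    """Constant-time compare: examines every byte regardless of input."""
    return hmac.compare_digest(a, b)

secret = b"s3cr3t-k3y"
assert naive_equal(secret, b"s3cr3t-k3y") and ct_equal(secret, b"s3cr3t-k3y")
assert not naive_equal(secret, b"guess!!!!!") and not ct_equal(secret, b"guess!!!!!")
```

Both functions return the same answers; only their timing behaviour differs, which is the whole point of a side channel.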

As has been said on this blog several times in the past, if you want even moderate privacy then AES should be used 'offline' on an air-gapped PC, with only the ciphertext files sneakernetted or equivalent across the air-gap for transmission or reception on an entirely separate, 'online', stripped-down PC connected to the communications channel.

Aside from the AES implementation's time-based side channels, there are other reasons for running two air-gapped computers, which have also been mentioned several times before on this blog. One is 'end-run attacks', where an attacker uses weaknesses in the communications channel and OS software to install 'I/O shims' in the device drivers below the OS kernel. Such shims 'tee off' data from the HCI 'plaintext interface' and relay it across the communications channel.

You might want to update your doc, as shim run-around attacks have been used against supposedly secure banking apps in the past. Encryption programs/apps are no different from banking apps with regard to this sort of vulnerability. Oh, and those allegedly 'secure' messaging apps people have on their 'mobile devices'? They are just as vulnerable to end-run attacks.

sophia July 1, 2019 4:35 AM

In this article, it is basically bringing a valid concern regarding encryption in all the digital media based platform. To get more updated information can check from Sign In 0x87dd0006 that will be effective for them.

LG TV Repair July 22, 2019 6:57 AM

Good article and right to the point.this is really the best place Quantum Computing and Cryptography. If you want Tv repair we provide best Tv repair service with customer satisfaction. We have a good team of Experience technician. Our main objective is to provide good technical support and quality service as per our commitment

gordo October 22, 2019 10:15 AM

@ Clive Robinson,

Thus the question arises as to "what might give rise to a political imperative to divert resources to QC?", rather than to some other military or Intelligence Community project or "big budget" line item.

If valid, then Chinese integration of quantum entanglement into 5G may well represent the end of the era of US signals-intelligence dominance:

‘You can never be China’s friend’: Spengler
Asia specialist and distinguished columnist David P Goldman is convinced the US and Europe stand a chance against the Red Dragon – but the clock is ticking
Interview David P. Goldman, October 20, 2019

A very senior Cabinet-level US official told me recently that the Chinese were way ahead of us before we figured out what was going on, but now we’re catching up. That statement is wrong on two grounds. First, they haven’t figured out what is going on. Secondly, they’re not catching up. Two years ago, the US Intelligence community realized that what 5G would do is not only give China a great deal of economic power, which by itself is a national security concern, but it would also, within the next several years, eliminate America’s advantage in signals intelligence.

Can you explain how?

I actually broke the story in July in the Asia Times. It’s since been discussed in various other media. The Chinese have pioneered a communications technique called “quantum communications” which uses the entanglement of electrons at a distance to create a communications signal. The quantum system is such that if you interfere with it in any way, the signal disappears. The quantum state is destroyed. So, it’s like a letter that disappears the moment you look at it. It’s theoretically impossible to hack. The 5G bandwidth is so powerful that you can integrate quantum communications into ordinary 5G communications and make it standard.

We already know that the Chinese are using quantum communications for sensitive data transmission inside China through fiber optic cable. But there are a half-dozen major groups working on embedding quantum communications in 5G. SK Telecom is working on it. Toshiba is working on it. There’s a group at the University of Bristol, which claims very good results. So, the result is America’s ability to eavesdrop on everyone else will disappear in two or three years.


Clive Robinson October 22, 2019 4:18 PM

@ Gordo,

From the article,

    The 5G bandwidth is so powerful that you can integrate quantum communications into ordinary 5G communications and make it standard.

Oh dear. Take a bunch of "sort-of truths", mix them up, then add a soupçon of hyperbole (or is it the "hyperbole of suspicion")…

And you end up with what is in effect complete twaddle in our current frame of technology.

What the man is hyping up is the capability of "quantum cryptography"… which people have yet to get to work in a switched network or at any distance. Even in free space, using a satellite to generate entangled pairs, the data rate of what is in effect an OTP is low, very low compared to that required for data transfer in a 5G network.

Even if it were somehow to be used by the Chinese, it would be ridiculously easy to stop it dead in its tracks (it's called an optical repeater).

But then, from the article, it's clear he has no real idea what he is talking about technically. As can be seen with,

The humiliating thing is we invented the semiconductor. We invented the displays. We invented the optical networks. Every single component to the digital economy was an American invention.

It might sound great to other technically illiterate Americans, but it can be shown that not "every single component" was developed by Americans or in America[1].

You also have the flip side: a UK TV personality and journalist looking into US claims of American inventions and coming to the conclusion that the only thing America invented was "condensed milk".

Both views are cherry-picked to some degree. The difference is that, as the UK journalist produced his findings for a BBC documentary, he had to satisfy an independent board of reviewers that it was factual, whereas the gentleman being interviewed in the article has no such constraints. So make of that what you will.

[1] For instance, liquid crystals were first extracted from carrots in Europe in 1888. For the next seventy-odd years they remained a curiosity at best; some European research had shown that some of these organic compounds had light-polarising characteristics that made them of some use in laboratory experiments. Some time before WWI, someone in Europe discovered that when subjected to high temperatures between the two melting points, and when a very high potential was applied, the polarisation would change. Then in the late 1960s a US-company-sponsored PhD candidate, who had got a bit of a bee in his bonnet about liquid crystals, showed that despite the high temperatures and other disadvantages it might be possible to make a new display technology. The sponsoring company effectively buried it, as they had other technology, and the researcher went off to other employment. Various people tried to improve the idea, but they were using the wrong type of liquid crystals. For various reasons the British Government, fed up with paying hand over fist to US companies for technology that had been developed in Britain but extorted by politicians to the US, decided it was time to do something about it. Thus, seeking to redress the balance, they pushed forward with investigating liquid crystal technology. As a result, the first liquid crystals that were actually practical for real-world use were invented in Britain by British researchers at Hull University, who patented them in, amongst other places, the US. The US corporation, which had other major television interests, had long since ceased its work on liquid crystals, mainly because they could not solve some very basic and fundamental problems: temperature range, solidification, high spoilage due to water vapour, and high ionic currents making not just the displays very short-lived but battery life very short as well.
Eventually, some years later, they realised they had dropped the ball and challenged the US patent the British had been granted, and for entirely frivolous reasons had the patent revoked by a US court. Thus the work in Hull by George Gray, down in Poole by BDH, and across in Malvern by the Royal Radar Establishment was effectively stolen… As was normal at the time, the UK Government got "the special relationship" treatment: serious political pressure was applied via the supposed "war debt". So much so that those whose work had effectively been stolen were not even allowed to talk about it for several decades, effectively until after they had retired and were not so easy to intimidate… This kind of nonsense continued until the 1980s, when "Mad Maggie" Thatcher finally put a stop to it.

gordo October 23, 2019 7:52 AM

@ Clive Robinson,

Your Thatcher reference reminded me of "Ronnie 'Star Wars' Raygun", the SDI ruse, and all the money spent on what would have required, IIRC from criticism of the program in the 1980s, a couple of breakthroughs in physics on the order of Einstein's relativity. Maybe the same applies to feasible quantum communications on a global scale. A couple of hours of thumbnail research, if one can call it that, shows there is absolutely nothing in existence that comes close to what the article's author claims. You mentioned satellite-based quantum systems; daylight presents a basic problem there. Anyway, one of the reasons the SDI program was shut down was whistleblower reporting of waste, fraud and abuse.* Maybe the lesson is that ruses, too, cost money and eventually outlive their usefulness.

  • Physicist Aldric Saucier blew the whistle on gross mismanagement and waste of funds associated with the Strategic Defense Initiative (SDI), otherwise known as the Star Wars program. The project’s biased and incomplete research was funded by misleading Congress about its efficacy. Saucier’s disclosures were validated, and substantially eroded political support for the SDI program.

Clive Robinson October 24, 2019 3:55 PM

@ Gordo,

The satellite that might have got him to think it was possible is talked about rather superficially in this article from a couple of years back,


However the satellite was launched a year earlier,


Which means it took the better part of a year for the Chinese to actually get the system working over that 1000 km path (possibly due to waiting for new technology for the earth-side equipment).

If I remember correctly, in essence what the Chinese did was put a photon quantum-entanglement generator on a satellite in low earth orbit. It sent one half of each entangled pair off in one direction and the other half off in a different direction: one stream of photons to one chosen target, and the other to the second chosen target. For every ~10^7 or so pairs generated, only one pair would make it to both targets…

These very rare pairs would in effect make a "One Time Pad", which could then be used to send a single bit of secret information via an earth-side communications link…

Yes, it's absolutely secure if used properly, but the data rate is so low that the best you could use it for is sending keys for the likes of a quantum-computer-proof symmetric cipher.
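A back-of-the-envelope on why the rate is only good for keying, using an assumed on-board pair-generation rate (the figure below is illustrative, not the satellite's published number):

```python
pairs_per_second = 5.9e6   # assumed on-board pair-generation rate (illustrative)
survival = 1e-7            # ~1 in 10**7 pairs reaches both ground stations
key_bits_per_second = pairs_per_second * survival    # ~0.59 bit/s

# At that rate, distributing even one 256-bit symmetric key takes minutes:
seconds_per_key = 256 / key_bits_per_second          # ~434 s, over seven minutes
print(key_bits_per_second, seconds_per_key)
```

Sub-bit-per-second rates are fine for occasionally refreshing symmetric keys, and hopeless for carrying 5G traffic directly, which is Clive's point.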

The best quantum-photon distance in fibre-optic cable back then was 200 km, and any small mechanical noise on the cable could reduce the number of successful bits dramatically…

Quantum communications systems had been commercially available for a decade or so before that, but the range, like the data rate, has always been low at any practical "out of campus/town" range. Worse, it is also a one-to-one, point-to-point system, with people still trying to work out a practical way to "switch" the quantum stream. Which is possibly what made the Chinese think of putting it in space in the first place.

But unless somebody has come up with a Nobel Prize-winning breakthrough and somehow kept it secret, the technology is nowhere near "prime time" usage, nor do I suspect it will be for quite some time to come; by which time 5G will, like 2G today, be something well past its "best before date".

Clive Robinson October 24, 2019 4:15 PM

@ gordo,

The [SDI] project’s biased and incomplete research was funded by misleading Congress about its efficacy. Saucier’s disclosures were validated, and substantially eroded political support for the SDI program.

I remember it oh so well; I even commented on this blog about it years ago.

The story was that Congress was truthfully told that SDI needed powers in the 10^12 range, yet so far devices had only been produced in the 10^6 range. But the math whizzes in Congress at the time assumed that the scientists were halfway there, not one-millionth of the way there…

The reason all that power was needed is that nuclear warheads are designed for short-route re-entry, which tends towards maximum friction power. Even the friction power of a minimum-friction re-entry is up in the tens to hundreds of kilowatts, so a megawatt or so of laser power is not going to make much difference…

The real problem, though, which came out later, was the targeting mirrors and collimators. They tended to explode or melt on each firing, and certainly, due to non-uniform heating effects, could not be controlled back in the 1980s, so they would have missed, or been well out of focus, even in the low-megawatt power range.

gordo October 24, 2019 5:11 PM

@ Clive Robinson,

Yes, as with so many endeavours, “the temptations will thrill you, but the details will kill you.”

Clive Robinson October 24, 2019 6:27 PM

@ gordo,

“the temptations will thrill you, but the details will kill you.”

+1 🙂

But “Back when we were young, were we not invincible, to love, not cry, to live, not die?”.

gordo October 25, 2019 8:50 AM

@ Clive Robinson,

Don’t be gettin’ all wistful on us, now! There’s a reason the grey-hairs are grey! 😉

Clive Robinson February 2, 2020 3:22 PM

@ Shai Machnes,

Is this a case of bad phrasing, or is QKD capable of providing authentication?

Your question is answered by Bruce's next sentence, as both an answer and a question,

    But come on — does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications?

The specialized hardware and cables provide an optical communications Shannon channel. Which, without going into all the details, was accepted from [BB84] onwards as being "tamper evident", which in itself is authentication.

The problem, however, is not the optical quantum channel, but the classical channel which is used to build the effective OTP. It's this channel that needs authentication, as it is not inherently "tamper evident" or "tamper proof".

Which is why authentication has to be done at a higher level in the stack. This is a problem, because as far as authentication goes it reduces QKD to the level of any other cryptosystem using the same type of authentication.

Thus, with the limited authentication of the classical channel and the significant limitations of the "hardware and cables" of the quantum optical channel, the question stands,

    Does anyone expect … to be useful for anything but niche applications?

But it gets worse: that quantum optical channel only works from one point to another, which means you would need not just relays but switches as well to make QKD of real use. And if you used conventional optical relays and switches, the security offered by the quantum effect disappears. Thus you need either a non-scalable 0.5(N^2-N) links or an agile link, which is maybe why the Chinese put a quantum-entangled qubit generator in a satellite; but still, the classical-channel authentication is not up to anything above what can be done with any classical channel.
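The 0.5(N^2-N) figure is the usual full-mesh link count, and it grows quickly:

```python
def point_to_point_links(n: int) -> int:
    """Dedicated point-to-point links needed so that n endpoints are all
    mutually connected: n*(n-1)/2, i.e. 0.5*(n^2 - n)."""
    return n * (n - 1) // 2

for n in (10, 1000, 1_000_000):
    print(n, point_to_point_links(n))
# 10 endpoints need 45 links; a million endpoints would need roughly
# 5 * 10**11 dedicated fibre paths -- hence the relays-and-switches problem.
```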

[BB84] https://en.m.wikipedia.org/wiki/Quantum_key_distribution
