More on NIST’s Post-Quantum Cryptography

Back in July, NIST selected third-round algorithms for its post-quantum cryptography standard.

Recently, Daniel Apon of NIST gave a talk detailing the selection criteria. Interesting stuff.

NOTE: We’re in the process of moving this blog to WordPress. Comments will be disabled until the move is complete. The management thanks you for your cooperation and support.

Posted on September 8, 2020 at 6:12 AM · 22 Comments

Comments

Q September 8, 2020 5:33 PM

The “talk” link isn’t very helpful. There is mostly a blank page with a blind “Download Now” button that does nothing.

Where is the talk? Is it a pdf, mp3, mp4, what?

Lawrence D’Oliveiro September 8, 2020 7:22 PM

Has number-theoretic “quantum”‡ computing come anywhere close to leaving the vapourware zone yet?

‡I don’t really understand the relevance of this term. All of electronics is rather crucially based on quantum effects such as superposition of states. Perhaps this should be called “entanglement” computing?

jcb September 8, 2020 7:46 PM

@ Lawrence D’Oliveiro

Has number-theoretic “quantum”‡ computing come anywhere close to leaving the vapourware zone yet?

Short answer, no.

@ (previous blog entry, linked above)

Details are _here_. This is all excellent work, and exemplifies NIST at its best.

The slightly longer answer is more like there’s an excellent chef at a certain restaurant, and something’s on a tray with a silver dome over it …

Noah September 8, 2020 9:54 PM

@Lawrence D’Oliveiro
Standard computing doesn’t directly depend on quantum properties. Technically everything is ultimately quantum if you dig deep enough, even a relay-based computer. Classic electronics can be understood without knowledge of quantum mechanics, as the quantum part is an abstraction level or two deeper than the physical properties being exploited (electromagnetism). I do my job as an EE just fine using zero quantum. A quantum computer functions directly at the quantum level and can only be understood using quantum mechanics.

metaschima September 9, 2020 2:51 PM

Actually a very useful presentation; I wish I had the audio to go with it.

The German government seems to be ahead of everyone; they have already picked Classic McEliece and FrodoKEM, which are pretty good choices. I personally would go with Classic McEliece, it’s solid. I personally don’t trust lattice-based crypto: it’s very new, and algorithms for the shortest vector problem have seen huge improvements in speed, so I would not invest in it. For digital signatures it’s a bit more challenging. SPHINCS+ is a solid choice but slow and hard to integrate; still, it would be my first choice. Picnic also looks interesting and it has already been integrated into some security programs.
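For the KEM candidates mentioned above (Classic McEliece, FrodoKEM, the lattice schemes), the application-level flow is the same: key generation, encapsulation, decapsulation. Below is a minimal sketch of that flow, assuming the liboqs-python bindings (the “oqs” module) are installed; the mechanism name string is an assumption and may differ between liboqs versions.

```python
# Hedged sketch: generic KEM flow (keygen / encapsulate / decapsulate),
# assuming liboqs-python is installed. The parameter-set identifier below
# is an assumption and may differ between liboqs releases.
import oqs

kem_name = "Classic-McEliece-348864"  # assumed identifier; FrodoKEM variants also exist

with oqs.KeyEncapsulation(kem_name) as receiver, \
     oqs.KeyEncapsulation(kem_name) as sender:
    public_key = receiver.generate_keypair()                      # receiver publishes pk
    ciphertext, secret_sender = sender.encap_secret(public_key)   # sender derives a shared secret
    secret_receiver = receiver.decap_secret(ciphertext)           # receiver recovers the same secret
    assert secret_sender == secret_receiver
```

At the application layer, swapping Classic McEliece for FrodoKEM (or a lattice finalist) is largely a change of name string; the real trade-offs are in key sizes, ciphertext sizes, and the underlying hardness assumption.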

I hope the NSA doesn’t swoop in at the end and make a last-minute change to these “to improve security”.

Lawrence D’Oliveiro September 9, 2020 6:07 PM

@Noah

Classic electronics can be understood without knowledge of quantum mechanics …

No it cannot. Transistors depend rather crucially on quantum tunnelling for their behaviour, just as one small example.

Clive Robinson September 10, 2020 12:54 AM

@ Lawrence D’Oliveiro, Noah,

Transistors depend rather crucially on quantum tunnelling for their behaviour

Whilst true, that does not usually affect how an EE designs a circuit, as Noah was indicating.

When I design even RF power circuits I generally do,

1, DC analysis
2, AC small signal analysis

In my head / on the back of a napkin as a first approximation, along with a bit of “gut feeling” based on years of experience.

Which usually follows the “works 99 times out of a hundred” rule, as you would expect with the usual negative feedback models[1].

However, if I spot feedback phase or other stability issues I will then CAD model in a circuit simulator (actually three different ones, as experience has shown they are by no means infallible). Then I go to hard prototypes and full bench testing.

But as of yet I’ve not had to take into account any quantum effects doing discrete component design. You generally only need to get that down and dirty if you are doing actual junction-level design on the semiconductor, which is something not even most “SoC” designers do these days, as they tend to “place and track” “library blocks” and the fab tools do the checking.

But to be honest I rarely do the DC analysis rigorously; I just use basic assumptions that make life simpler. That is, I assume the transistor base current does not load the DC voltage bias circuit, Vbe is about 0.7 V on BJTs at operating temperature, and hfe is high enough that I don’t have to allow for the difference the base current makes between the collector and emitter circuits, i.e. Ic = Ie. As a first approximation it’s usually “good enough”, as long as you are aware of when you have to do things more accurately (i.e. “current” as opposed to “voltage” base bias, etc.).
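To make those assumptions concrete, here is a minimal sketch of that first-pass DC bias calculation for a voltage-divider-biased common-emitter stage; the component values are invented purely for illustration.

```python
# Hedged sketch of the "good enough" first-pass DC analysis described above:
# Vbe taken as ~0.7 V, hfe assumed high enough that the base current neither
# loads the bias divider nor makes Ic differ from Ie. Values are illustrative.

def bjt_bias_point(vcc, r1, r2, re, rc, vbe=0.7):
    """Voltage-divider biased common-emitter stage, base loading neglected."""
    vb = vcc * r2 / (r1 + r2)   # unloaded divider voltage at the base
    ve = vb - vbe               # emitter sits one diode drop below the base
    ie = ve / re                # emitter current from Ohm's law
    ic = ie                     # assume Ic = Ie (base current neglected)
    vc = vcc - ic * rc          # collector voltage
    return ic, vc - ve          # collector current and Vce

# Example: 12 V supply, 47k/10k divider, 1k emitter and 3.3k collector resistors
ic, vce = bjt_bias_point(12.0, 47e3, 10e3, 1e3, 3.3e3)
print(f"Ic ~ {ic * 1000:.2f} mA, Vce ~ {vce:.2f} V")
```

As a sanity check this lands the operating point roughly mid-rail, which is exactly the sort of answer the napkin method is meant to give before any simulator is opened.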

If you look at both graduate and undergraduate training, whilst quantum effects are taught, they quickly get dropped in all but the likes of opto-electronics or other substrate-level teaching.

I’ve not looked at US training recently but I suspect it’s the same.

But at the end of the day, whilst you are aware there is something several layers deeper, the device manufacturers don’t as a rule give you the information you would need to be able to use it anyway, as they regard it as “proprietary” / “keys to the kingdom”. So you can’t design at the quantum level in by far the majority of practical design work (optics for astronomy and similar scientific instruments being the main exception).

[1] Even in RF design these days devices are such that negative feedback models are often sufficient. It’s only when you get into needing long battery life, or heatsinks where high efficiency is required, that you start having to “get cute”. Most RF power design can be done with “load pull” design, then tweaked with hard prototyping on the bench. Getting 5 kW in a 1U rack can be done these days not just with power supplies but with LDMOS FETs into the low microwave bands as well.

Lawrence D’Oliveiro September 10, 2020 6:44 PM

@Clive Robinson

You may not need to understand quantum theory in order to design a computer, just as I don’t need to understand electronic circuit theory in order to program one. Nevertheless, all computers are already “quantum” in a very fundamental sense, which was my point.

Clive Robinson September 10, 2020 7:36 PM

@ Lawrence D’Oliveiro,

Nevertheless, all computers are already “quantum” in a very fundamental sense, which was my point.

In a “fundamental sense” we are finding out that much of sensory life is actually “quantum” including the human sense of smell amongst other things.

Which kind of implies that at some point the teaching of quantum physics will hit the K-12 curriculum, much as computer programming does today but did not before the mid-70s.

By then I suspect that both of us will be classified as “dinosaurs” 🙁

Lawrence D’Oliveiro September 11, 2020 2:20 AM

@Clive Robinson

…much of sensory life is actually “quantum” including the human sense of smell amongst other things.

Everything. Even how matter hangs together, and why atoms do not collapse, can only be explained by quantum theory.

That should make it even clearer, should it not, that the (mis)use of the term “quantum” in “quantum computing” is really just a marketing ploy.

MarkH September 11, 2020 3:09 AM

@Lawrence, Clive:

Noah has it exactly right above.

All computing machinery (at least, to date) contains electrons, and couldn’t exist without them. Who calls a mechanical calculator, or a slide rule, an electronic computer?

All explosives contain atomic nuclei, and couldn’t exist without them. Who calls a stick of dynamite, or a firecracker, a nuclear explosive?

We have decades of analysis of the inherent limitations of Universal Turing Machines.

Quantum computers can do what classical computers cannot.

The essential character of QC operation depends on the non-classical properties of particle interactions. They could not be designed or understood without quantum theory.

In classical computers the quantum nature of the internal structure of their component transistors, or relays, or gears and levers, is incidental to their theory of operation, not essential.

Clive Robinson September 11, 2020 6:43 AM

@ MarkH

Who calls a mechanical calculator, or a slide rule, an electronic computer?

Many people used to call those who used mathematical tables, or later slide rules, or later still mechanical calculators, “computers”.

It was actually the name of a job of people who worked under the “comptroller”, which is a management or directorial position in an organisation, responsible for the organisation’s finances.

Those who had the job title of “computers” did the calculations, in the same way that those with the job title of “typewriters” did what was effectively the printing of documents.

In both cases, in time it was the machines that inherited the operators’ job titles as generic names.

Thus the use of “electronic”, like “analogue” and “mechanical”, is a legacy prefix used to distinguish the machines from the humans’ jobs.

The same applies to “vacuum cleaner” and various others such as “gas cooker” (which should actually be “gas stove/oven”).

MarkH September 11, 2020 7:21 AM

@Clive:

Of course slide rules are a kind of computing machinery. That’s not the point.

They aren’t ELECTRONIC according to any usage of that word. Calling them by such a name is just as preposterous as saying they are quantum computers!

Seriously, have you ever heard — or even imagined — somebody calling a hand-cranked mechanical adding machine (they used to be a thing) an electronic computer?

Noah September 11, 2020 12:23 PM

@MarkH
Exactly. Any old hammer you get at a hardware store technically uses electrostatic forces to drive nails, but good luck marketing it as an electric hammer.

MarkH September 11, 2020 1:37 PM

@Clive:

When I was a schoolboy, my father sometimes kindly indulged my fascination with office machinery by bringing home one gadget or another so I could “geek out” on it for a couple of weeks.

I had a particular interest in mechanical calculators, which at that time were just on the point of being supplanted by their electronic counterparts … though it was a while before electronic calculators appeared which also printed their answers on paper, something almost all of the mechanical calculators did. [Those machines all had electric motors to propel their works.]

One day he borrowed from the office an especially advanced model of mechanical calculator, which to my amazement had a divide key! If I measured execution times with any precision, I don’t remember them … but it seems to me that large division problems took something on the order of two to five minutes to complete, with the mechanism clattering the whole time.

I always removed the covers, of course. The addition operation was very straightforward (typically they had ten numeral keys for each decimal place, which makes mechanization very much simpler).

I could see that these machines carried out multiplication and division by exactly the same algorithms we were taught in elementary school.
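For the curious, a minimal sketch of those two schoolbook procedures follows; the repeated subtraction in the division routine is essentially what made the mechanism clatter for minutes on end.

```python
# Hedged sketch of the schoolbook algorithms a mechanical calculator mechanised:
# multiplication as shifted repeated addition, division as shifted repeated
# subtraction. Purely illustrative, not a model of any particular machine.

def long_multiply(a, b):
    """Add a * digit, shifted one place per digit of b (partial products)."""
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        total += a * int(digit) * 10 ** place
    return total

def long_divide(dividend, divisor):
    """Classic long division: count subtractions of the divisor at each digit."""
    quotient, remainder = 0, 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        q_digit = 0
        while remainder >= divisor:   # the "clattering": repeated subtraction
            remainder -= divisor
            q_digit += 1
        quotient = quotient * 10 + q_digit
    return quotient, remainder

print(long_multiply(123, 45))   # 5535
print(long_divide(1234, 7))     # (176, 2)
```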

=======================

Not long after this, I became acquainted with a man whose business was precision optical fabrication (including λ/200 optical flats!).

As a hobby, he also delved deeply into telescope design, and along the way accomplished what I believe to be one of the most significant advances in amateur telescope design of the past 50 years.

Optical design requires crushing amounts of high-precision computation, increasing with the number of elements in the system. Before he could afford an electronic calculator, he spent many evenings extracting high-precision square roots with pencil and paper.
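A hedged sketch of the sort of digit-by-digit method used for pencil-and-paper square roots is below; it processes the number two digits at a time, producing one digit of the root per step. The function name and scaling convention are my own, purely for illustration.

```python
# Illustrative digit-by-digit (pencil-and-paper style) square root:
# returns floor(sqrt(n) * 10**digits), i.e. the root with `digits`
# decimal places, using only integer arithmetic.

def longhand_sqrt(n, digits):
    scaled = str(n * 10 ** (2 * digits))
    if len(scaled) % 2:                      # pad to whole digit pairs
        scaled = "0" + scaled
    root, remainder = 0, 0
    for i in range(0, len(scaled), 2):       # bring down two digits at a time
        remainder = remainder * 100 + int(scaled[i:i + 2])
        x = 0                                # largest x with (20*root + x)*x <= remainder
        while (20 * root + x + 1) * (x + 1) <= remainder:
            x += 1
        remainder -= (20 * root + x) * x
        root = root * 10 + x
    return root

print(longhand_sqrt(2, 10))   # 14142135623 -> sqrt(2) ~ 1.4142135623
```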

He told me that at least one photographic lens company had, in desperation, filled a room with desktop mechanical calculators. They invented their own Heath Robinson mechanisms for “extracting” the result from one machine (which typically would be a sequence of wheel angles) and transferring it to another (by depressing the corresponding keys).

By this means, this optical design group could automate a long sequence of arithmetic computations.

As I try to imagine this assemblage, it assumes a nightmarish character … I remember him saying that the Mean Time Before Failure was as short as one might expect, with the monstrosity jamming up within a not very large number of minutes. But the labor savings (and perhaps more important, elimination of operator clerical errors) made it worth the trouble.

=======================

Naturally, my optical friend was eager to acquire as much computing power as he could afford to invest in his hobby, so he bought an early S-100 computer, and was excited when an obscure company called Microsoft offered a BASIC interpreter which necessarily included floating point math operations.

Perhaps readers will not be surprised to learn that this original Microsoft BASIC had defects in its floating point implementation, which caused substantial errors in results and made it nearly useless for optical design computations.

In those days, there were Microcomputer User Groups, and the then obscure Bill Gates visited my friend’s local group. He took the opportunity to tell Gates about the defects in Microsoft BASIC, in the vain hope that Microsoft would be interested in correcting them.

Mr. Gates didn’t seem at all concerned about software defects. He only wanted to talk about his next new product.

Sound familiar?

Clive Robinson September 11, 2020 3:21 PM

@ MarkH,

Mr. Gates didn’t seem at all concerned about software defects. He only wanted to talk about his next new product.

As I’ve mentioned before, I had the misfortune to attend a meeting in which Bill Gates was present. Aside from the issue you mention, I noticed a number of other traits,

1, His monomaniacal style of fixating on what he wanted to talk about, and his sulkiness when others kept reverting back to the agenda.

2, His strange body movements that made him look like a “weeble” toy rocking back and forth.

3, His distinct aroma that was not at all pleasant.

4, His almost “child in the playground”-like animation when challenged.

Needless to say the meeting was not at all productive and I was wholeheartedly glad to get out of it. He might have been a child prodigy, but he also had diva-like behaviours, not a good combination in engineering-type discussions to resolve issues.

But getting back to mechanical calculators: whilst many know that Nobel Laureate Richard Feynman was involved with the WWII development of the bomb, all the practical jokes etc. he played whilst there, and the sad story of the illness and death of his first wife, few actually know what he did there.

He appeared reluctant to talk about it, but one thing he did own up to was being in charge of large numbers of “computers” of the human variety, and of IBM tabulating systems, all of which he turned from piecemeal work into a departmental workflow system that was not just way more efficient but also made significantly fewer errors. I’m assuming that part of that work was on hydro-codes and fission products.

Which your story about the chaining of mechanical calculators reminded me of.

There are several stories from WWII about organising individual workers into large systems for complex brain-powered tasks. In a way it was the logical progression from “pin making by process” through Henry Ford’s use of humans as cogs in mechanical production on assembly lines. I sometimes wonder where it would have got to if Tommy Flowers and Konrad Zuse had not shown that what we now call computers were practically possible.

I used to know someone who had designed parts of the Lyons tea house company’s LEO I. The challenges that arose were not just technically advanced, as in “bleeding edge”; they had to invent solutions that probably would not even occur to modern design engineers, or even be legal to make these days (six-foot-long tubes of mercury and the like). Even in the late 1960s the idea of Read Only Memory was on a bit-by-bit basis; just imagine hand-assembling nearly thirty-three thousand such bits to drive an ALU made of hundreds of two-input NOR gates. Have a look at “Rope Memory” and why main memory is called “core” memory.

But getting back to mechanical calculators, you might have seen one of these,

https://m.youtube.com/watch?v=ckCHGbxtoGg

Lawrence D’Oliveiro September 11, 2020 7:19 PM

@MarkH

Quantum computers can do what classical computers cannot.

Speaking of vapourware in the present tense, just like a true marketroid.

In classical computers the quantum nature of the internal structure of their component transistors, or relays, or gears and levers, is incidental to their theory of operation, not essential.

Feel free to concoct a theory of the operation of a transistor without using quantum mechanics.

MarkH September 12, 2020 1:53 AM

@Lawrence:

Those who have read my comments here in recent years will likely have noticed that probably no participant here is more skeptical toward practical quantum computing than I.

My estimation is that the dreaded large-scale QC capable of breaking RSA, DH and DSA is unlikely to exist by 2050, if indeed such a contraption can ever be made.

That being said, quantum computers exist (at tiny demonstration scales) and appear to behave as physicists have predicted.

Real, yes. Useless, yes.

Because of obstacles to scaling — which perhaps are insurmountable — no quantum computer (as far as I know) has yet performed a useful computation that a classical computer can’t do at comparable speed.

However, QCs can exploit superposition to simultaneously evaluate numbers of alternatives, with the potential to scale to astronomical problem sizes.

Classical computers have no corresponding ability.
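To illustrate what that missing “corresponding ability” looks like, the state-vector picture can be simulated classically on a toy scale: n qubits in uniform superposition carry amplitudes for all 2^n basis states at once, which is also why classical simulation needs exponentially many numbers. The sketch below (using numpy) is only a bookkeeping illustration, not a demonstration of any speed-up.

```python
# Toy classical simulation of n qubits put into uniform superposition by
# applying a Hadamard gate to each. Illustrative only: the state vector
# grows as 2**n, which is the bookkeeping a real QC avoids.
import numpy as np

def uniform_superposition(n_qubits):
    h = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard gate
    h_all = h
    for _ in range(n_qubits - 1):
        h_all = np.kron(h_all, h)             # H applied to every qubit
    zero_state = np.zeros(2 ** n_qubits)
    zero_state[0] = 1.0                       # |00...0>
    return h_all @ zero_state

psi = uniform_superposition(3)
print(psi)                # eight equal amplitudes, each 1/sqrt(8)
print(np.sum(psi ** 2))   # probabilities sum to 1.0
```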

====================

Quantum physics explains aspects of semiconductor behavior, but as Clive explained, no such consideration appears in circuit design.

Many people mistakenly believe that the theory of nuclear fission depends on special relativity. In fact, relativity explains the tiny “mass defect,” but a purely non-relativistic understanding of fission is completely sufficient to design a nuclear power plant or warhead.

General relativity provides a magnificent framework for universal gravitation. Are children’s swings and yo-yos “relativistic” toys?

In any case, nothing about classical computers inherently requires the use of transistors.

Any type of switch or toggle which can be operated (deterministically caused to change state) by a similar switch, is sufficient to build a classical computer.

Although cost and performance would be appalling, a Universal Turing Machine could certainly be built from electromechanical relays.

I invite you to take a look at Turing’s 1937 paper “On Computable Numbers …” If you find that his delineation of the capabilities and limitations of stored program computers makes any appeal to quantum physics, do inform us!

MarkH September 13, 2020 12:08 AM

“Transistors depend rather crucially on quantum tunnelling for their behaviour”

I attempted to verify this assertion. Insofar as I discovered, it is not generally true.

In present-day computers, quantum tunneling is an undesired “parasitic” effect which can degrade transistor performance.

It would seem that the number of CPUs which depend on quantum tunneling in their component transistors, is zero.

Flash memories commonly do depend on tunneling … but not the transistors in logic circuits.

Because tunneling effects grow in magnitude with decreasing transistor size, they are projected as a limiting factor to future miniaturization. For this reason, intensive work is in progress to develop FETs which actually are based on tunneling.

As far as I’m aware, such TFETs are years away from appearing in any product.

When tunneling transistor computers are finally realized, they will be classical computers whose computations are fundamentally different from those of entangled qubits.
