Two NSA Algorithms Rejected by the ISO

The ISO has rejected two symmetric encryption algorithms: SIMON and SPECK. These algorithms were both designed by the NSA and made public in 2013. They are optimized for the small, low-cost processors found in IoT devices.

The risk of using NSA-designed ciphers, of course, is that they include NSA-designed backdoors. Personally, I doubt that they’re backdoored. And I always like seeing NSA-designed cryptography (particularly its key schedules). It’s like examining alien technology.

EDITED TO ADD (5/14): Why the algorithms were rejected.

Posted on April 25, 2018 at 6:54 AM • 44 Comments

Comments

Neal April 25, 2018 7:58 AM

From the article: “NSA officials, refused to provide the standard level of technical information to proceed.”

Wish there was more detail here. Sounds like an important technicality.

Did someone drop the paperwork or not get the message that it needs to be filed properly? Was it held up because of some document that needed declassification? (I was told that just writing up the report on the wrong type of computer could cause this kind of headache due to the air gap. Unclassified documents need to be written on unclassified computers.)

Perhaps there was a misunderstanding of the type of information that needed to be filed, or where it should be filed? (It’s ISO and they over-design everything, so I kind of doubt that they under-specified the reporting requirements.)

Or was there some specific effort to not release the required technical information?

Considering that the algorithms have been known and available for review since 2013, I kind of think this is more due to someone dropping the ball rather than intentional malicious deception.

echo April 25, 2018 7:59 AM

As a thought experiment I wondered what Bruce would come up with if he had access to the same teams and knowledge the NSA and GCHQ et al had at their disposal. We may never know the answer for certain nor be able to simulate this but I find this intriguing.

Cassandra April 25, 2018 8:17 AM

I suggest that what might be of greater importance than the specific algorithms used is for there to be an ISO standard that ensures that compliant (IoT) devices can have a choice of cryptographic primitives available to them, which can be updated to remove deprecated ones and add new ones under the control of the owner of the hardware.
Assuming that one (or two) algorithms will suffice for the foreseeable future seems short-sighted at best, and quite possibly brave.
Anything that obligates the owner of the device to cede control to the manufacturer ought to be a compliance failure.

Cassandra

Nicholas Weaver April 25, 2018 8:20 AM

I think it’s good they were rejected, for two reasons.

First, the parameters included options that were simply not secure, with key lengths that were too short. One thing time has taught us is that you don’t give programmers options that amount to an explicit “shoot self in foot”: always err on the side of the conservative.

Second, I still don’t get what the point is. AES is cheap! Yes, Simon and Speck are a bit faster/cheaper, but if you are building hardware, AES is as good as free (the hardware design is very compact), and many processors these days also include instructions that make AES fast.
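How cheap these ciphers are is easy to see: the whole Speck round function is three rotates, one modular add, and two XORs. Below is a minimal, unhardened Python sketch of Speck64/128 (the rotation amounts 8 and 3 and the 27-round count are the designers’ published parameters; there is no side-channel protection whatsoever, and this is an illustration, not a vetted implementation):

```python
MASK = 0xFFFFFFFF  # Speck64 works on 32-bit words

def rotr(x, r):
    return ((x >> r) | (x << (32 - r))) & MASK

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & MASK

def expand_key(k0, l0, l1, l2, rounds=27):
    """Speck64/128 key schedule: the round function applied to the key words."""
    l, k = [l0, l1, l2], [k0]
    for i in range(rounds - 1):
        l.append(((k[i] + rotr(l[i], 8)) & MASK) ^ i)
        k.append(rotl(k[i], 3) ^ l[i + 3])
    return k

def encrypt(x, y, round_keys):
    # Each round: rotate, add, xor in the round key; rotate, xor. That's it.
    for rk in round_keys:
        x = ((rotr(x, 8) + y) & MASK) ^ rk
        y = rotl(y, 3) ^ x
    return x, y

def decrypt(x, y, round_keys):
    # Exact inverse of the round steps, applied in reverse key order.
    for rk in reversed(round_keys):
        y = rotr(x ^ y, 3)
        x = rotl(((x ^ rk) - y) & MASK, 8)
    return x, y
```

A quick sanity check is that `decrypt(*encrypt(x, y, rks), rks)` round-trips back to the plaintext; the compactness of this sketch is exactly Weaver’s point about how little silicon or code these designs need.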

Bauke Jan Douma April 25, 2018 9:09 AM

Could it be that the NSA was more interested in the ‘where, what, how and when’ of the rejections than getting the algorithm passed?

Peter April 25, 2018 9:17 AM

Please tell more about how it is to examine alien technology. Do they really have a (U)FO and dead aliens at Area 51? Is the CPU really stolen alien technology?
Also, after Snowden, you still trust the NSA with anything ? Why??

Bobo Smith April 25, 2018 9:25 AM

@Nicholas Weaver That’s how I would do it if I were the NSA: include dumb insecure options in an otherwise secure system.

Make sure those supplied to your government are secure, and let others shoot themselves in the foot to reap the benefits. ECB mode anyone?

scot April 25, 2018 9:46 AM

“And I always like seeing NSA-designed cryptography (particularly its key schedules). It’s like examining alien technology.” So that’s where the US government keeps all the aliens, working for the NSA?

Wombat April 25, 2018 11:48 AM

@echo
“We may never know the answer for certain nor be able to simulate this but I find this intriguing.”

Bruce Schneier Facts is an accurate simulation of what would happen, were he to have those resources.

Nikke April 25, 2018 12:35 PM

The algorithms should be available for the public to decide, or at least the public should be asked: whether the buyer wants to pay an extra two dollars for a more secure processor and algorithm or not. Best would be to have both models available, more secure and less secure, and let the buyer decide.

Alejandro April 25, 2018 1:14 PM

I doubt there is a specific, detectable, baked-in backdoor, also.

Now, as for a very clever exploit which can be trivially managed from afar…..yeah, probably. NSA are code breakers not creators when all is said and done.

Let’s remember these famous words of wisdom:

“There’s an old saying in Tennessee — I know it’s in Texas, probably in Tennessee — that says, fool me once, shame on — shame on you. Fool me — you can’t get fooled again.”

― George W. Bush

Security Sam April 25, 2018 2:01 PM

From those who never say anything
Simon says while others speculate
Attempting to locate the back orifice
Against the tide must circumnavigate.

Fool April 25, 2018 7:10 PM

“It’s like examining alien technology.”

Sure, by aliens you know are trying to pull one over on you… They’ve stated it, it’s been proven, and it’s their very goal… so why does everyone keep thinking maybe they’ll suddenly turn good this time? Isn’t the definition of insanity repeating the same thing over and over and expecting different results?

John Smith April 25, 2018 8:10 PM

from Fool:

“…They’ve stated it, it’s been proven, and it’s their very goal… so why does everyone keep thinking maybe they’ll suddenly turn good this time? Isn’t the definition of insanity repeating the same thing over and over and expecting different results?”

Speaking generally, well-brought-up middle-class people find it very, very hard to get their heads around how the real world works, contrary to the official and comforting narrative. Cognitive dissonance.

Edward Snowden made the comment that, as someone with sysadmin access, he saw how the NSA really worked. He made the point that NSA workers with less access can continue to believe in the “mission” because they don’t see all the instances where the NSA betrays the “mission”. They dismiss the possibility that the betrayal is a feature and not a bug, because not dismissing it leads to some rather uncomfortable questions.

Despite his family’s military background, Snowden’s repeated exposure to the underlying reality led him to a point where he could no longer deny it.

But for us well-brought-up middle-class people, yeah, we can deny it. All day long, Hell yeah.

Spooky April 25, 2018 10:45 PM

Plenty of old-timers still think back on those peculiar NSA-provided alterations to the DES substitution boxes (which arrived without comment or explanation, apart from a general assertion that they increased the algorithm’s overall resistance to attack). In hindsight, we now know they did actually help improve resistance to differential cryptanalysis (classified knowledge at the time, well known today). Of course, they also knocked the key size down by half, presumably making it susceptible to hardware (ASIC) brute-force attacks. What one hand giveth, the other taketh away. They do have some really bright people working for them, but as with any national intelligence organ, if they come bearing gifts, it makes sense to hold those offerings up to the highest levels of scrutiny. Nothing comes for free…

Cheers,
Spooky

Cassandra April 26, 2018 2:26 AM

@Spooky

I remember what you are talking about. The NSA has a schizophrenic mission: on the one hand, secure the USA’s secrets; on the other, enable the discovery of other states’ secrets. This leads to an apparent problem: if an algorithm is cryptographically secure enough for the USA, then the same algorithm makes it difficult for the USA to decrypt other states’ secrets. However, one key point people miss is that the NSA doesn’t just certify algorithms, but also implementations. This means it is in their interest that algorithms be difficult to implement well, e.g. open to side-channel attacks. They will encourage adversaries to use uncertified implementations in the hope that mistakes will have been made.

A case in point: the proposed key sizes for SIMON and SPECK are the same as for AES. There’s nothing wrong with that as such, but that happenstance makes it possible for sloppy implementations to re-use a key between AES and SIMON/SPECK. Of course that shouldn’t happen, and a good implementation won’t do it, but the mere possibility means that if SIMON and/or SPECK are easier to attack than AES, even by brute force, and you have shared a key, then a (quite possibly good) AES implementation on the same system has been compromised. It doesn’t take many seemingly innocuous things like this to make a lot of holes in the Swiss cheese line up, as those in the aeronautical industry occasionally say.

We already know that many commercially available cpus have an on die ‘Secure Enclave’ which contains keys and code that are not under the hardware owner’s control, as well as having unrestricted access to the cpu’s resources, including memory and network access. It is not unreasonable to expect the NSA to have extreme interest in such a thing, so it may not matter how good an algorithm you use if the key can simply be read by code you do not control in the enclave and smuggled out by using random numbers (that are not as random as you may think) in the random padding of some Internet protocol or other.

I am not going to say that every algorithm should have a different key length to all other algorithms: that would be madness. However, standardising algorithms should also involve trying to make implementations more robust in the face of sloppy use, if nothing else by recommending ways to avoid known problems when implementing them. SIMON and SPECK are intended to be used on low-end hardware where the physical cost of standard side-channel mitigations may be too high to implement, which immediately tells you that the implementations are going to be a great deal more vulnerable to adversaries who use precisely those side-channel attacks to extract secrets. I’m certain that ‘low-end’ crypto will either be used inappropriately or implemented badly somewhere, leading to an exploit of more valuable systems. Some people argue that some crypto is better than none, but if it gives people a false sense of security, then I am not so sure.

Cassandra

Clive Robinson April 26, 2018 3:14 AM

@ Fool, others,

Sure, by aliens you _know_ are trying to pull one over on you…

And that is the point that makes it interesting.

If Simon or Speck contain a backdoor of some form, then finding it gives the warm feeling of doing a crossword or Sudoku successfully, plus, if you do it right, a claim to fame.

It’s been clear for a while that the closed community of the NSA goes about their designs somewhat differently to the open academic community.

The disadvantage of this is that everything the open community ever learns, the closed SigInt community gets to know in short order (even if it involves bugging the communications of academics and engineers, as has happened). We rarely get to see what the other side knows, except through what they inadvertently leak in various ways.

I looked at the two systems some time ago and, to be polite, I was not overly happy with them, because I can see all sorts of issues arising, not just with their implementation as a primitive, but also as a building block in systems. They also have a strong whiff of time-based side-channel issues arising from fast implementations.

One area I moan about occasionally is “fallback” from “auto-negotiation”. These algorithms have trivial sizes at the bottom of the list that can probably be broken with a network of PCs in short order. As these are part of a standard, a standards-compliant implementation has to support those trivial sizes, thus putting the possibility of a fallback attack in place.

Which makes an auto-fallback MITM attack in effect built into the standard from the get go…

Thus at the very least I would say they were trying to pull a trick or three at both the standards and implementation levels.
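Clive’s fallback point can be sketched in a few lines. Everything below is hypothetical: the variant list, the negotiate() function, and the attacker’s rewrite come from no real protocol and are invented purely to illustrate the downgrade mechanism:

```python
# Strongest-first list of parameter sets (names taken from the Speck family;
# the negotiation protocol itself is made up for illustration).
VARIANTS = ["Speck128/256", "Speck128/128", "Speck64/96", "Speck32/64"]

def negotiate(client_offer, server_supported):
    """Pick the first variant both sides support (offer is strongest-first)."""
    for v in client_offer:
        if v in server_supported:
            return v
    raise ValueError("no common variant")

def mitm_strip(offer, weakest="Speck32/64"):
    """An active attacker rewrites the unauthenticated offer in transit,
    leaving only the weakest parameter set."""
    return [v for v in offer if v == weakest]

# Honest negotiation picks the strongest common variant.
honest = negotiate(VARIANTS, VARIANTS)
# But a standards-compliant server must accept every size in the standard,
# so the attacker can force the trivially brute-forceable one.
attacked = negotiate(mitm_strip(VARIANTS), VARIANTS)
```

Unless the negotiation transcript is authenticated (so a stripped offer is detected), the weakest variant in the standard sets the effective security level.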

BullRun April 26, 2018 4:57 AM

ah! some folks perhaps remembered a few facts about “government” and decided to relieve some cognitive dissonance just once.

RonK April 26, 2018 7:52 AM

After a cursory inspection of the linked document, what caught my attention was:

One might argue that the amount of data encrypted by a single lightweight
device during its functional lifetime will be tiny, and data to which an adversary
has access will likely remain small when this tiny quantity is summed over all
devices using a common key. In addition, for devices that can’t be secured
physically, practical (side-channel, reverse engineering) attacks will likely take
precedence over cryptanalytic one.

Since I’ve dabbled in trying to invent secure algorithms which can be run manually, a la Solitaire, I’ve had strikingly similar thoughts.

Bobo Smith April 26, 2018 10:26 AM

@RonK

Yep, smaller devices with less sophisticated CPUs seem like they’d be more likely to be subject to timing attacks, and they’d be less able to be updated as well. So you have IoT things “encrypting things” that people care about, but they forget about the underlying devices. When an attack does come, are people really going to be able to replace the devices faster than they are exploited?

Clive Robinson April 26, 2018 2:42 PM

@ RonK,

Since I’ve dabbled in trying to invent secure algorithms which can be run manually, a la Solitaire, I’ve had strikingly similar thoughts.

As you might know, I’ve mentioned “manual” or “paper and pencil” ciphers for some time now as a way of extending the security end point beyond the communications end point.

The reality of electronics in all its forms is that it is “inefficient”, and the excess energy has to go somewhere, usually by radiation or conduction, thus providing “communications paths” out to the wider world no matter what precautions you take.

The big problem is what information is impressed or modulated onto this radiated or conducted energy as an implicit behaviour of its normal operation. Which leads to the big question: “Can the information carried by the radiated or conducted energy be used by an adversary who is passively monitoring?” To which the answer is almost certainly yes, unless strict precautions have been taken.

Such precautions are not taken with consumer or commercial equipment, and only occasionally with professional equipment used for test and measurement in laboratory conditions.

Which brings you to the inescapable conclusion that all electronics “used normally” is in effect hemorrhaging information to an adversary invisible to you and the electronics you use… Thus crypto is only of use on the non-user side of the security end point, or for information at rest.

It’s actually worse than described above, because both radiation and conduction channels are in general bi-directional. This means an adversary can do things such as “illuminate” your electronics to encourage information to be cross-modulated onto the illuminating signal, or send a modulated signal that will induce faults into your electronics that cause information to leak. Oh, and a number of other tricks, some of which I’ve mentioned in the past.

So unless you know how to turn consumer-grade electronics into laboratory-grade instrumentation, or the precautions required to render the radiated and conducted energy bandwidth insufficient to carry useful information, your best mitigation is not to do crypto on electronics that will almost always have the communications end point “end running” the security end point, thus making plaintext or keytext available to an attacker.

I happen to like “card shuffling” algorithms, and a deck of cards is not exactly a suspicious thing to have around your person.

The big problem I’ve found with them is that of being “observed in use”. There are many ways to shuffle cards manually, and with a little practice they have a certain fluidity due to efficiency of operation. That is, they appear to flow naturally to an observer’s eye. Many of the shuffles suggested for doing crypto use “marker cards” to act as pointers etc.; it is actually quite difficult to move these around with a “natural flow”, which makes the shuffle look awkward to an observer and thus somewhat suspicious to a “hinky thinker”.

So I’ve looked into using those near “endless patience games” like “Madman’s patience”. With small variations these can become endless games as well as effectively shuffling the deck without need for overt thus suspicious marker cards.

Oh, and Solitaire was not the first playing-card cipher used in a book. For instance, Robert A. Heinlein wrote four stories for his “Assignment In Eternity” series, the first of which is “Gulf”, where the protagonist, “Joseph Gilead”, meets “Gregory Baldwin” in what is temporary imprisonment, but under observation. Baldwin teaches Gilead a very simple card game so they can pass messages back and forth covertly whilst otherwise talking trash. It’s in effect a simple substitution cipher where a red card replaces a letter of the alphabet, so about as secure as Pigpen or similar. The story was first published back in the Oct-Nov 1949 edition of Astounding Science Fiction. If you can dig out a copy of the story, it’s still fairly good as science fiction and has stood the test of nearly seven decades.

Mike Spooner April 26, 2018 4:11 PM

As far as avoiding timing-side-channels and reducing power-side-channels on even very-low-end hardware, as well providing a bit more diversity-of-algorithm, together with no inscrutable magic numbers and considerable public cryptanalysis, surely ChaCha20 would be on the selection-list?
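For comparison, the ChaCha20 quarter-round is itself pure ARX with fixed, public rotation constants (16, 12, 8, 7), which is why it avoids both inscrutable magic numbers and, on most hardware, data-dependent timing. A sketch of the quarter-round as specified in RFC 8439:

```python
M = 0xFFFFFFFF  # ChaCha works on 32-bit words

def rotl(x, r):
    return ((x << r) | (x >> (32 - r))) & M

def quarter_round(a, b, c, d):
    # The four add/xor/rotate steps from RFC 8439; the rotation amounts
    # are fixed constants, so execution time does not depend on the data.
    a = (a + b) & M; d = rotl(d ^ a, 16)
    c = (c + d) & M; b = rotl(b ^ c, 12)
    a = (a + b) & M; d = rotl(d ^ a, 8)
    c = (c + d) & M; b = rotl(b ^ c, 7)
    return a, b, c, d
```

Feeding it the test vector from RFC 8439 §2.1.1 (a=0x11111111, b=0x01020304, c=0x9b8d6f43, d=0x01234567) yields (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb).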

Thoth April 27, 2018 12:55 AM

@all

Their (NSA representatives) pushy and bullying approach is the true deal breaker.

If they could have cleared things up and been very transparent about Simon and Speck, and not used their usual coercive approach, they might have had a higher chance of success. But the fact that they stooped so low as to call names and accuse others, became coercive, and kept the design decisions to themselves until they were forced to discuss them, all but guaranteed they would shoot themselves in the foot.

Cassandra April 27, 2018 2:46 AM

@Mike Spooner

I recommend reading all the thread of [PATCH v2 0/5] crypto: Speck support on the linux-arm-kernel mailing list, which discusses many of the potential candidates.

Eric Biggers writes:

We really wanted to use ChaCha20 instead. But it would have been used in a
context where IVs are reused (f2fs encryption on flash storage), which
catastrophically breaks stream ciphers, but is less bad for a block cipher
operating in XTS mode. Thus, we had to use either a block cipher, or a
wide-block encryption mode (pseudorandom permutation over the whole input). Of
course, we would have liked to store nonces instead, but that is not currently
feasible with either dm-crypt or fscrypt. It can be done with dm-crypt on top
of dm-integrity, but that performs very poorly and would be especially
inappropriate for low end mobile devices.

Paul Crowley actually designed a very neat wide-block encryption mode based on
ChaCha20 and Poly1305, which we considered too. But it would have been harder
to implement, and we’d have had to be pushing it with zero or very little
outside crypto review, vs. the many cryptanalysis papers on Speck. (In that
respect the controversy about Speck has actually become an advantage, as it has
received much more cryptanalysis than other lightweight block ciphers.)

Again, I recommend reading the whole thread for background, even if you don’t agree with the conclusions.

Cassandra

justinacolmena April 29, 2018 2:03 PM

@Neal

From the article: “NSA officials, refused to provide the standard level of technical information to proceed.”

How much “technical information” do you want? Do you need it spoon-fed to you? Do you really need someone to hold your hand when you cross the street? “These algorithms were both designed by the NSA and made public in 2013,” and Bruce blogged about them at that time, and linked to the old blog post and the original paper:

https://www.schneier.com/blog/archives/2013/07/simon_and_speck.html

https://eprint.iacr.org/2013/404.pdf

404? What? Did the paper disappear and reappear? Still in its original form?

Okay, sure, there is obviously enough technical information to implement or program the ciphers in your computer programming language of choice, but the NSA cannot necessarily be expected to provide all the details of their attempts to cryptanalyze the ciphers or of any actual weaknesses they may have found.

They are “lightweight” block ciphers. The NSA and NIST may have simply felt that there was not enough margin of safety to recommend them officially as standards. Have people actually tried to crack them? Whatever happened to the cypherpunks’ mailing lists? Has anyone published an actual report of cryptanalysis of these ciphers with a technical opinion of their strengths and weaknesses?

The risk of using NSA-designed ciphers, of course, is that they include NSA-designed backdoors.

On this point I seriously disagree with Bruce. NSA’s cryptanalysts and code-breakers may indeed be aware of some theoretical weaknesses in these ciphers of which the general public is not, but it just goes too far and rubs a person the wrong way to call out “NSA-designed backdoors” in these ciphers without any evidence to back that statement up.

In all reality, the Russians, Chinese, Japanese, or Germans are just as likely to have broken these or similar ciphers as the NSA, regardless of who publicly proposed or allegedly designed them.

The ciphers were rejected for fitness for a particular purpose, namely the Internet of Things. Things that have the propensity to spy on us, and moreover cannot simply be updated or patched if they contain bugs.

justinacolmena April 29, 2018 5:39 PM

@Thoth

Their (NSA representatives) pushy and bullying approach is the true deal breaker.

I don’t see that approach in any official press releases from the NSA. Granted, there is a lot of pushing and shoving and bullying, and that behavior is oftentimes a real deal-breaker.

It is more the general corporate bullying and run-around. Remember the old Microsoft-vs.-Linux fight? That was not NSA. Remember when you couldn’t get fired for buying IBM? That wasn’t NSA, either.

The bullies are corporate proprietary interests, not government classified interests. They are aggressively protecting their copyrights and trade secrets. It’s intellectual property. They secretly own your brain and you need a haircut and blah blah blah.

Clive Robinson April 29, 2018 6:49 PM

@ justinacolmena,

How much “technical information” do you want? Do you need it spoon-fed to you?

Those are questions you should be asking of those who made them as statements for rejecting the NSA algos, not of onlookers.

As quite a few suspect there is quite likely a “political message” being sent.

However as I’ve noted above I’m not keen on these ciphers or other ARX ciphers in general for various reasons.

Whilst in theory ARX operations do not leak key or data[1] information by time-based side channels, there are other side channels that need to be considered. There is, however, another issue with ARX instruction usage: as a result, the ciphers tend to use a vastly increased number of Feistel rounds, which leaks other information in a similar way to traffic analysis.

With regards,

Have people actually tried to crack them? What ever happened to the cypherpunks’ mailing lists? Has anyone published an actual report of cryptanalysis of these ciphers with a technical opinion of their strengths and weakness?

Both yes and no. The actual algorithms have been looked at mathematically / logically etc. as algorithms quite extensively. However, there are few if any analyses of “implementations” for side channels. But worse, the proposed standards have an obvious series of “too weak” variations where block size and key size are way too small to be considered secure. As I’ve mentioned above, to be “standards compliant” an implementation would have to be able to work in these “too weak” variations, which opens up a large can of worms with regard to Man In The Middle “fallback” attacks. For that reason alone the standard should be “deep-sixed” and resubmitted without the “too weak” variations.

But there is another fly in the ointment,

NSA’s cryptanalysts and code-breakers may indeed be aware of some theoretical weaknesses in these ciphers of which the general public is not, but it just goes too far and rubs a person the wrong way to call out “NSA-designed backdoors” in these ciphers without any evidence to back that statement up.

The simple fact is that what became GCHQ and what became the NSA have a very long history of “finessing” in all manner of ways. The point is that, aside from the still open question of DES, all the algorithms they have produced, from the earliest mechanical ciphers through to modern mathematical ciphers, have had some sort of “fix” built in. Mostly it is a matter of putting in forms of hidden backdoors to make breaking either the algorithm or the system easier. But in at least one case (the Clipper chip) it was so that they could make the “Key Escrow” recovery process via the Law Enforcement Access Field useless to law enforcement. There were two separate failings with LEAF, the first found by Matt Blaze in 1994, the second found by Yair Frankel and Moti Yung the following year. These supposed failings would allow the US SigInt agencies “in the know” to put themselves in effect above the law of the land. Thus people need to remember that “NOBUS” is “dual use” in meaning (something you hardly hear mentioned at all).

But the NSA even knows how to “put in the fix” on algorithms they did not design… As I’ve indicated in the past, they quite deliberately put the fix in on the AES competition. The result was weak implementations that could be broken on a PC by observing network packet timings. Some of those weak implementations are still out there on the Internet and in Industrial Control Systems (ICS) today, and may well still be in twenty to thirty years.

So yes people who have been around for a while or have studied the BRUSA / UKUSA countries SigInt agencies histories are deeply suspicious of what comes out of the NSA et al as they all have “previous”.

[1] The theory is that the ARX instructions of “Add, Rotate and eXclusive-or” are not just “atomic” but “fixed execution time”. Unfortunately, whilst XOR is a logical or bitwise instruction, Add is an arithmetic instruction and has a data-dependent carry function across the CPU word. If you examine the fine power consumption information, it will to a certain extent be data dependent. In turn the power consumption affects the EM power spectrum, which can be detected in a number of ways (you could look up Differential Power Analysis from the late 1990s to get an idea from an earlier, similar problem). One thing embedded microcontroller designs are frequently known for, especially new IoT devices, is their Radio Frequency Interference (RFI) issues due to “cost savings” on the likes of decoupling capacitors. Capacitors with the required low ESR and inductance are more expensive on a same-value-of-capacitance basis. Thus if decoupling capacitors are actually used, they will be the cheaper, higher-inductance and higher-ESR capacitors, which have much lower self-resonant frequencies and lower effective Qs. Worse, they will often be used on too long a length of PCB trace. Thus high-frequency signals get radiated as RFI that at best interferes with communications, as well as having the reciprocal effect of making the devices more sensitive to EM radiation. The point is that whilst it may be regarded as “interference” in the general sense, it actually has information impressed / modulated upon it that can be detected (see TEMPEST / EmSec). People need to remember it is not just time-based side channels you need to be concerned about, but EM side channels and their power spectrum as well.
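The data-dependent carry described in footnote [1] is easy to make concrete. The toy function below counts how far the carry chain ripples in an unsigned add; it illustrates why the switching activity (and hence, to a first approximation, the instantaneous power draw) of an Add instruction depends on the operands. It is an illustration only, not a power model of any real CPU:

```python
def carry_count(a, b, width=32):
    """Count the carries generated by an unsigned `width`-bit add of a and b."""
    carries = c = 0
    for i in range(width):
        # Full-adder at bit i: operand bits plus the incoming carry.
        s = ((a >> i) & 1) + ((b >> i) & 1) + c
        c = s >> 1          # carry out of this bit position
        carries += c
    return carries

# The same Add instruction, with very different internal switching activity:
low_activity = carry_count(0x00000000, 0x00000001)   # no carries at all
high_activity = carry_count(0xFFFFFFFF, 0x00000001)  # carry ripples every bit
```

XOR and rotate have no such data-dependent propagation, which is why the Add is the interesting instruction for power analysis of ARX designs.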

Clive Robinson April 29, 2018 7:39 PM

@ justinacolmena, Thoth,

The bullies are corporate proprietary interests, not government classified interests.

Sorry, but the NSA has “previous” on bullying members of standards committees. Quite shameful, in fact. The UK’s GCHQ likewise has lots of previous; they just tend to be more subtle about how they go about it.

I’ve been in international standards meetings when members of the Five Eyes were pushing an agenda via a “think of the children” style emotional argument loosely wrapped up as a “health and safety” issue (the same argument was later used to force all those GPS chips into your mobile phones, to make tracking you oh so much easier, and also to allow the phone microphone to be turned on remotely, turning it into a bugging device).

As I said, the various processes by which they “put in the fix” are called “finessing”, and they have been doing it since before the current SigInt agencies were even thought of.

Have a look at the history of “Postmasters General” and their equivalent government ministers, and the strange oddities in the likes of telephone standards going back before we had industry standards bodies…

Oh, and then there were the other little fixes, such as making access to shielding components for radio equipment difficult[1]. That eventually got flattened by the sheer number of problems it caused, which forced the EMC standards to be brought in. But if you look at them with an engineer’s eye you can see where the game has been played yet again…

[1] Have a look at the history of how the German radio service quickly tracked down SOE radio operators, and how the same trick was used by MI5 to find Russian counter-surveillance operatives[2]. The British Government used the same technique to find those who were not paying wireless / television licence fees, and in the 1970s there was shock when it was demonstrated that you could use the RFI from early computer Visual Display Units (VDUs) to rebuild the image on another monitor screen as much as 150 feet away.

[2] You will find a description of this in the first half of ex-MI5 scientific officer Peter Wright’s book “Spycatcher”, along with the fact that his thoughtful assistant was Tony Sale, who went on in later life to save Bletchley Park from being bulldozed into extinction. Tony and I had one or two chats about the funnier side of such issues, such as an early counter-surveillance operation that got blown by a local resident who was retuning their TV. One of the operatives, even though nearly six foot tall, had been disguised as a “nanny” with a Silver Cross pram, inside of which was an early black-and-white TV camera connected to a transmitter that used the metal hoops supporting the rain hood as an antenna. Even though the transmitter was designed to work outside of the then TV broadcast band, due to insufficient technical measures in the control vehicle the signal effectively got “re-broadcast” in band, and the resident picked up the pictures and recognised his neighbour’s house. On looking out of the window and seeing the very odd nanny, he realised it was a bloke, assumed correctly he was up to no good, and proceeded to come up to the nanny and start hitting the unfortunate bloke with a stout walking stick. The pram got overturned in the scuffle, and valve equipment tends not to be light, nor to bounce. The police became involved, as did copious quantities of paperwork.

Clive Robinson April 29, 2018 8:30 PM

@ echo,

Not to mention turning your electronic device into a radar capable of detecting movement (such as the movement of hands which may beyond the electronic device end point reveal key presses or writing).

Yup, and a couple of years back on this blog, when talking about making your own defensive area using unsuspicious household items, I mentioned a way to deal with the problem to a certain extent.

What I suggested was an oscillating desk fan onto which you connect "streamers" that have a metallized or foil conductive component which makes them a resonant length at the various likely frequencies[1]. The signal from a randomly moving resonant conductor is going to be many, many times that of your moving fingers.
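As a rough sketch of sizing such streamers (my own back-of-envelope arithmetic, not from the original comment), a half-wave resonant element is simply lambda/2 = c / (2f); the frequencies below are assumed examples for Wi-Fi-band sensing:

```python
# Half-wave "streamer" length for a given radar/Wi-Fi frequency.
C = 299_792_458.0  # speed of light in m/s

def half_wave_length_cm(freq_hz: float) -> float:
    """Length in cm of a half-wavelength conductor resonant at freq_hz."""
    return C / (2.0 * freq_hz) * 100.0

# Assumed example frequencies for Wi-Fi back-scatter style sensing:
for f in (2.4e9, 5.8e9):
    print(f"{f/1e9:.1f} GHz -> {half_wave_length_cm(f):.2f} cm streamer")
```

So a 2.4 GHz streamer comes out at roughly 6 cm, short enough to hang several from each fan blade.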

Further, I don't know how old you are, but back in the early 1980s a firm in Mitcham, Surrey built a device called the "Microwriter". It had six buttons and a small green calculator-style display. It allowed you to type quite rapidly with one hand, as the keys formed a "chording keyboard" and only fractional movements of the fingers are required. I nearly ended up working for them; however, other world events changed that. Whilst the original is no longer made or really remembered, there is a modern version called the CyKey, after one of the original designers,

http://www.cykey.co.uk

You kind of put your hand on it like it is a large mouse and your fingers rest on the buttons, so your finger only depresses about 1/8th of an inch, which would be difficult to pick up with wall-penetrating radar or WiFi back-scatter systems.

Mad as it might seem at times, any advancement in surveillance has usually been in effect countered by other technology, usually quite unintentionally. It just takes a "hinky mind" to see the possibilities and thus stay a step or two ahead of the game 😉

[1] In principle you are going back to the war of the beams in World War II, where German radar was in effect jammed or decoyed by the use of thousands of small "half wave radiators" launched from a single aircraft. It was variously known as Window or Chaff. Both the British and Germans developed it independently, but neither side used it originally for fear of retaliation. It was only with the development of the cavity magnetron by John Randall and Harry Boot at Birmingham University, which gave us centimetric radar, that the British started using it to good effect. The US learned of "Window", which they later named "chaff", at the same time as they learned about the cavity magnetron, via the Tizard Mission to the US.

Keith April 30, 2018 1:44 PM

It's kinda sad. I have actually played with both algorithms in question, and they both are quite efficient, having made an implementation of both Simon and Speck. Simon was quite impressive when implemented on an FPGA.
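For anyone who wants to play with it too, here is a minimal Python sketch of Speck128/128 (two 64-bit words, 32 rounds, rotations 8 and 3), written from the published specification; it is checked only for encrypt/decrypt round-trip consistency, and no official test vectors are claimed here:

```python
# Speck128/128: block = two 64-bit words (x, y), key = two 64-bit words,
# 32 rounds. Pure ARX: add, rotate, xor.
MASK = (1 << 64) - 1

def ror(v, r):  # rotate a 64-bit word right by r
    return ((v >> r) | (v << (64 - r))) & MASK

def rol(v, r):  # rotate a 64-bit word left by r
    return ((v << r) | (v >> (64 - r))) & MASK

def round_enc(x, y, k):
    x = ((ror(x, 8) + y) & MASK) ^ k
    y = rol(y, 3) ^ x
    return x, y

def round_dec(x, y, k):  # exact inverse of round_enc
    y = ror(x ^ y, 3)
    x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

def key_schedule(k0, l0):
    # The key schedule reuses the round function, with the round index
    # as the "key" input.
    ks, l = [k0], l0
    for i in range(31):
        l, k = round_enc(l, ks[-1], i)
        ks.append(k)
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x, y = round_enc(x, y, k)
    return x, y

def decrypt(x, y, ks):
    for k in reversed(ks):
        x, y = round_dec(x, y, k)
    return x, y
```

The whole cipher really is just those three operations per round, which is why it maps so neatly onto small MCUs and FPGAs.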

As for the security of these algorithms: they are symmetric key algorithms, so it strikes me as harder to put in a backdoor that would not be noticed. My only concern is shorter bit-width keys.

I am curious about the actual reason for the rejection. What technical information did ISO not have?

Rob May 1, 2018 8:55 AM

John Smith: ‘Edward Snowden made the comment that, as someone with sysadmin access, he saw how the NSA really worked. He made the point that NSA workers with less access can continue to believe in the “mission” because they don’t see all the instances where NSA betrays the “mission”. They dismiss the possibility that the betrayal is a feature and not a bug, because not to dismiss it leads to some rather uncomfortable questions.

Despite his family’s military background, Snowden’s repeated exposure to the underlying reality led him to a point where he could no longer deny it.’

Can you or someone else provide a link to that comment somewhere? I’d be grateful, because it’s hard to find.

Schizophrenic May 1, 2018 7:20 PM

@Cassandra

“Schizophrenic” does not mean what you think it means. It has nothing to do with multiple personalities, which is how you use it.

Clive Robinson May 1, 2018 10:48 PM

@ Keith,

As for the security of these algorithms: they are symmetric key algorithms, so it strikes me as harder to put in a backdoor that would not be noticed.

You left the word “block” out before algorithms.

It's actually quite hard to put a backdoor into a symmetric key block algorithm, compared to doing it to a symmetric key stream algorithm.

One of the tricks the predecessors of the NSA pulled in the past was having an algorithm with a range of key strengths, from very weak to sufficiently strong, in their mechanical cipher systems. They knew the enemy was likely to capture the cipher machines and either use them as captured or make their own versions, but without knowing which keys were weak and which sufficiently strong. Thus if, say, 1 in 5 of the keys was very weak, they would break 1/5th of the messages, and by analysing the content and format that would make the next two or three fifths much easier to break. This in turn, along with the likes of traffic analysis, would give them even the strong keys… To prevent the same fate befalling their own side, they were also responsible for setting the key schedules, and thus would only ever pick sufficiently strong keys.

It's why, after various wars etc., mechanical crypto machines were sold with little or no restriction, and thus ended up getting used by many countries in the Middle East, Africa, South America, Asia and even Europe. Thus life was fairly easy for the Five Eyes SigInt agencies for three decades or more, till the secret finally came out.

As we also know, there was a secret agreement between Crypto AG of Zug, Switzerland and the NSA to have at least three different levels of crypto kit that would be sold into different markets, further aiding the SigInt agencies for another couple of decades.

It was in the 1990s that MCUs available to all had sufficient power to run more secure algorithms that the SigInt agencies could not easily break. However, problems with implementation allowed them to use side channels that leaked keytext or plaintext to get the information.

In fact, if you look at the way the NSA rigged the AES competition through NIST, we ended up with "publicly available code" that haemorrhaged keytext information through cache timing etc. This code ended up in "crypto code libraries", and thus the leaky code is still running on devices connected to the Internet and other networks the Five Eyes SigInt agencies have easy access to.
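To illustrate the class of leak being described (a toy sketch of my own, not actual AES library code): table-lookup implementations index a large table with a value derived from the secret key, so which cache line gets touched, and hence the lookup's timing, depends on the secret:

```python
# Toy illustration (NOT real AES): a table lookup whose memory access
# pattern depends on a secret key byte. On real hardware, which 64-byte
# cache line of the table is touched is observable via timing.
SBOX = list(range(256))  # stand-in for an AES S-box / T-table

def leaky_lookup_line(pt_byte: int, key_byte: int) -> int:
    idx = pt_byte ^ key_byte   # secret-dependent table index (as in AES round 1)
    _ = SBOX[idx]              # the access; its address reveals idx's cache line
    return idx >> 6            # "cache line number" an attacker could infer

# Different key bytes usually land the access in different cache lines,
# which is exactly the signal cache-timing attacks recover:
print(leaky_lookup_line(0x00, 0x13), leaky_lookup_line(0x00, 0xD7))
```

Constant-time implementations avoid this by making every memory access pattern independent of the key, e.g. via bitsliced S-boxes or hardware AES instructions.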

I'm guessing that other nations' SigInt agencies are likewise taking advantage of implementation side channels.

I've yet to see an analysis of Simon and Speck against various side channel attacks (not just time-based), but that is where I would put a small wager on their having "added a little pepper" to the algorithms.

Clive Robinson May 2, 2018 6:25 AM

@ Andreas, EvilKiru,

Is anyone aware that this is pretty old news … ?!

Yes and no. It's been clear for some time there has been opposition to the NSA, not just from NIST but from other standards organisations; likewise other SigInt agencies and their "old guard" have been getting challenged.

It's not really surprising when you consider the dirt that is being dished against them.

But as with many things in life, final events have preceding signposts. Hence the old saying about the "Three signs to disaster":

    The first sign is only visible with hindsight. The second, usually only to the more astute observers. With the third being more or less obvious to all but the losing players.

One sign that many missed, after the Ed Snowden revelations, was that really desperate letter from senior management to all NSA employees. It was confirmation that what was being revealed was not just the truth, but a portent of things to come. Thus some of the smarter NSA employees brushed up their C.V.s and scuttled down the mooring ropes long before the rising waters got up past the Plimsoll line.

Many observers have had a very good idea for quite a while now that the two ARX ciphers were not going to make it. Which is why I suspect quite a few researchers looked hard for reasons to be the "Giant Killers". Thus some observers were just "waiting for the hammer to fall" officially, others desperately hoping otherwise, but the hammer has struck.

The question now is of course where things go from here, and the first step will no doubt be a number of enquiries / autopsies to try to find out what went wrong. Which gives the equivalent of "news talking heads" the opportunity to market themselves…

The reason given for the rejection is sufficiently open-ended, if not ambiguous, that the finger pointing could go on for a very long time to come. This may encourage some researchers to "stay with it" to find the killer reason; most, however, I suspect will now move on, leaving it as "an exercise for the student/reader", with the whiff of failure lingering long after as discouragement.

There are, however, downsides. For instance, it may well tarnish ARX ciphers in general, discouraging further work in that area. The NSA may decide to avoid similar issues in future and not go down the standards route, which has more than political issues at stake. That will have further knock-on effects, in that their brightest and best might decide to jump ship for industry / academia, whilst others just will not sign up in the first place.

As @Bruce has noted, we get very few chances to look in through the NSA's back room windows, so the quality of what goes in the shop front window is a major source of information about their methods and research directions. If the NSA puts up the shutters then we lose that source of information.

Thus we need to see an unambiguous reason for the ISO rejection made clear and public. Something tells me this may not happen any time soon, if at all… Which kind of makes the rejection a double failure.
