Did the FBI Plant Backdoors in OpenBSD?

It has been accused of it.

I doubt this is true. One, it's a very risky thing to do. And two, there are more than enough exploitable security vulnerabilities in a piece of code that large. Finding and exploiting them is a much better strategy than planting them. But maybe someone at the FBI is that dumb.

EDITED TO ADD (12/17): Further information is here. And a denial from an FBI agent.

Posted on December 17, 2010 at 10:49 AM • 58 Comments

Comments

BF Skinner • December 17, 2010 11:00 AM

Maybe, maybe not.

It makes sense that they didn't do it in OpenBSD... what crooks use OpenBSD?

If the target is criminal misconduct, then the target is (as it was with Clipper) the communications at the email ISP level and backdoors to Windows.

If the target is hackers ... it makes more sense.

But the strategy from the feebies is "want it when I need it". Not: need it, then have to identify exploitable holes on the target, hack it, establish a foothold, escalate privilege.

Also, it's more certain to get a sneak-and-peek warrant and plant a keylogger, then a rootkit, on the endpoint during a series of burglaries.

Cory Durand • December 17, 2010 11:14 AM

It would not seem plausible that this would even happen, but I guess you never know. The sad thing is that if all the documentation is true and does point to this, I would guess the next question to ask ourselves is: "Are there other secret government deals like this out there, and if so, where?"

I find this reminiscent of an Orwellian society in which the government wants to know everything about its citizens more than it wants to catch some unknown possible threat. Using one of the many exploits that are already out there would have been faster than this crazy plan the FBI had, and it would have been more secretive, with less of a trail back to them.

Clive Robinson • December 17, 2010 11:31 AM

It could be argued that, as side channel attacks were known prior to IPsec, the whole of IPsec is a back door...

I've said it before: the problem with a general purpose computing platform is "efficiency".

Unless great care is taken (clock the inputs, clock the outputs), any efficiency measure will almost certainly lead to information leaking via timing.

Obviously, the lower down the stack you go, the more important efficiency is and the more easily visible any timing differences are.

IPsec is towards the bottom of the stack, therefore any implementation is likely to be hemorrhaging information via timing side channels unless great care is taken.

Do the OpenBSD programmers know how to take the right precautions?

Well time will tell...
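The sort of efficiency-driven timing leak Clive describes can be sketched with a naive early-exit byte compare (a hypothetical illustration, not code from OpenBSD):

```c
#include <stddef.h>

/* Naive comparison: returns at the first mismatch, so the number of
 * iterations -- and hence the running time -- depends on how long a
 * prefix of the attacker's guess matches the secret. */
size_t leaky_compare(const unsigned char *secret, const unsigned char *guess,
                     size_t n, int *equal)
{
    size_t i;
    *equal = 1;
    for (i = 0; i < n; i++) {
        if (secret[i] != guess[i]) {   /* early exit leaks position i */
            *equal = 0;
            return i + 1;              /* iterations done: a proxy for time */
        }
    }
    return n;
}
```

An attacker who can measure the running time (here modelled by the iteration count) can recover the secret one byte at a time by extending a matching prefix.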

Carlo Graziani • December 17, 2010 12:03 PM

Supposing, for the sake of argument, that this Perry guy isn't a fantasist (which he may very well be):

Could someone with the technical chops to evaluate these claims explain to me whether the described weakness comprises two different kinds of embedded exploits (a backdoor plus a side-channel leak), or are they part of the same weakness?

Also, what sorts of side-channels are plausible here, leaking what kind of information? And what sorts of side-channels were known and exploited at the time this stuff allegedly happened?

dave-ilsw • December 17, 2010 12:10 PM

Or consider the author of the email, who hadn't communicated with the OpenBSD person in 10 years (and the OpenBSD person would have preferred the silence lasted longer, implying that the lack of contact was over some disagreement between them). He may simply be laughing his head off over the disarray into which the OpenBSD community has been thrown by his immature revenge email, over whatever it was that caused the rift between the two 10 years ago.

basement cat • December 17, 2010 12:18 PM

How do we know Bruce Schneier isn't working for the feds, claiming this is a just a hoax?! If Bruce doesn't say it's insecure, it must be good!

(yes, that's parody and sarcasm.)

Johnson • December 17, 2010 12:38 PM

This is FUD, likely part of a vendetta.

Read the article Bruce posted in the "Further information is here" link. It says:

"The person I reported to at EOSUA was Zal Azmi, who was later appointed to Chief Information Officer of the FBI by George W. Bush, and who was chosen to lead portions of the EOUSA VPN project based upon his previous experience with the Marines (prior to that, Zal was a mujadeen for Usama bin Laden in their fight against the Soviets [...]"

So, the guy who told de Raadt that the FBI backdoored OpenBSD also claims that the CIO of the FBI was a mujadeen who fought with bin Laden. Nobody, NOBODY should be taking this guy seriously.

+1 to Theo for being open about these claims, despite their ludicrous nature.

Nick P • December 17, 2010 1:18 PM

@ Clive Robinson

Yeah, one could say they deliberately kept things vulnerable. Things that resist side channel attacks or have formal verification are generally considered COMSEC items, subject to handling and export controls. Hence, if the government backed it for the public, it's rarely secure. That wouldn't help them in their, err, *other* goals. ;)

On the backdoor claim, I think it's FUD. It is possible, though. The obfuscated C contest is the ultimate proof that systems can be deliberately weakened in an undetectable way. Crypto is also complicated, and it's hard for non-experts to evaluate implementations. However, I see this as a more likely threat for Linux and other BSD releases. OpenBSD has the most thorough bug hunters in the open source world. I'd say it's almost definitely *not* backdoored.

Thomas • December 17, 2010 3:07 PM

@basement cat
"(yes, that's parody and sarcasm.)"

Maybe.
On the other hand, it could be a clever triple-bluff with a twist.

mr.t. • December 17, 2010 5:10 PM

Unlike you guys, I expected this to happen one day; I'm neither surprised nor shocked. I'm posting this from an iPhone (with multiple known and likely some unknown backdoors), so who cares.
Bruce: what exactly is supposed to be risky about planting backdoors? It isn't, for any stakeholder. Think about it again. It's a very weak argument for ruling out manipulation.

Dan Weber • December 17, 2010 5:32 PM

While his accusation doesn't violate any laws of physics, it requires so many outlandish things to happen that I can't believe it for more than 5 seconds.

Carl • December 17, 2010 7:18 PM

Can't imagine there is any truth to it, but of course it all depends on your definition of "back door". Some type of "unlawful" intercept capability? No way: hundreds of people looking at the code, fixing bugs, maintaining it across releases...
If he did, he would be the dumbest FBI agent ever hired; the upside is zero, the downside is infinite.

Fake George Perry • December 17, 2010 9:48 PM

Hi, I just wanted to let you all know that in light of my recent revelations I will be offering consulting services to identify which products have not been backdoored. Especially for companies looking for security for VMware vSphere deployments.

Merry Christmas

Richard Steven Hack • December 17, 2010 10:03 PM

Zalmai Azmi's history in this article:

Zalmai Azmi | From New York City to Afghanistan to the FBI
gcn dot com/articles/2006/09/07/zalmai-azmi--from-new-york-city-to-afghanistan-to-the-fbi.aspx

He emigrated from Afghanistan with his family as a teenager. No details on what he or his father were up to at the time bin Laden was being hosted there.

PackagedBlue • December 18, 2010 12:07 AM

Who needs a backdoor when the FBI can do an OpenBSD rig, or rather any OS on stock hardware?

This late after 9/11, you just know that some serious hardware issues are forced in.

#IPSEC....

anon • December 18, 2010 6:42 AM

@Clive Robinson, it was only changed recently, but OpenBSD now takes special care about timing (specifically in comparing memory: there is a timingsafe_bcmp function that avoids exiting early and leaking information through timing)
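For readers who haven't seen it, the idea behind a function like timingsafe_bcmp can be sketched as follows (a minimal re-implementation for illustration; see the actual OpenBSD source for the real thing):

```c
#include <stddef.h>

/* Constant-time comparison: always touches every byte, accumulating
 * differences with OR rather than branching on them, so the running
 * time does not depend on where (or whether) the buffers differ.
 * Returns 0 iff the buffers are equal, like bcmp(). */
int ct_bcmp(const void *b1, const void *b2, size_t n)
{
    const unsigned char *p1 = b1, *p2 = b2;
    unsigned char diff = 0;
    size_t i;

    for (i = 0; i < n; i++)
        diff |= p1[i] ^ p2[i];   /* no data-dependent early exit */

    return diff != 0;
}
```

The loop does the same amount of work for equal and unequal inputs, which is exactly the property an early-exiting memcmp lacks.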

Geek Prophet • December 18, 2010 9:14 AM

@mr.t.

There is plenty of risk. Ignoring several possible risks, look at the one that supposedly happened: somebody in the know betrayed you.

Now, investigating the code for flaws is something you are supposed to do. So of course there is nothing suspicious about that, and if you are reported, people will just look at whoever rats you out funny. However, if you are ratted out for having *put in* a back door, there is backlash.

And that is only one of the increased risks involved.

@PackagedBlue

I don't buy it. Implanting hardware flaws in equipment *used by your own side* is stupid. Even the NSA didn't put back doors into DES when they theoretically could have, because once the secret gets out to your enemies, you're screwed.

It would be like protecting yourself from terrorists and criminals by making everybody wear explosive collars, so you could just call them up and tell them to turn themselves in or you will blow their heads off as soon as you are ready to make an arrest. Sooner or later, the criminals or terrorists will get the code for the collars, and....

Clive Robinson • December 18, 2010 2:42 PM

@ anon,

"... it was only changed recently but OpenBSD is now taking special care about timing"

Thanks for that. It's been a little while since I gave the OpenBSD source a look, so I shall make a little time to re-acquaint myself with it.

Clive Robinson • December 18, 2010 3:26 PM

@ Geek Prophet,

"Implanting hardware flaws in equipment *used by your own side* is stupid."

Not always. Strange as it might sound, it depends on a number of factors and on what flaws you add.

"Even the NSA didn't put back doors into DES when they theoretically could have, because once the secret gets out to your enemies, you're screwed."

They may not have put it in DES, but it is strongly suspected they did put deliberate flaws into mechanical cipher systems.

Although it had been rumoured for a number of years beforehand, a sales agent of Crypto AG ended up telling a tale of NSA involvement in some of Crypto AG's products (he was subsequently charged under Swiss commercial law).

However, the original rumour goes back to the C34 field cipher used extensively by the US military. Subsequent analysis done in the 1980s shows that part of the story has some truth to it.

To understand why it is not only possible but even plausible, you first have to realise that the NSA has two functions in life. The first is to secure all US classified communications; the second is to read the secrets of foreign countries (including all its allies, without exception). And this dual role was also the case for its predecessors, right back to before Riverbank.

The story goes something like this.

It is a well known and accepted axiom that the "enemy knows the system". However, as many people realise, while the enemy might know enough to reproduce the system, do they really have the mathematical and logical skills to understand all its strengths and weaknesses?

The reality is probably not. And this gives rise to an interesting idea for solving the problem of securing your own comms without the enemy being equally secure using a "knocked off" copy of your system.

The problem with all mechanical ciphers and most electromechanical ciphers is that, once captured, their basic workings become known to the enemy, who can easily build their own system based on them.

This can be seen in that the German Enigma and British Typex were essentially the same system, and the British used lightly modified Typexes to decode German Enigma traffic.

Thus, if you put a strong cipher out into the field (to fulfil the first requirement), it will inevitably be captured and copied, and the enemy then has the opportunity to bring their comms security up to yours (negating the second requirement).

How do you reconcile the two requirements?

Well, one way is to design a system with a large key space, but importantly one where some parts of the key space are strong, some of medium strength, and some weak (even DES had some weak keys).

Now, if you know which keys are strong, you ensure that your comms only use those keys. However, if the enemy does not possess sufficient analytical skills and knowledge, they will use keys randomly from the whole key space. Thus a certain percentage of their traffic will be easily read.

What is not often appreciated is that the "technical breaking" of a cipher is not the routine activity of government cryptanalysts; reading the traffic is. To get "daily breaks" consistently requires a good knowledge of the traffic you are trying to read. The more traffic you have read, the easier it becomes to decode other traffic or to associate meaning with traffic flow analysis. So not being able to read all traffic is not of necessity as bad as it might seem at first: even reading as little as 10% will, with time, enable the context and content of other messages to be inferred, and thus "probables" to be deduced that significantly short circuit brute force cryptanalysis (it is one of the reasons this sort of directed brute force attack is known in some circles as a "British Museum" attack).
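The strong-keys/weak-keys idea above can be illustrated with DES, whose four weak keys are public knowledge: a side that knows the analysis simply screens them out at key-generation time, while an opponent copying the cipher blindly may not. A minimal sketch (the values are the standard published DES weak keys, parity bits included):

```c
#include <stdint.h>

/* The four published DES weak keys (with parity bits). Under these
 * keys all 16 round subkeys are identical, so encryption and
 * decryption are the same operation. */
static const uint64_t des_weak_keys[4] = {
    0x0101010101010101ULL,
    0xFEFEFEFEFEFEFEFEULL,
    0xE0E0E0E0F1F1F1F1ULL,
    0x1F1F1F1F0E0E0E0EULL,
};

/* Returns 1 if the 64-bit key is one of the known weak keys. A keying
 * routine on "your side" would reject such keys and draw again. */
int is_des_weak_key(uint64_t key)
{
    for (int i = 0; i < 4; i++)
        if (key == des_weak_keys[i])
            return 1;
    return 0;
}
```

DES's weak keys are few enough to be harmless in practice; Clive's hypothetical design would instead make weak regions a substantial, secret fraction of the key space.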

RobertT • December 18, 2010 8:44 PM

@ NickP
"OpenBSD has the most thorough bug hunters in the open source world. I'd say it's almost definitely *not* backdoored"

Nick, I definitely respect your opinion, but saying that a large block of code is free from any backdoors, when you don't even have a hint as to the nature of the supposed backdoor, well, that's quite a claim!

On a single core microcontroller with no cache, no interrupts, no instruction pipeline, no optimizing compiler, no observability of time, power, voltage, or current, no observability of power supply buck PWM width, and with all RAM dedicated to specific tasks (absolutely no reallocation of RAM), I think I could maybe write an absolutely secure code block.
Unfortunately, for any system more complicated than the one mentioned above, I'd always believe that a backdoor exists. As you well know, the backdoor does not need to give you the full key; it only needs to leak sufficient information that brute forcing the rest is doable.

As Clive has pointed out, you can even encrypt the leaked information, so that only you can access the backdoor...


Vincent Deffontaines • December 19, 2010 11:23 AM

Very interesting news. In a way, it casts major suspicion on IT security globally. In another way, it brings nothing new: OpenBSD is already one of the most audited systems in the world; what will we learn from another audit? Not that found flaws can be proven to have been written intentionally, nor that all flaws have been found anyway.

I see one interesting, though partial way out of this crisis : Mr. Gregory Perry should publish all technical details he knows (and can) about these backdoors. If he cannot publish them, maybe he could let Theo (or the auditing team) know about such information.

HeightenedSuspicions • December 19, 2010 1:33 PM

Whether the allegations pan out or not it is interesting how much deference they got. We now live in a society where the default position is to fully expect our institutions to subvert our rights and invade our privacy.

That is the real damage here.

Nick P • December 19, 2010 1:44 PM

@ RobertT

Good points. There are certainly possibilities for covert channels all over the place, so I really can't say there's no ingenious backdoor based on those. Perhaps I should temper my claim down a bit and rephrase: the auditing process, by volunteers and government users, makes the probability of a backdoor low compared to most software. By backdoor, I mean intentional crypto-defeating flaws in the source code, protocol-level design, or normal usage. I figure these would be noticed.

More esoteric attacks are still possible, but these are often hardware specific. The AES and RSA timing attacks were patched so there's no more free lunch for remote attackers... I think... Grr... Anyone in doubt should use an EAL6-7 class system with custom hardware in a tempest-class safe in a room guarded by machine-gun toting zealots and excellent perimeter security. That might prevent exploitation.

Dirk Praet • December 19, 2010 7:00 PM

By default, I don't exclude anything. As pointed out in several other comments, there are a number of hypothetical paths to get it done. But if Mr. Perry wants to be taken seriously, let him come forward with whatever proof he has, or point out the particular code affected to Theo and the debugging team, telling them what they should be looking for. Until that happens, chances are this is some sort of vendetta, or an effort to discredit OpenBSD that nobody should take too seriously.

RobertT • December 19, 2010 9:47 PM

@NickP,
OK, that's something I can accept.

I know a lot of readers look at this "side channel" stuff and discount it as esoteric stuff that is strictly for spooks. However, I think it is important to see side channels in perspective.

The "side channel" attack has grown in relative importance as typical key lengths have increased. The relative importance changed because side channel leaks have remained somewhat constant, in terms of leaked information, while key lengths have increased exponentially.

When the world of cryptography was happy with 32-bit keys, there was no focus on anything but the most obvious side channels, because you really did not gain that much information from them. Today we talk openly about the necessity of 256-bit keys and really want to see 1024-bit keys for the most critical information.

From a cryptography and mathematics perspective I can definitely understand the argument for longer keys; however, from a hardware perspective it doesn't really make much sense. Physics unfortunately interferes with the process. Specifically, multiply operations require a certain quantum of energy per calculation, so increasing the key length increases the energy required per multiply. What's unfortunate for cryptographers is that it is possible to externally observe this energy usage. If you limit available power, then the difficulty of the multiply is reflected in the length of the calculation (resulting in timing attacks). If you use dedicated 1024-bit Wallace tree multiplier units, then the multiply difficulty is reflected in the peak current used.

Peak current usage can usually be directly estimated by observing the CPU power-supply ripple. The ripple "On" period to "Off" period ratio is a linear measure of the current used by the CPU and thereby reflects the difficulty of the calculations. So it contains EXACTLY the same information as a timing observation.
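The link between calculation difficulty and a timing or current trace can be made concrete with textbook square-and-multiply modular exponentiation: the number of multiplies tracks the number of 1 bits in the secret exponent, so total work leaks the exponent's Hamming weight. A toy sketch with small numbers (an illustration of the principle, not any real library's code):

```c
#include <stdint.h>

/* Left-to-right square-and-multiply, instrumented to count multiplies.
 * There is one squaring per bit position plus one extra multiply per
 * 1 bit of the exponent, so the work done (time, energy) depends on
 * the secret exponent's bits. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod, unsigned *mults)
{
    uint64_t r = 1;
    *mults = 0;
    for (int i = 63; i >= 0; i--) {
        r = (r * r) % mod;        /* square at every bit position */
        (*mults)++;
        if ((exp >> i) & 1) {     /* extra multiply only for 1 bits */
            r = (r * base) % mod;
            (*mults)++;
        }
    }
    return r;
}
```

With a 64-bit exponent, exponent 13 (three 1 bits) costs 67 multiplies while 15 (four 1 bits) costs 68; an observer of the trace learns that difference without ever seeing the key. (Real implementations overflow for large moduli and need big-integer arithmetic; the leak pattern is the same.)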

What does all this mean for the effectiveness of cryptography? Specifically, it means that from a practical perspective it is impossible for ANY algorithmic cryptography system to achieve anything like the randomness implied by the algorithm and the key length without the use of dedicated special purpose crypto hardware. Only these special purpose chips have their hardware designed in such a manner as to make observation of the algorithm's calculations difficult.

Now before you accuse me of pushing hardware I'll add the caveat that most of the available crypto hardware is worse than useless, because they use known PN sequences to obscure information. This operation is basically the same as encoding the desired information with a PN spreading key. So in effect the special purpose hardware transmits exactly the desired information only in a manner that improves the Signal to Noise Ratio for the remote observer. (Google: processor gain for DSSS receiver)

Makes you wonder who wrote these crypto hardware specs, and what information leakage they were specifically looking for.

David Days • December 20, 2010 2:19 AM

I am pretty interested in this story, and, while I agree that it is unlikely, there are a few things to consider:

--The mindset 10 years ago was a little different, remember. The whole RSA-PGP thing was fresh in everyone's mind, with the government perceiving itself as on the losing end. It wouldn't be too far from typical for someone at the FBI to try to get ahead of the encryption game by arranging a backdoor to help out CALEA (Communications Assistance for Law Enforcement Act) efforts.

--Also, in the wild world of 2000-2001, computer and encryption technology were evolving so fast that people thought 10 years was an eternity...and that any efforts you put in now would be obsolete in 5 years or less. Thus, a 10 year NDA for civilian contractors wouldn't be completely out of the question.

--Along those lines, I'm not sure if the FBI actually dealt (deals?) officially with truly classified information. The average FBI joe doesn't automatically get to see Top Secret stuff...they just make the world think they get to. It's all court records and law enforcement, not military and national security secrets.

Of course, things are different nowadays, but when you consider that the CIA tried to poison Castro's cigars, a backdoor in encryption isn't so far-fetched.

Clive Robinson • December 20, 2010 4:03 AM

@ NickP, RobertT,

It would appear Mr Perry's email and comments are getting around; they have also been posted to the Financial Cryptography blog,

https://financialcryptography.com/mt/archives/001301.html

One thing Mr Perry says is interesting,

"I left NETSEC in 2000 to start another venture, I had some fairly significant concerns with many aspects of these projects, and I was the lead architect for the site to-site VPN project developed for Executive Office for United States Attorneys, which was a statically keyed VPN system used at 235+ US Attorney locations and which later proved to have been backdoored by the FBI so that they could recover(potentially) grand jury information from various US Attorney sites across the United States and abroad"

If you filter the junk out you get left with,

1, I left NETSEC in 2000,
2, I was the lead architect,
3, which later proved to have been backdoored by the FBI.

Depending on how you read it, he appears to be saying that he was not involved with the backdooring, just that at a later point it became known to him, potentially via a third party he used to work with.

Which potentially means it was not actually "backdoored" but "exploited", because the third party informing him might have said "The FBI are reading the traffic by a side channel backdoor in the code". The expression "backdoor" is sufficiently ambiguous in usage to allow either meaning...

This issue is developing a strong set of legs and may get into the non-technical press, who will no doubt look for the lurid angle.

Oliver Grieb • December 20, 2010 4:06 AM

@Clive

"Well one way is to design a system with a large key space. But importantly some parts of the key space are strong, some of medium strength and some weak (even DES had some weak keys)."

This sounds quite interesting; I now know of the few weak DES keys. Do you have any pointers to research where this has been tried, maybe even successfully to some point, for recent/current systems?

anon • December 20, 2010 7:05 AM

I've read a few stories about this, and I still don't understand how an NDA for a technical counterintelligence operation can expire. *Maybe* a developer working on part of a GOTS system would work in the unclassified domain and therefore be under a commercial NDA, but anyone doing a black bag job like this would have to be cleared, and therefore under a perpetual NDA. Unless the FBI was very, very sloppy.

BF Skinner • December 20, 2010 7:25 AM

@Clive " may get in the non technical press who will no doubt look for the lurid angle. "

Well, from this the issue becomes not "FBI plants back doors" but "FBI performed (performs?) routine, ongoing surveillance on DoJ US Attorneys' offices".
The bad old days of J. Edgar returning.

I kinda believe there's a violation of the grand jury statute there, or of the Attorneys' rights themselves.

AMD • December 20, 2010 9:04 AM

Dude, I think you should watch what the FBI did at Waco on YouTube before saying anything about 'em.

Because these guys are dead stupid.

They lack the intelligence in their letter I.

Besides, they do not think like you. They think in stupid, snoopy ways; that is the way they were taught. They are legit spies, after all.

Clive Robinson • December 20, 2010 9:29 AM

@ Oliver Grieb,

"Do you have any pointers to some research where this has been tried - maybe even successfully to some point - for recent/ current systems?"

Yes and no.

The actual key space issue is one that has receded into the past (sort of), due to the sheer key space size in modern systems and the much more clued-up open research communities.

We saw some designs that had "related key" issues, but these were generally found quite quickly.

We do know that back in 1993 NIST and the NSA designed, under the Capstone project, the Skipjack algorithm, which was only to go into chips (Clipper etc.) and had the built-in LEAF to enable US government key recovery.

Skipjack was supposedly secret (and its design assumptions and background theory still are). However, this caused a public backlash, and a few selected academic cryptographers were invited to look at the design; they declared it to be acceptable.

Importantly, their report leaked some information: the algorithm was actually designed back in 1987 and made up from primitives that had been under intense investigation for over 40 years prior to that...

However, the project collapsed, and the Skipjack algorithm became public in 1998. Part of this was that even by then it was obvious 80 bits was going to be insufficient within a short period of time.

More importantly, many eyes started looking at it, and it is a very interesting design.

Firstly, it only just meets the 80-bit criterion; that is, there is no margin of safety under the attacks then publicly known. Importantly, new attacks developed against it come in almost bang on the 80-bit spec.

Secondly, the design is extremely brittle: even what appear to be quite insignificant changes, or apparent optimisations, significantly weaken it.

Now two things arise from this. Firstly, Skipjack, like DES before it, only just meets its key size criterion, which is odd in and of itself and suggests that much more is known about cipher system evaluation than the NSA has revealed (as with DES).

Secondly, and perhaps more importantly, the very brittle nature of the design shows the NSA is still walking the "dual function" line quite well.

Then we come to AES. The design selected was quite odd; few people would have rated its chances even at round one of the competition. This is because at first sight its bricklayer structure just looks too linear, and the key schedule algorithm likewise just does not look up to the job; it just feels wrong (and still does).

However, for some reason neither NIST nor the NSA considered or remarked on the practical demonstration implementations, which have turned out to be very, very weak due to side channel leaks via such things as timing issues.

Now, I just do not believe that these issues were unknown to the NSA (in fact I'm sure they were not, as similar EmSec / TEMPEST attack vectors were well known back in the 1970s).

So why did the NSA keep quiet?

Well, maybe even they have given up on trying to limit the "theory gap" and have started working in different areas.

As I've said before, the three main attack spaces we should be considering are:

1, Protocol failures
2, Side channels (timing etc.)
3, Known plaintext attacks.

Of the three, the last is the "bread and butter" of working cryptanalysts, not your theoretical cryptanalysts. That is, other than "messages in depth", the academic community has paid little attention to the issues involved. And importantly, we know that the likes of the NSA and GCHQ have been working very hard in this area since before WWII...

If I was trying to backdoor an opposition's systems with a long-term view from the outset, I would go for issues in the protocol design

Clive Robinson • December 20, 2010 9:59 AM

@ Oliver,

Sorry, the keyboard driver on the LG mobile smart phone I use took a walk in the park, and I had to hard reboot.

As I was saying, if my long-term mission were to backdoor everybody's systems so I could get access to their comms, then I would start with the protocol.

Putting subtle flaws in would not be difficult, and importantly, if you also provide the first implementation, you can leverage "edge effects" by effectively forcing everyone else to be compatible with your implementation.

The other advantage is that, due to legacy systems that cannot be changed and general market malaise, protocol errors will stay around long, long after they have been discovered.

Why?

Well, because of "fallback": when two systems try to communicate, the first thing they do is work their way down their respective feature lists until they find a compatible mode.

Now, almost without exception, the fallback process is completely transparent to users, and they don't see (or care) what protocol they end up using.

Likewise, for a quiet life, software suppliers will include all protocols, broken or not, simply for "backwards compatibility" with systems that cannot be upgraded for whatever reason (poorly implemented and/or embedded).

Even if two systems have protocol 2.x, they will almost certainly also have the insecure 1.0 protocol. So all a man-in-the-middle attacker has to do is force a fallback during the negotiation phase, so the systems communicate using protocol 1.0.
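The downgrade trick above can be sketched in a few lines (hypothetical negotiation logic, not any specific protocol):

```c
/* Each side advertises its highest supported protocol version; the
 * classic negotiation picks the minimum of the two advertisements. */
int negotiate(int client_max, int server_max)
{
    return client_max < server_max ? client_max : server_max;
}

/* A MITM who can rewrite the client's hello simply lowers the
 * advertised version; both honest endpoints then "agree" on the
 * insecure legacy protocol without either noticing. */
int negotiate_with_mitm(int client_max, int server_max)
{
    (void)client_max;              /* attacker discards the real value */
    int tampered_client_max = 1;   /* ...and advertises v1 instead */
    return negotiate(tampered_client_max, server_max);
}
```

Because neither endpoint authenticates the negotiation itself, the tampering is invisible at both ends; real protocols defend against this by including the handshake messages in an authenticated transcript.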

If done sparingly, the chances of the target noticing are about as close to zero as you can get. The target might not even become suspicious when secondary effects make it obvious the opposition are party to their secrets. As with the Germans in WWII, you are way, way more likely to look for a spy than for a crypto failure.

Now that I have mentioned this potential MITM attack vector, I expect to see it in use against financial systems, oh, within the next year or so, depending on how much the financial institutions toughen up their existing totally inadequate systems.

Nick P • December 20, 2010 9:10 PM

@ Bruce Schneier

Marsh Ray claims on his blog to have found a vulnerability corresponding to the dates and people in the allegations. In his analysis, he believes the vulnerability was due to lack of testing and deadline issues. He gives very specific information about the exploit, which affects IPSec's ESP mode in early OpenBSD releases. You might want to post this as an edit.

Extended Subset OpenBSD bug link
http://extendedsubset.com/?p=41

Oliver Grieb • December 21, 2010 3:37 AM

@Clive

Thanks for your answers. I have heard related key attacks mentioned before; I guess I will start reading about them a bit to see what people have been up to there. Personally, I never thought any of the classical cryptanalysis approaches (like linear/differential cryptanalysis) were practical, but this sounds interesting and could be worth getting a better understanding of.

You like to mention AES and side channel attacks; my understanding and experience is that any naive implementation you can get your hands on leaks like hell. This seems to be true for any such system, symmetric or asymmetric, but they (at least the company I work for) have begun working on hardened implementations. The timing issues are more or less the easiest to get rid of; power and radiation are harder. But lots of work has been done there, too.

Regarding fallback systems, yes, it seems to be quite a problem. The GSM network is one good (bad) example there.

Sasha van den Heetkamp • December 21, 2010 9:30 PM

I do not think it is dumb. Bugs, and more specifically vulnerabilities, are getting more exotic. It might have been easy to find a way in ten years ago, but an incredible number of vulnerabilities have since been patched. So subversion of code is, in the end, the last resort to make sure you have a way in.

Black Panther • December 22, 2010 7:55 PM

I don't see why this shouldn't be plausible.

As already mentioned further up, 10 years ago the world was a different place.

Another question that needs answering is assuming it is true, then why OpenBSD? Was it just a convenient target at the time or did the FBI know something that meant they had to infiltrate the development?

Clive RobinsonDecember 23, 2010 4:15 AM

@ Black Panther,

"Another question that needs answering is assuming it is true then why OpenBSD?"

It could simply be one or more of,

1, OpenBSD crypto code is unencumbered and thus gets migrated into many products sight unseen.

2, OpenBSD crypto code tends to be one of the first unencumbered implementations.

3, BSD network code is the granddaddy of nearly all network code in use.

For instance, if you look at Microsoft's network code you will find that it was pretty much all lifted from the BSD 4 and earlier source code at some point; likewise SunOS, most *nix, and just about any OS with pretensions of being more than just an embedded gadget (and yes, many of those use BSD-derived code).

Look on OpenBSD crypto/network code as being "the one ring that binds them all".

If you are planning on "backdooring the world" then the place you start is with,

"Protocols"

The thing about comms protocols is they hang around your neck like a millstone once they are in use (and it is only long after that the "bugs" get noticed).

Protocol analysis hasn't had the level of academic research it should have; thus anything more serious than a simple point-to-point ack/nack protocol is going to have holes in it...

The next problem is: even if a protocol is made theoretically secure, can practical, efficient implementations be secure (side channels)?

If it is possible to make a secure practical implementation via custom hardware etc, is the same true also of software only implementations?

Especially in high-"efficiency", high-throughput, multi-tasking, multi-user systems?

If not then you have step one of a backdoor....

We have seen that AES has this problem. That is, AES's theoretical security is a lot better than we currently need. However, practical software implementations, especially on multi-tasking, multi-user systems, can give up the key through timing side channels in as little as a couple of hundred encryptions or decryptions under the same key (so one MS Word doc, for instance). This is with an unprivileged user program running on the same box, or a few thousand encryptions or decryptions just by observing packets on the network with some software-only AES implementations.
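
Clive's timing-channel point can be made concrete with a toy simulation. Everything here is invented for illustration (the key, the cache model, the timing numbers); this is not an attack on any real AES implementation, but it shows the statistical shape of such attacks: if one cache line of a lookup table is kept evicted, lookups through it run measurably slower, and averaging timings over known plaintexts recovers part of the key.

```python
import random
import statistics

random.seed(42)

KEY = 0x5C              # hypothetical secret key byte (toy value)
EVICTED_LINE = 3        # cache line the attacker keeps evicting (16 lines x 16 entries)

def lookup_time(plaintext):
    """Simulated duration of one table lookup: slow when the index
    plaintext XOR KEY falls in the evicted cache line, plus noise."""
    index = plaintext ^ KEY
    penalty = 50 if (index >> 4) == EVICTED_LINE else 0
    return 1000 + penalty + random.gauss(0, 10)

# Attacker observes (known plaintext, measured time) pairs.
samples = [(p, lookup_time(p)) for p in
           (random.randrange(256) for _ in range(5000))]

def score(guess_hi):
    """Mean time of the lookups this key-nibble guess predicts are slow."""
    slow = [t for p, t in samples if ((p >> 4) ^ guess_hi) == EVICTED_LINE]
    return statistics.mean(slow)

# The guess whose predicted-slow bucket really is slow wins.
best_nibble = max(range(16), key=score)
print(hex(best_nibble))
```

Only the top nibble of the key byte is recovered here, which matches real cache attacks: each measurement reveals a cache line, not a table entry, so the remaining bits have to come from other rounds or other techniques.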

Now even if a protocol is secure under normal conditions, there is the question of "interoperability": it is an accepted fact that you can't document everything in advance of practical, in-depth, long-term testing.

So what generally happens is somebody produces the first implementation that matches the spec "sort of", and everyone else works to that implementation for "interoperability" reasons. This means the first implementation, if produced by people with the required expertise, can put in a load of "edge cases" whereby they effectively add a backdoor to the protocol "sight unseen", even though the protocol itself may be theoretically secure and tested as such.

That is, the implementation is secure under normal operation, but give it subtle timing or other issues via a man-in-the-middle device such as a router and you open up a side channel.

The NSA, for instance, is happy to use AES for "data at rest" with the likes of the Inline Media Encryptor (IME), BUT apparently not for data in transit in equipment they have designed and built and that remains classified (and for which they usually provide the keymat).

Now, an interesting little research problem for a PhD: we know "theoretically" that AES keys are broadly of the same strength... but what about in a practical implementation? That is, in any given software implementation in major use, are there some keys that are very weak and others that are very strong when seen from a side channel (timing or power spectrum)?

wamDecember 29, 2010 12:08 PM

"Another question that needs answering is assuming it is true then why OpenBSD?"

So we could do kernel level hardware support for crypto.

I designed the crypto hardware at netsec.

Clive RobinsonDecember 29, 2010 2:45 PM

@ wam,

"So we could do kernel level hardware support for crypto."

Yes but...

First we need a framework that is going to lend itself to being both flexible and side-channel resistant (timing, power spectrum, etc.).

This is not easy in simple environments at the best of times (the task is of necessity multilayered and thus has some subtle issues). And I'm fairly certain Nick P and others would have advice to chip in (like "argh, don't do it way X").

The simplest way to do secure comms is with fixed-rate signaling (the military have done it this way since WWII for a reason ;) However, it is not going to work well in multi-user (and therefore potentially multi-key, multi-mode) systems.

As a minimum, in a multi-user environment any system would have to allow for multiple hardware instances. And the hardware would need to support multiple keys and different modes on the same hardware instance.

But... the hardware must still keep the keys, plaintexts, and ciphertexts isolated from each other, i.e., not reliant on kernel-only security (and this is a hard problem).

Kernel-to-hardware communications would need to be constant to stop some side-channel attacks; however, this causes issues with various crypto modes when not supplying constant data streams (ah, the joys of constant-rate packet switching over collision-detect, variable-rate-backoff protocols such as Ethernet).

Usually the issue boils down to using tried and tested CommSec techniques where the design can be proved, and basically, at best, these are inefficient. That is: a single comms channel per hardware instance, full separation between inputs and outputs, clocking the inputs and the outputs at an invariant rate, and long backoff times on error, as a starting point. None of which is really compatible with multi-user systems...

wamDecember 29, 2010 5:05 PM

Ok - first - I'm a hardware hack - I design hardware. Exception handling for me has been done in nature for about 14 billion years...

As to why OpenBSD:

Actually, the issues of side channels, differential power analysis, etc. weren't considered.

Jason and I designed the OWT for that situation...

again - we used OBSD due to the fact that:

- we had developers friendly with the team and Theo,
- we had access to a great coder named Jason Wright,
- our SUNA box (single user gateway appliance) needed hardware acceleration
- and the HiFn-supplied HSP didn't get the throughput we wanted.

That simple.

RobertTDecember 29, 2010 9:31 PM

@wam,
"I'm a hardware hack - I design hardware. Exception handling for me has been done in nature for about 14 billion years..."

Dedicated crypto hardware does not always reduce the risk of a backdoor; to be honest, it probably increases the risk.

Think about how many great coders exist and divide that number by 100; now you have the number of great coders that really understand the crypto-strength implications of their coding style. Now divide this number by 1000 to get an idea of those who could design an optimized digital chip. Unfortunately, even a perfect digital implementation of the algorithm leaks information through side channels associated with timing, power, RFI, heat, optical effects, vibration... So your guy also needs to be a good analog chip designer, with detailed knowledge of side-channel attacks.

Skill list
1) Crypto algorithm strength expert
2) excellent coder / protocol expert
3) Digital logic expert (needs to be a commercial product)
4) Excellent Analog and RF chip design skills.

Now that's a good starting list, but don't be surprised when the person you find (with all these skills) already has a great job, probably working closely with some three-letter agency.

Now here's the real conundrum: if you find this magical person, what makes you believe, for one second, that they won't intentionally backdoor the system? They are probably a narcissist, so they'll feel compelled to add a complex backdoor just to prove how smart they are!

From my experience the best system backdoors communicate in unexpected ways.

Examples of potential on-chip hardware backdoors that I have seen; I'm still not sure if they were intentional...

- Crypto multiplication power envelope caused main-clock PLL jitter (thermal feedback); the error was fixed by isothermal placement of the PLL charge pump relative to the multiplication unit. This was externally observable as sidebands on the exact timing of data clocked out of the chip. In this case every system parameter was within spec, but the exact timing of the output signal still leaked information about the calculation.

- Noise-floor modulation of an RF codec. The system used a second-order sigma-delta modulator where the additive dither was based on the same RNG as the crypto RNG. The RNG source could be recovered if you knew the sigma-delta noise-shaping coefficients.


wamDecember 30, 2010 5:44 PM

I'm not really saying anything about the possibility of covert channels or back doors, or some other third thing (spongebob movie) that may have occurred during that time.

I was one of the early employees at netsec - number 16 I think. But I was not privy to most of what went on there - way beyond my pay scale.

But I will say that the only reason that I recall Jason being hired, Angelos splitting the stack, and the credit that NETSEC has in the source (multiple OBSD source files thank NSTI - check crypto.c) was that we were using OBSD, wanted hardware accelerated crypto, and needed to do it in the kernel, since the tests of userland drivers/code were not giving the performance we needed.

I recall about 11 Mb performance with software; I recall getting close to the PCI limit when Jason, Angelos, and other OBSD developers did their thing. I recall sitting in on the meeting where Jason suggested we get Angelos to split IPSEC to support hardware crypto in the kernel. I recall Jason writing a PPort routine to program the EEPROM (my brand-new B-K programmer smoked/DOA) that unlocked the HiFn chip.

I recall meeting with one of the FIPS labs, meeting the guy that was one of the team that authored the FIPS stuff. I recall meeting the CEO of Bluesteel who flew in to meet us to support his chips (now Broadcom).

I recall that the reason OBSD was used by netsec was that Theo is very strict on who and what goes into the source tree. Many other hardware devices I designed also used OBSD. I recall a later design being limited by the threading model that OBSD used; again, being not even a network or software novice, I was told that this was due to security.

And as I mentioned in my first post to the question "why OBSD..." that's what I recall.

PaulJanuary 3, 2011 9:58 PM

Considering all the things we know this government is capable of (e.g. lying to get into war in Iraq, and many other examples), the plausibility of them wanting to put back doors in things seems very high. Whether that translated into anything real, and still existing, is another question altogether. I am doubtful.

I'm not up on this side channel stuff, but the idea that you can get information out of power supply draws is pretty laughable. There are such things as capacitors, after all. All you see is averaged draws; all information has been smeared out. But hey, there is always the tin hat if you are worried about such things.

I believe government has made its peace with generally available encryption. They can always resort to rubber hose cryptography if they need to. Anyway it's a lot more fun (for them).

RobertTJanuary 3, 2011 10:47 PM

@Paul
Power supplies as seen externally at the chip level are certainly difficult to get information from because of capacitors; however, most processors are powered by buck converters, so average power is reflected in PWM duty cycle. Now, PWM controllers are almost all current-mode devices, so the change in duty cycle occurs a long time before the voltage load-related ripple. (This is the whole reason for current-mode buck converter design.)

One other thing to think about is that as clock frequencies increase, the power-supply EMI inductor and bond-wire inductances also create an LC filter on the power supply. Normally it is difficult to get the external LC corner frequency of this filter above 1 GHz, because typical ceramic caps have at least 1 nH of inductance, with PCB routing / bond wires adding another 2 nH. The external cap is therefore operating above the supply system's self-resonant frequency.
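
RobertT's numbers are easy to sanity-check with the standard LC resonance formula (the component values below are illustrative, taken from the inductance figures he quotes):

```python
import math

def self_resonant_freq_hz(l, c):
    # LC self-resonance: f = 1 / (2*pi*sqrt(L*C))
    return 1 / (2 * math.pi * math.sqrt(l * c))

# 100 nF ceramic decoupling cap with ~3 nH of ESL + bond-wire/routing inductance
f = self_resonant_freq_hz(3e-9, 100e-9)
print(f"{f / 1e6:.1f} MHz")      # far below 1 GHz

# capacitance that WOULD resonate at 1 GHz with the same 3 nH in series
c = 1 / ((2 * math.pi * 1e9) ** 2 * 3e-9)
print(f"{c * 1e12:.1f} pF")      # only a few pF: useless as bulk decoupling
```

The self-resonance lands around 9 MHz, so above that the "capacitor" looks inductive, which is why high-frequency supply ripple gets out past it.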

I think you also need to consider what point in the power supply chain you are trying to observe.

The other issue to keep in mind is that you will never get the exact key leaked by a side channel; what you get is statistical data about power envelopes, which you try to correlate with encryption keys.
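
That correlation step is simple enough to sketch. In the toy below (Python; the key, the "power traces", and the leakage model are all simulated, and a hash stands in for a real S-box), each trace is modeled as the Hamming weight of the S-box output plus noise, and the key guess whose predicted Hamming weights correlate best with the traces wins. This is the shape of a correlation power analysis attack, not an attack on any real device.

```python
import hashlib
import random
import statistics

random.seed(7)

KEY = 0xB7   # hypothetical secret key byte

def hw(x):
    return bin(x).count("1")

def sbox(x):
    # toy nonlinear substitution standing in for a real cipher S-box
    return hashlib.sha256(bytes([x])).digest()[0]

def trace(p):
    # one "power sample": Hamming weight of the S-box output plus noise
    return hw(sbox(p ^ KEY)) + random.gauss(0, 1.0)

plaintexts = [random.randrange(256) for _ in range(3000)]
traces = [trace(p) for p in plaintexts]

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def score(guess):
    hypothesis = [hw(sbox(p ^ guess)) for p in plaintexts]
    return abs(correlation(hypothesis, traces))

best = max(range(256), key=score)
print(hex(best))
```

With a few thousand noisy traces the correct guess correlates strongly while wrong guesses hover near zero, which is exactly RobertT's point: the key is never read out directly, it falls out of statistics.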


Clive RobinsonJanuary 4, 2011 7:32 AM

@ Paul,

"I'm not up on this side channel stuff, but the idea that you can get information out of power supply draws is pretty laughable."

They are a very real and serious threat for many reasons.

Some side channels exist because we don't use energy efficiently. The wasted energy is not destroyed, nor can it be; it can only be temporarily constrained or dissipated. Either way, it ultimately ends up in the environment as "heat" (which is the ultimate form of pollution).

The problem is that on the way to becoming heat, the energy communicates information via the implicit communications channel of its transportation mechanism. How much (time), how quickly (bandwidth), and how far (efficiency) depend on the properties of the transportation mechanism and the form of the energy using it.

It was known back at the time of the First World War that electromechanical and electrical communications systems leaked information via electrical energy into the environment that could be picked up by the enemy "eavesdropping" (see "phantom phone circuits" for how it was done). In the case of acoustic energy this had been known for many centuries before that, which is where the term "eavesdropping" comes from.

The likes of the Westinghouse and Bell companies were well aware of "crosstalk" and the problems of "close coupling" and "inductive loading" needed to get long-distance cable communications to work back in the 1800s, when the first long-distance wired communications networks (telegraphs) were set up.

Thus the ideas behind Emission Security (EmSec), TEMPEST, and EMC have been known practically for well over one hundred years.

The cause of the EmSec side-channel problem is, as noted above, that energy does not like to be constrained; it wants to leak away in any form it can, as quickly as it can (hence the term "free energy").

To do this, energy needs a transportation mechanism, and some are more efficient (radiation) than others (conduction) in some environments (insulators) but not in others (conductors). Thus you have a complex task when constraining energy, in that any one measure that has advantages in inhibiting one transport mechanism generally aids another.

However even transportation mechanisms tend to suffer from inefficiency...

As you will see from the manufacturers' specification sheets, all electrical cables act not just as low-pass filters but have signal attenuation or loss with distance. This loss has two primary causes: the first is the DC or IR (current, resistance) characteristic causing "heating"; the second is AC frequency (w) related, where the impedance is more complex (z) but involves, amongst other things, EM radiation.

That is, due to inductance and length, all wires are effectively "antennas". To travel sizable distances, EM radiation needs to be in what is called a "plane wave", where the E and H fields are orthogonal to each other; this is known as the "far field". However, in a wire the E and H fields are generally not orthogonal to each other, and thus you get a transition region known as the "near field".

As a rule of thumb, a straight wire suspended in free space will not radiate provided the near field is undisturbed (look up "G-wire transmission line"). One way of achieving this is to use another conductor closely coupled to the wire to act as a constraint on the E and H fields and maintain the relationship between them (look up balanced, twisted-pair, and coaxial transmission lines).

As a very rough rule of thumb, the near field is a region of two wavelengths from the "wire". However, the properties of the near field are affected by any conductors or dielectrics placed in it. The usually seen example is a Yagi TV antenna, but in the microwave region you will see "dielectric" antennas, where plastics like expanded polystyrene are used to effect the transition between near and far field in the same way a lens does with EM radiation in the visible spectrum we see.

Less often seen are "slot radiators", where holes in a conducting sheet act as very effective transitions for EM radiation.

The simplest way to encourage a wire to radiate is to put a bend in it, as this increases its inductance at that point by distorting the E and H field relationship, effectively changing the impedance of the wire at that point and encouraging radiation into free space. As you continue to bend the wire it will eventually form a loop; the larger the loop area compared to the wavelength, the better, in general, the loop will radiate. However, the relationship between bend radius and radiation efficiency is not a simple linear one.

Thus the tracks on a PCB carrying information will emit an EM field, which will couple energy into any adjacent conductors and travel more freely through some dielectrics, which will in turn re-radiate the EM field at some point. Even sheet metal will carry the energy of an EM field and, in the process of re-radiation, effectively cause the EM field to be reflected; or, if there is a discontinuity such as a hole for venting heat, re-radiate it from that point. The only way to reduce the radiation is to ensure no discontinuity in the conductor, its impedance, or the E and H fields around it, and source and load impedances that match the impedance of the line, etc. Which is a difficult job, especially where the signals have fast rising edges and thus high spectral content.

Putting a capacitor on a PCB track causes most of these changes: although it might average out the low-frequency components of the transition's frequency spectrum in the part of it that is a capacitor, it causes the higher-frequency components to be radiated from its leads and the circuit it's connected to...

Thus a capacitor, like any other component, is a bit of a "Catch-22" device in actual use, and energy that has not been averaged out as heat in resistance will get back out by whatever transmission mechanism is available to it.

As mentioned earlier, the transmission mechanism will take information with it, effectively as modulation on the EM spectral components. The only constraints are effectively the bandwidth of the transmission mechanism and the level to which the energy has decayed due to the spreading of the plane wavefront with distance.

You will see the expression "-174 dBm in a one-hertz bandwidth" given for the "thermal noise floor"; you will also see the near-field free-space loss given as 17 dB. From these and the energy in a given circuit you can (in theory) work out the minimum safe distance at any given information bandwidth and circuit energy level.
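
Those numbers are enough for a back-of-envelope range estimate. The sketch below (Python) assumes free-space propagation and an ideal 0 dBi receive antenna, and the leak power, frequency, and bandwidth are invented example values; real TEMPEST engineering is far more involved.

```python
import math

def max_eavesdrop_distance_m(leak_dbm, freq_hz, bandwidth_hz, required_snr_db=0.0):
    """Distance at which a leaked signal drops to the thermal noise floor
    (-174 dBm/Hz), assuming free-space propagation and an ideal receiver."""
    noise_floor_dbm = -174 + 10 * math.log10(bandwidth_hz)
    allowed_path_loss_db = leak_dbm - (noise_floor_dbm + required_snr_db)
    wavelength = 3e8 / freq_hz
    # invert free-space path loss: FSPL = 20*log10(4*pi*d / wavelength)
    return (wavelength / (4 * math.pi)) * 10 ** (allowed_path_loss_db / 20)

# e.g. a -60 dBm leak at 100 MHz, observed in a 1 kHz information bandwidth
d = max_eavesdrop_distance_m(-60, 100e6, 1e3)
print(f"{d:.0f} m")
```

Even a tiny leak stays above the noise floor for kilometers under these idealized assumptions, which is why in practice the calculation is abandoned in favor of shielding, filtering, and prescribed earthing.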

In practice the theory is ineffective due to the calculations involved and the fact that even the near field can be impractically large, so other methods are applied.

The controlling of basic radiation mechanisms can be looked up in any decent EMC reference.

However, the world is a connected one where communications is the basic facilitator of our modern society. Thus the question arises of what happens when you intentionally radiate a signal, such as down a telephone or network cable.

Not only do you have to consider the possibility of signals unintentionally "piggybacking" onto the cable, but also the characteristics of the signal you are transmitting.

This gives rise to another set of side channels based on timing changes. Back when the electromechanical relay was first designed, it was known to have timing issues via its "hold and release" times. That is, if you know the switching time, you can tell the difference between the action of a "normally open" contact pair and a "normally closed" contact pair.

You can arrange relays in circuit configurations that will give you all the basic logic functions of NOT, OR, AND, and XOR; the last of these is used as the basic logic element in the "theoretically secure" one-time pad (or tape) cipher system.
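
The XOR element Clive describes is all the math a one-time pad needs; a minimal Python sketch (and note that, as the relay-timing story that follows shows, this theoretical security says nothing about what an implementation leaks):

```python
import secrets

def xor_otp(data: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR each byte with a pad byte used exactly once."""
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"ATTACK AT DAWN"
pad = secrets.token_bytes(len(message))   # must be truly random, never reused
ciphertext = xor_otp(message, pad)
# XOR is its own inverse, so the same function decrypts
assert xor_otp(ciphertext, pad) == message
```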

One such system was developed by people working at the UK's Foreign and Commonwealth Office (FCO) for use by the Diplomatic Wireless Service (DWS); called the Rockex system, it has for various reasons been lost to the sands of time.

It was being used towards the end of WWII to "super encrypt" traffic going out on an unsecured telex line. Sadly, it was discovered by the predecessor of the NSA that by putting an oscilloscope across the telex line pair, the relay time differences allowed the super encryption with the OTP to be stripped off. So although theoretically secure, it was not so in practice...

A later development of the Rockex was found to also cross-couple signals from the power supply line to the telex line and likewise allow the super encryption to be removed.

The Rockex was then used in "off-line" mode to do super encryption, thus ensuring an "air gap" between it and the telex line.

Most crypto gear leaks information via the power supply and earth return loops (those things that cause mains hum and buzz in your HiFi stack). Thus the earthing of such equipment is "prescribed in ComCen use" via various secret documents that originated from the discovery of such side-channel issues.

As you will have noticed the only earthing instructions that come with your PC are those tucked away under "safety"...

Thus I think you can take it as read that if your PC is powered via the "mains" and you allow an earth loop to form, say via the monitor power supply lead, network, or modem connection, then it is going to leak information...

It is one of the reasons TEMPEST knowledge and equipment are classified in the US (but not in some other countries), and it was only because of interference issues, which we now call EMI, that EMC became a requirement.

However, there are some really glaring holes in most EMC specs that allow the old game to continue to be played...

Which is just one of the reasons why I say "beware protocols" (which standards are), as they are a really good way to backdoor a system for a very, very long time.

zorroJanuary 4, 2011 11:20 AM

@Clive Robinson:
One such system system was developed by people working at the UK's Forign and Commenwealth Office (F&CO) for use by the Diplomatic Wirless Service (DWS) and for various reasons lost to the sands of time called the Rockex System.

---
Apropos "top secret" communication, does anyone know if scalar waves (such as those demonstrated by this device by Konstantin Meyl) are used for restricted communication?

NickMay 2, 2011 12:07 AM

Sorry to post so late, but I was just reading a recent new post of an e-mail regarding this old topic. (Thereby leading me to this site.)


1.) Has anyone managed to explain yet what effect the referred-to problem with the stack, if it exists, would have for lay OpenBSD users?

2.) Is the possible issue spoken of with the BSD stack one that would matter more for home users, or for various nodes and non-personal-computer hardware devices?


3.) With respect to the power-envelope/side-channel attack concerns, is that actually an example of a real-world use for linear algebra?


I also really wanted to reply to a post by that Packaged Blue Guy... that response occurs inline.

>>Who needs a backdoor when the FBI can do >>an OpenBSD rig, or rather any OS on stock >>hardware?

>>This late after 9/11, you just know that some >>serious hardware issues are forced in.

Since most of the hardware is made overseas, wouldn't that mean that foreign governments' "F.B.I.-like organs" could have already done the same for decades now?

Nick
