Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop on communications, steal data directly from the services and users and to impersonate services and users.”

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory—SSL private keys, user keys, anything—is vulnerable. And you have to assume that it is all compromised. All of it.
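
To make the mechanism concrete, here is a simplified sketch in C of the kind of pattern behind the bug (a paraphrase for illustration, not the literal OpenSSL source): the server trusts the length field in the attacker's heartbeat message and echoes back that many bytes, even though the attacker actually sent far fewer.

    /* Simplified sketch of the Heartbleed pattern -- a paraphrase, not the
     * literal OpenSSL source.  The attacker claims a payload of up to 65535
     * bytes but sends only a handful; the missing bounds check lets memcpy
     * read far beyond the real request. */
    #include <stdlib.h>
    #include <string.h>

    unsigned char *handle_heartbeat(const unsigned char *msg, size_t msg_len,
                                    size_t *resp_len)
    {
        /* msg layout: 1-byte type, 2-byte claimed payload length, payload... */
        size_t claimed = ((size_t)msg[1] << 8) | msg[2];

        (void)msg_len;  /* the vulnerable code never consults the real length */

        /* BUG: nothing checks that claimed <= msg_len - 3, so the copy below
         * reads whatever happens to sit in memory after the request. */
        unsigned char *resp = malloc(3 + claimed);
        if (resp == NULL)
            return NULL;

        resp[0] = 2;                         /* heartbeat_response type      */
        resp[1] = msg[1];                    /* echo the claimed length...   */
        resp[2] = msg[2];
        memcpy(resp + 3, msg + 3, claimed);  /* ...and over-read starts here */

        *resp_len = 3 + claimed;
        return resp;
    }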

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected.

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

This article is worth reading. Hacker News thread is filled with commentary. XKCD cartoon.

EDITED TO ADD (4/9): Has anyone looked at all the low-margin non-upgradable embedded systems that use OpenSSL? An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.

EDITED TO ADD (4/10): I’m hearing that the CAs are completely clogged, trying to reissue so many new certificates. And I’m not sure we have anything close to the infrastructure necessary to revoke half a million certificates.

Possible evidence that Heartbleed was exploited last year.

EDITED TO ADD (4/10): I wonder if there is going to be some backlash from the mainstream press and the public. If nothing really bad happens—if this turns out to be something like the Y2K bug—then we are going to face criticisms of crying wolf.

EDITED TO ADD (4/11): Brian Krebs and Ed Felten on how to protect yourself from Heartbleed.

Posted on April 9, 2014 at 5:03 AM • 318 Comments

Comments

Hugo April 9, 2014 5:11 AM

… you have to update your SSL certificate …

Updating the certificate by requesting a new one based on the same public key is not enough, because your private key might have been stolen. Generate a new public/private key pair and then request a new certificate.

M Krohn April 9, 2014 5:25 AM

If the US agencies knew of and exploited this bug, then why all the legal wrangling with lavabit for their SSL keys?

Mike the goat April 9, 2014 5:32 AM

I accidentally posted my commentary in the squid blog section – summary: OpenSSL is a bloated piece of junk and there are myriad alternatives that are coded with fewer lines. With OpenSSL it isn’t so much the number of lines but the terse nature of the code and comments. Very difficult to audit.

I will repeat my delight at a small virtual hosting outfit responding to my phone call this morning complaining that a client’s websites were vulnerable: “we know, and we will have it patched in an hour.” Of course this is days too late, but as far as corporate response goes it isn’t too bad.

Dimitris Andrakakis April 9, 2014 5:45 AM

“The real question is whether or not someone deliberately inserted this bug into OpenSSL and has had two years of unfettered access to everything.”

Has anybody taken a look at the source and its check-in history?

Bruce Schneier April 9, 2014 5:59 AM

“If the US agencies knew of and exploited this bug, then why all the legal wrangling with lavabit for their SSL keys?”

My guess is that they learned about — and started exploiting — this bug yesterday, along with everyone else who is fast enough to do that.

offas April 9, 2014 6:00 AM

I wonder, how about all these OpenSSL FIPS-compliant implementations? You can’t simply patch them, as even the smallest change invalidates certification.

Bruce Schneier April 9, 2014 6:02 AM

“Update the certificate like in requesting a new one based on the same public key is not enough, because your private key might have been stolen. Renew your public/private key pair and then request a new certificate.”

Yes. I will make it clearer in my post.

Marco Tedaldi April 9, 2014 6:06 AM

@Boris ‘pi’ Piwinger

Fefe has a nice piece on that on his blog.
http://blog.fefe.de/?mon=201404

Basically:
It was added by a T-Systems employee (the biggest telecommunications company in Germany and mostly owned by the state… ok, he did not work there at the time he wrote that code, but still a nice theory).
The same person also wrote the proposal for this heartbeat extension (where he admits that it does not need a payload but still implemented it for “flexibility”).

And the NSA is not the only secret service that has ever tried to plant backdoors, so maybe they just did not know about it, or the people that wanted the Lavabit data did not find that exploit in the heap of other stuff they had… or maybe Lavabit was running a version without the vulnerability.

Winter April 9, 2014 6:10 AM

This bug targets the roots of eCommerce and all online financial transactions.

If it were found out that this was inserted deliberately by some official outfit, it would discredit intelligence services for years to come.

They could not be that stupid.
[/sarcasm]

Anon April 9, 2014 6:14 AM

Would it make sense to have things like passwords in a different process than usernames? Just to add one more additional layer of defense (which – as far as I understand – couldn’t have been breached by this bug)?

test April 9, 2014 6:17 AM

“why all the legal wrangling with lavabit for their SSL keys?”

Not everything uses a vulnerable version of OpenSSL; maybe Lavabit did not. Or maybe they wanted access even if Lavabit updated OpenSSL one day or modified whatever they use. Last possibility: the group going after Lavabit did not necessarily know about this even if other groups did. Those first weeks after Snowden might have been a bit panicky and a bit disorganized at the NSA.

sshdoor April 9, 2014 6:20 AM

@Anon, the password should not have been sent to sshd in the first place. Just a public key derived from the entry in /etc/shadow.

Benni April 9, 2014 6:21 AM

@Dimitris Andrakakis, see the comments here and the following ones:

https://www.schneier.com/blog/archives/2014/04/unbreakable_enc.html#c5351914

Perhaps it is indeed irrelevant whether the bug was deliberately placed or not, since it may be used nevertheless. Also, people simply write stupid code sometimes. And a German programmer as an NSA agent sounds a bit far reaching.

I’m actually more disturbed by the Microsoft closed-source crypto libraries. They might contain similar bugs, but since they are closed source, there are far fewer chances that bugs in Microsoft Internet Information Server, Microsoft Crypto API, and Microsoft Schannel get fixed. With Microsoft, the NSA even has an enormous advantage: Microsoft itself claims that it had to give important design information about the crypto libraries to the NSA for review. Otherwise, Microsoft could not export Windows.
So the NSA might know the Windows source code, but we do not, which makes it very easy for the NSA to write exploits for Microsoft crypto functions.

I see that one can (illegally) get parts of the Windows 2000 source code on The Pirate Bay http://www.kuro5hin.org/story/2004/2/15/71552/7795. Has somebody looked at that? Perhaps by looking at that, one can tell what the NSA key really is for. The Microsoft code seems to contain comments like:

private\inet\mshtml\src\core\cdbase\baseprop.cxx:
// HACK! HACK! HACK! (MohanB) In order to fix #64710 at this very late

private\inet\mshtml\src\core\cdutil\genutil.cxx:
// HACK HACK HACK. REMOVE THIS ONCE MARLETT IS AROUND

private\inet\mshtml\src\other\moniker\resprot.cxx:
//
goto EndHack;
//

private\inet\mshtml\src\site\layout\flowlyt.cxx:
// God, I hate this hack …

private\inet\wininet\urlcache\filemgr.cxx:
// ACHTUNG!!! this is a special hack for IBM antivirus software

private\ntos\w32\ntuser\client\dlgmgr.c:
// HACK OF DEATH:

So, which bugs can be expected in these closed-source crypto libraries from Microsoft, if OpenSSL has bugs like Heartbleed?

Kenneth Michaels April 9, 2014 6:29 AM

“If the US agencies knew of and exploited this bug, then why all the legal wrangling with lavabit for their SSL keys?”

Another guess – parallel construction. If collected evidence has to be presented in court, the agency can show a legal means by which it was collected, without revealing the vulnerability or the illegal act of exploiting it against a US company.

Yet another guess – US agencies only exploit this vulnerability outside of the US (lavabit was in the US), while non-US spy agencies exploit this vulnerability within the US.

Benni April 9, 2014 6:58 AM

Here is an interesting blog entry:
http://blog.fefe.de/?ts=adba343f

It turns out that the programmer who wrote the heartbeat code was, at that time, still at university (and he did not work for T-Systems, which also maintains the computer systems of the German secret service BND).

In the above blog entry, it is mentioned that the code was audited and accepted by this man here http://www.drh-consultancy.demon.co.uk/ who lives 100 miles away from the GCHQ headquarters in Cheltenham.

@Kenneth
“If the US agencies knew of and exploited this bug, then why all the legal wrangling with lavabit for their SSL keys?”

Perhaps they just used an older version of OpenSSL. The bug is not present in OpenSSL 0.9.8 and earlier, for example. The more secure systems are, the longer their operators usually wait before installing new versions that add new features.
It may also be that they thought pressing Lavabit would be faster and give them much more insight than this kind of hacking.

Alex April 9, 2014 6:59 AM

Sorry for the quibble but I genuinely want to know what you meant. Did you mean to say ‘probability close to one’? (Odds close to one give a 50/50 chance.)

Matt T April 9, 2014 6:59 AM

I wonder just how big the certificate blacklist is going to be when all is said and done…and how many of those red screens about bad SSL certificates Chrome will be showing me in the coming days…

Alex April 9, 2014 7:08 AM

Maybe knowledge of this exploit was confined to a few people inside of the government, and the people who went after Lavabit weren’t in the know. Or maybe they went after Lavabit just to preserve the fiction that they didn’t have the data, in order to keep the exploit’s existence hidden.

If I had created this vulnerability I’d tell as few people as possible about it and try to act as if I didn’t have it, in the hope of preserving the exploit’s existence for as long as possible.

More generally, I think it’s hard to infer motives from the observed actions of intelligence services, and even harder to get an idea of what they know from those inferred motives and observed actions. We just don’t know one way or another.

Does anyone know about the circumstances under which the hole was found? One guy worked for Google. Is there any chance that Google is funding systematic crypto code audits in response to the Snowden leaks?

Trey April 9, 2014 7:09 AM

Looks like their site is down or is getting DDOSed. Someone must not want others to get that patch. Luckily I already have it.

Konrad April 9, 2014 7:11 AM

The question is not who introduced that bug. That is bad, certainly. The question, to me, rather is: who was responsible for auditing the incoming change (although there is a reference to somebody in the git commit), and how can a buffer vulnerability like that go (mostly) unnoticed for over two years and not be caught by any audit, quality assurance, or any other process meant to make sure this code is sound?

Anon9119849 April 9, 2014 7:25 AM

“In the above blog entry, it is mentioned that the code was audited and accepted by this man here http://www.drh-consultancy.demon.co.uk/ who lives 100 miles away from the GCHQ headquarters in Cheltenham.”

Britain is only about 600 miles long – him being 100 miles away from anywhere is completely irrelevant.

Evan Anderson April 9, 2014 7:26 AM

I had a look at the RFC and the original thesis where the (D)TLS heartbeat extension was proposed (http://duepublico.uni-duisburg-essen.de/servlets/DerivateServlet/Derivate-31696/dissertation.pdf page 66) and it makes me a bit frustrated.

Re: the heartbeat payload, the thesis says: “However, to make the extension as versatile as possible, an arbitrary payload and a random padding is preferred, illustrated in Figure 7.1. The payload of the HeartbeatRequest can be chosen by the implementation, for example simple sequence numbers or something more elaborate. The HeartbeatResponse must contain the same payload as the request it answers, which allows the requesting peer to verify it. This is necessary to distinguish expected responses from delayed ones of previous requests, which can occur because of the unreliable transport.”

In the context of DTLS, running over an unreliable transport (UDP), I can totally see the rationale behind the heartbeat needing a payload. It seems like something more like a fixed-length sequence number and random padding would have been reasonable. I guess we want the protocol to be “versatile”, though, so we need an arbitrary payload. >sigh< Seems like unnecessary complexity to me, though.

In the context of TLS there shouldn’t even be a heartbeat! The underlying reliable transport can (and should) provide that functionality. There definitely shouldn’t be a payload, arbitrary-length or otherwise, because it serves no purpose. I would think, personally, that a security-oriented protocol like TLS should be kept as minimal as possible but, clearly, that’s not how it actually works.

This protocol extension should never have been applied to TLS. There would still be bugs in OpenSSL had this particular extension never been applied to TLS, but at least this ‘heartbleed’ fiasco wouldn’t have had the impact that it has.
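
A minimal sketch, in C, of the fixed-format payload suggested above (a hypothetical alternative design, not RFC 6520 and not what OpenSSL implements): a fixed-length sequence number plus a fixed amount of random padding, so nothing about the response depends on a length field the peer controls.

    /* Hypothetical fixed-format heartbeat, as suggested above -- a sketch of
     * the simpler design, not RFC 6520 and not what OpenSSL implements. */
    #include <stdint.h>
    #include <string.h>

    #define HB_PADDING 16             /* fixed amount of random padding */

    struct heartbeat {
        uint8_t  type;                /* 1 = request, 2 = response          */
        uint64_t seq;                 /* fixed-length sequence number       */
        uint8_t  padding[HB_PADDING]; /* random bytes, ignored by the peer  */
    };

    /* Answer a request by echoing only the fixed-length sequence number.
     * No field here carries a peer-controlled length, so there is nothing
     * to get wrong in the way the real heartbeat did. */
    void heartbeat_response(const struct heartbeat *req, struct heartbeat *resp,
                            const uint8_t fresh_padding[HB_PADDING])
    {
        resp->type = 2;
        resp->seq  = req->seq;        /* lets the requester match the reply */
        memcpy(resp->padding, fresh_padding, HB_PADDING);
    }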

Mike April 9, 2014 7:30 AM

Question is, should we change our passwords? I use Sticky Password manager (www.stickypassword.com) and they told me to change passwords for sensitive accounts just to be sure. Do you think that is necessary as well?

Rob April 9, 2014 7:52 AM

@Trey:

Or you could apply Occam’s Razor. Everybody’s trying to download the patch at the same time. That would have all the hallmarks of a DoS.

Johan April 9, 2014 7:54 AM

One thing that is not made clear by your post: Even if organizations like the NSA only found out about this yesterday they can still decrypt any data that they captured previously as long as they steal your private key before the vulnerability is patched (unless you use perfect forward secrecy).

My guess is that they are currently sitting on huge amounts of data that they can now decrypt but couldn’t before.

blaze April 9, 2014 7:59 AM

Maybe they knew exactly what lavabit had but didn’t want to tip their hand that they could get at it already, so they made a big public deal about this.

They probably do that a lot.

Andrei April 9, 2014 8:09 AM

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. ”
– how is it that “anyone” can grab 64KB of data? Even with no physical access to the server?

Not really anonymous April 9, 2014 8:11 AM

Note that in this context replacing a certificate should include revoking the old one in addition to using the new one.

Rob April 9, 2014 8:17 AM

@Andrei:

You make a request to the server, and the server helpfully sends you 64K of memory contents. Completely and totally unlogged, with the contents of the memory unknown.

The details are in the link posted at the top of the article.

Brian April 9, 2014 8:43 AM

@offas re: FIPS compliance

FIPS compliance validates the crypto routines, which is not where this bug occurs. Unless someone’s product packaged OpenSSL directly into their system, rather than as a component, it wouldn’t be impacted.

A bigger question is: what about embedded devices that use OpenSSL, where updating is difficult or impossible?

Christian April 9, 2014 8:52 AM

The sad thing is that this bug is not just the fault of C, but also the fault of TCP.

Every protocol on top of TCP has to implement a heartbeat function if it wants notification of a broken TCP connection.

TCP has failed to provide this: TCP keepalive, with its default checking interval of two hours instead of minutes, is worthless for the task.
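
For what it’s worth, on Linux an application does not have to live with the two-hour default; the keepalive timers can be tuned per socket. A sketch (the option names below are Linux-specific):

    /* Sketch: enable and tune TCP keepalive on a connected socket (Linux).
     * With these settings a dead peer is noticed after roughly
     * 60 s + 5 * 10 s instead of the system-wide two-hour default. */
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int enable_keepalive(int fd)
    {
        int on = 1, idle = 60, interval = 10, count = 5;

        if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) < 0)
            return -1;
        /* Seconds of idleness before the first probe. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
            return -1;
        /* Seconds between probes. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &interval, sizeof(interval)) < 0)
            return -1;
        /* Probes without a reply before the connection is declared dead. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &count, sizeof(count)) < 0)
            return -1;
        return 0;
    }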

Ted April 9, 2014 8:59 AM

How do we know you are even the real Bruce?? Maybe you are an imposter who has distracted us with a 95% legitimate post, and then injected one or two pieces of critical disinformation. What is even real anymore??

Mork April 9, 2014 9:01 AM

It’s worth noting that the commit was merged on New Year’s Eve, at 11 PM. This is a strange time to be merging code, unless you are attempting to avoid scrutiny.

Mike the goat April 9, 2014 9:15 AM

Mork: I wouldn’t put anything past the NSA/gov’t. It seems like their kind of style – it reminds me of the attempted Linux kernel backdoor about a decade ago that allowed privilege escalation and was so craftily done.

J April 9, 2014 9:17 AM

11?

That’s going a bit far.

As bad as it can get (which would still only be a 10), read-only access inside a single address space is not an 11.

It’s easy to imagine something worse. It would be worse if this attack allowed an attacker to upload arbitrary code to be run at root privileges.

THAT would be a 10. This is an 8 or 9. Very bad, to be sure. Catastrophic, sure. But not an 11.

1KB83B April 9, 2014 9:18 AM

@ M Krohn

I suggest this is one of the rare real bugs, as opposed to the intentionally implemented backdoors. In this case even the mighty authorities didn’t know that they could have saved themselves some NSLs.

It’s so frustrating to see how buggy the architecture is that all modern technology is based on. You can’t open a web page, because a manipulated picture can exploit a browser bug to gain full control, or active content does the same, or whatever. And for the people who want to be informed about what is going on in the world of security problems: you can’t trust your tools or the providers of websites and services, and I think even an automatic RSS feed could be abused to execute a piece of bad code.

Maybe there are algorithms to realize secure (= unbreakable for thousands of years) encryption, but what does it matter if the base of the implementation is buggy or corrupt? You can’t even trust the firewall in front of your web services, databases, application servers, etc.

My only hope for Heartbleed is that security awareness gets pushed in the right direction and cybersecurity gets the management attention it needs.

BrandonMarc April 9, 2014 9:20 AM

“If the US agencies knew of and exploited this bug, then why all the legal wrangling with lavabit for their SSL keys?”

It makes a nice diversion. It also demonstrates (apparent) technical limitations of what NSA can and cannot do, which – if the limitations aren’t there after all – helps keep the world blissfully unaware.

Chris April 9, 2014 9:25 AM

Does anyone else find it weird that OpenSSL has a custom malloc function that doesn’t bother to clear out the memory before returning the pointer?
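
For comparison, a sketch of what a clearing allocator wrapper could look like (this is illustrative only, not OpenSSL’s actual freelist allocator, and by itself it would not have stopped the over-read):

    /* Sketch of an allocator wrapper that never hands out, or leaves behind,
     * stale data -- illustrative only, not OpenSSL's actual allocator. */
    #include <stdlib.h>

    void *zalloc(size_t n)
    {
        return calloc(1, n);             /* memory arrives already zeroed */
    }

    void zfree(void *p, size_t n)
    {
        if (p != NULL) {
            /* wipe before returning the block to the heap; the volatile
             * pointer discourages the compiler from eliding the loop */
            volatile unsigned char *vp = p;
            while (n--)
                *vp++ = 0;
            free(p);
        }
    }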

anonymous from switzerland April 9, 2014 9:37 AM

To me this brings down the paradigm that open source software is automatically more secure. Because it is open source and free to use, it is likely to create a mono culture and thus a single “disease” can have catastrophic effects. Until a way is found and applied for writing secure software – audits alone simply are not suitable at the level of globally used software – the only escape seems to be to do some proprietary/custom/secret things on computers connected to the internet, maybe still based partially on open source software, maybe not.

Latonya April 9, 2014 9:41 AM

So, Bruce, do you still think you can trust in open source? This bug was there for 2 years!

AnonDev April 9, 2014 9:49 AM

@ Mork • April 9, 2014 9:01 AM

“It’s worth noting that the commit was merged on new years eve, at 11pm. This is a strange time to be merging code, unless you are attempting to avoid scrutiny.”

Well heck, I will admit to working at 3am & 4am on Christmas day, multiple years. I hope those file date/timestamps won’t make what I did automatically suspicious to the customers some years in the future. Some of us have missed a lot of parties and holidays, you know 🙁

anonymous from switzerland April 9, 2014 10:00 AM

Sorry to ask here – but maybe others with less ability to find out are also wondering: Are the 64k that are leaked each time protected by SSL or are they leaked in plain?

If the latter, how about the NSA (or similar) maybe having detected a pattern in web requests globally for some time and maybe even having found out that those requests returned private key material – looking for things like that might be something such organizations are likely to do anyway.

If the former, how about the NSA still detecting it because they likely know at least some CA public keys?

In both cases, the reaction would most likely have to be to make the vulnerability public.

Sad Clown April 9, 2014 10:10 AM

@Benni

“And a German programmer as an NSA agent sounds a bit far reaching.”

This seems like an absurd comment to me. Since when is the NSA NOT far reaching?

Anyhow, it is clearly an unacceptable risk to run anything on a secure system that is not mandatory. So much for leaving ssh open. Not that I have been doing that for a year or so anyhow. Just a creepy feeling I had.

So much for open source security. And closed source security.

Is that the point of this? To make us all feel insecure?

ChoppedBroccoli April 9, 2014 10:30 AM

So for the average consumer this is a nightmare. We have 10-100 major passwords and we need to change most if not all. But the problem is that you cannot change them all at once. You have to be aware when the service you use implements the fix and then change your password. Doesn’t seem likely this is going to happen at all.

Nicholas weaver April 9, 2014 10:49 AM

The payload in a Heartbleed attack may be encrypted or not: You can do the attack before key exchange (which would allow an observer to see the payload as well) or after (where the payload is protected).

Each call will tend to get a different 64 kB chunk of memory. Basically it works because the request is small, but the server is told the request to echo is big, so it mallocs up a big block, copies the “request” and the next 64 kB of memory into the response, and happily sends it back.

That people STILL, 20 years after I was first bitten by a C memory bug as an undergraduate, use C and C-derived memory unsafe languages (which, also, are type unsafe and have crap exception handling, the latter causing both “goto fail” and the GNU TLS bug) for crypto systems and other secure servers is ridiculous.

“Any suitably creative backdoor is indistinguishable from a C programming bug”. -Me

That is a quip of mine, but, really, the use of C/C++/Objective-C at all is the backdoor.
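
For completeness, the fix that was shipped boils down to validating the claimed payload length against the record that actually arrived before copying anything; roughly along these lines (a paraphrase of the idea, not the literal OpenSSL patch):

    /* Sketch of the fix: check the claimed payload length against what was
     * actually received before echoing anything -- a paraphrase of the idea,
     * not the literal OpenSSL patch. */
    #include <stddef.h>

    int heartbeat_length_ok(size_t claimed_payload, size_t record_len)
    {
        /* 1 byte type + 2 bytes length + payload + 16 bytes of minimum padding
         * must all fit inside the record that really arrived; if not, the
         * message is silently discarded. */
        return 1 + 2 + claimed_payload + 16 <= record_len;
    }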

Benoit April 9, 2014 10:56 AM

I really think that this kind of event was to be expected: too many servers and systems are using OpenSSL; there’s not enough variety.

If all humans on earth had the same “state of the art” immune system, provided by some science-fiction mega-company, everyone would be killed by the first unexpected bacterium.
Everyone should accept and encourage diversity, because diversity means robustness.

In the special case of cryptography the main problem is that not everyone can or should write their own crypto library; nevertheless, everyone should encourage the use of alternatives to OpenSSL.

Peter April 9, 2014 10:59 AM

I see that there has been plenty of advice about password updates, but I would wait a bit and only update a password after the relevant provider has confirmed that it has patched its services; otherwise you’ll end up with a new password exposed and a false sense of security.

What I would like to see is not just a patch, but also a detection mechanism that flags any attempt at exploiting this bug.

Andrew Wallace April 9, 2014 11:11 AM

It is the NSA who made sure Heartbleed was public when there was evidence the Chinese and other states had their hands on it.

It was previously just the NSA who knew about it.

The NSA sit on high value vulnerabilities until they are known and used by other states to target US interests.

Dr_b_ April 9, 2014 11:23 AM

“why all the legal wrangling with lavabit for their SSL keys?”

If they had access to lavabit, they would not want to make it seem like they had access, and thus put on a big show of requiring SSL Keys to keep up appearances.

wes April 9, 2014 11:28 AM

“Another guess – parallel construction. If collected evidence has to be presented in court, the agency can show a legal means by which it was collected, without revealing the vulnerability or the illegal act of exploiting it against a US company.”

I definitely agree. Legal wrangling is done for judicial ends, not technical ones. It seems almost foolishly optimistic to assume that the NSA only found out about this at the same time as the general public.

“It is the NSA who made sure Heartbleed was public when there was evidence the Chinese and other states had their hands on it.”

This seems more likely than not. Essentially we’ve arrived at the point where knowledge of software bugs is a strategic resource.

buggy April 9, 2014 11:44 AM

Is there a site compiling a list of commonly used and/or high-value (e.g. banking) sites that have been bitten by this, ideally with a patch status? Average-shmoe consumers are going to need to react to this, and they are not going to weed through a tech analysis. They’ll need a simple to-do list, such as: If you are a Wells Fargo customer, don’t log in to WFB.com this week, check this page again next week, and if the site is showing as non-exploitable, then change your password.

Benni April 9, 2014 11:48 AM

@ Sad Clown:

“This seems like an absurd comment to me. Since when is the NSA NOT far reaching?”

Well, he was a PhD student at that time, writing his doctoral thesis on this heartbeat stuff. Please note that the budget of the BND is much smaller than that of the NSA. The BND does not go as far as making intense recruiting efforts among computer science students.

The most likely reason for this bug is simply a weak PhD thesis making a weak proposal.

The BND is sometimes credited with an alleged “project Rahab”, but Bernd Fix, the programmer whose virus the BND allegedly stole, has not found much information on this: https://hoi-polloi.org/~brf/rahab.html

Spiegel mentions that the BND is sometimes ahead of the NSA, especially when it comes to tapping fibers. The Spiegel book “DER NSA COMPLEX” mentions that the BND is helping the NSA by tapping fibers in war zones, crisis regions, and developing countries, where the NSA cannot go in. And the BND even has a tapping method with 100 times the capacity of GCHQ’s, they write in the Spiegel book.

But hiring a programmer to weaken OpenSSL is not the style of the BND. It would, after all, possibly damage German companies using SSL websites.

As long as there is no slide explicitly saying that OpenSSL was deliberately weakened by the heartbeat option, I do not believe it.

But it does not matter. The NSA IS running man-in-the-middle attacks on websites using SSL: https://www.schneier.com/blog/archives/2013/09/new_nsa_leak_sh.html they codenamed this operation “Flying Pig”. This attack is only possible if you have either stolen the certificate or if you can decrypt the session with the help of some bug.

Anura April 9, 2014 11:48 AM

In the past, I’ve kind of wanted to fork and refactor GnuTLS, make it more modular, clean up the code, make vulnerabilities less likely, but keep a common interface so it can be a drop-in replacement. Now I’m thinking maybe I should learn Ada, port it to that, and make C and C++ wrappers for it – which my research tells me is possible.

Bruce Schneier April 9, 2014 12:07 PM

“Sorry for the quibble but I genuinely want to know what you meant. Did you mean to say ‘probability close to one’? (Odds close to one gives 50/50 chance.)”

Yes: “probability close to one.”

Bruce Schneier April 9, 2014 12:11 PM

“So, Bruce, do you still think you can trust in open source? This bug was there for 2 years!”

I don’t believe that — as a matter of course — closed source bugs are found more quickly.

Yes, I still trust open source. I don’t think anyone ever promised that it was perfect, or that it is somehow magically and automatically more secure than closed source.

I wrote about this in 1999:

https://www.schneier.com/crypto-gram-9909.html#OpenSourceandSecurity

rumple_stiltskin April 9, 2014 12:22 PM

The principle of charity no longer applies to the NSA or any of the Five-Eyes intelligence agencies. There is no longer even a presumption of rationality because many of the Snowden leaks show that the intelligence agencies act with reckless abandon brought on by many decades of absolute non-accountability.

There is no reason not to assume that this bug was implanted. Whether by the Germans, the US, or someone else makes little difference because we already know that ECHELON-style cooperation is common among the intelligence agencies wherever it is convenient.

@Krohn:

As noted already, the intelligence agencies need a “legitimate” channel by which they could have seized information, in order to conceal the illegitimate channel by which they actually did seize information if they want to use that information to prosecute people. Furthermore, while heartbleed is a huge vulnerability, it is not exactly the same thing as just having the SSL keys. Finally, the presumption that NSA/FBI’s actions are rational is directly contradicted by many of the Snowden revelations. The intelligence agencies have become so reckless that they even act against their own interests. The phrase that comes to mind is “drunk on power”.

melina April 9, 2014 12:29 PM

Regarding key access: who outside governments can sniff networks, beyond the open Wi-Fi at the coffee shop?

Black Panther April 9, 2014 12:37 PM

“Open-source” does NOT automatically mean “secure”, nor does closed-source automatically mean “insecure”. When will this myth die???

I’ve questioned the security of OSS for a long time, not least because so many people are so complacent about its security.

This flaw sucks, but it might be the wake-up call the OSS camp needs to kick it out of dreamland and into reality. The current arguments that OSS is somehow better, and that “many eyes read the code” with the implication that it somehow makes said code better, are just bogus.

Bob April 9, 2014 12:45 PM

“My guess is that they learned about this — and started exploiting — this bug yesterday, along with everyone else who is fast enough to do that.”

I’m with you on Lavabit, but they could have had a longer window than everyone else if they intercepted and processed unencrypted email(s) Neel Mehta may have sent to the OpenSSL team or any shared amongst themselves.

Coyne Tibbets April 9, 2014 12:47 PM

@M Krohn
The US agencies would have demanded LavaBit’s key, even if they already had it by attack. For deniability, they need to be able to show that they obtained the key by “legitimate” means, so that no one will guess they can get it by attack.

@Anon9119849
The RFC is not specific enough: To some people, “random” equates to “uninitialized”. Which, of course, is the exact problem with OpenSSL. The standard should say something along the lines of, “The payload of the heartbeat MUST NOT reveal meaningful information. The payload MUST either be explicitly randomized or else initialized with a repeating non-informational constant or sequence.”

In fact, the omission is glaring: This RFC was supposed to be for a cryptographic product, but even though the author was supposedly versed in cryptography, there is no recognition in the RFC that the heartbeat might become a channel for information leakage. Glaring to me, and I’m not a cryptography expert.

@anonymous from switzerland
Exactly the opposite. Let’s set aside for the moment the question of how this bug was actually discovered, and consider discovery of bugs in general. For closed source products, the only means of discovery is “in operation”, while open source offers the additional channel of discovery by source review.

So now let’s consider discovery of this bug. So far as I know, the methodology of its discovery is not yet revealed, but I identify two methods above: “In operation” and “source review.” Let’s compare a product made by a large software company (LSC) versus an open source product (open). For “in operation” review, the chances of discovery are equal for LSC and open. For discovery by source review, the chances of discovery are much greater for Open than for LSC. This seems to indicate that open is better simply because there is an increased overall likelihood of discovery, for any weakness.

There is no perfect solution.

@Benoit
Sure, there’s a lot of servers and systems using OpenSSL, which increases the exposure. Shall we discuss the number of servers using, say, Microsoft? Or the number of network products using the Cisco systems? Why would you think the exposure for OpenSSL is greater than, say, one of these other systems?

Suppose we replace the current handful of products with a thousand products. Certainly that reduces exposure due to a weakness in any one product, but it likely increases the overall number of weaknesses. I’m not sure that constitutes an improvement to the current situation; in fact, I would go so far as to assert that it is better to have a few broadly used and very well debugged products. Even with the potential for a mess like this one.

Again, there is no perfect solution.

sshdoor April 9, 2014 12:51 PM

@Anura: “In the past, I’ve kind of wanted to fork and refactor GnuTLS, make it more modular, clean up the code, make vulnerabilities less likely”
Please, in your new protocol, do not send, even encrypted, anything not needed for authentication. Do not send the password (even in an encrypted tunnel), only send a public key derived from the password. See my previous comments months ago.

You may raise money from Kickstarter for your Ada implementation.

Please, try to make it with at most a few thousand lines of code.

And if you are in a hurry for a replacement of OpenSSL, you may look at the sshd of OpenBSD: it does not depend on OpenSSL despite the name of the latter. It instead depends on the hashing function of the libc, and on D. J. Bernstein’s implementation of ed25519.

Matt from CT April 9, 2014 12:59 PM

“Because it is open source and free to use, it is likely to create a mono culture”

Not at all.

Microsoft is expensive and closed source. As of February 2014 it has 32% of web servers v. 38% for Apache.

The OpenSSL bug hit 17% of servers (though my educated guess is that was lowered by a number of Linux web servers that were sitting behind load balancers / SSL offload engines that were not vulnerable — my company’s Ubuntu boxes were vulnerable, but only to an internal user; external users would have hit the F5 which used a non-compromised version of OpenSSL)

“Sorry to ask here – but maybe others with less ability to find out are also wondering: Are the 64k that are leaked each time protected by SSL or are they leaked in plain?”

A “web server” like Apache sits at what is called Layer 7 of the network stack, known as the “Application Layer” — this is where it does something with the data that is useful to non-computers. The other layers describe various ways of transforming data so it can be sent to other computers.

The encryption stack, like OpenSSL, operates on Layers 5 & 6. It receives the encrypted communication from the outside, decrypts it, and passes it to the web server up at Layer 7. Or in reverse, it takes unencrypted data from the web server and encrypts it before passing it down to Layers 4-3-2-1 to be delivered to the client.

The unencrypted information sits in memory as it is being passed from Layer 6 to 7…and Heartbleed was dumping out this memory 64K at a time. You couldn’t predict exactly what would come out at any given time, because what sat in the vulnerable 64K was constantly changing as it was passed on to other processes.

All machines with the Heartbleed bug would reveal this unencrypted information passing through memory by returning it back as plaintext to the exploiter.

My understanding is that whether you were vulnerable to the private key being divulged depended on how your specific installation of Linux was set up to allocate memory; some were secure enough to make it unlikely if not impossible to reveal the private key. Other systems were vulnerable due to how they allocated memory, and I believe on these systems it was almost certain your key would be revealed.

Nick P April 9, 2014 1:20 PM

I think the continuing stream of such bugs makes a watertight case for writing these critical functions in a memory-safe language, eh?

Anura April 9, 2014 1:25 PM

@sshdoor

Just to be clear, I’m not going to assume using a different language is going to solve all problems. My original plan, if I ever decide to take the time (it’s a huge undertaking), was to refactor GnuTLS in C, into the following main components:

1) A stand-alone crypto-library, implementing all the basic algorithms.

2) A certificate library implementing ASN.1/X509/related components.

3) A TLS library implementing only the SSL/TLS protocols, referencing the crypto library for all crypto stuff, and the certificate library for all certificate stuff.

4) A command line tool for certificate stuff, that references the other libraries.

The idea is to make the components modular and independently verifiable, while also refactoring. On top of all of that, my plan was to refactor the code so that each function does exactly one thing, minimizing code reuse, minimizing things like GOTO for cleanup, implement a consistent style to improve readability and reduce the possibility of accidental errors (e.g. always use braces for all control flow statements), and implement thorough unit testing to reduce the amount of bugs that can make it into production.

This is a very long project if I take it on, involving learning the crypto standards in much greater detail than I do now, as opposed to just the ones I find the most interesting. If I did decide to do it, and decided to move to a memory-safe language, I would of course make sure I had done enough in that language first to be comfortable taking on a large project with it.

anonymous from switzerland April 9, 2014 1:34 PM

About writing more secure software: I am sure others here have said similar things to what I am about to say (some in the comments above), and probably put it together better than me and have been thinking about this longer and hopefully more thoroughly, but still this – after all I am a man with maybe fewer inner barriers than most, not only because I am a physicist:

Open Source is in my view a very important tool for writing secure software, now and in the future. But ideally it would come in the form of small simple components that can be verified thoroughly, optimally even to a large degree using formal methods and formally proving as much as possible. Technically, such components could still be assembled to larger libraries, if separation between the components is good and the interplay can again be verified similarly, building things up layer upon layer of trust.

OpenSSL fails here for two reasons that have already been mentioned in the comments above: it is too complex in itself, and it is written in C. A different language is needed that can be verified and then prevents things like random access to memory much more reliably.

Once you would have such a world of layered simple reliable components in the future, it might also come within reach at least for larger companies (resp. specialized smaller companies implementing such things for anybody willing to pay for it) to put additional checks and measures etc. on top of standard protocols. Maybe you would then again download specific clients for different companies (Amazon, Facebook, …) like today on smartphones instead of using a standard browser or maybe the world would solve things in a different way then (plugins or similar), hard to predict the future is. If I look at the trouble I had to access the Apple Store from just one laptop, such things are partially even here already.

Heinrich Rohrer, who got the Nobel Prize in physics (as one of two) for inventing the scanning tunneling microscope, once said in a talk that people often overestimate what can change in 4 years, but underestimate what can change in 10 years. His basic idea behind that was that many things in technology evolve logarithmically and people tend to think linearly.

At the time I considered this to be a bit naive, but now, a bit older, and applying this, one would expect the internet and computers 4 years after Snowden, i.e. summer 2017, not to be much more secure than now, but 10 years after Snowden, i.e. summer 2023, things might already be significantly better. I hope I will be able to make a tiny contribution to that, here and there.

(And obviously, the NSA will still have an advantage then, at least that would be my current guess… 😉)

anonymous from switzerland April 9, 2014 1:49 PM

I wrote:

Because it is open source and free to use, it is likely to create a mono culture

Matt from CT replied:

Not at all.

I got your point regarding MS and I agree regarding that.

But to be precise, my statement above as it stands – “Because it is open source and free to use, it is likely to create a mono culture” – is still something I consider true. That closed source necessarily produces less monoculture is not implied.

Fabrice Derepas April 9, 2014 2:04 PM

With my team we have just performed a formal validation of an open source SSL stack. This SSL stack is now immune to security flaws. I encourage you to have a look at formal methods and see how they can help a more reliable internet: TrustInSoft.

JoeBloggs April 9, 2014 2:07 PM

(as they say, talk to me like i’m stupid..)

Would exploiting Heartbleed require access to the web server (rootkits, or such)?

Clive Robinson April 9, 2014 2:14 PM

With regards to Lavabit I really don’t think many of you are thinking in the right way.

From what has been said about NSLs, they have next to zero or less oversight and take about three minutes to fill out, another twenty to get rubber-stamped, and a couple of man-hours to serve.

The last thing the FBI wants is people contesting them, so anyone foolish enough to try will on principle be hammered into the ground and then ground down to destruction as publicly as required for everyone else to get the message.

It’s the federal MO these days, if you had not noticed, and a little over a year ago the DoJ made it very clear in a congressional inquiry that this was the policy from the very top in the Aaron Swartz case: there was no option, he had to do jail time regardless, for what was a political reason (supposedly face saving but more likely campaign-fund related).

http://www.techdirt.com/articles/20130223/02284022080/doj-admits-it-had-to-put-aaron-swartz-jail-to-save-face-over-arrest.shtml

I guess people in the US should wake up and realise that the US is now more of a “banana republic” than its southern neighbour, or at least that’s how many of us who have the luck not to live there view it.

Anura April 9, 2014 2:14 PM

@JoeBloggs

No server access required. Any website or service using an unpatched version of OpenSSL can be exploited by sending a malformed request. The server will then respond with contents of a somewhat random chunk of memory, which may contain keys, passwords, credit card numbers, or other sensitive data.

JJ April 9, 2014 2:40 PM

I am a bit upset that none of the big companies are making an announcement about whether they were vulnerable to this or not, especially the big banks, PayPal and Gmail. They seem fine now according to the tools, but it is not clear to me if they updated their systems very early on or if they always used a different SSL implementation.

I don’t want to change my important passwords if I don’t have to.

DirectAttack April 9, 2014 2:57 PM

Fabrice Derepas wrote:

With my team we have just performed a formal validation of an open source SSL stack. This SSL stack is now immune to security flaws.

If you believe your software is immune to security flaws then you are naive, foolish, or careless to claim so. If you don’t believe it, then you are dishonest. None of those possibilities inspires confidence in your software.

Nick P April 9, 2014 3:00 PM

@ Anura

“Now I’m thinking maybe I should learn Ada, port it to that, and make C and C++ wrappers for it – which my research tells me is possible.”

That’s what AdaCore did for their Ada Web Server. Except they used OpenSSL. 🙁

Also consider using MatrixSSL. It has excellent properties as far as complexity and portability go. You can do it GPL or commercial license. If it’s a GPL program, I’d put Ada wrappers around that joker as a start. Then, piece by piece, implement a full SSL library in Ada with as little unsafe code as possible, all checks turned on, and only safe defaults allowed (i.e. no NSA backdoor ‘features’). Could gradually reimplement core functions in SPARK, too. See the Skein-in-SPARK paper for tips on doing it fast.

Bootstrap the crypto part with something like Botan or NaCl. Maybe use an internal security kernel approach like Gutmann’s cryptlib. The use of Ada or SPARK would make an assured pipeline approach work better than C/C++. Many possibilities for speeding development or improving security. Good luck.

Not A Anonymous Fool April 9, 2014 3:44 PM

Fabrice Derepas & DirectAttack:

Not to mention the fact that anyone foolish enough to click blindly on a link these days that claims to be “immune to security flaws” deserves everything they (might) get.

Dan April 9, 2014 3:58 PM

@M Krohn

“If the US agencies knew of and exploited this bug, then why all the legal wrangling with lavabit for their SSL keys?”

Have you read Cryptonomicon? If you have the keys to the kingdom, you don’t want to show your hand. They very well could be saving it for the few cases where there’s no other option, and then only acting (in a publicly visible way) on the minimal amount of secret information possible. And/or, try to create the appearance of another plausible means that they could have obtained it.

someoneElsewhere April 9, 2014 4:02 PM

We forget that this bug affects both sides of the SSL session. Malicious servers can now extract client side memory. In theory, someone can take over your ISP’s DNS server and proxy Google through his boxes while mining your machine. The NSA might be doing this at US IXPs. AFAIK, the only way to detect this is sniffing every single packet crossing your interface. :/

Luckily, Firefox and Chrome use NSS (right?). curl, wget, mutt might be up for grabs.

OTOH, if the Google crawler or any other crawler is not patched, one might be able to extract data from Google/Yahoo… Given that everyone is focusing on TLS servers, this might be a very plausible attack vector.

Just my two cents.

keithpeter April 9, 2014 4:05 PM

Remember that versions of the OpenSSL library prior to 1.0.1 did not contain the bug.

Enterprise Linux 6.4 (RHEL, CentOS, Scientific Linux, Springdale Linux), for instance, used an older version of OpenSSL.

Lavabit may have been using CentOS or similar, and the shutdown occurred prior to the EL6 update to 6.5.

Ted Lemon April 9, 2014 4:45 PM

My big concern here at this point is what to say to all my non-geek acquaintances who are looking to me for advice on what to do next. The usual question is “should I change all my passwords,” to which the naive answer is certainly “yes,” but only if the site is secure now. In order for it to be secure now, they have to have fixed the software, updated their private key, gotten a new cert, and, very importantly, published their old cert on a CRL that I can get at. Without a list of sites that were vulnerable, I have no way to know whether or not I can safely assume that any particular site is now not vulnerable. And of course not all browsers support certificate revocation lists by default, a discovery that surprised and dismayed me.

So I actually know of no way forward at the moment that provides any real assurance of trust, except perhaps rejecting any SSL cert that was issued earlier than, say, next week, starting next week, in hopes that everybody’s fixed by then. But that won’t work either because not every site needs a new cert.
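
One small practical aid here: if you save a site’s certificate to a PEM file (for example with openssl s_client), a few lines of C against the OpenSSL API will print its validity dates, which at least tells you whether the certificate was issued after the site patched. A sketch; the filename cert.pem is just a placeholder:

    /* Sketch: print the validity dates of a certificate saved in cert.pem
     * (a placeholder filename), so you can see whether a site's certificate
     * was reissued after the patch.  Build against the OpenSSL library,
     * e.g. cc check_dates.c -lcrypto */
    #include <stdio.h>
    #include <openssl/bio.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    int main(void)
    {
        FILE *fp = fopen("cert.pem", "r");
        if (fp == NULL) { perror("cert.pem"); return 1; }

        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        if (cert == NULL) { fprintf(stderr, "could not parse certificate\n"); return 1; }

        BIO *out = BIO_new_fp(stdout, BIO_NOCLOSE);
        printf("notBefore: ");
        ASN1_TIME_print(out, X509_get_notBefore(cert));   /* start of validity */
        printf("\nnotAfter:  ");
        ASN1_TIME_print(out, X509_get_notAfter(cert));    /* expiration date   */
        printf("\n");

        BIO_free(out);
        X509_free(cert);
        return 0;
    }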

Benni April 9, 2014 4:54 PM

By the way, a secure program that might interest you all, especially when you have to deal with large troves of PowerPoint slides from Edward Snowden, might be RetroShare:

http://retroshare.sourceforge.net/

RetroShare is an open source application that first generates a PGP key (using, among other things, your mouse movements as partial entropy). If you send your friends your public key by e-mail or snail mail, and if you have received theirs, you can add each other to your RetroShare network.

RetroShare can then be used to chat securely, to phone using voice over IP, and to exchange large quantities of files.

All data that is communicated via RetroShare is end-to-end encrypted, with your private key only stored on your PC.

You can make your shared files available to specific users or groups of users only. You can also create anonymous mailing lists and forums in RetroShare, where you may only invite your friends.

You can also set it up such that your friends can download from other friends of yours. These indirect connections are then anonymized with a Tor-like network.

That means, if you are careless and add a policeman as a friend, the policeman may download from your friends, but he will not get their IP data. He will, however, get your IP, since the direct connection is only encrypted, not anonymized. But that could be seen as the network cleaning itself: those who are careless enough to add people they do not know personally, so that a policeman gets into their friends list, are arrested and removed from RetroShare. On the other hand, the people remaining in RetroShare have a real dark net that is end-to-end encrypted, with no government access possible.

If Greenwald, Snowden, Appelbaum, Poitras, or Schneier used RetroShare, they would not need to fly to Rio every time.

RetroShare is as fast as BitTorrent. The encryption does not slow it down.

I usually use RetroShare when I’m logged into the German research network X-WiN.

Connected to a 1000 Gbit/s fiber, I can say that you can transfer an entire 16-gigabyte Blu-ray in around 5 minutes from one university in Germany to another one in the US, and all that properly end-to-end encrypted, with no snooping possible.

Jacob April 9, 2014 4:57 PM

Coming out with such a bombshell on the day Windows XP stops being serviced (read: no Certificate Revocation List updates) will surely send it to the dust bin fast…

NotNickP April 9, 2014 5:25 PM

Nick P: all languages that don’t run in a VM have the same vulnerabilities; they are just hidden and less easy to exploit.

DB April 9, 2014 5:29 PM

To all those who think anyone ever claimed that open source was “automatically” more secure than closed source… um.. stop being so naive?

Anyone with a brain can see that open source is NOT automatically more secure… only that it’s POSSIBLE for it to be more secure. Can you see the difference? Both open source and closed source can (and do!) have serious bugs in them… Always have and always will; nothing is bug free. The difference is that it’s easier to find and fix them in open source…

Ok, so here’s what happens as a result of it being easier to find and fix: WHEN something is widely used and looked at and investigated… THEN you get more bugs found and fixed (which may give it the illusion of appearing LESS secure, since it’s much more public)… and then AFTERWARD, you get a more secure product, after lots of such fixes.

HOWEVER… merely being open does NOT guarantee that ANYONE EVER LOOKS AT THE SOURCE… EVER!!! So therefore being open doesn’t guarantee anything at all. The added safety of open source only kicks in after something becomes popular enough to get a lot of attention and eyes on it, looking for its flaws. And finding them (and flaws often come in groups, so keep looking boys!).. and you get big newsy stories like heartbleed… and THEN it becomes more safe than closed source (on average), after all that.

I’ve released open source stuff before that nobody cared about and nobody used (other than me). Was it “more safe” by virtue of it being open source? Absolutely not! Not in the slightest. It only had the POTENTIAL to be, someday, if everyone took a liking to it and looked at it, and told me where I was being a moron and wrote bad code, and if I listened to them. There’s a big difference there.

Of course my getting 100% code path execution coverage in the test suite made me feel pretty good, but even that doesn’t guarantee that I’ve tested every possible edge case (in fact, I know that I didn’t, just some I thought to test), only that I’m doing way better than most products out there.

Ok, the above only talks about “accidental” bugs… let’s talk for a second further about “purposeful” bugs, also known as “backdoors”… and how that relates to the open/closed source debate.

Since our human-rights-hostile government here in the USA can freely unrestrictedly secretly coerce any company, on pain of imprisonment, to purposefully put in backdoors into their products, and nobody can easily detect them without the source…. ALL CLOSED SOURCE MUST BE ABSOLUTELY FULLY 100% GUARANTEED *****NOT***** TRUSTED… BY DEFAULT… FOREVERMORE… until the broken laws/courts/congress/etc are all fixed. Can I make this any clearer?

So closed source is guaranteed untrustworthy (NOT guaranteed unsafe, we can’t guarantee that every company has been coerced, since it’s all secret)… and open source is possibly safer (NOT guaranteed safer though)….. Which do you think is better? We’re pretty much screwed either way really, but IMO we’re at least potentially less screwed with open source.

DB April 9, 2014 5:31 PM

@ NotNickP

VMs have the same potential for vulnerabilities too; they’re just hidden inside the VM itself now.

Mark April 9, 2014 5:35 PM

As far as I can tell, the bug allows read access to up to 64K bytes of unspecified heap memory. Not to diminish valid concerns, there are probably numerous other software products with similar bugs. The contents of an area of a fragmented heap in an application that has been running for a while are effectively random. You are as likely to retrieve a picture of someone’s aunt Mimi as you are sensitive data. Aunt Mimi might be annoyed of course.

Anura April 9, 2014 5:47 PM

@Mark

However, you can keep calling it, and the more you call it the more likely you are to get the secret key, passwords, credit card numbers, etc.

DB April 9, 2014 5:56 PM

@ Mark

“The contents of an area of a fragmented heap in an application that has been running for a while are effectively random. You are as likely to retrieve a picture of someone’s aunt Mimi as you are sensitive data.”

Aunt Mimi’s picture is not random… it’s a picture, which is a clearly defined, non-random structure of real data. Random pieces of non-random data aren’t truly random, are they? The concern here is that you can keep getting another 64K until you eventually find something useful… like the private key. And it won’t even take you that long to do it.

Jonah April 9, 2014 6:01 PM

@ Mark: Not to mention, it’s not 64 random KB out of all of your memory, it’s 64 random KB of data in the heap of whatever process is using the OpenSSL library. Which is likely a smaller, security-related process where things like keys, passwords, etc. are concentrated.

Nick P April 9, 2014 6:14 PM

@ DB, NotNickP

Languages are just abstractions over machine language. They all get converted into it. Safety/security features can be included at the high level language layer, the translation layer, or machine code itself. Without ability to change ISA, people tend to go with the first two. So, a language wanting extra safety might bounds check every array, be careful with stack manipulation, take measures for control flow integrity, etc. The tools handle that for the developer. Hence, for those trouble areas, the language is safer or more secure than the language that ignores such trouble areas.

Hence, I think I’m justified in recommending security-critical code be written in a language that prevents or reduces common classes of attack. And it doesn’t necessarily need a VM, either. That comes from the Java, Smalltalk, etc mindset. It might use just careful code generation (eg TAL), extra instructions for certain checks (eg Ada), a safer structuring (eg CFI), a safer interface (eg HLayer), or even a VM/interpreter (eg VLISP). Or a combination of above. The mechanisms vary.

Point is you can use a language that helps safely run your code or one that reliably helps the enemies run their code. Two of you seem to prefer the latter. Feel free to continue writing mission- or security-critical code in C/C++ while checking every function for potential code injections. Even worse, in unsafe languages you have to worry the most at the more common constructs: buffers, strings, etc. (Anyone else ever notice that?) My approach would have you worry less or not at all when doing common things through inherent safety of how language/compiler handle them.

Note: Empirical studies of C++ vs Ada done by the defense/aviation industry a long time ago showed most Ada code had significantly fewer bugs while being easier to read. A recent report from Coverity put Python, also a safe[r] language, at a defect rate of 0.05 per 1,000 lines of code… one of the lowest in the industry. So, hard data backs my claim that better language design reduces defects and, hence, vulnerabilities.
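To make the bounds-checking point concrete, here's a minimal C sketch (all names are made up for illustration) of the check that a language like Ada or Python performs implicitly on every array access; in plain C it only happens if the programmer remembers to write it:

    #include <stdio.h>
    #include <stdlib.h>

    /* The check Ada, Python, etc. perform on every access automatically;
     * in C it exists only if the programmer writes it by hand. */
    static unsigned char checked_read(const unsigned char *buf, size_t len, size_t idx)
    {
        if (idx >= len) {
            fprintf(stderr, "index %zu out of bounds (buffer is %zu bytes)\n", idx, len);
            abort();                      /* fail loudly instead of leaking memory */
        }
        return buf[idx];
    }

    int main(void)
    {
        unsigned char heap_buf[16] = "nothing secret";
        printf("%c\n", checked_read(heap_buf, sizeof heap_buf, 3));
        /* checked_read(heap_buf, sizeof heap_buf, 40000) would abort here,
         * instead of quietly returning whatever sits next to the buffer. */
        return 0;
    }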

Skeptical April 9, 2014 6:38 PM

Assuming the frequency of accidental bugs is much greater than NSA backdoors (10x greater? 100x?), the simple application of conditional probability suggests that this is very likely a bug.

Incidentally, notwithstanding this bug, the equities markets in the US rallied strongly, led by large gains in technology shares. Yahoo was up over 3%.

And of all the organizations and people who could exploit this, the NSA is the least worrisome. Focusing on the NSA in the context of bugs like this is a mistake, as it tends to reduce popular concern about security issues.

tz April 9, 2014 6:41 PM

If any of the other intel agencies knew about it, wouldn’t the first thing they’d do be to protect their own government and important infrastructure computers?

treasurydirect.gov anyone?

Anura April 9, 2014 6:57 PM

@Skeptical

And of all the organizations and people who could exploit this, the NSA is the least worrisome.

I wouldn’t say they are the least worrisome, but yes there are worse organizations out there, which is the entire reason why, with programs like BULLRUN, the NSA is a significant threat to national and international, public and private security interests.

Focusing on the NSA in the context of bugs like this is a mistake, as it tends to reduce popular concern about security issues.

Popular concern about security issues seems to be on the rise since the NSA leaks.

Eric April 9, 2014 7:14 PM

I saw some information propagating re: Seacat (google cache here) claiming that they’d seen evidence in their logs of scanning for this bug as far back as March 23.

Needless to say, that’s a very worrisome proposition, if true.

Eli April 9, 2014 7:15 PM

For users, you have to wait until the site is updated to no longer be vulnerable to change your passwords. To determine that, you need to check the issue date of the SSL cert. If the date is before Monday, there is no point in changing your password: either the site was not vulnerable (due to using an old OpenSSL library or some other implementation), or the site is still vulnerable. In either case, updating your password isn’t needed for the heartbleed bug.
If the date is newer than that, the site has taken the steps it is going to take to fix the heartbleed bug, and it makes sense to update your password. (Hopefully they updated their private key and not just the cert, but a user isn’t going to be able to determine that.)

Steve Schneider April 9, 2014 7:47 PM

Bruce, what would be your theory as to why this isn’t being treated as stop-the-presses news? This is being covered as a “tech story.” This is a “tech story” only in the sense that Hiroshima was a “science story.” 99.99999% of internet users have visited a Heartbleed-affected website (not that all or even most of these users have been directly affected, but that’s part of the problem: we don’t know which of these 99.99999% of internet users were affected, or will be). You’re right, of course, that this is “catastrophic.” As in, a catastrophe has occurred. An end-of-the-world-type catastrophe? Of course not. But a seriously, seriously, seriously bad thing has happened, with potentially far-reaching (and certainly unknown, perhaps unknowable) consequences.

What do you think of the coverage? I ask partly because you’re one of the go-to experts to whom journalists look when they want to understand the implications of this or that security/tech event. Do you think mainstream journalists understand this story? It doesn’t seem to me to be a particularly tough one for laypeople to understand: it’s been discovered that one third of the locked doors in the world weren’t actually locked at all; they just looked like they were locked. But this is being covered as if it were the ILOVEYOU worm or something…

Eli April 9, 2014 7:51 PM

@Steve: As a data point regarding the coverage, the Dallas Morning News had it on the front page of their Wednesday print edition, right beside the big photo of the Ukraine parliament brawl.

Mr. Paul April 9, 2014 8:19 PM

From a security review point of view, the RFC is seriously flawed on two counts that should have been spotted:

1) The payload. This begs to be a covert channel, and has an ill-defined (essentially nonexistent) use case. Unnecessary fields are the food of attackers.

2) Arbitrary content in the payload. This is where you are practically destined to have the problem occur. With no guidance to implementors, and no defined value to test for, you are almost guaranteed that at least one implementation will use uninitialized memory. Unfortunately that turned out to be a wildly popular one.

Although the implementation was, indeed, flawed, the RFC is simply terrible for a security protocol. It should not have survived review, and should not have been implemented.
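For reference, the heartbeat message RFC 6520 defines looks roughly like this (a C-style sketch of the layout, not code from any implementation); the sender-supplied payload_length plus the free-form payload are exactly the two problems described above:

    /* A rough C rendering of the HeartbeatMessage from RFC 6520 (illustrative
     * only; the RFC uses TLS presentation language and the last two fields are
     * variable-length, so this is not a struct you could actually memcpy). */
    struct heartbeat_message {
        unsigned char  type;            /* heartbeat_request(1) or heartbeat_response(2) */
        unsigned short payload_length;  /* claimed by the sender: the dangerous field    */
        /* opaque payload[payload_length];   arbitrary, sender-chosen content            */
        /* opaque padding[padding_length];   at least 16 bytes of random padding         */
    };
    /* The RFC does require that a message whose payload_length is too large for
     * the record be silently discarded; the flaw is that nothing in the field's
     * design forces an implementation to remember that check. */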

leo April 9, 2014 8:30 PM

I thought big players like Google, Amazon, Yahoo! would be using HSM (hardware security module) to protect their private keys. Am I naive?

Chris Abbott April 9, 2014 9:13 PM

@Bruce, @M Krohn

Could it be because the NSA wouldn’t want to expose that they had this capability to anyone? Even the FBI? It seems that this is something you’d want to keep under wraps totally. I can’t believe they wouldn’t have known about it.

Ben R April 9, 2014 10:04 PM

I don’t think the NSA has any interest in promoting the use of crypto they can break if China will also plausibly figure out how to break it. Dual_EC_DRBG is only breakable if you know the private key. DES was only breakable if you had enough computing power, and it’s much easier to predict your adversary’s computing power than to predict when they’ll happen to notice a flaw that might be noticed at any time by anyone.

I just can’t envision a scenario where the NSA would consider it a good idea to introduce something like Heartbleed, or keep it secret (for long) if they found it.

Bob S. April 9, 2014 10:13 PM

There’s always something isn’t there? SSL is trash today.

The other day Target stores lost millions of accounts. GCHQ is downloading data from children playing computer games. And NSA is bent on anything except making the internet secure.

I think it’s time to call Humpty-Dumpty-ville on the internet. Let’s sweep up the pieces and start all over.

I would bet there are some smart guys out there right now who could do it, and maybe have some good ideas already.

It shoulda’ been built on a secure foundation, but wasn’t. Seems the NSA had a hand in that too, long ago.

Don Schreiber April 9, 2014 10:19 PM

I suspect that the computer security problem cannot truly be solved until we collectively abandon the von Neumann architecture and adopt the Harvard architecture, both within individual computers and for the network.

Recall that Jobs and Wozniak’s “Blue Box” exploited the telephone network’s practice of sending both dialing commands and conversations over the same communication channel.

Sure, a change from the von Neumann architecture to the Harvard architecture will cost a lot of money. However, the change need not happen all at once, and the early adopters should attract loyal followers.

Both IBM in developing System 360 and the recent General Motors ignition switch recall suffered from the “it will cost too much money to solve the problem, and it will delay the product” reasoning.

The cheapest and quickest time to fix a problem is when it is first discovered.
However, that takes an act of humility on the developer’s part and an iron willed commitment to quality, i.e. “I made a mistake and I must fix it.”

foobar April 9, 2014 10:27 PM

Bruce, I am completely disappointed that you haven’t said what needs to be said.

The Internet Infrastructure based on OpenSSL is no longer secure.
And that includes everything. It needs to be butchered.
It was never a good idea to trust CA’s to begin with.

A few days back, a backdoor level bug was fixed in OpenSSL by Apple. And now this.
You have called this an “11”. At what number are you planning on stating the obvious?

This is the Engineering equivalent of a Ship sinking because someone knocked on your door.

The fact that OpenBSD has this bug should tell you something.

Do you know how many tests are run by Nvidia for its graphics cards?
Software can no longer call itself “engineering” when even
a telemarketing product can give you more of a guarantee than OpenSSL.

DB April 9, 2014 11:19 PM

@ Don Schreiber

The von Neumann vs. Harvard debate won’t save you when the problem is really data leakage. This is not to say we shouldn’t do what you suggest, just that it’s not all we should do and it’s really more complicated than that… 🙂

@ NSA debate people:

I would totally agree that it’s not in the NSA’s interest to abuse a bug like this instead of fixing it… except that lately the NSA has shown a surprising propensity for doing irrational things!

@ Language debate people:

I agree that we should be using all the tools at our disposal to reduce bugs and enhance security. As a matter of principle this naturally includes higher level language features that protect us from mistakes, as well as many other things that fall into this general category. We shouldn’t fool ourselves into thinking that one thing (like changing languages) will solve all the problems, we need a layered approach. Defense in depth.

z April 9, 2014 11:31 PM

Some quick thoughts about this (I didn’t read the comments or any other articles)

1.) If you’re changing keys anyway, now is a great time to go with 2048 bit keys and finally rid the internet of 1024 bit keys.

2.) Perfect forward secrecy is worth it and should be used when possible

3.) I doubt this is the work of the NSA/GCHQ/Some-other-agency but I have a hard time believing they didn’t know about it. One of the best things the NSA could do for their reputation, government security, and everyone else would be to alert the community when they find stuff like this so it gets fixed. They are the best funded vulnerability hunting team ever. They could do a ton of good for security if they weren’t hell-bent on ruining it.

Nick P April 10, 2014 12:19 AM

@ Don

It’s funny you cite System/360 for your “cost too much” claim, as IBM spent around $5 billion developing it. The only project that decade that cost more, with a similar number of people involved, was getting to the moon. If anything, they put nearly everything on the line to make it, got plenty of adoption, and enjoyed profitable lock-in for half a century as a result. Quite the success story.

Thoth April 10, 2014 1:40 AM

@leo:
You may sell the banks and huge organisations really good HSMs, but the problem is whether the staff there truly understand their operation and nature. Some of them simply buy them and leave them running with default settings. Some of them have unrealistic expectations and unrealistic plans. A good HSM with good plans and knowledgeable people makes for good security; otherwise it doesn’t.

Regarding spying organisations, it’s their business model to ruin things. That’s what makes sense and makes bucks for them. They have reports to show their bosses they did their jobs, and numbers to tabulate to get their pay rises.

The business model of the internet nowadays is to log and somehow lay your hands on as much data as possible, as data is the currency of the digital age.

OpenSSL went down sinking and is now dragging so many down with it; all the mistrust of e-commerce will be reinforced, along with strong feelings of staying away from the internet.

OpenSSL should have a reform of sorts. I don’t know. Something to make it more trustworthy and efficient. For now, look to other crypto libraries.

Security providing more mobility should be investigated as future plans for secure communications.

Jonathan Wilson April 10, 2014 1:45 AM

This is yet another reason why we need to throw away the entire mess that is SSL/TLS and come up with something new.

This new protocol should:
1. Be as simple as possible (to make implementations easier to write and validate)
2. Contain something like Diffie-Hellman so that someone who captures information on the wire and then later acquires the private key is unable to decrypt the information they already have (my reading of Diffie-Hellman indicates that it should be possible to implement it such that even with the private keys, it’s impossible to recover the transaction unless you can man-in-the-middle the connection)
3. Use and require algorithms that have had a lot of peer review (and cryptanalysis) in the community, such as RSA, AES and SHA-2, and require key lengths that are as strong as possible (e.g. don’t allow any RSA keys shorter than 2048 bits in the standard, and require implementers to support longer keys for future-proofing)
4. Limit the number of options for things like cipher suites as much as possible (to prevent the problem TLS faces, where it’s hard for people to know which options to enable and which to disable in order to get the best possible security).
5. Where different options need to be made available for whatever reason, the most secure option should always be the default.
6. Completely replace the model of certificate authorities and certificates. In particular, the only information this new system should verify is the link between the public key and the domain. The client shouldn’t need to know that http://www.google.com is owned by Google Inc. of Mountain View, California. All they need to know is that the public key they have is the one put out by the legal owner of http://www.google.com and not by an imposter (something which incidents like the DigiNotar hack have shown the current system to be no good at doing).

Thoth April 10, 2014 2:12 AM

@Jonathan Wilson:
The flaws of SSL/TLS have long been known, but the resistance to replacing it is huge. There are so many protocols out there, like DNSSEC and all that stuff, that provide secure communications, but the adoption rates are very slow or nonexistent. Some reasons are commercial or governmental pressure to use weak protocols and mask them as being something good.

Proposals for such secure protocols keep cropping up, but after all, the weakest link is seldom the maths. It’s usually program errors like Heartbleed, human ignorance or deviance, pure carelessness and many more.

It’s not to say that we should not consider the creation of new secure protocols. What is more important is to speed up the rate of realization that existing protocols require some work. Most ordinary users don’t care much about DH key exchanges, AES ciphers and whatnot. They just want something that is secure out of the box, and they usually use it in a forgetful and clumsy manner.

What is truly essential now is to figure out how to bring more awareness of personal security, because most users are simply ignorant. They won’t feel the pinch until something has already hit them. Two possible outcomes from Heartbleed: users become averse to the Internet, or users simply forget about it and go back to being clumsy as usual.

If there is a greater awareness and greater demand from users to actually want to protect their self-interest, then the greater demands for more “proper security” would actually create certain mindsets and behaviours to drive such goals into reality. Otherwise all these secure communications protocols would be some novel projects sitting in the corner of the room.

Sancho_P April 10, 2014 2:27 AM

If they knew it would be treason.

In case they didn’t know:

Given the fact that they did not protect us:
The point of our national security agencies is highly questionable.

What is their duty?
To “analyze” the conversation between Angela Merkel and her hairdresser?
Probably the focus for their technicians and scientists should be changed?

Benoit April 10, 2014 2:43 AM

In my opinion, the simplest and easiest reaction to this mess is: web services should hash passwords on the client side, and never – ever – send them in clear form.

Using SSL to protect the channel is not a good reason to transmit a password in clear form: Facebook, Twitter, Gmail & Co should not know your password; you probably use the same one on many other services.
Password leaks are probably the most complicated thing for IT teams to fix (updating the openssl library and regenerating keys is not so complicated after all…).

I wrote several lines on this in my blog

Simon Capstick April 10, 2014 2:54 AM

Having patched, rebooted, generated new SSL key pairs and CSRs, installed new certificates, and restarted web servers…

There is no way for others to verify I have done this other than taking my word for it. The new certificates have been issued by the CA/reseller with the original ‘valid from’ dates, with only the serial number and fingerprints having visibly changed.

Worse is that nobody, without use of a non-OpenSSL VPN client, can now safely use an untrusted network such as a WiFi hotspot. Anyone malicious in control of a hotspot could now perfectly impersonate any patched (or non-patched!) site that leaked its private SSL key over the last two years. Use of CRLs (Certificate Revocation Lists) on consumer devices may be patchy at best? CRLs themselves may not get updated for days/weeks.

B April 10, 2014 2:58 AM

I wonder if there is really a point in generating new passwords and certificates. After all, if an attacker has my old keys and passwords, shouldn’t I assume they installed a root kit on my server? So if I just generate new keys, they’ll immediately have them again via the root kit.

Robin April 10, 2014 2:59 AM

I’m actually more disturbed by the Microsoft closed-source crypto libraries. They might contain similar bugs, but since they are closed source, there are far fewer chances that bugs in Microsoft Internet Information Server, Microsoft Crypto API, and Microsoft Schannel get fixed.

MVPs are independent professionals who have access to all Microsoft source code. They are under NDA, but if they were to spot something, it’d be fixed.

Anura April 10, 2014 3:00 AM

@Benoit

Web services should hash passwords on the client side, and never – ever – send them in clear form.

Unless it involves some sort of asymmetric algorithm, there is no way that can protect you. At best, it means if your password has enough entropy and it is used for multiple things, then you don’t have to change it everywhere. Most passwords don’t have enough entropy. Hell, even if you use some sort of asymmetric algorithm, if your password doesn’t have enough entropy then it is just as brute forcible. Regardless, most people need to change their passwords.

anon April 10, 2014 3:06 AM

Bruce, you say “This means that anything in memory — SSL private keys, user keys, anything — is vulnerable.”

I don’t think this is correct. On Linux you will only get access to memory owned by your process id, so “anything in memory” is not true: your sshd private key is safe, for example.

Benoit April 10, 2014 3:14 AM

@Anura

Brute-forcing weak passwords will always exist as long as we use passwords as the main security secret for the general public.
The main problem here is that anyone can attack an unpatched server and randomly gather logins/passwords. In this case you might find, by chance, pretty strong passwords.

Hashing them on the client side first with JavaScript could FIX this. Websites use Ajax to set up complex protocols (fetching page updates, completing fields…); why not use the same technology to set up password hashing with a salt?

anonymous from switzerland April 10, 2014 3:34 AM

About hashing passwords on the client side – maybe stupid to post without more consideration and research, but:

How about hashing something like this together:
– Password
– Current time
– “Service destination” (e.g. “https://www.amazon.com/”)
– A random salt value

The server thus gets a hash that is only valid for a certain time window and only for its service destination. This provides already some additional level of protection – a simple replay of the same hash won’t do.

But does that already protect the password sufficiently, assuming the user chose a really strong password (as many true random bits as the hash length)?

I presume most likely not for the example above in the generality that it stands, but maybe a more specific algorithm could be made as trusted as standard cryptographic algorithms like AES etc.?

Or is there a fundamental problem with this approach? I presume there is previous work into that direction and obviously I should just take a look…

BadMemory April 10, 2014 3:52 AM

Somewhere in the documents about the NSA’s ability to break encrypted internet traffic this ability was called “fragile”. Maybe because it was based on a bug introduced to open source code?

anonymous from switzerland April 10, 2014 3:54 AM

Sorry, that was imprecise near the end: Can you derive the hash for a different time and/or a different destination from the knowledge of the hash for a given time and destination? (If the password contains as many true random bits as the hash, there is no way to recover the password (possible weaknesses of the hash function set aside).)

anonymous from switzerland April 10, 2014 3:58 AM

Better forget what I wrote, stupid, what I proposed would also require the server to know the plain password.

Anura April 10, 2014 4:00 AM

The problem is that with any symmetric algorithm, the server has to retrieve both the submitted and stored value and keep them in RAM to perform the calculations. If it is dynamic, then you have to store the value that the computations are performed on, so a compromise of the database means all accounts are 100% compromised. This is as opposed to letting the client send a static value, which the server can then feed to an algorithm like PBKDF2 or bcrypt to make brute-force attacks more difficult in case of DB recovery. However, the value submitted can still be recovered in this case, but only if it is in memory during the attack. My recommendation? Zero out all sensitive data immediately after you no longer need it.

To reiterate what I said before: client-side hashing might protect your users from having their passwords cracked on other sites, but not yours, and only if their passwords are strong enough to not be brute forcible.
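For the curious, a minimal server-side sketch using OpenSSL's own primitives (PKCS5_PBKDF2_HMAC, RAND_bytes and OPENSSL_cleanse are real OpenSSL functions; the iteration count and sizes here are illustrative choices, not recommendations):

    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <openssl/crypto.h>

    /* Derive a slow, salted hash of whatever the client submitted and wipe the
     * plaintext copy as soon as it is no longer needed. Returns 1 on success. */
    int store_password(char *password, unsigned char salt[16], unsigned char out[32])
    {
        int ok = 0;
        size_t len = strlen(password);

        if (RAND_bytes(salt, 16) == 1)                 /* fresh per-user salt */
            ok = PKCS5_PBKDF2_HMAC(password, (int)len,
                                   salt, 16,
                                   100000,             /* iteration count: tune upward */
                                   EVP_sha256(),
                                   32, out);

        OPENSSL_cleanse(password, len);                /* zero the plaintext copy */
        return ok;
    }

The salt and the derived hash get stored; the plaintext never does, and it is cleansed from memory right after use, which is exactly the "zero out sensitive data" point above.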

Anura April 10, 2014 4:08 AM

@anonymous from switzerland

Don’t sweat it; I’ve gone through the same exact thought process in the past. I imagine every cryptographer who has ever considered password security has come up with that same idea at one point before realizing the obvious flaw.

Dylan Smith April 10, 2014 4:21 AM

@anon:

I don’t think this is correct. On Linux you will only get access to memory owned by your process id, so “anything in memory” is not true: your sshd private key is safe, for example.

No, it’s absolutely correct. When the OpenSSL library allocates a chunk of new memory, the VMM could assign that from anywhere in physical RAM. It could be that this new chunk of physical RAM was just freed by another process. If that chunk of memory happened to include something sensitive, bad luck.

Of course it’s far more likely that the memory you can read is something used recently by OpenSSL itself, but generally the stuff in OpenSSL’s memory is most likely to be just the stuff you don’t want to leak out to third parties.

Uhu April 10, 2014 4:48 AM

@Benni (RetroShare)
One of the first things it says on the web site of RetroShare is this: “…using [..] OpenSSL to encrypt all communication”. We are talking about a bug in OpenSSL, and as a solution you propose to use a product that uses OpenSSL? 🙂

From the Outside April 10, 2014 4:57 AM

Is it pure coincidence that the end of Windows XP support and the detection of this flaw fall together?

Skeptical April 10, 2014 5:15 AM

@Anura: No question that Snowden leaks have increased public attention on security, but most of the public isn’t concerned that NSA will be harming them. It was surprising to me how many people polled had, without following the stories closely, taken the worst possible interpretation of those stories as truth (literally that NSA is listening to every phone call without a warrant) and then shrugged (this is why Obama repeated that “no one is listening to your calls” btw; it’s not clever language, but an attempt to correct a perception captured by polling that much of the public believed that that’s what the metadata story actually reported). But that “shrug” is very consistent with the perception that the NSA isn’t actually harming anyone (I’m not condoning shrugging if one believes NSA is listening to every call, with or without a warrant).

So if they hear “a bug that lets the NSA break into systems,” the response will be “meh.” However, if they hear “a bug that lets cybercriminals grab their passwords, steal their money, and destroy their data,” then they will pay very close attention.

Regarding Ada: I dimly recall hearing somewhere that the biggest obstacle to its wider adoption is the lack of libraries (being very technically ignorant, I remain mystified as to how the availability of public libraries affects the adoption of a programming language). Is there any truth to that claim? And would its broader adoption resolve bugs of this nature, or is it more a matter of implementing programming practices and debugging software, irrespective of language, that can prevent and detect such bugs in advance of a release?

Chris W April 10, 2014 5:19 AM

I have a TP-link router, on which I immediately installed OpenWRT when I bought it. (good idea coz the original firmware apparently had a security hole, but that’s another subject.)

Checked ssl version 1.0.1e… upgraded to 1.0.1g, done.
The advantage of not having to wait for a supplier firmware update. 😉

Bruce asked about low-end non-upgradable devices. Unfortunately I don’t have one with an interface that lets me check versions. Those devices might not even run OpenSSL, or might run older versions; in any case it’s gonna be difficult to determine, since non-upgradable devices rarely reveal what version of software is embedded.

@Dylan Smith

Sorry, that’s wrong. In any protected-memory OS, newly allocated pages are zeroed. If that were not the case, any malicious userspace process would be able to steal everything on the device.

And from what I understand this isn’t a malloc-related issue, it’s a buffer overread. The attacker basically says “I’ve got 64K of payload I want you to send back to me” while the actual payload is way shorter (packet = [message type, payload length, actual payload, some padding]). The code copies the received payload to the send buffer using the length specified by the attacker, thus reading beyond the receive buffer bounds.
I’ve only skimmed the code, so I might be wrong.
Of course, like so many others said already, adding a payload to a heartbeat other than a fixed-size sequence number is just idiotic design.
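A heavily simplified, self-contained sketch of that pattern (illustrative code, not the actual OpenSSL source):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* rec points at a heartbeat record: [1 byte type][2 byte claimed length][payload...] */
    size_t build_heartbeat_reply(const unsigned char *rec, size_t rec_len,
                                 unsigned char **reply)
    {
        uint16_t claimed = (uint16_t)((rec[1] << 8) | rec[2]);   /* attacker-controlled */
        unsigned char *buf = malloc(3 + (size_t)claimed + 16);   /* type + len + payload + padding */

        if (buf == NULL)
            return 0;
        buf[0] = 2;                              /* heartbeat_response */
        buf[1] = (unsigned char)(claimed >> 8);
        buf[2] = (unsigned char)(claimed & 0xff);

        /* BUG: copies 'claimed' bytes even if the record only carried a few,
         * echoing up to ~64KB of adjacent heap memory back to the peer.
         * FIX: if (3 + (size_t)claimed + 16 > rec_len) { free(buf); return 0; } */
        (void)rec_len;                           /* the bug is that this is never consulted */
        memcpy(buf + 3, rec + 3, claimed);

        *reply = buf;
        return 3 + (size_t)claimed + 16;
    }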

Quirk April 10, 2014 5:44 AM

@Nick P:

I don’t believe moving to managed code would have helped at all in this instance. The bright sparks working on OpenSSL had customised the memory allocator to “improve performance”, and it was these customisations to the memory allocator that caused the trouble. It would also be possible to do custom memory allocation in managed code land, and that they felt they had to hack memory allocation to improve performance strongly suggests that anything that they would have been doing in managed code would not have been any safer. If anything, I’m happier with a world where people are working on such issues in C and C++, in a community where an understanding of the risks of raw memory accesses is now reasonably widespread, than I would be with a world where people struggling to torture managed code to deliver high performance on low level applications had to also understand the security implications of their optimisations.

The real killer was that they were using unsanitised user input to work out what should be given back. Having language features to discourage this would definitely be a plus, though it’s always going to be hard to prevent incompetent developers from being incompetent developers.

Clive Robinson April 10, 2014 5:48 AM

@ anonymous from switzerland,

    Better forget what I wrote, stupid, what I proposed would also require the server to know the plain password.

Incompleat not stupid.

Instead of password think token, then think of two different ways of sending it.

Problem one, how to get your token across to the server securely when you create/update it.

Use Diffie-Hellman or equivalent to create a one-time key, use this to encrypt the token and send it across securely to the server. The server decrypts the token, puts it in its DB, and securely junks the one-time key, as does the client. I’ll leave the issue of making the DB at the server secure as a design choice, as there are known ways to do it.

Now go back and think through the token generation from the user password that needs to be done on the client side.

Then see if it makes your system work.

Clive Robinson April 10, 2014 6:17 AM

@ Chris W,

    Sorry, that’s wrong. In any protected-memory OS, newly allocated pages are zeroed. If that were not the case, any malicious userspace process would be able to steal everything on the device.


When the OS allocates a new page to the process heap what you say is –usually– true. BUT now consider the case when the process heap already has sufficient space from a free to the malloc code…

Many years ago Sun had a bug in some of their code that in effect attached part of the password file to the end of a file, due to this malloc/free issue. They fixed it initially by changing the call from malloc to calloc. That is, calling free did not clear the memory, nor did calling malloc, and the version of malloc they had used a memory-selection algorithm that tended to return the same piece of memory that had just been freed if the sizes were the same.

As others have noted, OpenSSL had its own –supposedly more efficient– malloc. Now I’ve not looked at that code so I can’t pass direct comment on it. However I will say that in general, when code cutters try to make things more “efficient” they tend to remove things they think waste CPU cycles; one such thing is zeroing large blocks of memory either before or after use. It is, after all, the reason we have both calloc and malloc in the first place.
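A small demonstration of the malloc/calloc point (what you actually see is allocator-dependent, so treat this as a sketch, not a guarantee):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *a = malloc(32);
        strcpy(a, "hunter2-or-a-private-key");
        free(a);                        /* free() does not clear the bytes */

        char *b = malloc(32);           /* often the very same block handed back */
        printf("possibly leaked: %.32s\n", b);   /* may well print the old string */

        char *c = calloc(1, 32);        /* calloc guarantees zeroed memory */
        printf("always clean:   %.32s\n", c);

        free(b);
        free(c);
        return 0;
    }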

Vicky April 10, 2014 8:14 AM

One question I have is whether keys / certificate information have a recognisable structure? For example if I request 64Kb of memory and I get back a bunch of stuff with what looks like a JPEG header in it, I can make a good guess that it’s a JPEG and decode it accordingly and see the results. Similarly if there’s plain ASCII text in there, it’s easy to try interpreting it.

But wouldn’t key or certificate data held in memory just be indistinguishable from random garbage? (And presumably random garbage will in fact be most of what I get…)

Benni April 10, 2014 8:28 AM

@Uhu,
“@Benni (RetroShare)
One of the first things it says on the web site of RetroShare is this: “…using [..] OpenSSL to encrypt all communication”. We are talking about a bug in OpenSSL, and as a solution you propose to use a product that uses OpenSSL? :-)”

Thank you for your comment. I have reported this to the retroshare devs.

This was their answer:
http://retroshare.sourceforge.net/forum/viewtopic.php?f=17&t=4031

“Edit: it’s a good idea to update openssl though. If you’re on Linux a system update should do it. If you’re on Windows you might have to replace the openssl DLL in retroshare’s directory.

Edit: it’s vulnerable.”

When Retroshare was created, the developers were inspired by the way people living under oppressive regimes share information deemed “hostile” by their government (this can be books, newsletters, video and audio recordings, or even political jokes).

Because of the heartbleed bug, all retroshare users now need to first update their openssl library (system wide on linux, and in the retroshare directory on windows) and then create new user identities with new public and private keys, and add themselves again with their new identities.

yesme April 10, 2014 8:40 AM

The funny thing is that this bug is exactly what Snowden talked about when he said that he could read anything from anyone he was interested in.

The NSA analysts don’t have to know how the bug works, but they can get all the data they want. And it’s because of bugs like this one.

I expect to see more of these bugs. (Apache is high on my list.)

It also showed that Bruce Schneier was right when he said that nothing on the internet should be trusted.

And don’t think that this particular bug was created by a monkey who didn’t know how to code. It was created by a PhD student and peer reviewed. Which (again) shows that the C language has serious issues.

However the biggest problem of today is bloat. But the bloat is also a result of the sorry state of IETF / W3C protocols.

There are at least 6 protocols all doing roughly the same: remote access / filesystems.

1 – NFS (used to be called: No Fucking Security)
2 – CIFS
3,4 – FTP / SFTP
5 – FTPS
6 – WebDAV

Am I the only one questioning why? We should not be securing each protocol separately. We should have fewer and simpler protocols (think 9P) and one simple, mandatory secure layer.

Celos April 10, 2014 9:00 AM

This whole mess highlights an underlying problem: sloppy and incompetent software engineering. OpenSSL is rather obviously a security-critical piece of software; why do they not have mandatory time-of-use bounds checks? They cost nearly nothing in terms of computing power. They make it easy to reject patches from clueless people, as this one was. Honestly, I think the person responsible should have their PhD revoked for having caused severe damage in their field of study.

Chris W April 10, 2014 9:09 AM

@Clive

“BUT now consider the case when the process heap already has sufficient space from a free to the malloc code…”

Process heap is the operative phrase; that memory already belongs to your process. A malloc call will gladly preserve the data that was in a freed block, but you already had access to that memory anyway.
Within a process heap/memory block you can do whatever you want. But you will never ever get data that belonged to another process without a major kernel screw-up.

@Jacob
A protective malloc wrapper adds guard blocks around each allocation; these blocks are marked as unreadable (using mprotect; google it if you want). If a piece of code continues reading beyond the allocated block it will hit the guard block and get a big fat exception, which would’ve prevented the data theft.
(And without an exception handler the process would die, allowing an attacker to DoS every server affected, but that’s another matter.)
But such protective wrappers come at some performance cost. Whether it’s smart to use them for every malloc is debatable.
It probably would’ve been smart to surround every memory region containing sensitive info (private keys and session keys, for example) with those guards, which would have been negligible in terms of performance cost and at least prevented the theft of those keys.
But hindsight is easy.
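Roughly what such a guard-page wrapper looks like (a sketch of the general electric-fence technique, not anything from OpenSSL; freeing and alignment details are left out):

    #include <sys/mman.h>
    #include <unistd.h>
    #include <stddef.h>

    /* Allocate 'size' bytes right-aligned against an inaccessible guard page, so
     * any read past the end faults instead of returning neighbouring heap data.
     * (Freeing needs the original mapping base and length; omitted for brevity.) */
    void *guarded_malloc(size_t size)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t data_pages, total;
        unsigned char *base;

        if (size == 0)
            return NULL;
        data_pages = (size + page - 1) / page;
        total = (data_pages + 1) * page;              /* +1 trailing guard page */

        base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return NULL;

        /* Revoke all access to the last page: overreads now SIGSEGV. */
        if (mprotect(base + data_pages * page, page, PROT_NONE) != 0) {
            munmap(base, total);
            return NULL;
        }

        return base + data_pages * page - size;
    }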

L April 10, 2014 9:17 AM

@Anura, @sshdoor

Leave GnuTLS/OpenSSL alone.

You don’t really want to refactor that, not in C, and especially not x.509.

OpenSSL has 25k lines of code just for X.509, GnuTLS has ~35k. X.509 should just be abandoned.

Also, this problem came just from OpenSSL reimplementing the memory allocation on all platforms, as was commented here: http://article.gmane.org/gmane.os.openbsd.misc/211963

I’m in the process of writing the code for a new crypto/auth protocol; I’ll launch some kind of crowdfunding campaign around the end of this year. The basic protocol and implementation will already be there by that time, and the money will go into checking everything and making it faster.

So my advice for now is to wait a little bit while I complete my thesis, and then give me a hand: rewriting OpenSSL is huge and kind of useless now.

KnottWhittingley April 10, 2014 9:18 AM

Clive:

That is, calling free did not clear the memory, nor did calling malloc, and the version of malloc they had used a memory-selection algorithm that tended to return the same piece of memory that had just been freed if the sizes were the same.

Yes. That’s common, both to save CPU time in the allocator and to improve locality. You don’t merge freed blocks together too eagerly, only to split them up into the same size blocks again, if you keep allocating and freeing a moderate number of blocks of roughly the same size(s). And by and large, you re-use the recently-freed stuff first, so that you’re mostly reusing memory while it’s still in your caches.

As others have noted, OpenSSL had its own –supposedly more efficient– malloc. Now I’ve not looked at that code so I can’t pass direct comment on it. However I will say that in general, when code cutters try to make things more “efficient” they tend to remove things they think waste CPU cycles; one such thing is zeroing large blocks of memory either before or after use. It is, after all, the reason we have both calloc and malloc in the first place.

Yes, and unfortunately it’s particularly common in “systems” code that’s supposed to be both efficient and very portable. Zero-filling modest amounts of memory is usually very cheap on most modern architectures if you do it right, because you don’t need to read the cache blocks before writing them and you can write 16 or 32 bytes in parallel with one store instruction, and most processors are just really fast these days anyhow. But very portable software can’t assume any of those things, much less all of them.

I would hope that such bugs would be caught by a code review, because there’s an obvious channel there, and people ought to think about what could ever be in that payload section.

Failing that, I’d have thought it’d probably be caught in standard regression testing. Is there not a brute-force test to look at all the outputs of high-level operations and see whether low-level information is leaking into them in the clear? (By e.g., hashing all the secret data at lower levels, and the data exposed at higher levels, and looking for matches?)
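The brute-force version of that check is simple enough to sketch (a verbatim substring scan here; a real harness might hash or chunk the secrets instead, as suggested above):

    #include <string.h>
    #include <stddef.h>

    /* Returns nonzero if 'secret' appears verbatim anywhere inside 'output'.
     * In a regression test, run this over every buffer that leaves the process
     * for every low-level secret the library holds. */
    int leaks_secret(const unsigned char *output, size_t out_len,
                     const unsigned char *secret, size_t sec_len)
    {
        size_t i;

        if (sec_len == 0 || out_len < sec_len)
            return 0;
        for (i = 0; i + sec_len <= out_len; i++)
            if (memcmp(output + i, secret, sec_len) == 0)
                return 1;
        return 0;
    }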

Codename Charlie Foxtrot April 10, 2014 9:24 AM

You can always learn a lot from skeptical’s NSA glavlit. You can tell what scares them most.

First they fear for their strawmen and lies, which Snowden just exploded by explaining how the NSA Stasi actually works, http://www.emptywheel.net/2014/04/09/fingerprints-and-the-phone-dragnets-secret-correlations-order/

NSA tax parasites are still trying to reason from their own daily affirmations that they are well-meaning and not criminal. Like there’s anybody left who doesn’t know that NSA will use every scummy NKVD kompromat trick in the book, if they decide they don’t like you, https://www.aclu.org/national-security/nsa-reportedly-sought-discredit-radicals-disclosing-online-sexual-activities

Then we have NSA trying to model how you should think, or rather shrug so what. Today’s Big Lie: ‘NSA isn’t actually harming anyone’ with its systematic sabotage of global communications infrastructure. Tell that to Canada, while their revenue is cut off by the threat to online tax filing. Now multiply that by the OECD membership – twice, subtracting one, for the network effects of NSA sabotage – and repeat for affected commercial concerns. Then apply the settled law of state responsibility to figure reparations, compensation and satisfaction. Alexander’s epochal fiasco will keep ICJ and arbitral panels busy for a decade, and beggar DoD.

Good luck proving your innocence. Anons and PMers have already got NSA cutouts Stephen Henson and Robin Seggelmann hopelessly, undeniably pegged. Surely you’re monitoring the forums. No, not those forums, the good ones.

KnottWhittingley April 10, 2014 9:56 AM

Reportedly,

The vulnerability in encryption software OpenSSL was discovered by Google researcher Neel Mehta and the security firm Codenomicon.

If NSA or some other intelligence agency knew about this, I’d expect them to eventually tip security researchers off somehow—either by just telling some compliant person at some security company, or by doing something to anonymously tip security people off, e.g., forcing server crashes that draw attention to the bug and look like consequences of, say, Chinese botnets trying to exploit it.

I’d expect that NSA would tip the security community off one way or another only when they notice that the exploit is being attempted by unfriendlies—the Russians, the Chinese, criminal gangs, etc.

So the normal thing would be that these things would eventually be “discovered” by some security researcher somewhere, and that we’d be none the wiser. Unless you investigate it very thoroughly, you’d usually have no way of knowing that the discovery wasn’t just a run-of-the-mill bug discovery.

I’d expect the NSA to often know about such things, because they have the resources to do more and better code reviews of open source and stolen source than the code developers and maintainers do, and because they presumably have an extensive set of honeypot systems that they analyze to see what the hell the “bad guys” are up to.

Am I wrong? Is there some way we could tell if this alert came from NSA or GCHQ—or any old intelligence agency in the Netherlands or India or Latvia or wherever—that didn’t think the whole Internet should be shot full of gaping security holes?

Bingo April 10, 2014 10:11 AM

@KnottWhittingley: Maybe a retrospective analysis of some saved internet traffic can answer your question? If the mismatch between heartbeat packet sizes (IN/OUT) stops now but can be detected in the past, somebody stopped using the exploit?

What I do not understand about Heartbleed is: how could such an error be introduced in 2012? There was no new functionality, or was there? If so, why did someone change the code at all? Did they try to fix other bugs and introduce Heartbleed on the fly? Or was this a complete rewrite for the sake of efficiency? Why then was it still written in C? Why is nobody publishing the comments on the commit of the first buggy version?

???

JJ April 10, 2014 10:25 AM

Thanks to those posting links to company statements and the log from Tuesday’s scan.
I am confused though that several big “advice sites” urge users to change their gmail password even though google wrote that not gmail but only minor services were vulnerable to the bug.

Here is another thing I am trying to understand (please excuse my ignorance): what is the risk of having an unpatched version of OpenSSL on your router (e.g. DDWRT or Tomato)? Any SSL traffic from my computers is just passively routed through, right? Or is the OpenSSL implementation on the router involved in regular SSL traffic from connected computers as well?

William Clark April 10, 2014 10:40 AM

I have not received a single e-mail from any service disclosing they were vulnerable and have taken the correct steps to fix that, even from two services that were widely reported to be vulnerable: Steam and Yahoo.

When the services I know were vulnerable (and have my e-mail address) won’t disclose that they patched the bug, created new keys, and had them certified, much less ask me to change my password, I have to assume they haven’t. No longer being vulnerable to the bug doesn’t mean that that their keys haven’t been copied, after all. And other services I have used or may use may have once been vulnerable and not done everything they have to do to be secure again, and I have no way of telling with them either.

Am I wrong in assuming that in an environment where non-disclosure appears to be the norm, all services are by default insecure, and can only be trusted if they WERE vulnerable AND have publicly disclosed the process they went through to restore security AND that process included all the actions needed to mitigate the damage?

Paul April 10, 2014 10:50 AM

This might have been mentioned before but as I write this…
LastPass’ tool says Paypal is not fixed.
The tool linked to in the article says it is fixed.

Just wondering who to believe?
Rough guess is LastPass is using cached data so might not be updated vs. the tool linked to in the article.

Poison Ivy April 10, 2014 11:10 AM

As to that eff.org article Bruce links to in “Possible evidence that Heartbleed was exploited last year”…

That article mentions two IP-addresses (193.104.110.12 and 193.104.110.20) as being the source of the attack noticed by Terrence Koeman.

The article then says that those IP addresses appear to be part of a larger botnet.

Is that a NSA-operated botnet? And if someone says “no”, how would we really know?

Anura April 10, 2014 11:28 AM

@Bingo

What I do not understand about heartbleed is: How can such an error be introduced in 2012? There was no new functionality, or was it? If so, why did someone change the code at all? Did they try to fix other bugs and on the flight introduced heartbleed? Or was this a complete rewrite for the sake of efficiency? Why then was it still written in C? Why is nobody publishing the comments on the commit of the first buggy version?

Yes, it was new functionality. They were implementing an extension to TLS, RFC 6520. The commit itself was discussed a little bit, but it’s not very interesting. The link was posted as well.

RFC:
https://tools.ietf.org/html/rfc6520

Commit:
https://github.com/openssl/openssl/commit/4817504d069b4c5082161b02a22116ad75f822b1

Benni April 10, 2014 11:42 AM

@Poison Ivy, yes the nsa seems to operate botnets

Oh my god: the IP

193.104.110.12 and 193.104.110.2

are not only in the range of a botnet that wanted to copy everything from freenode:

193.104.110.12 and 193.104.110.2

There was also this leak at cryptome (usually cryptome leaks are credible):
http://cryptome.org/0001/nsa-ip-update14.htm

In 2007, someone leaked an ip table. Of course cryptome can not reveal its source, but it said that these IPs are nsa controlled.

Then in 2014, cryptome reports that there is a botnet that is attacking websites from these alleged nsa IPs leaked in 2007:

http://cryptome.org/2014/03/nsa-zologize.htm

And now: the IP table reads:

“193.0.0.0 – 193.255.255.255 (subranges are NSA-affiliated and/or NSA fully-controlled)”

Apparently, the reported IPs 193.104.110.12 and 193.104.110.2 are in this range.

This, together with the fact that these IPs were used to spy on and copy every conversation on freenode, smells much like we have found an nsa botnet here.

Anderer Gregor April 10, 2014 11:45 AM

Two things are making me very nervous right now.

The first one is that, right now, millions of admins are generating new SSL keys, on freshly booted machines, with recently downloaded packages, of which only a very small number of distinct versions will exist. I could not imagine a better way to lower the available entropy for those keys… in other words, if there is anything, even the smallest thing, wrong in the RNG or the key generation of the two or three OpenSSL versions these admins are using for this, we will be in big trouble again sooner or later.

The other problem is that, right now, a couple of people seem to believe that it is a good idea to implement crypto stuff in some “safer” languages of their choice. Cryptocatting, so to say. So we will be going through all the protocol issues, timing attacks, side-channel attacks, RNG problems, and initialization issues again that we went through with the big&ugly ones like OpenSSL, SSH etc. over the last couple of years. With the extra fun that we are now doing this in programming languages where it is harder to make timing data-independent (because the language/machine/… and not the programmer decides which checks to do, which memory to free when, …), where it might be more difficult to obtain secure RNG states (see the Android (“Bitcoin”) RNG problem a while ago), etc.

Benni April 10, 2014 11:47 AM

ah sorry, fifth line of my post reading “193.104.110.12 and 193.104.110.2” should have been https://botmonitoring.github.io/ which is the site that reported them as spybots on freenode.

Notice that the attack I linked above from cryptome came with a php script called “zologize”. Zologize means collecting critters, insects or bugs. So yes, given that our agency usually names its projects “FLYING PIG”, “FERETCANNON”, “FOXACID”, “Zologize” would fit into the naming scheme for a botnet that aims to hijack large numbers of webservers to turn them into bugs, thereby collecting bugs.

Nick P April 10, 2014 11:59 AM

@ All re Password Handling

No new or guesswork constructions. This is INFOSEC. We just use what’s been proven in practice. OWASP has a nice cheat sheet on how to properly handle passwords. Use it instead of whatever clever idea you have unless you’re a security engineer or cryptographer.

@ BadMemory

“Somewhere in the documents about the NSA’s ability to break encrypted internet traffic this ability was called “fragile”. Maybe because it was based on a bug introduced to open source code?”

Interesting thought. It could be. It might also be ‘fragile’ in the sense that they weakened protocols, or that users sent plenty of plaintext assuming no harm. They’d be concerned trends could shift against them if people’s perception of the risk of these things changed.

@ Uhu

“We are talking about a bug in OpenSSL, and as a solution you propose to use a product that uses OpenSSL? :-)”

I saw that too lol. I thought “This is so ridiculous I’m not even posting a critique of it.” Theo de Raadt burning OpenSSL team was the highlight of my day so far, though.

@ Quirk

“The bright sparks working on OpenSSL had customised the memory allocator to “improve performance”, and it was these customisations to the memory allocator that caused the trouble. It would also be possible to do custom memory allocation in managed code land, and that they felt they had to hack memory allocation to improve performance strongly suggests that anything that they would have been doing in managed code would not have been any safer. ”

People writing in more safety-centered systems languages rarely do anything like that. So, it’s already more likely that it wouldn’t happen. That said, these people tried to cheat as you pointed out so this particular development team might shoot themselves in the foot in a safe language. But, it’s kind of a fake argument as you say “But, if they bypass the safety features with bad coding the safety features don’t help.” No kidding. They could also put their private key on twitter. But when do we judge a safety/security tool by how its more foolish users apply it?

“The real killer was that they were using unsanitised user input to work out what should be given back. ”

A good point. My rules of secure development say look at all input coming into the application and check it. It’s a pain that adds a lot of extra code to the app logic. Most just don’t want to do it. Far as tools, there are cutting-edge systems that tag all data with a provenance bit and impose restrictions on externally-generated data. Then, a bit more practical, there are software libraries & frameworks that do input validation of certain types. However, if it’s not well-understood and common case, I think it will always be up to the developer to just think about their application, identify potential input problems, and fix them.

Systems enforcing POLA, strong modularity, memory compartmentalization, etc. can also help here in containing damage. For instance, I had one design for a secure web server that broke it up into separate address spaces on a microkernel. The SSL & HTTP connection mgmt processes were both isolated. Message passing was the architecture, and web server components could only talk to the message router. They ask “may I,” and the router then checks source, destination, and size against the security policy before moving the message itself. Compromising any given component only allows you to send or receive messages authorized to flow through that component. So, a memory attack on OpenSSL would have trapped to the microkernel, which would send it to a process that handles this stuff. The process would log the error, optionally save the entire execution state of the SSL partition, notify the admin, and temporarily disable the web interface. The memory attack merely becomes a loss of availability that also tells me where the 0-day was. The power of sound security architecture. 😉
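A toy version of that “may I” check, just to illustrate the idea (all component names and limits here are hypothetical):

    #include <stddef.h>
    #include <string.h>

    /* One row per permitted flow; anything not listed is denied. */
    struct rule { const char *src, *dst; size_t max_len; };

    static const struct rule policy[] = {
        { "http_parser", "ssl_io",      16384 },
        { "ssl_io",      "http_parser", 16384 },
    };

    /* The router's "may I" check: source, destination and size must all match
     * a rule before the message is moved. Default deny. */
    int may_send(const char *src, const char *dst, size_t len)
    {
        size_t i;
        for (i = 0; i < sizeof policy / sizeof policy[0]; i++)
            if (strcmp(src, policy[i].src) == 0 &&
                strcmp(dst, policy[i].dst) == 0 &&
                len <= policy[i].max_len)
                return 1;
        return 0;   /* deny: upstream, log it, snapshot the partition, alert the admin */
    }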

ParanoidUser April 10, 2014 12:02 PM

Here’s a silly question – lots of “really popular” sites have been hit by huge DDOS attacks in the last two years. Do you suppose some of those attackers were using the Heartbleed vulnerability to harvest loads of encrypted traffic/information from those servers?

#shudder

Anura April 10, 2014 12:07 PM

@ParanoidUser

A DDOS is a poor way to collect information. You are going to end up filling the memory with a ton of junk. Most DDOS attacks seem to be for either political or financial gain, e.g. businesses bringing down competitors to increase their traffic, or people trying to get ransom.

This is a very light-weight attack.

Sad Clown April 10, 2014 12:13 PM

I am amused/surprised/curious about the references to Ada. Is that used outside of DoD? I am no fan of the complexity of C++, and I have always thought of Ada as worse, but that is really an uninformed opinion. Is it really a good language for networking and security? Any great website to get up to speed on it?

I am a long-time C programmer who has fallen in love with Go. Yes, yes, it comes from Google, but it comes as source, and the source seems clear enough to desk-check. Any opinions — or facts — about security concerns in using Go for encryption and secure networking?

KnottWhittingley April 10, 2014 12:18 PM

I’m not sure if this is what ParanoidUser means, but I was wondering if some apparent DDOS attacks are actually attempts to scoop up information using some exploit like Heartbleed, because somebody wants some information that’s on that computer right now—e.g., something about some user they’re particularly interested in who think is on that system now.

Sad Clown April 10, 2014 12:51 PM

@Anura Thanks, that was an amusing read. It really shows how little one can know about a language without actually programming in it.

I have no doubt about Go’s capabilities, although a lot of important capabilities are in packages, not the core language. I’m just looking for any heads-up security-wise. Its birth at Google gives me pause.

SoftmasterG April 10, 2014 1:16 PM

Are there any KNOWN cases of the random memory containing passwords, credit card numbers or private keys? I know there’s a possibility, but has it actually happened? How does a hacker turn random bits of memory into a clear picture? Sure, you can pull 64K of memory from a server, but has anyone done this continuously and then compared that to the actual contents of memory or to data that they knew was being processed? Sorry if I missed the answers to these questions.

William Clark April 10, 2014 1:28 PM

@SoftmasterG
The original researchers reportedly managed to. And in the comments to the first Ars Technica article on this, some comments alleged that they were posted using credentials gleaned from the hack.

Benni April 10, 2014 1:48 PM

@Nick_P

“”We are talking about a bug in OpenSSL, and as a solution you propose to use a product that uses OpenSSL? :-)”

I saw that too lol. I thought “This is so ridiculous I’m not even posting a critique of it.””

Actually, it is not as ridiculous as it seems. Retroshare is not an ordinary web server. For example, the full answer of this dev reads:

http://retroshare.sourceforge.net/forum/viewtopic.php?f=17&t=4031#p12264

Debian/Ubuntu: update libssl, and restart Retroshare.
Windows: we’ll publish a new installer/windows package tonight (Apr. 10) since ssl is statically linked

MacOS: make sure you’re not vulnerable. Apparently MacOS uses openssl 0.9.8 which is fine, otherwise, update libssl as well.

What to do next?

  • Since your private PGP key is stored encrypted in memory, it’s unlikely that an attacker can obtain it. If your PGP password is not strong, that still can be a problem;
  • your location keys (SSL keys) might be compromised by an attacker. Since Retroshare uses PFS, recorded conversations cannot be decrypted anyway. But it is advised to generate new locations for your Retroshare nodes.

In any case, if you’re dealing with sensible information, re-generate your keys.”

So, the private PGP key cannot get into the hands of the NSA, even with Retroshare being vulnerable, since it is additionally encrypted. But this key is needed to decrypt everything. Additionally, they have perfect forward secrecy enabled…

Retroshare has enough layers of security that the even the nsa will have to work really hard if they want to get in there.

Had Snowden used Retroshare to distribute his material, it very likely could still be read only by his chosen friends and not by the NSA.

Vatos April 10, 2014 1:56 PM

I think it is worth noting that the filippo.io test sometimes has to be run many many times before it will report that the site is vulnerable

mclearn April 10, 2014 2:00 PM

@Vatos: The filippo.io test site is under heavy load, which contributes to its inability to give precise answers all the time. A timeout error does not necessarily mean the vulnerability is absent, as he is careful to point out.

@SoftmasterG: Yes. I’ve managed to extract my own passwords and session cookies from the memory dump, giving me access to my own systems without authenticating.

Random832 April 10, 2014 2:07 PM

“In my opinion, the simplest and easiest reaction this mess is : Web services should hash passwords on the client side, and never – ever – send them in clear form.”

A client-side hash is a password equivalent unless you know what you’re doing.
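For example, here is a minimal sketch of what "knowing what you’re doing" means on the server side: salt and stretch whatever token the client sends, exactly as you would a plaintext password (OpenSSL’s PBKDF2; function names and parameters are just illustrative):

#include <stddef.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

/* Illustrative only: the client-side hash is the effective credential, so the
 * server must still salt and stretch it before storing it, or the stored
 * value can be replayed directly as a login. */
int server_store_token(const unsigned char *client_token, size_t token_len,
                       unsigned char salt[16], unsigned char stored[32])
{
    if (RAND_bytes(salt, 16) != 1)          /* fresh per-user random salt */
        return 0;
    /* 100,000 iterations of PBKDF2-HMAC-SHA256 over the client-supplied token */
    return PKCS5_PBKDF2_HMAC((const char *)client_token, (int)token_len,
                             salt, 16, 100000, EVP_sha256(), 32, stored);
}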

Nick P April 10, 2014 2:12 PM

@ Benni

“Retroshare has enough layers of security that the even the nsa will have to work really hard if they want to get in there.”

And it’s written in a risky language using shoddy libraries on platforms NSA has 0-days and automated attack systems for. I’m sure that this combination will be “really hard” for NSA to penetrate. 😉

QED

anonymous from switzerland April 10, 2014 2:17 PM

@ Clive Robinson (and Anura and all…)

Here’s a schematic “baby” approach for a way to authenticate and pay online using what I just called “onion hashes”:

Alice has a strong password (containing 256 true random bits), she uses that for all her online transactions.

She wants to buy things from Amazon with her VISA card.

Initially she shares the following with Amazon.com (SSL + DH):
– auth-token-shared-with-amazon = SHA-256(“<password>;amazon.com”)
– (her username(email) and address etc.)

And the following with VISA (how is TBD):
– auth-token-shared-with-visa = SHA-256(“<password>;visa.com”)

To authenticate at Amazon, she presents the following:
– auth-message = SHA-256(auth-token-shared-with-amazon +”;<username>;<timestamp>”) + “;<username>”

To pay for something, she presents the following:
– payment-message = SHA-256(auth-token-shared-with-visa + “;<amount-to-pay>;<card-number+exp+ccv>;<timestamp>”) + “;amazon.com;<amount-to-pay>”

(Note that Amazon has to contact VISA in order to verify the validity of the payment message.)

IF (and that is the crucial if) this "onion hashing" really works in the sense that knowing any one of the hashes but not the secret password does not reveal any useful information about the other hashes or the password, then the following seems to be true:

  • Amazon does not know your password, has no information about it
  • Anybody who discovers “auth-token-shared-with-amazon” (c.f. recent Adobe user DB leak) can use it to log in as you at Amazon, but it does not help in any way with logging in to other sites, even if the same secret password is used
  • Anybody who discovers an "auth-message" cannot use it to log in later on; an immediate second login within the same time window with the exact same "auth-message" has to be prevented by Amazon
  • Anybody who discovers a “payment-message” can do nothing with it, at the very most a replay of the same payment VISA to Amazon could be achieved if VISA/Amazon would not check for that
  • Amazon does not know your credit card number, expiry date, ccv, … (and hence loses also the burden of protecting that data – thus maybe a good incentive for corporations?)

Now regarding the “crucial if”: Generally I would think that hash functions are built such that they are safe in this regard (different inputs even if only slightly different must yield totally different output), but then again attacks against hashes usually aim – I guess – mainly at finding collisions, not at finding out HASH(secret+known1) from HASH(HASH(secret+known1)+known2), but I am definitely not an expert at that.

So, how big are the chances that I just “invented the wheel”? 😉

(And even if so, the approach I just presented has of course still some practical issues like the long password (enter in web form or in browser or from a separate device which transmits already hashed messages to devices connected to the web, … ?) and details about formatting the messages to be defined, but maybe if the rest is sound, that might be worth the effort? Your call, of course…)
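For concreteness, here is a rough sketch of the token derivation (OpenSSL SHA-256); the exact separators, message formats and timestamp handling are placeholders, not a finished spec:

#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Hex-encode SHA-256 of a string so hashes can be concatenated as text. */
static void sha256_hex(const char *in, char out[65])
{
    unsigned char d[SHA256_DIGEST_LENGTH];
    SHA256((const unsigned char *)in, strlen(in), d);
    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", d[i]);
}

/* auth-token-shared-with-amazon = SHA-256("<password>;amazon.com") */
void make_auth_token(const char *password, const char *site, char token[65])
{
    char buf[512];
    snprintf(buf, sizeof buf, "%s;%s", password, site);
    sha256_hex(buf, token);
}

/* auth-message = SHA-256(auth-token + ";<username>;<timestamp>") + ";<username>" */
void make_auth_message(const char token[65], const char *user,
                       const char *timestamp, char msg[256])
{
    char buf[512], h[65];
    snprintf(buf, sizeof buf, "%s;%s;%s", token, user, timestamp);
    sha256_hex(buf, h);
    snprintf(msg, 256, "%s;%s", h, user);
}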

mike~acker April 10, 2014 2:29 PM

this problem is a rank beginner’s error: you never allow user input to control your program. you process the user’s request to the extent that it is within the specifications of the program. that does not include over-running a wrong-length record or going out of bounds on an array.

we may be at a tipping point in software quality control where product liability will need to come under consideration. if so it will be necessary to define who will be responsible for what part of the needed quality process

first off an o/s should not allow itself to be modified by the activity of an application program. responsibility goes to the o/s oem

the o/s should not allow an app program to snoop on or modify the data belonging to another app. responsibility goes to the o/s for implementing storage protection and to the chip makers for the storage protection circuitry

the o/s should not allow an app program to open, read, execute from or write to a dataset that the originating user does not have permission to use. responsibility goes to the o/s oem

application programs should be monitored and controlled where possible to assure they are operating as intended and that such activity is proper.

these are just ideas — meant to be trampled on. hopefully something meaningful can derive from this mess we have experienced last few months.

remember: quality control is something you do not something you get.

Benni April 10, 2014 2:43 PM

@Nick_P

“And it’s written in a risky language using shoddy libraries on platforms NSA has 0-days and automated attack systems for. I’m sure that this combination will be “really hard” for NSA to penetrate. ;)”

Na, for sharing with friends of friends, the connections are anonymized in Tor style.

NSA has written a slide deck called "Tor stinks", saying "we will never be able to de-anonymize all Tor users"

http://www.theguardian.com/world/interactive/2013/oct/04/tor-stinks-nsa-presentation-document

when they say Tor stinks, then so does Retroshare, since both programs are using very similar mechanisms and techniques.

In fact, Tor, too, uses OpenSSL:
https://blog.torproject.org/category/tags/openssl

and it seems that they have much more problems than retroshare because of this bug.

Vatos April 10, 2014 2:43 PM

The test site reports “All good” for me, most of the time. It only occasionally reports “is vulnerable”

GP April 10, 2014 2:55 PM

Does anyone know why google hasn’t bothered to update their SSL cert? It’s still dated March 12. Have they known about/fixed the bug that long ago?

David in Toronto April 10, 2014 2:59 PM

@z

1.) If you’re changing keys anyway, now is a great time to go with 2048 bit keys and finally rid the internet of 1024 bit keys

Short version: In case you didn’t hear, all 1024-bit certs are gone as of last December.

Longer version:
A body that certifies CAs, the CA/Browser Forum as I recall, read NIST’s guidance on 1024-bit RSA and decided last February to mandate that such certs a) not be renewed after 2013 and b) get revoked if they expire after 2013, because they are insecure. Several CAs got proactive and started revoking in October. So they’re all gone by now.

Well almost.
* They kind of missed that there are a lot of embedded devices out there using 1024-bit certs that can’t be easily updated. I’m not sure what happened there. I am sure that some of these devices didn’t support CRLs or the newer replacement and are likely still operable. And because I haven’t seen an industry meltdown, I have to assume it either wasn’t an issue or there’s an exception list.
* Also, they grandfathered a bunch of 1024 signing certs from prior to 2010. These were the ones that were used to bootstrap the early 2048 certs into place. I also find this hilarious because if all 1024 bit certs are insecure, then clearly revoking all the end points and not revoking the root signing certificates addresses the biggest risks.

I have real problems with how they went about this. Short time frames. No visible public discussion. No consideration for impact. And an apparent double standard. All rushed through because of a perception. Very ad hoc.

Yes, it should have been done, but it was ham-handed and could have caused real problems the way it was done. IMO they dodged a bullet; a bit more time and planning would have been a lot safer.

David

CallMeLateForSupper April 10, 2014 3:12 PM

@Bruce Schneier
“[…] if this turns out to be something like the Y2K bug — then we are going to face criticisms of crying wolf.”

I think that would not be a big problem. Maybe an inconvenience. I’m sure there would be more than enough voices explaining that the reason the sky didn’t fall is precisely because of the whoop-de-doo and mad scurrying around.

Too, DHS has been taking flak since… lemme see…ALWAYS … because NSA continues to spend eye-watering amounts of cash and cannot point to even one foiled terrorist attack, and yet their credibility is intact. No, wait….

kashmarek April 10, 2014 3:17 PM

Question: is heartbleed a ruse?

That is, perhaps NOBODY is getting any critical information from the 64K memory blocks, but somebody is poised to collect information on all of the new password change requests. Spy agencies would be the best guess here.

Anura April 10, 2014 3:19 PM

@anonymous from switzerland

The extra hash doesn’t actually protect you from this type of attack, because that token can still be recovered (and it also has to be stored). As for the token:

SHA-256("<password>;amazon.com")

If I recover Amazon’s password database, then I can guess passwords (dictionary attacks get pretty sophisticated). Without each password being uniquely hashed, I just have to run through my algorithm once to find all the matching passwords in the database. It also allows you to authenticate with Amazon without any effort whatsoever.

If instead I have a random salt, I have to run through my algorithm for each password until I find a match, and then I can move on to the next password. This significantly ups the time, but password entropy is generally low enough that it’s still pretty trivial to break most passwords in the database.

So you would have to use a slow hashing function on Amazon’s side, which is basically what we are doing now. Even if you do, a large chunk of the passwords will still be compromised since entropy is often close to non-existent, it just takes longer.
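To make that difference concrete, a toy sketch (made-up names and data, SHA-256 standing in for whatever fast hash the site uses):

#include <openssl/sha.h>
#include <stdio.h>
#include <string.h>

/* Against an UNSALTED database, one pass over a dictionary recovers every
 * user who chose any of those passwords, because identical passwords always
 * hash to identical stored values. */
struct row { const char *user; unsigned char hash[SHA256_DIGEST_LENGTH]; };

void crack_unsalted(const struct row *db, size_t nrows,
                    const char **dict, size_t nwords)
{
    unsigned char h[SHA256_DIGEST_LENGTH];
    for (size_t w = 0; w < nwords; w++) {            /* hash each candidate once... */
        SHA256((const unsigned char *)dict[w], strlen(dict[w]), h);
        for (size_t r = 0; r < nrows; r++)           /* ...and compare to every row */
            if (memcmp(h, db[r].hash, sizeof h) == 0)
                printf("%s uses '%s'\n", db[r].user, dict[w]);
    }
}
/* With a per-user random salt the hash call moves inside the inner loop, so the
 * attacker's work is multiplied by the number of users; a slow, salted KDF
 * multiplies it again by the iteration count. */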

So what’s the solution? Well, the only way to protect your information is to not allow the server to have enough information to impersonate you in the first place. There is a way to do that, but from a usability point of view it’s pretty bad, and that is client certificates.

With an RSA client certificate, you essentially sign a unique message to verify your identity. This is great because, assuming a large enough key, the server never holds anything that would let an attacker impersonate you.

If we used Diffie-Hellman instead (and I’m not going to look up how the TLS standard does it), then you combine your respective private and public keys to generate a shared secret; the problem with this is that if the server is compromised and you are using static keys, then anyone can reuse your shared secret to impersonate you. Note that the bigger problem in this case is that the server’s certificate has been compromised, so the shared secret itself is less of a big deal, and it’s fixed as soon as they renew their certificate.

Instead of using your static keys together to generate a shared secret, you can also use ephemeral keys. By generating two shared secrets, one with the client’s ephemeral keypair and the server’s static keypair, and one with the server’s ephemeral keypair and the client’s static keypair, each shared secret is only used once. It’s still a problem if the server’s static key is compromised, but there’s not much you can do about that. However, it does provide perfect forward secrecy, as both one of the static private keys and one of the ephemeral keys (or the derived key) have to be recovered to break the communications.

So again, the problem here is usability. However, that’s because a protocol doesn’t exist. Let’s say we had an extension to HTTP where the client can generate a static key for each host, sends them a certificate over an encrypted connection (or even just a public key) and the server then signs that certificate (or just stores the public key alongside the account information). The client can then use that to authenticate, and then compromising the server won’t add any additional risk for the client (well, unless they use the attack to trick you into installing malware onto your computer) beyond the server losing its private key, which you can’t really mitigate anyway.
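A rough sketch of what that authentication step could then look like (hypothetical helper functions around OpenSSL’s EVP signing API; registration, nonce transport and key storage are hand-waved):

#include <openssl/evp.h>

/* Challenge-response with a per-site client keypair: the server sends a fresh
 * nonce, the client signs it, and the server verifies against the stored
 * public key.  No password-equivalent ever crosses the wire or sits in the
 * server's database.  The sig buffer is assumed large enough, with *sig_len
 * holding its capacity on entry. */
int client_sign_nonce(EVP_PKEY *client_priv, const unsigned char *nonce,
                      size_t nonce_len, unsigned char *sig, size_t *sig_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_create();
    if (ctx == NULL)
        return 0;
    int ok = EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, client_priv) == 1
          && EVP_DigestSignUpdate(ctx, nonce, nonce_len) == 1
          && EVP_DigestSignFinal(ctx, sig, sig_len) == 1;
    EVP_MD_CTX_destroy(ctx);
    return ok;
}

int server_verify_nonce(EVP_PKEY *client_pub, const unsigned char *nonce,
                        size_t nonce_len, unsigned char *sig, size_t sig_len)
{
    EVP_MD_CTX *ctx = EVP_MD_CTX_create();
    if (ctx == NULL)
        return 0;
    int ok = EVP_DigestVerifyInit(ctx, NULL, EVP_sha256(), NULL, client_pub) == 1
          && EVP_DigestVerifyUpdate(ctx, nonce, nonce_len) == 1
          && EVP_DigestVerifyFinal(ctx, sig, sig_len) == 1;
    EVP_MD_CTX_destroy(ctx);
    return ok;
}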

As for credit cards, I think they should be replaced with keypairs as well. As long as the merchant has enough information to charge your card without you present, it is a breach waiting to happen (in fact, it’s happening all the time).

Of course, now you have to exchange your private key with your various devices, which can be a usability problem for that as well. The likely solution will be to store your private keys on the cloud… Problem Solved!

anonymous from switzerland April 10, 2014 3:29 PM

@Anura

Please note the initial (crucial) assumption:

Alice has a strong password (containing 256 true random bits)

Anura April 10, 2014 3:38 PM

@anonymous from switzerland

If you have that ability, and it’s a bad assumption to make, then you might as well have a random key for each website and store a hash of that on the server, avoiding complexity. Either way, if the memory is scraped you have to change your key with the compromised website. If you are using symmetric keys (passwords/passphrases or random keys in this case) they have to either store or receive enough information to authenticate you.

A nonny Mouse April 10, 2014 3:48 PM

A lot of sites are saying "we’re not affected", but what they really mean is their SSL server on 443 is unaffected. It’s still likely that some of their back-office tools are compromised. What if their vhost is fine, but it’s virtualized and the hypervisor is vulnerable, or they have some subsystem that can be compromised to gain access to the network? What if their upstream router, or firewall, or terminal server, or their LOM management system is vulnerable? Or their inward-facing systems? Most attacks come from within, and people are scrambling to give themselves a premature clean bill of health.
I think this will get worse before it gets better.

Also not widely commented on are the links between Codenomicon and Microsoft, namely their chairman of the board Howard Schmidt, formerly Microsoft’s chief security officer. He should know about responsible disclosure procedures, right? Yet that has been completely ignored.
Feels very man-behind-the-green-curtain, doesn’t it? Same tired old dog from so many past battles, same tricks.

John P April 10, 2014 4:10 PM

@someoneElsewhere

Just built the latest version of mutt (1.5.23) from mac homebrew. It linked against openssl 1.0.1g, which is patched against CVE-2014-0160, per the NEWS file in the source directory.

Buck April 10, 2014 4:14 PM

@A nonny Mouse

Everything upstream is vulnerable… Heartbleed or not, there are bound to be enough bugs (and at least one ‘feature’ – CALEA) in network devices to facilitate the compromise of any downstream server. However, if these devices are not under control of the entity in question, it becomes “someone else’s problem”. This lack of capability/responsibility to do anything would make any litigation efforts totally worthless.

Though I will admit that embedded edge devices could prove quite the challenge for any providers lucky enough to be stuck with vulnerable proprietary hubs, routers, switches, etc…

Has anyone been compiling a list of networked appliances that come with a copy of OpenSSL 1.0.1?

Direct hit to the Starship Bridge April 10, 2014 5:14 PM

Thanks much for the information on zologize, a telltale smartypants affectation of the Maryland Procurement Office. In systematic arbitrary use,

https://www.eff.org/deeplinks/2014/04/wild-heart-were-intelligence-agencies-using-heartbleed-november-2013

this is "une activité préjudiciable à la sécurité de l’Etat" ("an activity prejudicial to the security of the State"), a type of illegal warfare cited in Geneva Convention IV. The term covers spying and sabotage, NSA’s two core missions. The NSA program is also a breach of the non-intervention principle, about which the ICJ ruled, "The principle of non-intervention is to be treated as a sanctified absolute rule of law," and interestingly, depending, NSA’s indiscriminate sabotage of global communications infrastructure may constitute coercive interference or use of force:

http://armscontrollaw.com/2012/10/09/did-stuxnet-breach-the-un-charters-principles/

In any case, NSA sabotage is a continuing internationally wrongful act giving rise to obligations including but not limited to reparation, that is, restitution, compensation and satisfaction, with interest.

This is what happens when you let 95-IQ military apes off the base.

Benni April 10, 2014 5:33 PM

@Direct Hit:

Here is an old translated Spiegel article from 1996,

http://cryptome.org/jya/cryptoa2.htm

telling how the NSA and BND together subverted crypto boxes. The NSA agent who helped subvert the boxes of Crypto AG formerly advised Motorola. If you use crypto hardware from Motorola or Crypto AG, this may interest you, but I merely link this because I think the following line fits so well here:

“In the industry everybody knows how such affairs will be dealed with,” said Polzer, a former colleague of Buehler. “Of course such devices protect against interception by unauthorized third parties, as stated in the prospectus. But the interesting question is: Who is the authorized fourth?”

Perhaps something similar is true with openssl.

In the Spiegel book "DER NSA COMPLEX" the journalists write that the NSA was given the following secret directive: "To own the internet". Another directive says

“Any of our efforts must serve one goal: The information superiority of the world by America”.

Spiegel writes that the internet would no longer be free if a state actor pursues and then even achieves such goals.

Benni April 10, 2014 5:39 PM

sorry, I translated it wrong:

“”Any of our efforts must serve one goal: The information superiority of the world by America”.”

Should read

“Any of our efforts must serve one goal: The information dominance of the world by America”.

Benni April 10, 2014 5:43 PM

Nah, still wrong; the sentence in the German Spiegel book is "Informationelle Vorherrschaft von Amerika über die Welt".

Google translates this as

“information supremacy of america over the world”

That’s a top-secret directive of the NSA.

Direct Hit to the Starship Bridge April 10, 2014 6:03 PM

benni, now that’s real journalism. They know where they’re going with their investigations. They’ve got NSA’s wrongful acts documented as attributable to the US government. And for NATO they’ve raised issues of complicity and joint and several liability under Articles 16 and 17 in (A/56/10) and corresponding case law.

Godel April 10, 2014 6:05 PM

@ Paul
“This might have been mentioned before but as I write this…
LastPass’ tool says Paypal is not fixed.”

Paypal said yesterday that they were never affected. Yahoo have fixed their stuff today, but password change is needed.

Benni April 10, 2014 6:44 PM

@Direct Hit:

In this comment
https://www.schneier.com/blog/archives/2014/03/friday_squid_bl_420.html#c5240416

I’ve written more on that Spiegel book, showing that Obama indeed ordered the NSA to get ready for active internet attacks by 2012.

An old Spiegel article on the NSA is here:

http://translate.google.de/translate?hl=de&sl=de&tl=en&u=http%3A%2F%2Fwww.spiegel.de%2Fspiegel%2Fprint%2Fd-13494509.html&sandbox=1

showing that the NSA aimed at a full collection of 1/3 of all German phone calls in 1989.

and then we have these documents:

Showing the American government has founded an "advocacy centre", where US companies can get direct first-hand information from the NSA, for the purpose of "levelling the playing field" and better positions in "the bidding arena". Spiegel writes, by the way, that in their attack on Huawei, the NSA was assisted by the trade ministry:

http://www.heise.de/tp/artikel/7/7743/1.html
http://www.heise.de/tp/artikel/7/7744/1.html
http://www.heise.de/tp/artikel/7/7749/1.html
http://www.heise.de/tp/artikel/7/7747/1.html

http://www.heise.de/tp/artikel/7/7752/1.html

For example, in the last link above, we can read with the appropriate citing reference, of course, that “By July 1994, CIA director Woolsey was asserting that “several billion dollars a year in contracts are saved for American business by our conducting that type of intelligence collection.

We intend to continue to do it. It is relatively new. We are very – frankly – very good at it, and we have had some very positive effects on contracts for American businesses.””

In the spiegel book basically the following is revealed:

After the attacks of 9/11, the budget of the NSA increased dramatically.
Before 9/11, Michael Hayden had blocked a plan for snooping on phone calls because of legal problems. After 9/11, Bush ordered him to break the law.
And from then on, the NSA programs expanded.

But anti-terror operations only made up 30% of the goals that the NSA has.
The problem is that the SIGINT priority list did not change very much because of the 9/11 attacks.

So in some sense we are seeing here something like we did with the Iraq war. There, the 9/11 attack and arguments about weapons of mass destruction were used to attack a country, mainly for oil.

And in the NSA affair, we have a massive increase in intelligence budget and activities, but the goals of the NSA are not mainly concerned with terrorists; they remained the same as before.

This here contains an excerpt:
http://www.spiegel.de/international/world/secret-nsa-documents-show-how-the-us-spies-on-europe-and-the-un-a-918625.html

The Americans recently drew up a secret chart that maps out what aspects of which countries require intelligence. The 12-page overview, created in April, has a scale of priorities ranging from red “1” (highest degree of interest) to blue “5” (low interest).

Countries like Iran, North Korea, China and Russia are colored primarily red, meaning that additional information is required on virtually all fronts.

But the UN and the EU are also listed as espionage targets, with issues of economic stability as the primary concern.

The focus, though, is also on trade policy and foreign policy (each rated “3”) as well as energy security, food products and technological innovations (each rated “5”).

Yea well, the NSA has a priority to spy on new technologies developed by Europeans! And since 9/11 the NSA has much more money for this.

The programmer of the SSL bug now says in Spiegel that it was just a mistake:
http://www.spiegel.de/netzwelt/web/heartbleed-programmierer-deutscher-schrieb-den-fehlerhaften-code-a-963774.html

Well it does not matter. The NSA also welcomes mistakes and does not hesitate to use them if they find one.

Direct Hit to Lt. Uhuru April 10, 2014 8:47 PM

NSA not only welcomes mistakes, they make them worth your while. Ask Eric Rescorla.

z April 10, 2014 9:45 PM

This is the best article I have seen for explaining Heartbleed to non-experts.

http://www.newyorker.com/online/blogs/elements/2014/04/the-internets-telltale-heartbleed.html

A little off topic, but articles like this are hard to do. It’s not easy to simplify complicated technical subjects without losing the details that matter. Usually, the really detailed technical articles are incomprehensible to most people, and mainstream publications give a vague, hand-wavy gloss over that fails to explain anything beyond “change your passwords”.

Nick P April 10, 2014 9:50 PM

@ Benni

Thanks for the NSA slide. It’s good to know they were still having trouble in 2012. I’d like to have more data on what they can do. Right now, we only have that slide set and posts like this. The part that jumps out most is that Runa (who worked on Tor) says: "Global dragnet surveillance was never part of the threat model…" And that’s what the NSA is building. Hence, the protocol will only get weaker over time barring major breakthroughs in anonymity tech.

Someone also posted here a while back a discussion about a certain weakness and a mailing list conversation. In short, the Tor person said the system would be broken against an adversary who could see just about all network traffic. Similar to what Runa said. However, these problems just mean Tor isn’t perfect against a global TLA and that it’s clearly still an obstacle to them. If anything, it justifies its use for private web traffic, as anything hard for NSA is probably really hard for everyone else. Just know that there’s always a risk re NSA, and low-tech tradecraft is still the safest if it’s something they’d really get you for. And there are more ways than ever of encoding/hiding data in ordinary objects mailed to or dropped off at uninteresting places. 😉

Just got an idea…

Remember the map about their exploitation systems? We also have maps of the Internet backbone and geographical lists of Tor nodes. There might be a security benefit to creating a whitelist of nodes that are mostly in areas they have minimal coverage of. (Or at least did at the time of the presentation.) That would create gaps in what they see. Then, the threat model goes from global dragnet surveillance to dragnet surveillance in certain spots. Might help on top of the security Tor already provides.

The other aspect is browsers and apps. Most successful attacks NSA does that I’m aware of are endpoint attacks. They use them to strip Tor’s privacy away. The more effort that can be put into containing attacks on them, the better. Efforts to port browsers such as Quark and OP2 to Tor, with necessary improvements, could go a long way. More modifications to common protocols & network apps to reduce leaks as well. One shortcut, already done by hobbyists, is a hardened device that does nothing but shove all traffic through Tor. I say it needs to go a step further and be a ground-up secure router with a Tor proxy built into it. One of the various clean slate secure hardware-software architectures would be ideal for this.

Back to Retroshare.

“when they say Tor stinks, then so does Retroshare, since both programs are using very similar mechanisms and techniques.”

All the points in favor of Tor do not apply to them unless they use Tor itself. (Idk there.) This is true for about any security claim or evaluation, as it has built-in assumptions, design, implementation, etc. that collectively make the claim true or false. Tor is designed/coded by experts working full-time, enhanced/supported by many others, and constantly reviewed by amateurs and pros alike for vulnerabilities. That’s why it’s giving NSA headaches. If Retroshare reuses their codebase, then maybe about the same. Otherwise, they’re doing A Good Thing by obfuscating their traffic & aiming for privacy throughout, but with an unknown level of security until it’s thoroughly reviewed by experts.

And if NSA goes after it, they’re certainly going to use experts. Many crypto and anonymity schemes were broken easily once qualified people looked at them. And anonymity is much harder to do right than regular encryption. The program, as I pointed out, also uses the kind of platforms and code that we have precedents of NSA subverting. If it’s safe right now, it’s probably just because most people don’t use it. (see Mac OS X) However, if NSA isn’t in your threat model, then it might be a decent alternative to similar programs as it’s always better to choose those that put effort into security/privacy over those that don’t. The less opponents can vacuum up, the better. That much we agree on for sure.

“and it seems that they have much more problems than retroshare because of this bug.”

Wouldn’t surprise me. They’re also the group that used 1,024-bit ephemeral keys against organizations that spend millions on custom key-cracking machines. (rolls eyes) Nothing shocks me in INFOSEC anymore… Well, maybe if they ported it to DOS. (Googles.) Ok, no DOS version. World is still sane. Kind of.

Marc April 10, 2014 11:09 PM

This is an accident. Just look at the sloppy code. This is a prototypical malloc/memcpy bug. You make an assumption about size, and that assumption is going to bite a huge chunk of meat off of your ass because the data is not quite that size. If the size is too small you just broke your software. If the size is too big you just memcpy’ed your CC number to me. In this case probably literally. Thank you. I’ll buy something useful.

The thing is that this is so trivial that anyone looking at the code with really little insight into it should have found it. This piece of code is trivial compared to the package we are talking about. All you really need to know to have a bug screaming in your face is that payload is actually payload_size, and that payload_size is a network-provided value that obviously needs to be checked. The RFC states MUST BE CHECKED. The co-author of the RFC is the guy who coded this.

If this were deliberate, someone has gone to great lengths to make it obvious.

The problem, and that’s a problem with open source in general, is that we just assume someone has checked the code whereas in reality NO ONE has. Just because we CAN doesn’t mean we DO. I for sure didn’t. You probably didn’t either. So why do we both assume someone else did? I did. You probably did as well.
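To see how trivial the missing check is, here is a simplified sketch (made-up function and variable names, not the literal OpenSSL patch):

#include <string.h>

/* 'record' is what actually arrived off the wire and 'record_length' its true
 * length; 'payload' is the length the peer CLAIMS.  The response buffer is
 * assumed big enough. */
int build_heartbeat_response(const unsigned char *record, size_t record_length,
                             unsigned char *response)
{
    if (record_length < 3)
        return 0;                                   /* not even a full header */

    unsigned int payload = (record[1] << 8) | record[2];  /* peer-supplied length */

    /* The missing check: 1 byte type + 2 bytes length + payload + 16 bytes of
     * padding must fit inside what we actually received; otherwise discard
     * the message silently. */
    if (1 + 2 + payload + 16 > record_length)
        return 0;

    /* Only now is it safe to echo 'payload' bytes back to the peer. */
    memcpy(response, record + 3, payload);
    return (int)payload;
}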

John Doe April 11, 2014 12:07 AM

Keep It Simple, SSL: a lot of useless "QoS" features…
This and sending raw commands to a server drive me insane.

Michael April 11, 2014 12:07 AM

It’s time that we stop pretending the C language is a reasonable one for writing secure software. It’s the equivalent of building skyscrapers out of Legos; regardless of how skilled the builders are, the result is a brittle toy. Sure, bounds-checking has a cost. But in a world where commerce almost universally occurs on top of code, bounds checking is orders of magnitude less costly than the alternative: inherently insecure transactions.

anonymous from switzerland April 11, 2014 12:08 AM

@Anura
“If you have that ability, and it’s a bad assumption to make, then you might as well have a random key for each website and store a hash of that on the server, avoiding complexity. Either way, if the memory is scraped you have to change your key with the compromised website. If you are using symmetric keys (passwords/passphrases or random keys in this case) they have to either store or receive enough informaton to authenticate you.”

I disagree that what I proposed increases complexity compared to known methods with comparable features. From the user perspective it is very simple to get – you have a single (albeit long) secret and you do not have to manage anything besides that. No useful information about your password or your credit card is stored at any of the sites that you visit to buy things. And you remain free to choose different identities at different sites.

Is this more complex than PKI / SSL with client auth or than, say, SAML or OpenID etc.? I don’t think so, at least not necessarily. What makes it more complex initially is of course to specify it and to implement it. And it may be too small of a step to come after SSL+username/password. (And it may not be as secure as advertised, of course.)

How naive of me to post something like that here and at this specific time and to expect somebody to really take some time to consider it, and how stupid to invest this kind of energy while having a (so far mild) flu. So, I will definitely leave it at "Your call, of course…" for everybody; I’ve been in way too many "asymmetric wars" in online forums in the past.

Waer nod wott haett ghah ("he who doesn’t want it has had his chance") 🙂

Jen April 11, 2014 12:20 AM

All of this talk about the NSA is ridiculous. If the NSA wants your data, the NSA will get your data. If the NSA can’t get your data electronically, the NSA will figure out a way to get it anyway. Offices robbed in the middle of the night? Security alarms didn’t work, cameras were off. Seriously, people, if the NSA wants something they are going to get it; it isn’t that hard to do in a low-tech way.

Chris Abbott April 11, 2014 12:53 AM

This reminds me of Bruce’s post about inserting backdoors

https://www.schneier.com/blog/archives/2013/10/defending_again_1.html

Low discoverability, high deniability, minimal conspiracy

All it took was one guy to mess up the code just a little bit and it took 2 years to discover! Granted, since nobody expects FOSS to have backdoors hidden in it, people perhaps weren’t looking as hard as they should (which needs to change), but seriously, 2 years! And it’s all so simple. A perfect backdoor…

@Jen

Low tech ways won’t get you into a lot of foreign countries and highly secure (physically) server farms run by Google and Microsoft.

L April 11, 2014 2:21 AM

@anonymous from switzerland:

I like token-based auth algorithms, I do not like your timestamps, as they introduce time-window management, usually require accurate clock synchronization, and you have to store everything to avoid replay attacks.

The real problem is: who manages those tokens? You need a program that manages all tokens, and I wouldn’t integrate that into a browser (way too big an attack surface), so you now also need some way for your browser and the program to share tokens. Except that under your protocol a compromised browser can easily ask for the ID token for VISA and make arbitrary payments. This is because you lump together authentication to a system and authorization of payments.

Also, your payment doesn’t protect the receiver, so a malicious website could charge multiple times while billing to multiple destinations.

In short, I believe the idea of using tokens to authenticate is correct, but not much else.

I’m also working on a token-based authentication algorithm that includes encryption and key-exchange (so it’s not based on SSL, although the key exchange is very similar). I’m done with the formal definition and analysis, I’m now implementing it all, should be finished by the end of the year.

If you are interested in token exchange protocols I suggest the old but always good "Applied Cryptography" from our Bruce Schneier; you’ll learn a lot, including why your assumptions do not hold up.

Bye,
L

Zakharias April 11, 2014 3:02 AM

  1. And again a vulnerability was introduced by – applause – the IETF standards. This game gets boring.

  2. There was Twitter and Facebook communication analysed by experts. Why not analyse the connections in the IETF standards and the mailing lists? Who collaborated with whom on standards; where do they work now? Who introduced vulns in mails, who gave pro or con on the proposals?
    E.g. Eric Rescorla collaborated on 'Extended Random Values for TLS' and now is responsible for TLS in Mozilla. Again TLS.

  3. Putting my tinfoil hat on, I can't see any use in the transmission of random, unnecessary and unwanted data of variable length. Could this be planned as a future subliminal communication channel for malware?

Roy B April 11, 2014 4:01 AM

As for Y2K, it still amazes me that it failed so miserably marketing-wise. Consider: “We spent millions of man-hours, billions of dollars, to fix Y2K-bugs, and what happened? Nothing! What a waste!” How hard could it be to explain that this was a true success?

It oughtn’t be too hard to explain the apocalypticness of this one, even if it should turn out we’ve been lucky.

anonymous from switzerland April 11, 2014 4:32 AM

@L

“I like token-based auth algorithms, I do not like your timestamps, as they introduce time-window management, usually require accurate clock synchronization, and you have to store everything to avoid replay attacks.”

Authentication: You have to store authentication tokens on the server during the time-window in which they are accepted to prevent replay, yes. Not ideal. Then again, I see no simple alternative and this seems manageable.

Payments: VISA has to store payment tokens on their server during the time-window in which they are accepted to prevent double payments. I see no problem at all with that. Their core business. (With payments that are e.g. only made once an item is shipped, things get more complicated but how exactly to deal with that would be a matter between Amazon and VISA, of no immediate concern to the buyer.)

Clock sync: I think clock sync within two hours could be made mandatory these days; or rather, it would be up to Amazon or VISA which differences in time to accept – during that time window they have to prevent replays.

Overall, I see no show-stopper here.

“The real problem is: who manages those tokens? You need a program that manages all tokens, and I wouldn’t integrate that into a browser (way too big attack surface), so you now also need some way for your browser and the program to share tokens. Except that by your protocol a compromised browser can easily ask the id token for visa and make random payments. This is because you put together the authentication to a system and the authorization to payments.”

Token management: The user/client does not have to manage tokens! Only the password has to be managed/protected. All tokens can be generated when needed on the client side.

Password entry in web form: That is bad, any malicious site that gets you to enter the password and posts the password to itself (instead of calculating just the needed token), gets access to all your accounts and can make any payments with VISA.

Stronger forms of password entry / token generation:
– Password entry in a special dialog that the browser presents; the browser calculates the respective token
– Password entry in a special dialog that the host OS presents; the OS calculates the respective token
– Tokens calculated on a separate hardware that protects the password

To me that is likely the killer. Without separate hardware, I would not trust such a system enough; it would at most be too small a step beyond SSL+username/password. And with separate hardware, there are lots of contenders.

“Also, your payment doesn’t protect the receiver, so a malicious website could charge multiple times while billing to multiples destination.”

Huh? As long as the malicious website does not have the password, they can at most try their luck with replaying a previous payment. Or what would your attack look like?

Overall, I (humbly 😉) think that my idea has some interesting (maybe even original) properties that might come in handy some other time for some other purpose for someone somewhere…

andromeda April 11, 2014 5:28 AM

I am wondering what Google is doing with their Chrome browser, considering CRL checks are disabled in Chrome by default and their own method (using CRLSets) is likely not as fast as OCSP or CRL…

Jon April 11, 2014 5:46 AM

It still seems unclear to me under which circumstances the private keys (or parts thereof) can be seen through exploitation of the Heartbleed vulnerability.

I’d like comments on the following analysis, in particular on which parts of it I might be getting wrong:

  • As I understand it, the bug is limited to revealing up to 64 KB of memory accessible to the process running OpenSSL, and limited to memory which was writable at the time of exploitation. If so, static content is not exposed.
  • Since private keys are probably read while the process runs in elevated/privileged mode, whereas during normal operation it should run with limited privileges, this part of memory will not be freed during normal use of the program. The private key is needed each time a new SSL session is initiated, and should not be freed.
  • The private keys will thus only be exploitable by Heartbleed if the private keys are also copied to new temporary locations in memory. Under which circumstances is this the case?

Though private keys might not be as exposed as originally assumed, user credentials, session cookies, user passwords etc. are still typically allocated in dynamic memory, and thus theoretically exploitable.

nobody April 11, 2014 5:56 AM

“It’s worth noting that the commit was merged on new years eve, at 11pm. This is a strange time to be merging code, unless you are attempting to avoid scrutiny.”

Reminds me that the NSA was founded Nov 4th, 1952 – U.S. Presidential Election Day, very arguably for the same reason.

Makes one suspect that security-related open source projects are heavily infiltrated with spies/saboteurs, even in the apparently unlikely case where Heartbleed is not an example of that (a weakness with plausible deniability and whose exploitation by others can be monitored by the ones who might have introduced it, or the other way round).

C is really bad at preventing that – implementing a back door with plausible deniability that can bleed a private key is already much harder with any language that nominally prevents random memory access.

L April 11, 2014 6:00 AM

Bruce, I’m running firefox with the Calomel SSL validation plugin, it gives me a 23% score for this website, and that’s pretty low…

there’s no PFS, still using RC4…

In NGINX I’m using these settings:
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:!MD5:!aNULL:!eNULL:!DES:!EDH:!EXPORT56:!EXPORT40:!SSLV2:!ADH:!LOW;
ssl_prefer_server_ciphers on;
ssl_session_cache builtin:1000 shared:SSL:10m;

This should support PFS, uses elliptic curves, protected against BEAST attacks…

tested with:
https://www.ssllabs.com/ssltest/

The website also gives a compatibility list for various browsers and devices.

My list misses compatibility with a couple of devices; does anyone have a better one?

Jerry April 11, 2014 6:38 AM

So … assuming that I am a reasonably smart guy who tries to do the right thing, but is not capable of writing his own crypto library… what are the lessons that I can take from this?

I can think of some possibilities, but I don’t know if any of them are good ideas:

1) use the oldest maintained branch that has the features you need, instead of the newest maintained branch?

2) look at all of the possible build flags and try to remove any features that I don’t know what they do and figure I probably don’t need them?
(e.g. look at all of the ./configure --without-foo flags and use all of the ones that I don’t know what they mean and figure I probably don’t need them? (Was there a ./configure --without-heartbeat, or just the -DOPENSSL_NO_HEARTBEATS?))
(Oh right, and openssl doesn’t use ./configure anyway, it has its own “./config” … what is the deal with that, anyway?)

3) is there another compatible crypto library out there that is interested in developing safe code instead of new features? maybe one that uses some unit tests, you know, things like that?

Thanks for any advice.
Jerry

Clive Robinson April 11, 2014 6:40 AM

For those a little more interested in “the blood and guts” of “heartbleed”,

http://blog.cryptographyengineering.com/2014/04/attack-of-week-openssl-heartbleed.html

A simple line of code to put in a bounds check would solve this particular problem.

However I will reiterate my point about calloc and malloc,

Neither malloc nor free zeroises memory in the majority of versions; this has significant security implications.

So when you create a block of memory with malloc it contains what was previously put in it by the process, which means the only time it is full of zeros by default is when the memory is “virgin” from the OS putting it into the process heap with the system call brk().

If however you use calloc it should fill it with zeros before returning the pointer to you (however check some idiot has not made it “more efficient” and removed the zeroing). But whilst that might prevent problems from that point onwards it won’t stop problems prior to this.

That is, "any process" with sufficient ability to see memory of its choice will be able to read the contents of the memory prior to the zeroisation. By "any process" I’m not talking about just OS-related processes but such things as DMA and memory freezing and covert channels as well (which might or might not, depending on your definition of "memory scraping", fall into your mental threat model).

Thus the use of calloc can only be regarded as a partial solution that does not close some security vulnerabilities.

Which with a little thought should mean that you realise that you need to zeroise memory obtained by calloc/malloc prior to calling free. And that shows where K&R should have thought a little further and made a “clear on free” call as well as free.

But even then a “cree()” call can only be regarded as a partial solution that whilst closing quite a few more vulnerabilities than calloc does not close as many as might be considered prudent.

You still have to consider that some security requirements –such as PrivKey protection– need extra precautions which have been discussed on this blog before.
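For what it’s worth, here is a minimal sketch of what such a "clear on free" call might look like; the volatile function pointer is one common trick to stop the compiler optimising the clearing away as a dead store:

#include <stdlib.h>
#include <string.h>

/* Hypothetical "clear on free": zeroise a buffer before handing it back to the
 * allocator, so stale secrets cannot be handed out again by a later malloc().
 * The caller must supply the length, since free() does not know it. */
static void *(*const volatile memset_v)(void *, int, size_t) = memset;

void cree(void *p, size_t len)
{
    if (p == NULL)
        return;
    memset_v(p, 0, len);
    free(p);
}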

Clive Robinson April 11, 2014 8:16 AM

@ Jon,

The answer to your questions is, in many cases, "implementation specific".

However, at the very least, people should as a general precaution zeroise any memory as soon as it is finished with, or, if it needs to be kept in memory, consider zeroing it and then reloading it to the same place.

Alternatively –and where security is important– use some method of in-memory encryption where the "encryption keys" are kept out of normal memory spaces –ie user/kernel RAM, hard drives etc– which is unfortunately very implementation specific as it often requires additional specialized hardware.

JJ April 11, 2014 9:06 AM

I asked this earlier but this thread is getting really long, so I am posting it again:

What is the risk of having an unpatched version of OpenSSL on your router (e.g. DD-WRT or Tomato)? Any SSL traffic from connected computers is just passively routed through, right? Or is the OpenSSL implementation on the router involved in regular SSL traffic from connected computers as well?

Wayne Conrad April 11, 2014 9:12 AM

@JJ If a device having the heartbleed bug has an embedded web server for managing the device, then heartbleed could be used to gain administrative credentials to the device.

Benni April 11, 2014 9:43 AM

@Nick_P

“That’s why it’s giving NSA headaches. If Retroshare reuses their codebase, then maybe about the same. Otherwise, they’re doing A Good Thing by obfuscating their traffic & aiming for privacy throughout, but with unknown level of security until it’s thoroughly reviewed by experts.”

1) By the way, you can now download an updated version of Retroshare:
http://retroshare.sourceforge.net/

2) Retroshare is open souce too. Here is a more detailed security description of retroshare:

https://retroshareteam.wordpress.com/2012/12/28/cryptography-and-security-in-retroshare/

“There’s one major difference between RS and TOR: With the later, you being an exit node make you transfer packets in the clear from the internet (to anonymous clients). So websites acting as bait can catch you as an exit node. With Retroshare, if you can trust your friends, you can’t be spied because all the traffic passes through them, and no one else.”

Adding a friend to Retroshare means you must communicate your public PGP key to him via ordinary e-mail or otherwise. You cannot simply add people you do not know to your Retroshare network from within the application.

Since you are likely to add trusted peers to Retroshare, Retroshare does not have the exit-node weakness of Tor.

And in addition to that, the indirect communications, that is, all data transfer between friends of friends, are routed anonymously with a codebase that is quite similar to Tor’s.

Please note also that even with a Retroshare version vulnerable to the Heartbleed bug, the NSA would not be able to decrypt anything, since the key for doing that is additionally encrypted.

On the contrary, in Tor, we have the following situation:

https://blog.torproject.org/blog/openssl-bug-cve-2014-0160

“Relays and bridges: Tor relays and bridges could maybe be made to leak their medium-term onion keys (rotated once a week), or their long-term relay identity keys. An attacker who has your relay identity key can publish a new relay descriptor indicating that you’re at a new location (not a particularly useful attack). An attacker who has your relay identity key, has your onion key, and can intercept traffic flows to your IP address can impersonate your relay”

As a result, this is what Retroshare users usually say about Tor:

https://www.google.de/#q=retroshar+etor

“On a side note, Tor isn’t very secure…”
“Yes Tor is not realy secure .
Its bether make own anon Network with RetroShare.”

So, if for the nsa, tor stinks, then retroshare must be smelling really ugly.

If nsa can not even get into the relatively weak security of tor, then it can be safely assumed that retroshare is quite out of their capabilities.

Nick P April 11, 2014 9:58 AM

@ Benni

“If nsa can not even get into the relatively weak security of tor, then it can be safely assumed that retroshare is quite out of their capabilities.”

If you assume they only attack the protocol, which they don’t (see QUANTUM). I do agree, though, that such architectures are superior to the typical model of “let’s let a bunch of strangers send us malicious packets all day!” Last system I used for Retroshare-like purposes was Freenet. The protocol and asynchronous design of it was nice. It was also F2F and supported protocols running on top of it. However, much like my concern with RS, there was risk at the endpoint. Risk played out over time for FN in that Java is No 1 source of vulnerabilities today. (RS superior far as endpoint risk, ironically.) It’s why I advocate solutions like TAILS distro that work from OS to app to force all traffic through anonymity scheme, while minimizing leaks. A RetroShare on it, if it doesn’t exist already, would be awesome.

I’ll definitely try RetroShare in the future. I like what you’ve showed me about it. I’m just going to assume high end attackers will know what I use it for or be able to attack my machine with a 0-day in it. The assumption doesn’t actually bother me as it’s my default assumption for all things I do over an untrusted network or on a mainstream OS. News bites about black hats and recent NSA leaks just prove it out over and over again, with Tor mass collection failure a rare exception. I’ll keep my assumption.

Thunderbird April 11, 2014 10:40 AM

Does anyone else find it weird the OpenSSL has a custom malloc function that doesn’t bother to clear out the memory before returning the pointer?

No version of malloc that I’ve used clears memory, at least that I’ve noticed–that’s what calloc was for. Are there vendors that have unilaterally changed it? Sounds like a good idea, but I imagine “efficiency” rules safety, as usual.

ECDH April 11, 2014 11:07 AM

@Nick P

“I’m just going to assume high end attackers will know what I use it for or be able to attack my machine with a 0-day in it. ”

I believe this to be very true. The state of endpoint protection is a lot worse than that of protocols etc. As a matter of fact I used to be worried about it a lot; knowing that any high-end attacker can enter your system at will (despite the numerous countermeasures) is not a pleasant feeling.
So I kept searching for a solution, and I believe there is one, just that my current threat model doesn’t warrant it. So I have given up protecting my machine from the NSA and the like.

However, if anyone does want some protection against these actors, I would recommend going down the 'security through isolation' approach.
Take a look at QubesOS, awesome idea, however before I could trust it, it would need some serious peer review.
Other than that Hardened Gentoo with sufficiently tight MAC.
For research purposes some microkernel based OSs look really nice.

Benni April 11, 2014 11:13 AM

@ Nick_P

Retroshare is relatively new. Therefore, there is still no official Debian package, and that was the reason the Tails devs did not include it:

https://tails.boum.org/forum/RetroShare_to_replace_Pidgin_and_Claws-Mail__63__/

They say: “But if Retroshare already proposes a non-official Debian package, maybe they are not so far from having it into Debian. You could check with them if that is in their plan.”

The retroshare developers say:

http://retroshare.sourceforge.net/forum/viewtopic.php?t=1366&p=5257

“We would love Retroshare to have offical packages for Debian, Ubuntu, Fedora, etc. We have actively tried in the past – but got ignored.
Hopefully with Retroshare’s growing popularity, one day it will happen.
If you have any contacts or influence in this area? let us know.”

And indeed there is now a bug in Debian saying that Retroshare is a package currently being worked on for inclusion in future Debian versions:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=659069

Therefore, it can be assumed that Retroshare will soon be in the official Debian repository, and then finally make its way into Tails. The Tails developers say:

“By the way, I’m pretty sure we won’t use it to replace Pidgin and Claws Mail but provide it in addition to them…”

Thunderbird April 11, 2014 11:45 AM

Dylan Smith • April 10, 2014 4:21 AM

@anon:

> I don’t think this is correct. On Linux you will only get access to memory owned by your process id, so “anything in memory” is not true: your sshd private key is safe, for example.

No, it’s absolutely correct. When the OpenSSL library allocates a chunk of new memory, the VMM could assign that from anywhere in physical RAM. It could be that this new chunk of physical RAM was just freed by another process. If that chunk of memory happened to include something sensitive, bad luck.

Sorry, but that is wrong (since this is starting to sound like a Monty Python routine, I mean the assertion that you can get memory from another process is wrong). On Linux, you never get anything but zeroed memory for a newly allocated page. Residue attacks were understood in the '60s, I think. Certainly in the '70s. When people say "almost anything" can be in the memory, they mean that since it’s a process on a server setting up SSL connections, you might find very security-relevant stuff.

Quirk April 11, 2014 12:03 PM

@NickP:

“People writing in more safety-centered systems languages rarely do anything like that.”

Which systems languages are we talking about here? I’m unaware of any widely used systems languages at this point other than C/C++. (Java and C# are not systems languages, but applications languages. You wouldn’t seriously attempt to write a kernel or even a network driver in them.)

People working with managed code who need very high performance frequently shell out to native code (I do my share of tracing bugs in managed/unmanaged code interfaces). But then we're back to OpenSSL being written in C. If you attempt to write a managed-code system to deliver high performance, you end up with some pretty ugly optimisations being done, and some of these will have security implications.

So, to revisit my starting point: these people were already working in native code for reasons of performance, and were disabling existing malloc safety checks to get better performance. This is not a situation where managed code’s “safety” buys you anything at all. When people think they have valid performance concerns – rightly or wrongly – they are going to do horrible unsafe things to push the managed code to the limit to try and make it perform at native speed.

Beyond that, as pointed out by @Anderer Gregor, other classes of attack (such as timing attacks, RNG attacks etc) cannot be so easily guarded against in managed code. In short, managed code is not a panacea, and in many cases may even prove to be a worse solution.

NOYB April 11, 2014 1:47 PM

What is revealing about the Heartbleed issue:

  1. Even a core security over-the-wire protocol didn't have minimal unit tests set up. I think everyone expected that something as critical as OpenSSL would be taken seriously by someone.
  2. The open-source model is no panacea when it comes to security. For every reviewer checking code for bugs to secure the code, there are going to be 10X professional white-hats & black-hats looking to exploit the bugs.
  3. The fact that some semi-anonymous developer can check code in to such a critical area, and another semi-anonymous reviewer can sign off on such obviously broken code implementing some obscure feature, is just scary.
  4. Is this stuff rigorously tested? Really? I'd tend to doubt it, considering the circumstances surrounding this incident.
  5. C/C++ language implementations are problematic and always will be when handling wire protocols.

My recommendations:

  1. Obviously the process surrounding critical infrastructure needs to change. Wire protocols especially need to be properly tested. If there's a place where care needs to be taken, it's where untrusted data is parsed. I'd recommend using explicit 'program by design' techniques in this area, where the software is first designed, then unit tests are designed to meet fully-documented use cases, then code is written to pass the unit tests.

  2. Open source shouldn't go away, but checked-in code needs to be reviewed and approved by a committee of experts who have their professional reputation on the line and will be held accountable for mistakes.

  3. Some sort of layer needs to be inserted into the framework that bounds-checks (etc) incoming data in a rational and consistent way.

  4. Ideally, this stuff would be implemented in a pointer-restricted environment / language.

sshdoor April 11, 2014 1:59 PM

@L: “I’m in the process of writing the code for a new crypto/auth protocol,”

Please limit yourself to a few thousand lines.
For example:

  • do not implement tunnelling; only document that SLIRP could be used to tunnel inside your encrypted connection.
  • do not implement keep-alive packets.
  • do not pass environment variables.
  • use packets whose mandatory length is defined in the protocol: this means fewer mallocs (see the sketch below).

Bonus: code review will be simpler and hence faster.

Minimize the number of branch conditions in code that uses a secret key, to limit the attack surface in the generated assembly.
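
As a rough illustration of the fixed-length-record point above (all names and sizes here are made up): if the wire format mandates one record size, the parser never has to trust a peer-supplied length, and a compile-time assertion keeps the struct honest.

  #include <stdint.h>
  #include <string.h>

  /* Hypothetical fixed-size record: every field has a mandatory length,
     so there is no attacker-controlled length field to copy by. */
  #define RECORD_SIZE 256

  struct record {
      uint8_t type;
      uint8_t reserved[7];
      uint8_t payload[232];
      uint8_t mac[16];
  };

  /* Fails to compile if the struct ever drifts from the mandated size (C11). */
  _Static_assert(sizeof(struct record) == RECORD_SIZE,
                 "record size is fixed by the protocol");

  /* The parser consumes exactly RECORD_SIZE bytes: no malloc, no length field. */
  int parse_record(const uint8_t buf[RECORD_SIZE], struct record *out) {
      memcpy(out, buf, RECORD_SIZE);
      return out->type <= 3 ? 0 : -1;   /* reject unknown record types */
  }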

BleedingHeart April 11, 2014 2:59 PM

This is just more proof that computers can never be secure. We are fooling ourselves if we think so. NSA/GCHQ/FBI/BND or any other TLA will always have a backdoor, even if not intentionally planted. We average people are fighting on two fronts — one against well-funded cyber-criminals and one against the NSA. Our hope of ever winning both battles (or even one) is next to nil.

When an SSL library has hundreds of thousands of lines of code, our hope of ever making it secure (especially with a highly insecure language like C) is slim. As big an open-source fan as I am, I realize it will never be perfectly secure. Better than Windows, perhaps, but never perfect. There are simply too many things that can go wrong. Even if the programmers can be trusted and all have pristine intentions (unlikely), there is no guarantee they audit their own code. And even if they do, there's no guarantee they catch everything. This is especially true for encryption software, where the number of people in the world qualified to audit it numbers in the hundreds. Most of them don't bother.

And that's just the software layer. I cringe at the thought of what hardware the NSA has subverted (the Snowden docs mention that the NSA had subverted "encryption chips" used in many VPNs, but the Guardian decided to withhold the name of the company that betrayed the public's trust). And I doubt that is the only hardware in which the NSA has planted backdoors. We have to assume Intel, AMD and every other chip fabricator is subverted as well.

If you want secure communication, do it the old-fashioned way — in a secure room, face to face. Or have a courier deliver your contact a handwritten note encrypted with a properly implemented OTP. Just don't lose or reuse the pad.

So, no, I am not hopeful that we ever solve our cyber-security problems, especially when there are very powerful actors who don’t want the Internet to be secure.

Mike Amling April 11, 2014 3:26 PM

If anyone’s writing a replacement for or new implementation of SSL, I have some suggestions.

  1. Don’t implement authentication-only (aka the null cipher).
  2. Don’t implement 40-bit encryption.
  3. Don’t implement 56-bit encryption.
  4. Don’t implement RC4.
  5. Don’t implement RSA with less than a 2k-bit modulus.

The above no-nos were all implemented in SSL the last time I looked (probably 10 years ago).

  6. Don't implement protocols that omit Forward Secrecy.
  7. When generating secret or private keys, use a really good PRNG, like Fortuna. Seeding the RNG is very important, but I admit that seeding the RNG is still an open research question.

And if doing a replacement, I suggest
  8. Don't send anything in the clear except an ephemeral public key. (If a public key or certificate has to be transferred, transfer it encrypted with an ephemeral shared secret, and cache it for next time.)

Anura April 11, 2014 4:12 PM

I think priority number one is a safer replacement for OpenSSL, preferably one that keeps the same interface or provides a wrapper so it can be a drop-in replacement without having to change code in the applications that use it.

SSL is not going away any time soon, but it can go away in the future. I think we should be looking to keep everything as simple and modular as possible for any replacement. For example, the highest level of the protocol should be abstracted so as not to go into details, but use a set of interfaces. As an example (and I am not necessarily proposing this is what the design should look like):

1) Communication Establishment Protocol – a generic protocol that returns the algorithms, protocols, and data to be used for the subsequent steps (this should determine what is supported, and select the most appropriate algorithms based on a set of rules)

2) Pre-Authentication Interface – A generic interface to test the validity of the host information, such as if certificate is valid

3) Key-Exchange Interface – A generic interface that works with multiple key-exchange protocols and algorithms

4) Confirmation Interface – A generic interface that a secure connection has been established with the host. This would perform things like key-confirmation to verify the host does indeed hold the private key.

5) Secure Transport Interface – A generic interface for sending and receiving of encrypted data. This layer would provide encryption and message authentication.

Each interface is passed a structure containing the data returned by previous interfaces (if any), each interface returns success or failure, and those protocols may be used on top of other interfaces as well.

The idea is that it shouldn’t take more than a few pages to describe each protocol involved, which means the implementation of each component stays simple as well. On top of that, because it’s modular you can add a new supported key-exchange protocol, confirmation protocol, or transportation protocol without having to replace the main protocol. This also means that if you make the implementation itself modular, it is easier to verify the code.
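
One way such interfaces might be expressed in C (purely illustrative; every name below is hypothetical): each stage is a small table of function pointers, and the handshake driver only ever sees the tables, so new key-exchange or transport modules can be added without touching it.

  #include <stddef.h>

  struct session;   /* opaque per-connection state threaded through every stage */

  /* Each stage returns 0 on success, nonzero on failure, so the driver bails early. */
  struct key_exchange_iface {
      const char *name;                  /* e.g. "ecdhe-x25519" */
      int (*run)(struct session *s);     /* derive the shared secret */
  };

  struct transport_iface {
      const char *name;                  /* e.g. "aes128-gcm" */
      int (*send)(struct session *s, const void *buf, size_t len);
      int (*recv)(struct session *s, void *buf, size_t len);
  };

  /* Generic driver: pre-authenticate, exchange keys, confirm, then hand the
     session to the transport interface for encrypted send/receive. */
  int establish(struct session *s,
                const struct key_exchange_iface *kx,
                int (*pre_auth)(struct session *),
                int (*confirm)(struct session *)) {
      if (pre_auth(s)) return -1;        /* e.g. certificate validity checks */
      if (kx->run(s))  return -2;        /* key exchange */
      if (confirm(s))  return -3;        /* key confirmation */
      return 0;                          /* transport interface takes over */
  }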

Lizz April 11, 2014 4:27 PM

While anyone can introduce a bug, as German developer Robin Seggelmann readily admitted (thanks for coming forward so quickly!!), I'm curious about OpenSSL's code review process: a missing length check should be caught several times before code gets put into production – both programmatically and in manual reviews.

And if the OpenSSL group missed this critical flaw, then it's to be assumed virtually all other groups – open or not – aren't doing this simple check, either. Aren't there programming components or add-ons that can verify length checks have been done?
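
For reference, the missing check really is that small. A simplified sketch of a heartbeat handler (hypothetical names, not the actual OpenSSL source): the fix amounts to refusing to echo more bytes than the record actually carried.

  #include <stddef.h>
  #include <string.h>

  /* rec/rec_len: the heartbeat record as actually received.
     The payload length encoded inside it is attacker-controlled. */
  int handle_heartbeat(const unsigned char *rec, size_t rec_len,
                       unsigned char *resp, size_t resp_cap) {
      if (rec_len < 3)
          return -1;                                /* type byte + 2-byte length */
      size_t payload_len = ((size_t)rec[1] << 8) | rec[2];

      /* The check that was missing: the claimed payload (plus type, length,
         and minimum padding) must fit inside what was actually received. */
      if (3 + payload_len + 16 > rec_len)
          return -1;                                /* silently drop the request */

      if (3 + payload_len > resp_cap)
          return -1;
      resp[0] = 2;                                  /* heartbeat response type */
      resp[1] = rec[1];
      resp[2] = rec[2];
      memcpy(resp + 3, rec + 3, payload_len);       /* echo only bytes we received */
      return (int)(3 + payload_len);
  }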

Figureitout April 11, 2014 4:39 PM

If you want secure communication, do it the old fashioned way
BleedingHeart
–The problem is initiating that or suggesting it to people you want to have private talks w/. I wouldn’t do any physical talking anywhere if you want ultimate “there’s no way in hell” security. I get a little excited going over what to do, but there can’t be any discernible pattern and I naturally like my patterns; so it’s uncomfortable. I’ll chat w/ anyone over hotmail email or wide open HF ham radio bands (have to make it sound like a normal conversation over airwaves) in the clear, w/ a well-made OTP exchanged physically.

Nick P April 11, 2014 4:53 PM

@ Quirk

Main language I'm referring to is Ada. Its basic static checks would've caught this. It's also anything but slow and widely used in safety-critical industries. Others that still have tool support with good performance and some safety enhancements: Pascal (Lazarus), Modula-2/3 (still used in Europe), PL/I and PL/S (mainframe OS & partners), Typed Assembler, C-like languages (e.g. Cyclone), and recently people are saying Go too.

Btw, your distinction between "system" and "application" languages is quite artificial. There are OSes written in Java, C#, Haskell, LISP, etc. They're far from ideal for that, but certain structuring and compilation choices mean it can be done. Although I wouldn't recommend them for it, it just goes to show the system-vs-app concept mainly depends on how you use them.

I honestly couldn't see why you mentioned bloated Java and .NET as, like you said, the language under consideration must be fast and low level. That you keep referring to "managed code" while implying it's slow makes me think they're your only experience with it. Languages on my list, real systems languages, can be made to give you as much or as little protection as you like. You can, for instance, code an algorithm in SPARK Ada, prove no memory violations exist, and then deploy a "zero runtime" version of that code. Or you can deploy a runtime. Or you can have dynamic checks automatically inserted. Such is why the Mondex CA was written in Ada.

Which brings me to your claim about people working around the safety system in dumb ways for extra raw speed. Yeah, even a safe language usually has a FFI or assembler support so sure they can. But that’s not “using” a safe language: that’s abusing it. Idiots can ruin any good safety net. Doesn’t make the safety net a bad idea or negate its claimed benefits. It just means don’t use libraries from people that do dumb shit in their code, esp if it’s not memory safe. That simple. 😉

Anura April 11, 2014 6:40 PM

So when I think systems programming, I’m thinking low level components: Kernel, bootloader, drivers, filesystem, shell, internet layer, etc.

Things like an HTTP server, not so much. So the question is this: what are we actually talking about here? Well, in regards to this exploit, it isn't systems programming – it's application programming. The reality is the overhead for SSL is not that much, even if you implemented it in a slower language.

Let's say we got rid of all the assembly code, got rid of the "fast" malloc implementation, and used a language with some basic security features like bounds checking. How much of a performance hit would you really take? Well, when Google made Gmail use HTTPS exclusively they saw a CPU usage increase of about 1%... So if your implementation is twice as slow, that would bring it to 2%. That's not much, and you probably aren't talking twice as slow; you are probably talking more like 20-50% slower at most, with the benefit of a significantly higher security margin provided by a language like Ada or Go.

Assembly implementations of ciphers are generally not going to give you a big enough boost to be worth the higher security risk compared to an easier-to-verify implementation in a higher-level programming language. Maybe if you are using AES-NI or CLMUL instructions, but GCC provides intrinsics to access those anyway.
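
For what it's worth, the AES-NI path doesn't require hand-written assembly at all; GCC and Clang expose the instructions as intrinsics. A sketch (compile with -maes; the key and block values here are dummies):

  #include <emmintrin.h>
  #include <wmmintrin.h>   /* AES-NI intrinsics */
  #include <stdio.h>

  int main(void) {
      __m128i state = _mm_set1_epi32(0x01020304);   /* dummy 128-bit block     */
      __m128i rkey  = _mm_set1_epi32(0x0a0b0c0d);   /* dummy 128-bit round key */

      /* One AES encryption round executed by the CPU instruction, no asm. */
      state = _mm_aesenc_si128(state, rkey);

      unsigned char out[16];
      _mm_storeu_si128((__m128i *)out, state);
      printf("%02x %02x ...\n", out[0], out[1]);
      return 0;
  }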

That said, for most web applications, it’s network/disk/database/memory IO that is the main bottleneck, not CPU.

We can’t justify using C based on performance reasons, because the gains are just not that much. There may be some, but it’s about priorities, and for a programming language (or any development, really) I think the importance is generally like so:

1) Security
2) Ease of Use
3) Performance

So you don't justify performance over security, unless the security gain is tiny and the performance hit is huge. Bounds checking is a huge security gain for a small performance hit. Ease of use can be a security issue as well: a language in which mistakes are easy (e.g. C's = instead of ==) makes bugs more likely, and vulnerabilities in turn. Of course, those simple language design features have absolutely zero impact on performance.

Jodina Joseph April 11, 2014 7:02 PM

I’d really like to know which sites and articles to believe. One day, one article tells me that certain sites were never vulnerable. The very next day I see a different article with a list saying that these same sites were vulnerable originally, but are now patched, and that I need to change my passwords. I don’t know who to believe. Both articles say they are updating live, but the updates I keep seeing are changing their stories from one day to the next, “as reports continue to come in”. At this point I don’t even know who has their facts straight, or whom to trust, as I read all that I can about this Heartbleed thing. I guess that my best bet is to wait a while longer and then change ALL my passwords to every site that I use, just to make sure that I’m safe.

Nick P April 11, 2014 10:02 PM

@ Anura

Great comment. From the problems to tradeoffs, well-presented in general. Btw, to support your point:

A type-safe, TPM-backed TLS implementation in Java
http://www.cs.cmu.edu/~mmaass/tpm_tls/report.html

This prototype uses safe Java code for TLS and the Flicker secure execution scheme that I posted here a while back. We've basically proposed using a safer, still efficient, systems language to handle the protocol engine. This one is a much higher-overhead option that uses a JVM and a TPM for each operation. Yet, even with all that TPM interaction, the total overhead for initialization is about 3 seconds. They note that's a problem, as it's noticeable to the user. Yet the TPM takes up 1-2 seconds of it, and Flicker basically freezes the system in time as a side effect of its operation. The actual encryption and protocol handling seem to happen fast.

The report doesn't justify mass adoption of Java+TPM+TLS. However, noting similarities and specific performance numbers, I think this report does justify believing that SSL can be coded in a safer or even totally managed language with acceptable performance for users. I'm trying to find examples of SSL in type- and/or memory-safe code with benchmarks, but this report was hopeful at least.

Anura April 11, 2014 10:51 PM

BouncyCastle has both a TLS client and server implemented in both C# and Java. It might be worth setting up a simple server and running some load testing software/benchmarks to see what kind of hit you get compared to some other implementations.

Their API is a bit lower level, so it’s a bit of work to implement.

Nick P April 11, 2014 11:11 PM

I looked up some papers I posted here a while back on statically checked network apps by Madhavapeddy. His 2006 dissertation set the stage for this 2010 technical report. The guy keeps doing good, useful work in INFOSEC. He uses his OCaml-based protocol development system to develop SSH and DNS systems. The SSH system performed at over 10MBps throughput on a system where OpenSSH did about 20-30. He also noted the crypto library he had to use was 75% slower than what OpenSSH used, which was hand-optimized assembler. So a very high-level language with a crappy crypto library still getting 10MBps is pretty good. The DNS result was even cooler because OCaml DNS actually outperformed BIND while also using less code & more safety features. If you can do fast OpenSSH, you can do fast OpenSSL. If you can do fast and safe DNS, then… then you should be using a safe and fast DNS server too, damnit! 😉

Btw, I did find pure implementations of crypto and SSL in Java. They were all late-90s to early-2000s commercial and academic work. Mostly gone. (shrugs) SSH in OCaml tells me we could do SSL in Ada, GCJ, etc. pretty easily. If performance were a worry, I'd do the following:

  1. Use a dedicated core or chip for the SSL processing.
  2. Use an event-queue architecture.
  3. Ensure SSL function isn’t called until data is in memory.
  4. Ensure SSL process’s code and data fit into cache.

The result should be an SSL function with high utilization and efficiency. This should offset some of the burden of safety checks, which is already small for most of them. This is also compatible with the MILS architecture I used to promote, meaning SSL engine memory & private key could be isolated at (currently) EAL6+ assurance. Formal analysis of the protocol or implementation, proving certain properties about data or memory access, might also allow checks to be turned off in performance-critical areas. (Common trick in the SPARK crowd.) Another possibility I've seen in the literature is hybrid checks: things easy to statically check are done at compile time, with the compiler using that information to wisely insert dynamic checks for the rest. Still a catch-all-problems approach, but with less overhead.

Dissertation quote maybe relevant to SSL bug:

“In Chapter 4 of his PhD thesis [131], Hayden discusses the impact of using OCaml and notes that reducing memory allocation is a key concern. He also reports that using the C interface led to hard-to-track bugs, confirming our approach of attempting to attain high performance without resorting to using foreign-function bindings.”

The code worked fine until they escaped their safety net and relied on C allocation mechanisms. Then, all sorts of weird crap starts happening. Good argument for one instead of the other.

Nick P April 11, 2014 11:20 PM

@ Anura

I did know about BouncyCastle. I didn't know they had TLS, so thanks for that. Load-testing it would be a good idea. I suggest whoever does this should first look at the crypto implementation and verify it's all Java. Some "pure" Java implementations I've seen relied on libraries that used native code at some point. Any benchmark of a "Java" TLS or SSL should indicate where the cut-off point is for type-safe code, if there is one. If it's at the crypto-primitive level, for example, it might not negate memory safety if constructed well, and would still be easier to get right than a fully native system.

So far, nothing in BouncyCastle’s description indicates use of native code. I’d only call that validated if a Java veteran looked carefully at it to find both algorithm and cryptosystem implementations in Java code. Then, benchmarking could begin. Matter of fact, one wouldn’t even need a full SSL implementation: just pick one strong construction and simulate each part of the process while timing them. Compare to existing benchmarks that do it similarly, along with a check of overall time against what users find acceptable. It would be a nice start on real data.

Not A Real Dr Chuck April 11, 2014 11:50 PM

Excuse me if this is a stupid question, but if Heartbleed only exposes data from OpenSSL's heap, that means only information that has been encrypted relatively recently at the time of the attack (and of course the key used to encrypt it) is exposed, right? So the advice we keep hearing telling everyone to change their passwords on any site they've ever been to seems unfocused, at least. It's the very act of logging in that puts your password up for grabs, so you should worry most about the places you logged into in the past 4 days (and on day 1 they should have told us not to log in anywhere for a while).

I get the concern that maybe a few bad actors had the ability to heartbleed-eavesdrop before this was made public, but we know that everyone and his dog has had it for the past couple of days.

Again, I know I’m probably missing something. What?

required field April 12, 2014 1:48 AM

Bruce, could you explain how Perfect Forward Secrecy and this bug relate to each other exactly, regarding past and future communications and the need to change passwords? And thanks so much for helping the rest of us with these tricky yet vital things!

Mathias Hollstein April 12, 2014 1:53 AM

Not A Real Dr Chuck (April 11, 2014 11:50 PM), we came to the very same conclusion. Activity by intelligence agencies and cyber-criminals must have been high throughout the last few days. Therefore we kept our own activity to an absolute minimum and did not transfer sensitive data like passwords through TLS-secured channels. Additionally, it's easy to grab passwords and the like, but harder to get valuable certs or keys.

Also, a good portion of online services was not affected at all, either because older libraries were installed and used, heartbeat was not compiled in, or the services were protected by some form of IPS or similar perimeter defense, e.g. some of those of 1&1 from Germany.

Finally we/I use strong and different passwords for various services. Therefore the possible compromise of one or two accounts would not affect others easily. The most important thing right now in our opinion is to make sure our local perimeter is secured (servers, routers, etc.) and large organizations and corporations we have contracts with patch their systems and renew keys and certs ASAP. Until that time our operations stay reduced to an absolute minimum and alternative systems, channels and protocols are employed if possible. Further we stay in touch with our most important contractors like our ISP and other providers.

We think that most people out there won’t be severely affected by this flaw as long as they keep their head down and use services that are secured soon and adequately.

The most interesting question that arose, however, is whether the intelligence agencies and criminals are/were able to decrypt large amounts of previously recorded data in transit from the last 24 months. If so, that would likely (have) put large amounts of sensitive data at risk. Not to mention that it's nearly impossible to estimate or audit the nature and amount of revealed sensitive data. This is a true nightmare, since so little can be done about it (ever). Maybe even worse is the problem with OpenSSL and the certs at large. We expect more like this to happen in the near future and are already preparing for a full and permanent compromise.

Mike the goat April 12, 2014 3:32 AM

Nick P: I guess we can both feel validated by this vuln showing just how wholly inadequate C is for modern programming and/or secondarily how sloppy implementations become essentially grandfathered into use and get embedded in damn near everything.

yesme April 12, 2014 3:59 AM

@Nick P

AFAIK OpenSSH has no assembly code. OpenSSL on the other hand has a lot of assembly (gotta love it).

But…

Whatif…

(just thinking out loud)

What if an SSL/TLS library were not a (shared) library but an executable command? That way the command can't access the server application's data by itself, only what's sent through the stream of data (think pipes or 9P).

Of course this wouldn't help against Heartbleed, but who could have figured that someone would be dumb enough (even a PhD student) to develop a protocol that bypasses all security measures with plain text?

On the other hand, if the TLS command itself is split into separate processes (keystore, AES, SHA and ECC), I think it could be hardened quite significantly, and as a bonus it would also simplify the code and be more in line with the UNIX ideology of having small tools that do one thing only and do it well.

yesme April 12, 2014 4:14 AM

I forgot one thing. When TLS is written as a command it could be implemented in any language, because the communication is through pipes (or a VFS), so it's just a stream of bytes.
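
A minimal sketch of that pipes idea (error handling omitted; a real design would exec a separate, privilege-dropped binary rather than fork the same program):

  #include <stdio.h>
  #include <string.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void) {
      int to_tls[2], from_tls[2];
      pipe(to_tls);
      pipe(from_tls);

      if (fork() == 0) {                      /* child: the "tls" command       */
          char buf[256];                      /* only ever sees the byte stream */
          ssize_t n = read(to_tls[0], buf, sizeof buf);
          /* ... encrypt/decrypt would happen here; this stand-in just echoes ... */
          write(from_tls[1], buf, (size_t)n);
          _exit(0);
      }

      /* parent: the application never shares an address space with the worker */
      write(to_tls[1], "hello", 5);
      char out[256];
      ssize_t n = read(from_tls[0], out, sizeof out);
      printf("got %zd bytes back: %.*s\n", n, (int)n, out);
      wait(NULL);
      return 0;
  }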

Clive Robinson April 12, 2014 6:31 AM

@ yesme,

    What if

You are not alone in such thoughts.

I'm not sufficiently "indoctrinated" in either OpenSSH or OpenSSL –who is?– to know enough of their ins and outs to know all the relevant vulnerable bits.

But on the assumption it's just keys/encryption, I can see no reason why it can't be split off into a separate process. Which, as Nick P and others will confirm, is a "known solution" when crypto/smart cards are in use or likely to be. Such behaviour was quite common if you can remember back to the days of x86DX CPUs with built-in math co-pros and the x86SX CPUs that did not, but sat on motherboards with sockets for a co-pro chip.

Nick P April 12, 2014 8:33 AM

@ yesme

What you're describing is a form of "assured pipelines." That security model says to decompose the app into communicating components (typical), then enforce a specific information flow between those components. Highly secure platforms such as LOCK supported this structure. A form of it can be done in SELinux, as it's a LOCK descendant. The only actual crypto library I know that does this is cryptlib. It has an internal security kernel that ensures each function properly uses other functions. This doesn't stop, say, a memory attack, but it does allow one to ensure proper use of the crypto primitives.

Main problem I have with your version is that the command processor is in the TCB. The same thing would be nicer if it happened with only simple IPC and input validation. This is essentially what I did with the MILS architecture I referenced, as separation-kernel IPC is fast. They decompose the applications into functions in their own address spaces, communicating with kernel IPC per a communications policy.

assman April 12, 2014 9:04 AM

“It’s time that we stop pretending the C language is a reasonable one for writing secure software.”

NO!!

It's time we stop pretending that C, C++, Unix, Linux and the whole diseased mentality have not horribly and badly damaged computing. The Unix Haters Handbook is right. It has always been right. It will always be right. I am fucking tired of shitty piece of shit Unix nerds not admitting how their preferred systems suck. Repent your stupid evil ways and reform. Unix, Linux, C/C++ and that whole crap shit community is a pure form of computing evil.

yesme April 12, 2014 11:32 AM

@Nick P

I don’t know any simpler IPC than pipes. So I think that’s ok.

It's just that with separate commands / processes you reduce the attack surface (AFAIK). If an attacker can somehow read arbitrary memory then you are screwed anyhow, even with ASLR.

@assman

Here is an interesting link about the UNIX legacy.

sshdoor April 12, 2014 3:38 PM

Oops, sorry for the double post.

@Nick P, about languages, you may want to look at ParaSail, at least the version that can run without a VM. No pointers, no GC, … But it is at too early a stage for now …

HTW April 12, 2014 7:52 PM

The answer to TLS/SSL crap is very very very easy. We need JavaScript APIs to run internal browser crypto routines. Then better models could flourish. Browser devs could just wrap extant C/C++ routines with JavaScript APIs. Easy.

So why not do it tomorrow? Revenue. And you thought FOSS wasn’t about money and all about freedom of choice. Ha. I’m talking to YOU Mozilla. The TLS/SSL racketeering operation is just too good.

This revenue stream is exactly why Mozilla needlessly scares users away from self-signed certs with bogus dialogs.

And you thought Mozilla was all about “principles” lately? My ass.

Vacant Lot Scam

CA Authority Scam

Thoth April 12, 2014 8:13 PM

@HTW
JavaScript needs an overhaul as a language too. It has many deficiencies.

For a security-critical function, C/C++ may not be the best. Using languages that handle memory management without the user needing to intervene is one of the pluses that C/C++ does not provide. Running in a sandbox (VM languages) is a great choice too.

Nick P April 12, 2014 8:59 PM

@ sshdoor

I’ve posted on Parasail before. It’s a very nice language in my opinion. It has a good combination of productivity, performance, and safety. This is unsurprising in that the company backing it is a top Ada vendor. 😉 It’s just too new for production use.

Best thing to do with Parasail is this:

  1. Identify its internal model for implementing the parallel constructions safely and efficiently.
  2. Identify a safe, production-grade language to use for your project.
  3. Implement & test the parallel algorithm in Parasail with stub functions.
  4. Have a tool automatically translate that, via the model, into code in the safe language.

So, now you have the benefits of Parasail, while keeping a solid language. This is the same kind of thing I’ve done for many tools, languages, environments, etc. Writing C++ in BASIC and/or LISP (with code generator) for automating boilerplate, safety-checks, and more was one of my original uses for such a technique.

List of concurrent and parallel programming languages
http://en.wikipedia.org/wiki/List_of_concurrent_and_parallel_programming_languages

(Just a bonus in case someone didn’t know about this index.)

Figureitout April 13, 2014 6:13 AM

The answer to TLS/SSL crap is very very very easy. We need JavaScript APIs to run internal browser crypto routines.
HTW
–Uhhh… You just gave me an aneurysm… That is a failure the nanosecond you state it; sorry to be mean, but you need to be corrected before more damage is done. But good call on piling more crap on top of more crap.

Any devs rushing in w/ Javascript as the answer need to get slapped in the face real quick.

Bill Clontin April 13, 2014 9:22 AM

There’s a guy in alt.comp.anti-virus who knew about the OpenSSL Heartbleed exposure years ago. Maybe he’s got some extra info on this problem.

[QUOTE]

From: Dustin abughunter@gmail.com
Message-ID: <XnsA30CA142042A4C9X238BHEUFHHI5RJ791@94.75.214.90>
Newsgroups: alt.2600,alt.comp.anti-virus,alt.comp.freeware
Subject: Re: Heartbreak virus
Date: Fri, 11 Apr 2014

X-No-Archive: yes

i’ve known about the “error” for two years now. It was in my
little bag of tricks. it’s okay, I’ve still got others that nobodies
published for the idiot savants to fix yet.

Why would I want to fix something that was useful to me and several
other people? 🙂 Stupid person, you are.

[/QUOTE]

Blatant April 13, 2014 9:49 AM

“I’m actually more disturbed by the microsoft closed source crypto libraries.”

If we find out that Microsoft has been selling us out to the NSA, we can sue them. With open source, we can't sue anyone. So they just say, "Sorry, I was a student!" The whole open-source licensing model is built on the concept that programmers cannot be sued for coding mistakes, regardless of how stupid or how malicious. This is what people should be afraid of. They should let go of the notion that open source is secure because it's open, and demand a different software license aside from the open-source ones.

Tom April 13, 2014 10:03 AM

I think it's humorous to point out that up until a year or two ago, most people were sending their Facebook, Gmail, Yahoo!, and similar passwords in the clear, as TLS was opt-in or not available at all.

Also, if users are being asked to change their passwords, what about their credit cards? If someone used their credit card at a vulnerable site, shouldn’t they have their bank re-issue it?

TIM April 14, 2014 3:53 AM

This time must be like Christmas for the NSA & Co., because so many people are changing all their passwords these weeks, and even if the new encrypted channels can't be broken today, the density of fresh passwords that won't be changed for the next few years must be much higher these days than on any other day this year (until now).

I think this should be one good reason to change all passwords in short cycles.

Boni Bruno April 14, 2014 11:59 AM

Checking for vulnerable SSL servers and patching them is a must; finding out what data has been leaked can be paramount for many.

If you have a packet capture fabric in place, using a simple Wireshark filter against the packet data can quickly find the various leaks that have occurred or are occurring on your network…

Below are Wireshark filters you can use against packet data to identify successful exploits of Heartbleed.

((ssl.heartbeat_message.type == 1) && (ssl.heartbeat_message.payload_length > 61))

This filter might result in some false positives depending on whether or not there are legitimate clients out there that use heartbeat payloads > 61 bytes, but 61 seems to be the common number used. This filter will identify heartbeat request packets where the ssl.heartbeat_message.payload_length is larger than normal – a strong indication of an exploit attempt.

(ssl.record.content_type == 24) && (ssl.record.length > 64)

This will identify if the server responded to the exploit.

To see the contents, install your SSL keys in Wireshark and download the packets of interest using the filters above accordingly.

A full write up and snap shots are available at: http://blog.endace.com

Regards,

Boni Bruno

ejhuff April 14, 2014 3:32 PM

Can someone please comment on the pros and cons of using browser-side certificates instead of passwords? Is there a good reason this isn't available as an option on banking sites? startssl.com provides free browser certificates and requires them to register and log in to their site.

Also, Firefox stores the certificates in the "Software Security Device" also used to save passwords, encrypted by the Firefox "Master Password". What "Hardware Security Devices" are available?

Anura April 14, 2014 8:47 PM

@ejhuff

I talked about it a little bit above. The main problem is that it is kind of a PITA to use, and customer support would spend way too much time helping customers use it. You would need a protocol for generating them in a friendly manner for it to be feasible for widespread use.

You could potentially have the server send an HTTP header requesting certificate authentication. If you didn’t support the protocol (or chose not to use it) you would just display a standard login/registration page, otherwise you would probably get a browser dialog confirming you want to register with a certificate or use an existing one. If you use an existing certificate you start a new TLS session using the client certificate, if you need to register you are redirected to a page that contains a form specifically for registration with a certificate. The browser would then submit your request to the server, which would sign the certificate with their private key and send it back to you. By including a list of domains this could possibly work for a limited single-sign-on.

Personally, I would like to replace TLS with a protocol that allows you to just send a public key that they store in their database to look up your user information. Signing a certificate is unnecessary if you are only talking to the issuer of the certificate.

Paul Kosinski April 15, 2014 4:37 PM

“If we find out that Microsoft has been selling us out to the NSA, we can sue them.”

How do we sue Microsoft for complying with an NSA directive (CALEA, NSL)? Anyway, their terms and conditions probably absolve them.

How do we find out? (Cf. NSL gag orders.)

If the only “acceptable” source of software is companies that can be sued, then we will end up only having software backed by the best lawyers, rather than software backed by the best engineers.

Andrey April 16, 2014 2:00 AM

Some time ago, a vulnerability was revealed in OpenSSL, and I guess there’s no programmer who hasn’t been talking about it since then. I knew that PVS-Studio could not catch the bug leading to this particular vulnerability, so I saw no reason for writing about OpenSSL. Besides, quite a lot of articles have been published on the subject recently. However, I received a pile of e-mails, people wanting to know if PVS-Studio could detect that bug. So I had to give in and write this article.

A Boring Article About a Check of the OpenSSL Project: http://www.viva64.com/en/b/0250/

Rob April 16, 2014 10:45 AM

So far I have been unable to find any solid information on how a site or server can determine if they have been attacked with the Heartbleed vulnerability. Is there any fingerprint left in the logs of a standard Apache2/OpenSSL configuration with production level logging (ie. NOT set to 11)?

Anura April 16, 2014 12:56 PM

@Rob

The only place you would likely be able to see this is in TCP logs, which is probably not done unless you explicitly installed a TCP logging utility. That would probably be a gigantic logfile if it was detailed enough to detect the exploit.

Mbck April 17, 2014 5:35 PM

Just a little confused here. I am usually the first to cast aspersions on C, which I refer to as "the portable assembler that thinks it's an HLL". But the problem at issue here appears, to me, to be independent of the language.

What I read in the “offending” code is that the following happened.

  • A function is invoked with two parameters:
    • one is a buffer (B),
    • the other a number (N).
  • The function obtains
    a chunk of memory (M)
    whose length is as indicated by the number (N).
  • The function then copies (B) into (M),
    where it fits.
  • The function returns
    the whole content of the memory chunk (M).

You can write that in C, C#, Java, Pascal or assembly code. There will be no exception, crash or otherwise. You can run this on segment-based virtual-memory machines; it will run and still be wrong.
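
In C, the pattern just described boils down to something like this (a deliberately simplified sketch, not the actual OpenSSL code):

  #include <stdlib.h>
  #include <string.h>

  /* B = received buffer, N = the length the peer *claims* B has.
     Nothing here crashes; the code simply trusts N. */
  unsigned char *echo(const unsigned char *B, size_t N) {
      unsigned char *M = malloc(N);        /* M may hold stale heap data        */
      if (M == NULL)
          return NULL;
      memcpy(M, B, N);                     /* over-reads past B's real end when
                                              N exceeds what was received       */
      return M;                            /* the whole chunk goes back out     */
  }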

There are two problems, but they are independent of C.

One is that the version of malloc used does not zero the contents of (M). The requirements for malloc do not specify contents, and using a version that explicitly zeroes the buffer (e.g. calloc) would, TTBOMK, be too expensive in many crypto or I/O operations.

The other is that the length of B, the length of M, and the number N, should match. That’s a boundary condition testing error that involves a use case that was not envisioned.

Assembly languages should not be blamed for things they didn't cause …

Mbck April 17, 2014 5:49 PM

@sshdoor @Nick P

The only language that fits what is needed for this kind of task is DCALGOL…

:-O

Nick P April 17, 2014 6:22 PM

@ Mbck

It’s a bounds-checking error. Wikipedia article lists these languages as supporting it: “Mainstream languages that enforce run time checking include Ada, C#, Haskell, Java, JavaScript, Lisp, PHP, Python, Ruby, and Visual Basic. The D and OCaml languages have run time bounds checking that is enabled or disabled with a compiler switch. C# also supports unsafe regions: sections of code that (among other things) temporarily suspend bounds checking to raise efficiency.”

Ada was my recommendation for low-level, critical stuff, so the error would have been caught. Pure Java with an AOT compiler would've been fine, albeit slower. The article also notes that ALGOL, the grandfather of languages like C and Go, also had bounds checking. Hoare, its co-designer and a C.S. legend, said this about it:

“A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interest of efficiency on production runs. Unanimously, they urged us not to—they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980, language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law.”

Another note was that certain tagged architectures, like Burroughs B5500 I’ve often referenced, supported it at the hardware instruction level. So, I actually could blame the assembly language for this one. 😉
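
For contrast, here is roughly what that mandatory check looks like when you have to write it by hand in C (a sketch; the languages listed above insert the equivalent automatically on every subscript):

  #include <stddef.h>
  #include <stdlib.h>

  /* What ALGOL/Ada-style bounds checking amounts to: every index is validated
     against the declared bounds before the access is allowed to happen. */
  unsigned char checked_get(const unsigned char *buf, size_t len, size_t i) {
      if (i >= len)
          abort();       /* a checked language would raise an exception instead */
      return buf[i];
  }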

“The only language that fits what is needed for this kind of task is DCALGOL…”

Haha nice. I heard that platform costs trillions to build, but only millions to wield.

Mbck April 18, 2014 5:43 PM

@Nick P

  • “It’s a bounds-checking error”

Well, yes and no. In this specific exploit, it is code written to be overly flexible. If instead of copying anything to a fresh buffer, the code just passed back the reference to the input buffer, there would have been no bounds to check. The design assumed that the length of the input and the length of the output could be different.

And the bounds-checking error, or lack thereof, would have manifested itself with the length difference being the other way: the malloc’ed buffer could have been smaller than the one received.

Actually, this is probably what can be blamed on C: Because such a buffer overrun error is so frequent, the author and the reviewer spent time checking for it – and omitted to check the cases where received size is less than allocated. There’s only so many minutes in a day.

  • “The Bxx00 .. supported it at the hardware instruction level”

Not the only one. Interestingly, if memory serves, the Intel 80286 did support segmented memory, and paged memory only became available with the 80386. So the compilers could have assigned separate segments to each object – but the market (and memory sizes etc.) was not ready for that. Instead, we are left with paging, which I think dates back to the OS/360 and before. BTW, AFAIK, this OS is still with us. Mainframe designs never die.

  • “ALGOL, the grandfather of languages like C and Go”

I take big exception. ALGOL, Pascal (and, I think, PL/I) are block-structured languages. The semantics of these are not supported easily by hardware and require a little more than general-purpose registers and linearly addressed memory. Hardware support for block-structured languages died, AFAIK, with the HP3000 – even though the Burroughs architecture endures in emulated mode (Intel hardware used in a very personal way).

Block-structured languages allow a completely different organization of code – and of memory allocation – than what is possible when all you have is an unstructured heap and a per-process stack. Alas, RISC killed such “complicated” hardware cold.

From what I remember, in Large Systems MCP, the OS was compiled at lexical level 0, device drivers at lexical level 1, applications above. When you needed a local buffer, you declared it at a higher lexical level – it was allocated on your process's stack, no garbage collection required. The OS could address your data; you could not even construct an address to get to the OS data from an app.

This is similar to the design of the THE operating system (the Wikipedia article is good.)

I’d love to see someone with real Large Systems credentials correct me where I get it wrong.


No, C is not a descendant of ALGOL. It is a descendant of PDP-11 Assembler, with a little syntactic sugar added to make it tasty. It is certainly better than straight Assembler, but it does not build anything solid on top of linear-memory-addressing-with-GP-registers.

Nick P April 18, 2014 9:36 PM

@ Mbck

“And the bounds-checking error, or lack thereof, would have manifested itself with the length difference being the other way: the malloc’ed buffer could have been smaller than the one received.”

The gist of what I was saying was: would a safe (dynamically checked) or managed language have prevented it, caught it at runtime, or given the attacker the memory, assuming it was coded as is typical in that language? Certain languages handle it way better than C in both directions on bounds checking, IIRC. Of course, I could be wrong about that.

“Interestingly, if memory serves, the Intel 80286 did support segmented memory, and paged memory only became available with the 80386. So the compilers could have assigned separate segments to each object – but the market (and memory sizes etc.) was not ready for that. Instead, we are left with paging, which I think dates back to the OS/360 and before. BTW, AFAIK, this OS is still with us. Mainframe designs never die.”

Intel still kept supporting the segmentation model. The GEMSOS security kernel (and I think XTS-400 STOP OS) used it to enhance security. The Native Client architecture (2009) that sandboxes Chrome used segments, too, noting that they’re a very efficient for such things. Matter of fact, an Intel document said on Atom processors unprotected access is 1 cycle, segments 2 cycles, and paging 8 cycles. So, it’s still a very efficient protection. That said, AMD eventually ditched it and most recent Intel processors might have. I’m not sure.

Burroughs B5000 had segments in 1961 so it was probably the first. (Many firsts with that one.) The IBM System/360 used keyed memory where a page of memory is associated with a key and a process needs to have the key to access it. Depending on implementation, the concept might involve several keys with a mix of hardware and software enforcement. PA-RISC and Itanium have that feature, too. IBM System Z and Secure 64’s SourceT both use this feature extensively inside the operating system for reliability & security.

“The semantics of these is not supported easily by hardware and requires a little more than general-purpose registers and linearly addressed memory.”

P-code Pascal was ported to something like 70 architectures, 8-bit to mainframes, per Wirth. So, these kinds of languages can work fine with plenty of hardware profiles, amazingly so in case of P-code. They might take more overhead, though, as they’re not just throwing bits around. (eg safety checks, structuring)

“When you needed a local buffer, you declared it at a higher lexical level – it was allocated on your process’ stack, no garbage collection required. The OS could address your data, you could not even construct an address to get to the OS data from an app.”

I didn’t know that. Thanks for the info as I’m quite interested in Burroughs architecture. I keep referencing it here as it’s far superior to most systems today in terms of security and support for high level languages. I think it was just too far ahead of its time. Check out this advertisement I found recently. Feels strange and neat to look back in time at something that tried to look forward in time.

“This is similar to the design of the THE operating system (the Wikipedia article is good.)”

That and A1-class systems are how I learned layering back in the day. 😉

“No, C is not a descendant of ALGOL. It is a descendant of PDP-11 Assembler, with a little syntactic sugar added to make it tasty.”

C is a descendant of B, a variant of BCPL. BCPL was a watered-down version of CPL designed to be easier to compile. Strachey et al., per their paper on CPL, required readers to be familiar with ALGOL 60, as CPL was heavily influenced by it, although more complex. The lineage is ALGOL60->CPL->BCPL->B->C. So, they're related. It's just that ALGOL's good design was so mutated and watered down that the end result looked like a cross-platform assembler for, say, a PDP-11. 😉 And its safety, maintainability, and ease of integration follow from this. It's real fast though!

Mbck April 20, 2014 8:26 PM

@Nick P

Many thanks for this thread. On the Large Systems, I would still recommend the book by E. Organick, Computer System Organization (ACM Monograph). The notes by Jack Allweiss ( http://jack.hoa.org/hojaa/b5900.htm) are a more direct narrative of the environment in which the Large Systems philosophy developed.

The Large Systems Architecture (e-mode) survives to this day in the Unisys Libra/MCP systems. Getting information about how these machines work is essentially impossible. The book by Organick was an anomaly in its time, and the Unisys philosophy about disclosing, or, worse, advertising architectural choices has not changed since (please prove me wrong).

I take note of your remark about P-code; but P-code is an interpreter, and therefore incurs a penalty compared to running directly on the hardware: This is what I was aiming at. Recent languages lay a thin layer on top of the flat-memory-and-registers hardware model and try to stick to it as closely as possible – witness the amazing results of work in code optimization.

IMHO, because of this idea of squeezing the last instruction cycle out of code, we lost sight of code robustness. In the meantime, the notion that there could be a different architectural model than the current one is completely forgotten, or, worse, taught as an aberration of people who didn't know better.

It’s difficult to reintroduce ideas like block-structured languages and cactus stacks. “You can’t have programmers write malloc and free calls — THEREFORE you need automatic garbage collectors” is essentially impossible to fight.


“And it’s safety, maintainability, and easy of integration follows from this. It’s real fast though!”

… as you would expect from an assembler 🙂

Nick P April 21, 2014 1:23 PM

@ Mbck

“On the Large Systems, I still would recommend the book by E. Organick on Computer System Organization ACM Monograph, . The notes by Jack Allweiss ( http://jack.hoa.org/hojaa/b5900.htm) are a more direct narrative of the environment where the Large Systems philosophy developed.”

Links dead. Thanks for the book reference. I’ll look into it sometime.

“The Large Systems Architecture (e-mode) survives to this day in the Unisys Libra/MCP systems. ”

Sort of. I've looked into them. They're more like current machines that emulate the old ones. The biggest strength of the B5000 line was that it enforced in hardware the foundation of system safety/security. Changing that to what's essentially a virtual machine… well, look at Java. 😉

“To get information about how these machines work is essentially impossible to do.”

There's plenty of information out there. I got a lot of technical information just Googling "Burroughs architecture," "B5000," and so on. There's also the capability-based security book available free online that details many descriptor- and capability-architectures. For composing it into a NUMA or mainframe machine, I don't think we need to know how Burroughs/Unisys did it as much as look at similar machines with public info. For NUMA or interconnects, I always thought SGI was light-years ahead of the competition, so I'd copy them. There are also many academic papers, prototypes, designs, etc. for building large systems.

So, it doesn’t worry me. We only need to understand at an abstract level how they worked. Then, improve the design to help cover today’s threats. Then, have pro’s build it, others evaluate it, and a fab produce it. Simple as that. Expensive, though.

“I take note of your remark about P-code; but P-code is an interpreter, and therefore incurs a penalty compared to running directly on the hardware: This is what I was aiming at. Recent languages lay a thin layer on top of the flat-memory-and-registers hardware model and try to stick to it as closely as possible – witness the amazing results of work in code optimization.”

Definitely a penalty and what you say about mainstream systems languages is true. Compiler science has narrowed the gap you mention more than ever. Hope isn’t lost for robust systems programming on mainstream architectures. We at least have the likes of Ada and Modula for safe[r] systems programming with little performance hit. There’s others in development. I’d rather have hardware support, though.

“IMHO, because of this idea of squeezing the latest instruction cycle out of code, we lost sight of code robustness. In the meantime, the notion that there could be a different architectural model that the current one is completely forgotten, or, worse, taught as an aberration of people who didn’t know.”

Exactly. That’s totally what’s happened and is happening.

“It’s difficult to reintroduce ideas like block-structured languages and cactus stacks. “You can’t have programmers write malloc and free calls — THEREFORE you need automatic garbage collectors” is essentially impossible to fight.”

The battle I’ve been fighting… Funny you mentioned cactus stacks. I had totally forgotten about them. Looking it up, I found that Burroughs used that (more brilliance) and it was used in this parallel machine that Tandem had a part in. I didn’t know about this machine before. So, this conversation was both interesting and rewarding.

“… as you would expect from an assembler :-)”

Lol. They started with a good HLL for systems, then worked from there to re-invent assembler in more portable form. Man years of research well spent, eh? 😉

Villy Madsen April 22, 2014 12:18 PM

C Strikes again???
Yes, I know that applications development would grind to a screeching halt if programmers were forced to use languages that required you to specify the maximum size of a variable (or array) during program development.

On the other hand, it seems to me that debugging was easier and faster in those bad old days of Fortran, Cobol & PLI. (although I think that I would prefer Pascal these days).

It reminds me of a certain DB product running under VAX/VMS. There was a hardware feature that prevented execution of code in data space. You had to turn that feature off if you wanted to use that Database Product. – At least the feature was still enabled for everything else running on the box, but….

Villy G. Madsen CISA

Mbck April 24, 2014 12:46 PM

@Nick P

Sorry about the dead link — I just tried this one — http://jack.hoa.org/hoajaa/BurrMain.html — which, from here, is live.

Some quick notes, and more thanks…

  • Yes, MCP-on-Intel is emulated. From what I remember, and which is somewhat confirmed in Allweiss’s notes, this is the result of an effort to formally document the instruction set for Large Systems – “e-Mode” for “emulated mode”. Up to the B6900, each machine model was slightly different from the others, because of the changes in the type of logic being used – that world was changing fast then.

It did not matter too much, since the Burroughs compilers were very fast and everything was available as source code: you got a new iron, brought up a minimal MCP that ran on the intersection of all instruction sets (…), compiled the compiler for that specific box, then compiled the MCP, the libraries, … and eventually the application programs. Still if an installation had several boxes, it was a nuisance.

I would put e-Mode in the same class as p-code. Java tries to be much more, emulates much more than just an instruction set, and gets immensely complicated because of that.

But I agree: what we need is not to revive the B5500 product line with all its warts and limitations, but to resume learning about NUMA, block-structured languages, and proper encapsulation as close to the hardware as possible.

  • Cactus stacks and Tandem. I quickly looked at the article; while the need for a "cactus stack" exists both in any complex MCP program and in the environment they describe, they apparently serve different purposes. The Tandem article does not even reference Large Systems. Interesting other anecdote: at Burroughs, I heard the tale that one of the Tandem architects was one of Burroughs' own hardware developers – I never could verify it, and now I do not remember his name. Wikipedia pays tribute to the B5500/6800, but traces Tandem back only to the HP3000.
  • C, HLLs and assemblers. From what I remember, the main push for C in early UNIX was to write as much as possible in a portable language. A few parts of the OS were still written in PDP-11 machine language, but I would say only about 2%. The emphasis was on extensibility, portability, and reasonable efficiency.

C delivered on most counts. For extensibility, it forgot about adding language constructs for new domains and relied on libraries instead. Portability was actually limited to machines addressable in 8-bit bytes — I remember an attempt to port UNIX to a Varian 620, which was word-addressable; no good. But the industry eventually went to byte addressability and the problem went away.

Secure programming was not a big concern; space was, since the design was for a machine that could only address 64Kbytes per process space. Hence things like zero-terminated strings and the presence of strcpy in the libraries.
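To make the strcpy point concrete, here is a minimal C sketch (illustrative only, not code from early UNIX) of the hazard and of a bounded alternative:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char buf[8];
        const char *input = "definitely longer than eight bytes";

        /* strcpy() copies until it finds the terminating NUL in input,
           writing far past the end of buf: undefined behavior, and the
           root of countless buffer overflows. Left commented out here. */
        /* strcpy(buf, input); */

        /* A bounded copy keeps the write inside the buffer and still
           guarantees NUL termination; the string is simply truncated. */
        snprintf(buf, sizeof buf, "%s", input);
        printf("%s\n", buf);   /* prints "definit" */
        return 0;
    }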

We should learn from the car industry. It also started out competing on speed (and luxury) with little attention to reliability and safety, and it has come a long way since. Hopefully computing practice will, too.

Just as thirty years ago the slogan was “quality is free”, we should now say that “security is free” as well. It will take twenty years.

Cheers!

Bryo April 30, 2014 2:50 PM

Catastrophic is, indeed, the right word. Luckily the Linux Foundation has intervened, and OpenSSL will be getting much-needed funding. It’s definitely a step in the right direction.

katie May 4, 2014 2:06 PM

I have OpenSSL 1.0.1f and OpenVPN 2.0.5 installed and am still not vulnerable. Do you know any solutions to make my server vulnerable? Thanks

Buck May 4, 2014 11:12 PM

@katie

I have OpenSSL 1.0.1f and OpenVPN 2.0.5 installed and am still not vulnerable. Do you know any solutions to make my server vulnerable? Thanks

First things first: if you have any sort of device in between your server and the connection from your service provider, please remove it now! Then disable (or open all ports for) any software-based firewalls on the server. Obviously, if you have implemented any IPS solutions, you’ll want to uninstall those as well… There is an exception to these rules, however: if the system is fairly common, and especially if it has an internet-facing administration portal, you may want to leave it running with the default username and password. Submit the publicly available address to some of the major search engines for extra credit! 😉

Another step you may wish to take is enabling anonymous and (if your OS allows it) passwordless account access. If you’re operating a *nix system, you might try running passwd -d root, and if you are running an SSH service (if you’re not, go ahead and set one up now) you will probably also be interested in the ‘PermitEmptyPasswords’ and ‘PermitRootLogin’ directives. Actually, I’m not 100% sure about that advice… no-password logins may be rare enough that you’d be better off using a very common password, like ‘password’ or ‘12345’ – your mileage may vary.
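For what it’s worth, those two directives are real OpenSSH options. A sketch of the deliberately terrible sshd_config fragment being joked about here – assuming a stock OpenSSH server, and very much not to be copied – might look like this (the PasswordAuthentication line is my addition, since password logins have to be enabled for an empty password to matter):

    # The insecure settings being satirized above -- do NOT use these.
    PermitRootLogin yes
    PermitEmptyPasswords yes
    PasswordAuthentication yes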

One way to test the optimal level – I’ll just call it ‘ownability’ for the purpose of our discussion – of your system is to set up some so-called honeypots. These may be real or virtual servers designed to function exactly like your real server, except for a few key differences… For example, you may have no password set on your main server, while on your honeypots you try various common words and phrases. Pay close attention to the logs, and if it appears that any of these candidate passwords would result in more efficient ownability, you may consider changing the password on the server you wish to be more vulnerable…

This last tidbit of advice is by no means the least, and it will certainly require more vigilance than many other security-avoidance techniques! You should constantly scan the bug fixes, errata, and patch notes from the developers of the operating system you’re using. Also see the CVE lists compiled by the MITRE Corporation and the U.S. DHS’s NVD. If your configuration seems vulnerable (even better, test it yourself!), you’re pretty much golden, though you’ll need to stay away from any patches that could close these holes… Bonus points if you go out of your way to install vulnerable versions of network services! (Their ‘threat scores’ aren’t always an appropriate measure, so you’ll want to remain alert for descriptive phrases such as ‘arbitrary code execution’ and ‘remotely exploitable’.)

If this feels daunting at first, fret not! You can always take it one step at a time… and you may find some solace in knowing that your server was likely delivered from the factory in a vulnerable state, thanks to an arrangement between any one of your favorite TLAs and a manufacturer near you! 😉

T May 4, 2014 11:40 PM

@Buck
Katie wouldn’t get hacked with that setup; I don’t think anyone would, not even by worms 🙂

Nick P May 4, 2014 11:46 PM

@ Buck

LMAO. That’s the most epic treatment of the topic I’ve seen to date. I’m adding it to my link collection.

Nick P May 5, 2014 1:04 AM

@ T

Actually, Katie’s rig following Buck’s advice to the letter might be totally secure from remote attackers. I just noticed this one imperfection in his scheme:

“First things first: if you have any sort of device in between your server and the connection from your service provider, please remove it now!”

Service providers typically ship a device, like a modem, that sits between the “server and the connection from… service provider.” Removing it disables the Internet. Removing the Internet should make the system quite secure from Internet threats, which make up 99+% of attacks. 😉

Of course, the system will still have trouble if it also has built-in wireless that automatically (and conveniently!) connects to nearby access points. cough Mac OS X cough I figure the latter feature would be included in Buck’s recommendations if they covered wireless networking.

AlanS May 5, 2014 9:56 AM

@Buck

I think you have stumbled across a new security paradigm called “security through visibility”. I believe that, by manipulating attackers’ sense of suspicion, this will turn out to be a vastly superior strategy to security through obscurity. If your system is too easy to break into – if it appears to be the lowest of the low-hanging fruit – the bad guys will suspect a trap and stay away. It also keeps the NSA from snooping around, since your level of NSA-suspiciousness is directly proportional to the number and sophistication of the security controls you have implemented. As the saying goes, you have nothing to fear if you appear to have nothing to hide.

pax September 22, 2014 7:04 PM

I forgot to mention that, in my opinion, whoever creates these scripts is a company, not an individual. They set up marketing EULA licensing agreements sent to mailing lists starting from my contacts and expanding to Twitter, Facebook, YouTube, etc. They accept wallet payments. My laptop turned into some sort of online gamer’s system where they even cleaned out the help files/menus and offered Halo and Shrek “latest versions available” instead. My phone, though, seems to favor Google apps and androids?! I’m a complete novice when it comes to this device (I didn’t get a chance to learn it) and am bamboozled about where oss appeared from to start with, so this is my opinion only. I have no facts, just a headache from trying to read scripts and chasing androids around night after sleepless night, trying desperately to kill the little glitter. I hope you all have better luck than I did. And bugger those responsible for truly ruining what could be an absolute delight.
