More on Heartbleed

This is an update to my earlier post.

Cloudflare is reporting that it's very difficult, if not practically impossible, to steal SSL private keys with this attack.

Here's the good news: after extensive testing on our software stack, we have been unable to successfully use Heartbleed on a vulnerable server to retrieve any private key data. Note that is not the same as saying it is impossible to use Heartbleed to get private keys. We do not yet feel comfortable saying that. However, if it is possible, it is at a minimum very hard. And, we have reason to believe based on the data structures used by OpenSSL and the modified version of NGINX that we use, that it may in fact be impossible.

The reasoning is complicated, and I suggest people read the post. What I have heard from people who have actually run the attack against various servers is that what you get is a huge variety of cruft, ranging from indecipherable binary to useless log messages to people's passwords. The variability is huge.

This xkcd comic is a very good explanation of how the vulnerability works. And this post by Dan Kaminsky is worth reading.
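For readers who prefer code to comics: the entire vulnerability is one missing bounds check in OpenSSL's heartbeat handler. Here is a minimal sketch in Python standing in for the real C code; the function name and the fake memory contents are invented for illustration.

```python
# Toy model of the Heartbleed bug (the real code is C inside OpenSSL).
# The server echoes back as many bytes as the *request claims* its
# payload contains, without checking the claim against reality.

def heartbeat_response(process_memory: bytes, payload_offset: int,
                       claimed_len: int) -> bytes:
    """Vulnerable handler: trusts the attacker-supplied length field,
    so it can read up to 64 KB past the real payload."""
    return process_memory[payload_offset:payload_offset + claimed_len]

# Process memory: a 5-byte heartbeat payload followed by unrelated secrets.
memory = b"HELLOsecret_session_token;user=alice;pw=hunter2"

honest = heartbeat_response(memory, 0, 5)     # well-formed request
overread = heartbeat_response(memory, 0, 64)  # Heartbleed-style request

print(honest)                     # b'HELLO'
print(b"pw=hunter2" in overread)  # True: adjacent memory leaked
```

The fix, as deployed, is simply to discard heartbeat requests whose claimed length exceeds the payload actually received.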

I have a lot to say about the human aspects of this: auditing of open-source code, how the responsible disclosure process worked in this case, the ease with which anyone could weaponize this with just a few lines of script, how we explain vulnerabilities to the public -- and the role that impressive logo played in the process -- and our certificate issuance and revocation process. This may be a massive computer vulnerability, but all of the interesting aspects of it are human.

EDITED TO ADD (4/12): We have one example of someone successfully retrieving an SSL private key using Heartbleed. So it's possible, but it seems to be much harder than we originally thought.

And we have a story where two anonymous sources have claimed that the NSA has been exploiting Heartbleed for two years.

EDITED TO ADD (4/12): Hijacking user sessions with Heartbleed. And a nice essay on the marketing and communications around the vulnerability.

EDITED TO ADD (4/13): The US intelligence community has denied prior knowledge of Heartbleed. The statement is word-game free:

NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.

The statement also says:

Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

Since when is "law enforcement need" included in that decision process? This national security exception to law and process is extending much too far into normal police work.

Another point. According to the original Bloomberg article:

http://www.bloomberg.com/news/2014-04-11/nsa-said-to-have-used-heartbleed-bug-exposing-consumers.html

Certainly a plausible statement. But if those millions didn't discover something obvious like Heartbleed, shouldn't we investigate them for incompetence?

Finally -- not related to the NSA -- this is good information on which sites are still vulnerable, including historical data.

Posted on April 11, 2014 at 1:10 PM • 178 Comments

Comments

Walter • April 11, 2014 1:26 PM

The problem is that this "attack" doesn't leave traces (or, in real life, it's very hard to find someone who would store all data served from their servers in some log). Even if it were hard to leak the private key, that doesn't exclude the possibility that some other attack could make a server reboot (or perhaps force so much memory allocation that the server swaps something to disk and reads it back into memory) so that it would be easier to get such keys.

Matthew Weigel • April 11, 2014 1:41 PM

Reading the CloudFlare article, I don't think they are saying "Heartbleed probably can't be used to acquire private keys." I think they are saying "Heartbleed probably can't be used to acquire private keys from OUR systems." Can anyone say that unmodified Nginx has the same characteristics? What about Apache? (Which version?) What about Lighttpd, Dovecot, Sendmail, Exim, Postfix, Courier, OpenLDAP, and every other application that implements a standard protocol that is often or typically protected with SSL as implemented by OpenSSL?

What about the reports I've seen elsewhere that "first SSL connection after process start" might be the one that can get an SSL key - does CloudFlare's "hack me" stunt address that possibility? (I don't believe it does.)

That adds a lot of nuance, and points toward having a lot more factors you may want to consider in determining your response. However, the complexity and number of factors to investigate before developing confidence that your secret keys are in no way at risk leads me to think that you may be better off just assuming that your secret keys are at risk.

Simon • April 11, 2014 1:45 PM

Walter, it is leaking memory from the current process. So what leaks will depend on architecture details, memory layout, and the precise details of the code. Most OSes protect against reading memory that is freshly allocated, to stop one process reading data other processes had in memory, so what is paged to disk should be irrelevant. I can quite believe that in tightly written code the SSL key is read once, never freed, and sits a long way from user activity in RAM.

bob t • April 11, 2014 2:04 PM

The real problem is that this will go out into the media and people will start receiving emails that say: "Hello Facebook User, Due to the Heartbleed vulnerability we are asking all of our users to click here (badguys.com) to reset your password immediately, or your account will be closed in 24 hours. Sincerely, Facebook Administrator"

nobody • April 11, 2014 2:04 PM

That article about the marketing aspects of heartbleed triggered the following indirect chain of reasoning in me:

Doesn't the name "heartbleed" remind you quite a bit of an NSA codename (two simple words combined to create an easy-to-remember unique codename)?

OK, maybe such codenames have just come into fashion (or have already been in other organizations?), but let's still suppose for fun that the NSA knew about the vulnerability for at least a week longer than the public and had some time to investigate what it can do worst case.

Now, under those circumstances, would "heartbleed" seem more appropriate for a backdoor that can reveal only passwords or for one that can also reveal private keys?

Would seem that the codename would fit the latter case better, but I guess that is not certain even under the assumptions made here.

Darragh Delaney • April 11, 2014 2:10 PM

Hi Bruce,
We have just released an update to our LANGuardian product which can detect SSL/TLS servers and alert if it detects Heartbleed exploits. It uses packet data as a source. I can arrange to get you a personal copy of this software if you want it for test purposes.

Darragh
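A packet-level detector of the kind described above essentially flags heartbeat requests whose claimed payload length cannot fit in the TLS record that carried them. A rough sketch follows (illustrative only, not LANGuardian's actual logic; field offsets follow RFC 6520):

```python
import struct

TLS_HEARTBEAT = 24  # TLS record content type for heartbeat (RFC 6520)

def looks_like_heartbleed(record: bytes) -> bool:
    """Flag a heartbeat request whose claimed payload length exceeds
    the payload the record can actually contain."""
    if len(record) < 8 or record[0] != TLS_HEARTBEAT:
        return False
    (rec_len,) = struct.unpack(">H", record[3:5])   # record length
    hb_type = record[5]                             # 1 = request
    (claimed,) = struct.unpack(">H", record[6:8])   # claimed payload len
    # body = 1 type byte + 2 length bytes + payload + at least 16 padding
    room_for_payload = max(rec_len - 3 - 16, 0)
    return hb_type == 1 and claimed > room_for_payload

# Malicious: record carries a tiny body but claims a 16 KB payload.
evil = bytes([24, 3, 2]) + struct.pack(">H", 4) + bytes([1]) \
       + struct.pack(">H", 0x4000) + b"x"
# Benign: 5-byte payload, correctly declared, with 16 bytes of padding.
benign = bytes([24, 3, 2]) + struct.pack(">H", 24) + bytes([1]) \
         + struct.pack(">H", 5) + b"AAAAA" + b"\x00" * 16

print(looks_like_heartbleed(evil))    # True
print(looks_like_heartbleed(benign))  # False
```

This only catches the canonical malformed request; a monitor would also want to flag unusually large heartbeat responses, since the request itself may travel encrypted.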

Peter A. • April 11, 2014 2:15 PM

@nobody: the hacker community likes such word-games, where deliberate mispronunciation or misspelling makes for a comical, topical, or otherwise meaningful and "witty" result. So this "heartbeat - heartbleed" alteration is no surprise and very much in style.

Some very old examples are here: http://www.catb.org/jargon/html/soundalike-slang.html

AlexT • April 11, 2014 2:21 PM

In case anyone missed it:

Now if this is true - and I'd be really surprised if Bloomberg doesn't have some grounds to put this out - I guess it's really game over.

Walter • April 11, 2014 2:23 PM

@AlexT: Do you mean the "Millions of Android Devices Vulnerable to Heartbleed Bug" or the "NSA Said to Have Used Heartbleed Bug, Exposing Consumers" ? Or even the "Heartbleed Found in Cisco, Juniper Networking Products" ?

LGW • April 11, 2014 2:27 PM

I'm guessing AlexT is linking the Bloomberg article about the NSA. I guess I can't link it either, so here's the key snippet (is anyone surprised?):

The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.

The NSA’s decision to keep the bug secret in pursuit of national security interests threatens to renew the rancorous debate over the role of the government’s top computer experts.

Tuck • April 11, 2014 2:28 PM

I once used a free-ware CD ripper. It had one restriction: it would only rip half the tracks on the CD, and it picked them at random. I thought I could live with that, but I quickly paid for the full version.

I suspect that this exploit is only going to be marginally useful: unless you get lucky, you're going to have to hit the server so many times that it would probably be equivalent to a DoS attack...

nobody • April 11, 2014 2:55 PM

@Peter A.
"So this "heartbeat - heartbleed" alteration is of no surprise and very much in style"

Hmn, coming to think of it - "Herzblut" (n.) or "herzbluten" (v.) is a common German term to express something of a very vital nature - "herzbluten" literally translated would be "to heartbleed".

So maybe even a German speaking origin of the term? Would not be unexpected with the developer being German if it was a planted backdoor, maybe?

But, of course, this is speculation, for example one of the discoverers of the bug could have known German well enough, etc.

Still, conspiracy theorists have recently been more often provably right since Snowden, so I remain curious... ;)

Anura • April 11, 2014 3:03 PM

What we need is some sort of security agency on the national level that hires all the best hackers and cryptographers and tries to find flaws in protocols and software that are used by the government (as well as major players in the private sector), and then works to get them fixed as soon as possible. If we had an agency doing this, then maybe it wouldn't have become so widespread before being fixed. It seems like that is in the best interest of our national and economic security.

nobody • April 11, 2014 3:07 PM

"Now if this is true - and I'd be really suprised if Bloomberg doesn't have some grounds to put this out - I guess it's really game over."

Hmn, maybe America is still "working" in the sense that excessive power is not tolerated in the end and reduced - if regular means do not get you there, you find a different way, like getting Al Capone for tax evasion?

So, a covert operation aiming to expose the NSA as the reason why millions of citizens have to change their passwords and be afraid they could have been stolen, leading in the end to an NSA that has less power and is investing more of its energy into making computer systems more safe than the opposite?

Would be nice, if true - and if so - thanks a million to the ones who had the guts to do it :)

KnottWhittingley • April 11, 2014 3:19 PM

Just for explicitness, here's the first para of what Bloomberg is reporting re Heartbleed:

The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.

The NSA declined to comment.

UKNOWWHO • April 11, 2014 3:23 PM

Rumor around the office is that "somebody who knows somebody over at Google" says that the scuttlebutt over there is that Google engineers have been monitoring traffic across their private networks, placing 'tag' data in the stream, then looking for this tag data moving across other networks using stealth taps of their own.

Sure enough, they saw copies of (or at least hashes of) their data where it shouldn't be and investigated.

Kind of makes sense. I guess the NSA is going to have to start encrypting THEIR private data links!

Anura • April 11, 2014 3:28 PM

Re: Bloomberg Article

Unless we have leaked documents or something other than anonymous sources, I think we should proceed with caution. I mean, it fits in with everything we've learned, but what I want is solid proof. If we have that, we can make a case that the NSA is actually hurting national security, with a widely publicized exploit to go with it.

AnonDev • April 11, 2014 3:30 PM

KnottWhittingley • April 11, 2014 3:19 PM wrote:

"Just for explicitness, here's the first para of what Bloomberg is reporting re Heartbleed:

The U.S. National Security Agency knew for at least two years about a flaw"

Sure. With the number of employees they have I think we can assume they are sitting on top of every code repository of any significance and in real-time examining any checkins for anything useful.

Curious • April 11, 2014 3:30 PM

As someone who isn't a computer engineer or anything, I want to ask:

Can/could the Heartbleed bug be used to attack anyone's computer that makes use of the flawed OpenSSL code, simply soliciting memory content from that machine and probably acquiring any data related to whatever usernames and passwords are circulated on that machine throughout the day? (Basically learning every username and password a user might have.)

Like, an attacker coming up with an excuse for contacting anyone's client machine that happens to require a secure connection over SSL.

The George John Dasch Medal for valor • April 11, 2014 3:33 PM

Oh boy, the next Snowden! An illegal, TOP SECRET disclosure by an NSA insider threat, to their criminal accomplices at Bloomberg.

"The agency found the Heartbleed glitch shortly after its introduction."

This new whopper shows NSA's hysterical panic. Even Rogers, dumbest American commander since William Winder, can dimly sense that his agency is in deep deep shit. If this historic goat rope threatens to run into money, NSA is going to get rolled up like the Diocese of Fairbanks. To protect NSA from summary liquidation, Rogers is fecklessly attempting to characterize NSA sabotage as a catastrophically destructive wrongful omission instead of an act. As if that's going to make a difference once the open-source evidence gets arrayed.

AnonDev • April 11, 2014 3:35 PM

UKNOWWHO re Google scuttlebutt

It is interesting that there still seems to be no official advice or statement from Google to change passwords, as of now their security blog still states:

"We’ve assessed this vulnerability and applied patches to key Google services such as Search, Gmail, YouTube, Wallet, Play, Apps, and App Engine. Google Chrome and Chrome OS are not affected. We are still working to patch some other Google services."

(From http://googleonlinesecurity.blogspot.com/ )

Maybe they are hinting they still do not know all the implications.

Pete • April 11, 2014 3:36 PM

Here's how you access private keys:

1. Send legitimate heartbeat packets to an array of hosts at port 443 once every 30 seconds or so.
2. When you detect packet loss on 443 traffic to a site (indicating a restart), start pinging that IP with heartbleed messages.
3. Returned packets may contain a portion of the private key, e.g., for Apache servers on restart.
4. Solve for private key either through serial application of this process or by using a portion of the private key to seed a brute-force factoring of the public key.

It seems unlikely that anyone but the NSA or a major backbone provider could put the pieces in place to do this -- there would be very low yield, but for actions that can be automated, very difficult == possible.
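For what it's worth, the approach later reported for the Cloudflare challenge was simpler than brute-force factoring: scan each leaked chunk for a run of bytes that evenly divides the public modulus N, i.e. hunt for one of the two primes directly. A toy sketch, with tiny invented primes and a fabricated dump (real keys are 1024-bit or larger):

```python
# Toy demonstration: recover an RSA private key from leaked memory by
# scanning for a factor of the public modulus.

p, q = 1000003, 1000033   # toy primes; a real modulus uses huge ones
N = p * q                 # public modulus, known to any client

# Pretend this is Heartbleed cruft with p's bytes buried in the middle.
# OpenSSL stores bignums in little-endian words, so we scan that way.
leaked = b"\x17junk" + p.to_bytes(3, "little") + b"more junk\x00\x00"

def scan_for_factor(dump: bytes, modulus: int, width: int = 3):
    """Return the first width-byte window that evenly divides modulus.
    A real scan would try several window widths and alignments."""
    for i in range(len(dump) - width + 1):
        candidate = int.from_bytes(dump[i:i + width], "little")
        if candidate > 1 and modulus % candidate == 0:
            return candidate
    return None

factor = scan_for_factor(leaked, N)
print(factor, N // factor)  # 1000003 1000033
```

Once one prime is in hand, the other is N // p and the private exponent follows by ordinary arithmetic; no factoring of the public key is needed.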

BleedingHeart • April 11, 2014 3:43 PM

The bottom line is this: Computers are not secure and probably never will be secure. This is especially true when you have an agency with billions to spend subverting your security (and an agency that can compel software and hardware makers by LAW to cooperate). The game is fixed. We are playing against the house, and the house laughs at us as we play games that return negative expected value. Sure, we might win a game here and there, but it's just an illusion. Just like in Vegas, if you play long enough, you will lose money over the long-run.

I have a feeling that the Snowden documents are just the tip of the iceberg. I bet the NSA is dug deep into every major piece of security sensitive software and hardware (no doubt Apple and Microsoft are cooperating). I doubt there is any hardware (or software) that is not subverted. This includes open-source. We have to assume *everything* we say or do electronically can be seen or heard by NSA (and it seems this is no exaggeration). Two years ago people would say "paranoid" or "tin-foil hat." Now the tin-foilers are yelling "told you so."

The best we can do is to fix major flaws that allow script kiddies to steal CC numbers. However, we will never stop determined, smart, and well funded adversaries. Simply won't happen. Sure, we might have an illusion of security, but you can bet your ass that NSA (and others) will always work to subvert it behind closed doors. We may find a flaw like heartbleed, patch it, and give a sigh of relief. The truth is, it is just one exploit out of NSA's *thousands* of critical flaws that do the same thing. The Heartbleed fix has done nothing to thwart NSA surveillance.

I never believed NSA's "dual mission." That is just a front for their intelligence gathering; a means to placate the skeptics and to fool foreign governments. Now we know that all along they were rigging encryption standards (something I have always suspected). I don't trust any crypto that is NSA vetted. Now I even question AES. What NSA has given us is crypto that can keep secrets from your kid sister. That's about all it's good for.

Moral of the story: do not do or say anything on the Internet (even if it is heavily encrypted) that you do not want made public to the world. Period.

David • April 11, 2014 3:53 PM

"NSA knew" story

If it's true that the NSA has been aware of and has used Heartbleed since right after it was introduced, that's quite a revealing fact.

It implies that the NSA has a large staff of programmers who do nothing but search open-source code for new vulnerabilities. I'm thinking dozens to hundreds of folks who constantly pull Git and SVN updates from open-source projects as they are submitted and then review them for weaknesses.

Just an Australian • April 11, 2014 3:56 PM

Are the authors of openSSL aware that it is critical public infrastructure subject to active attack? Have they heard of the kind of engineering you do in response to this? Have they heard about TDD? Have all the people who used openSSL in spite of its shoddiness heard of these questions too?

I'm a user of openSSL. But not for long...

John • April 11, 2014 3:59 PM

@Curious

I don't think this is a concern. As you have probably gathered, in the vast majority of cases OpenSSL is running on the server side and can dump process memory to anyone who queries the server.

(I think) it's pretty uncommon that OpenSSL is the SSL software on the client side - at least when the client is a browser. Even if OpenSSL were running on your machine (aka the client), the server side would have to exploit Heartbleed - perhaps by getting you to navigate to a URL deliberately trying to exploit it (like a phishing link). An attacker wouldn't be "dialing into" your computer.

Even then, what's exposable is OpenSSL's working memory at a given moment in time. So more data related to where you're visiting "right now", not where you've already been.

Perhaps someone else has comments on this?

The NSA • April 11, 2014 4:00 PM

If I were the NSA, I'd definitely plant agents that have the EXACT profile these guys in Europe who checked & approved this code have:

Ph.Ds who have pseudo 'credibility' in the open-source / security community, who have a (somewhat short) history 'contributing' fixes.

Just enough credibility to slip a massive & untraceable vulnerability under the rug & 'review' it.

It should be interesting to watch what journalists turn up on these guys.

Was it a legitimate mistake, or were they NSA agents? Stay tuned...

I think the core problem here is that it took only one guy to review and approve the change. That has got to change.

Figureitout • April 11, 2014 4:15 PM

BleedingHeart
--We all have our days when we feel like giving up; the fact of the matter is there are too many ways to relay a message and too much data occurring nonstop everywhere, so NSA still won't see real threats or "surprises". The first thing that young college grads can do is, if you're recruited by NSA, tell them to suck it. Destroying the trust of the world in the US and leaving all our products and protocols open to buffer overflows and malicious circuits is not in the interest of national security. I wonder how much US IP has been stolen as a direct result of NSA and other contractors letting our tech infrastructure rot and be neglected. I recall a recording where NSA was hiring linguists at a college and students actually stood up and called the recruiters out; the recruiters eventually left, after making the stupidest arguments.

Next is to begin planning how you want to make a "basis of trust". It will be layers and steps, as in slowly incrementing towards a nice isolated machine w/ defenses set up. Me (I'm just getting my feet wet and won't say anything is secure for awhile), mostly Nick P, NWFOR, Petrobras and others are looking towards endpoint security and secure languages. Network engineers, guys/girls that like networks and nodes, we need new protocols. Phone networks that get owned trivially, wifi, again it's owned w/ tiny scripts as the protocols allow it. We always need more crypto (Anura's looking into that) as OTP's are very impractical.

I would say the guys doing it for money, the private security groups, that's great; people doing what they love for money. At the same time, people working on security *in their free time*, those are the ones that may be a little more trustworthy. Find an area you think you can contribute (and importantly educate simply to others) and help out. Not today though.

KnottWhittingley • April 11, 2014 4:20 PM

AnonDev:

Sure. With the number of employees they have I think we can assume they are sitting on top of every code repository of any significance and in real-time examining any checkins for anything useful.

Now that we know about Heartbleed, and that it's such an obvious bug, it would indeed have been surprising if the NSA hadn't known about it right away.

I am a bit surprised that they sat on it for two years, given that it's so obviously a gaping security hole that any code review should have caught.

I would have thought that they have enough less-obvious ways into systems that they'd rather this one got fixed so nobody else could use it.

Which leaves me wondering. Do they not actually have all the ways in that I think they do, or do they just prioritize securing the nation's infrastructure dead last?

I'm guessing it's not the former. Heartbleed provides a really easy and convenient way to get into a whole lot of stuff, and they'll screw actual national security for their convenience.

That could make a lot of powerful corporations unhappy with them, which is good.

It's one thing to fuck with mere human rights, but it's an entirely other thing to fuck around with business.

I do still have to wonder if the vulnerability was only uncovered because the NSA tipped somebody off, somehow, after noticing that someone else had noticed the vulnerability and started exploiting it too.

I hope they would usually do that, and not keep mum about a major vulnerability they knew was being exploited by hostile countries, criminals, etc.

No matter how cynical I get, though, it's never enough.


Benni • April 11, 2014 4:22 PM

By the way, is this a coincidence?

Theo de Raadt, of OpenBSD and OpenSSH fame, claims here:

http://thread.gmane.org/gmane.os.openbsd.misc/211952/focus=211963

that a protective malloc wrapper, provided as an exploit mitigation, was intentionally bypassed by the OpenSSL maintainers in order to gain some performance advantages on some systems - causing this current fiasco.

And THAT indeed raises the question of whether some devs of OpenSSL get their money from an agency. By deliberately omitting a protective malloc wrapper, you literally just have to wait for some useful idiot who, during a drunken night, makes a mistake and submits erroneous code.
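The point can be made concrete with a toy model (class names invented for illustration): an internal freelist that recycles buffers without clearing them hands old secrets to the next over-read, while an allocator that poisons memory on free, as OpenBSD's malloc can, leaves nothing behind to leak.

```python
class FreelistAllocator:
    """Recycles freed buffers as-is, like OpenSSL's internal freelist."""
    def __init__(self):
        self.freelist = []
    def alloc(self, size: int) -> bytearray:
        return self.freelist.pop() if self.freelist else bytearray(size)
    def free(self, buf: bytearray) -> None:
        self.freelist.append(buf)         # contents left intact!

class PoisoningAllocator(FreelistAllocator):
    """Same freelist, but scrubs buffers on free (the bypassed mitigation)."""
    def free(self, buf: bytearray) -> None:
        buf[:] = b"\xdd" * len(buf)       # poison pattern
        super().free(buf)

def leak_after_recycle(allocator) -> bytes:
    buf = allocator.alloc(16)
    buf[:7] = b"privkey"                  # a secret lives here briefly
    allocator.free(buf)
    recycled = allocator.alloc(16)        # the same buffer comes back
    return bytes(recycled)                # what a Heartbleed read sees

print(leak_after_recycle(FreelistAllocator()))   # secret still present
print(leak_after_recycle(PoisoningAllocator()))  # only poison bytes
```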


And here is a similar thing:

This kernel dev is rightly proud that he resisted pressure from Intel developers to make the Linux random number generator solely dependent on the hardware random number generator:


https://plus.google.com/+TheodoreTso/posts/SDcoemc9V3J

After some time, he got mails from a claimed Red Hat dev, with exactly the same argument these OpenSSL devs were making above: if you used only the hardware random number generator, then you would have "better performance" on "some systems". But the Red Hat dev does not, even after being explicitly questioned, say which systems should run faster, and for which applications a super-fast random number generator is of advantage.

If one reads the following discussion, one must get the impression that this Red Hat developer was bought by some agency:


https://lkml.org/lkml/2013/9/5/212

On Thu, Sep 05, 2013 at 08:18:44AM -0400, Prarit Bhargava wrote:
> The current code has two exported functions, get_bytes_random() and
> get_bytes_random_arch(). The first function only calls the entropy
> store to get random data, and the second only calls the arch specific
> hardware random number generator.
>
> The problem is that no code is using the get_bytes_random_arch() and switching
> over will require a significant code change. Even if the change is
> made it will be static forcing a recompile of code if/when a user has a
> system with a trusted random HW source. A better thing to do is allow
> users to decide whether they trust their hardare random number generator.

I fail to see the benefit of just using the hardware random number
generator. We are already mixing in the hardware random number
generator into the /dev/random pool, and so the only thing that using
only the HW source is to make the kernel more vulnerable to an attack
where the NSA leans on a few Intel employee and forces/bribes them to
make a change such that the last step in the RDRAND's AES whitening
step is changed to use a counter plus a AES key known by the NSA.

On Thu, Sep 05, 2013 at 11:08:28AM -0400, Prarit Bhargava wrote:
>
> The issue isn't userspace /dev/random as much as it is the use of
> get_random_bytes() through out the kernel. Switching to
> get_random_bytes_arch()
> is a search'n'replace on the entire kernel. If a user wants the faster random
> HW generator why shouldn't they be able to use it by default?

Where is the speed of the random number generator a bottleneck?

In general, adding knobs when users can make what might be
potentially the wrong choice is very dangerous. There is a reason why
there aren't convenient knobs to allow users to select the use of the
MD4 checksum, "because it might be faster, why shouldn't the user be
allowed to shoot themselves in the foot"?

BTW, note the following article, published today:

http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?pagewanted=all

"By this year, the Sigint Enabling Project had found ways inside some
of the encryption chips that scramble information for businesses and
governments, either by working with chipmakers to insert back
doors...."

Relying solely and blindly on a magic hardware random number
generator which is sealed inside a CPU chip and which is impossible
to audit is a ***BAD*** idea.

On Fri, Sep 06, 2013 at 08:08:52AM -0400, Prarit Bhargava wrote:
>
> Your argument seems to surround the idea that putting stuff on the internet is
> safe. It isn't. If you've believed that then you've had your head in the
> sand
> and I've got a lot of land in Florida to sell you.

I have no idea how you are getting this idea. My argument is that
putting all of our faith in one person (whether it is DNI Clapper
lying to the US Congress), or one company (like Intel, Qualcomm, TI,
etc.) is a bad idea. Software can be audited. Hardware can not. We
can at least test whether or not a network card is performing
according to its specifications. But a HWRNG is by definition
something that can't be tested. Statistical tests are not sufficient
to prove that the HWRNG has not been gimmicked.

Hence, unless you can show me where the speed advantage of bypassing
the entropy pool is needed, why should we do this? And if there is a
specific place where need to consider adjusting the security
vs. performance tradeoff, let's do that on a case by case basis,
instead of making a global change.

Hence, your patch is IMHO irresponsible. It exposes us to more risk,
for an undefined theoretical benefit.

Of course nothing on the internet is going to be perfectly safe. But
that doesn't mean that we shouldn't make it harder for any government
agency, whether it is the Chinese MSS, the US NSA, or the UK GHCQ,
from being able to easily perform casual, dragnet-style surveillence.
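The design principle Ts'o is defending above fits in a few lines: if hardware output is only ever folded into the pool together with other inputs, even a fully backdoored HWRNG cannot predict the result. A hedged sketch (the hash construction and names here are illustrative; the kernel's actual mixing function is different):

```python
import hashlib

def mix(pool: bytes, hw_bytes: bytes, other_entropy: bytes) -> bytes:
    """Fold both sources into the pool with a hash, so neither source
    alone determines the output."""
    return hashlib.sha256(pool + hw_bytes + other_entropy).digest()

pool = b"\x00" * 32
backdoored_hw = b"\x41" * 32              # suppose an agency knows these
jitter = bytes([7, 58, 153, 12, 201])     # stand-in for interrupt timings

out_direct = backdoored_hw                # "use RDRAND only": fully known
out_mixed = mix(pool, backdoored_hw, jitter)

print(out_direct == backdoored_hw)  # True: attacker predicts everything
print(out_mixed == backdoored_hw)   # False: must also know the jitter
```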

AlexT • April 11, 2014 4:42 PM

I was indeed referring to the NSA Bloomberg story, which has (unsurprisingly) been promptly denied.

One of the problems with recent events is that no one would take the NSA, or for that matter the US government, at face value on these matters anymore. What was conspiracy theory a few years ago is just business as usual today. Even if they did not know about Heartbleed, they will have quite a hard time getting the message through...

Another question I have not seen addressed (but I might have missed it) is how the issue was discovered by the Google researcher. Via code review or by observation of an actual exploit?

Finally, does anyone here actually log all traffic to their servers for forensic analysis? I understand the implications (I only replicate it for real-time analysis and it is quite an ordeal already) but it would be really interesting to see if / when Heartbleed started to show up... Is that being done?

KnottWhittingley • April 11, 2014 5:12 PM

OK, now it's being reported that NSA denies having known about the Heartbleed bug.

That's almost worse, isn't it? You mean they're NOT reviewing crucially important code to at least find really glaring dumbass bugs like missing bounds checks on buffers that could expose sensitive data?

I don't know what to think.

Curious • April 11, 2014 5:18 PM

I am inclined to believe that NSA, or rather someone at the NSA, would be more than happy to offer propaganda rather than truthful statements. *shrugs*

Skeptical • April 11, 2014 5:21 PM

@Knott: Yeah, I left a longer comment about it in the Squid thread to avoid turning this one into an NSA discussion. I was surprised by the NSA commenting on this at all, which, combined with Riley's description of his sources, inclines me to lend some credence to the denial.

The George John Dasch super secret disguise kit • April 11, 2014 5:23 PM

More comical panic.

"Reports that NSA or any other part of the government were aware of the so-called Heartbleed vulnerability before April 2014 are wrong. The Federal government was not aware of the recently identified vulnerability in OpenSSL until it was made public in a private sector cybersecurity report. The Federal government relies on OpenSSL to protect the privacy of users of government websites and other online services. This Administration takes seriously its responsibility to help maintain an open, interoperable, secure and reliable Internet. If the Federal government, including the intelligence community, had discovered this vulnerability prior to last week, it would have been disclosed to the community responsible for OpenSSL."

Clearly, they've lawyered up and tardily concluded that deniability is safest, plausible or not, because they've finally started to look at US state legal exposure to their out-of-control agency's wanton destruction.

This is priceless: 'biased toward responsibly disclosing such vulnerabilities.' Yes, because selectively withholding information to breach diplomatic and commercial obligations to specific states goes over so much better in the arbitral panels.

Bob • April 11, 2014 6:08 PM

Bruce, et al.

See this today:

http://www.zerohedge.com/news/2014-04-11/nsa-abused-heartbleed-bug-years-left-consumers-exposed-attack

From Bloomberg:

NSA SAID TO EXPLOIT HEARTBLEED BUG FOR INTELLIGENCE FOR YEARS

The U.S. National Security Agency knew for at least two years about a flaw in the way that many websites send sensitive information, now dubbed the Heartbleed bug, and regularly used it to gather critical intelligence, two people familiar with the matter said.

And the punchline:

NSA SAID TO HAVE USED HEARTBLEED BUG AND LEFT CONSUMERS EXPOSED

Putting the Heartbleed bug in its arsenal, the NSA was able to obtain passwords and other basic data that are the building blocks of the sophisticated hacking operations at the core of its mission, but at a cost. Millions of ordinary users were left vulnerable to attack from other nations’ intelligence arms and criminal hackers.

“It flies in the face of the agency’s comments that defense comes first,” said Jason Healey, director of the cyber statecraft initiative at the Atlantic Council and a former Air Force cyber officer. “They are going to be completely shredded by the computer security community for this.”

Sancho_P • April 11, 2014 6:25 PM

NSA stands for “National Security Agency” (?).
Shouldn’t they protect the nation?

If they knew about the bug and let it go it would be treason (helping the enemy).

If they didn’t know it may be a shame.

However, now it is time to investigate their huge database,
to find out whether, by whom, and since when that exploit has been used.

Last chance to earn some bonus points!

Mike the goat • April 11, 2014 8:19 PM

Knott: their denial was curious, and I guess post-Snowden there isn't any way to know what their statement /really/ meant. If the NSA's audit team independently discovered Heartbleed they would not disclose such a boon (note I am not even suggesting they authored it in this hypothetical, just that they found what the commercial security world was going to find, earlier).

Re the Cloudflare challenge - the danger of this is that the mainstream media has already picked it up and is now playing down Heartbleed's severity. This is dangerous and could lead some to not bother patching devices that are vulnerable. Given there are myriad architectures, web servers, accelerators etc., I think it is impossible to say at this point in time what can and can't be leaked until further research is done. This is why the Cloudflare challenge is both important and dangerous, as it may breed complacency in those who aren't running nginx on Linux x64 etc.

Greg Jaxon • April 11, 2014 10:14 PM

Among the "human aspects" were articles (the Guardian was the first I saw) in which "experts" said you shouldn't change your password until the server in question has been secured - that using that server made things worse (as if that were possible). Perhaps Randy @ xkcd should explain it this way:

UserName: Bruce
Password: I don't know Bruce's password.

Server: Sorry that's not Bruce's password, because I just loaded Bruce's (hashed?) password into memory and checked... but... you... already noticed, eh?


sooth_sayer • April 11, 2014 10:47 PM

Open source is running the world now... but it has real junk in a lot of places -- this one was serious, but there are others that are not noticed, or not fixed when known to be flawed.

Commercial software is not much better -- that's why in a good week 2-3 of my iPhone's applications (out of about 40 I have installed) get updates. In the old days software had much less functionality and COULDN'T be changed unless you spent some money -- even shipping a floppy -- so there was some rigour to it. Now there is none!

RogueAI • April 11, 2014 11:02 PM

Government has been using the human aspect to mitigate responsibility for some time. By outsourcing to private corporations through confidential contracts that rely on corporations self-reporting abuses (which runs contrary to maintaining their profit margins), governments avoid responsibility and proper scrutiny.

A department such as the NSA would simply employ a private contractor to exploit the Heartbleed bug via a confidential agreement that directs the contractor to then dump all useful obtained information back into the intelligence network for analysis using selectors. The NSA can then state that their department did not exploit the Heartbleed bug.

Governments have used the same system with prisons and detention facilities since Thatcher and Reagan. Private corporations have often been found to breach their responsibilities under these contracts, as it is actually in their interests to let problems grow out of control, then turn around and ask for more money to fix the problem. The government avoids scrutiny even though private contracting of state responsibilities becomes less efficient, more expensive and less accountable for its actions. A few minor staff are blamed and fired from the private corporation, and no government minister or department takes responsibility.

Jimmy_large • April 12, 2014 12:33 AM

Many sites, in an attempt to update their certificates and still keep their site functioning, are simply allowing users to log in using HTTP and therefore negotiate in the clear.

nobody • April 12, 2014 12:34 AM

Re NSA / US government denial and my hypothesis:
"Hmn, maybe America is still "working" in the sense that excessive power is not tolerated in the end and reduced - if regular means do not get you there, you find a different way, like getting Al Capone for tax evasion?"

Hmn, maybe some people simply thought to themselves, "if we can't get the NSA for the things they did, let's get them for something they did not do; nobody is going to believe them now"?

Sort of a reverse "crying wolf", the NSA too often denying things they did do (in very hypocritical ways like an "egoistical giraffe", head high up, "oh you have nothing to eat down there - hey up here all the trees are still green" - sorry could not resist the pun ;) and now when they are maybe really innocent, nobody believes them and they have maybe nothing in their hands to prove their innocence?

Would really love to know what is going on behind the scenes worldwide since Snowden... After all, maybe the NSA / US government really/genuinely secretly decided to change policy and to make the worst vulnerabilities for the public and industry public instead of keeping them to themselves - but decided to do this secretly and deny it? Oh, so many possibilities...

Mr. Pragma • April 12, 2014 12:35 AM

@BleedingHeart (April 11, 2014 3:43 PM)

"The bottom line is this: Computers are not secure and probably never will be secure "

I disagree, because that's focusing on the symptom rather than on the cause. If anything, one would need to say that nothing involving humans will ever be secure.

I see 2 problems:

- Open Source
Following the line of a chain being as strong or weak as its weakest element, OS is considered to be lousy.
Kindly note that I don't say it *is* lousy but that it is to be considered lousy (from a security perspective).

For two reasons: a) unqualified hobbyists, and b) - and more interestingly IMO - because it's free (b being somewhat related to a)).

Looking at it from my PoV, we've been too greedy. OS offers 2 major advantages, namely being open source (and thus verifiable) and being free. While the former is a major plus, the latter is shooting oneself in the foot by asking for docs not to be written and tests not to be performed.

So, better get used to the idea of paying for OS software one way or another. If we do, OS people can afford better (motivated) developers and TESTING/auditing. And we still get way more than from closed source software! And cheaper, too. Still a very attractive deal money-wise and a way more attractive deal quality-wise.


- "Coolness", stupidity - languages.

Don't get me wrong, I've used C myself for more than a decade. And yes, that was when I was young and when being cool meant a lot to me.

Let's face it. C is outdated and a glorified - and badly mistaken - super assembler. And it's a certain trouble ticket.

C isn't formally verifiable and, in fact, it's even hard to correctly parse. In other words, developing security related software in C is akin to Schneier or Bernstein not properly developing algorithms but asking drunk college boys on a weekend night to come up with something.

Wirth, Meyer and others were right after all. Yet I just looked up bcrypt and found implementations in C (of course), Ruby, Python, Perl (aka. "That was yesterday. Me not understand that code today"), PHP, and whatnot.

Pascal? Fail. Modula? Fail. At the very least Cyclone, a safe kind of C (well, an attempt at least)? Fail.

If heartbleed - something that just HAD to happen - doesn't make us think again and think hard, then we f*cking deserve to be hacked. Sorry.

DB • April 12, 2014 2:10 AM

@ Mr. Pragma

How is your fascination with closed source working out when our laws allow any company to be legally compelled to purposefully put in back doors and keep quiet about it, on pain of imprisonment? Why are you promoting closed source when that's the case? Advocating closed source makes no sense in that kind of environment.

Mr. Pragma • April 12, 2014 2:14 AM

DB

Kindly read my post again. Pointing out problems related to Open Source is not identical to promoting closed source software (which I do *not*).

What I suggest is to support OS projects financially.

Fred • April 12, 2014 2:21 AM

Quoted from heartbleed.com:

"Without using any privileged information or credentials we were able to steal from ourselves the secret keys used for our X.509 certificates..."

It appears someone was able to steal the secret private keys from a running server process.

Mike the goat • April 12, 2014 2:24 AM

Norman: yeah, and I am hardly surprised. Cloudflare's comments that it perhaps couldn't be done were reckless and ultimately incorrect.

Iain Moffat • April 12, 2014 2:45 AM

@Fred: I note from your quote that they report finding their own key. It occurs to me that, when examining 64K blocks of data from unpredictable offsets in the SSL process memory, finding one's own (known) private key is a much lesser challenge than being able to recognise any other key - a simple search for a known sequence of bytes, rather than having to identify a possible key from its context and structure and then test the candidate to see whether it works. Not impossible, but certainly something requiring more knowledge and either automation or repetitive trial and error.

Iain

Mike the goat • April 12, 2014 3:27 AM

I wonder how all the CAs will manage everyone applying for replacement certs and if they will profiteer from it or offer Heartbleed affected customers a subsidized rekey?

Dave Walker • April 12, 2014 4:22 AM

I'm not overly surprised that getting useful material out of Heartbleed (or rather, identifying and isolating useful material in the pile of stuff that comes back) is hard; I'd expect people would have to do a small version of Shamir and van Someren's entropy-based searching.

Actually, though, one of the early thoughts which crossed my mind when I saw a detailed description of Heartbleed, was "why didn't static code analysis tools pick this up?". I'd assume such tools (not just security-enhanced lint, but maybe also code from the likes of McCabe) would get run against something as widely-used and security-critical as OpenSSL - is this not actually the case, or were they run against OpenSSL and missed the bug? If the latter, I'd say that's a more serious bug in the analysis tools, than Heartbleed is itself...

replay • April 12, 2014 5:04 AM

"I'm not overly surprised that getting useful material out of Heartbleed (or rather, identifying and isolating useful material in the pile of stuff that comes back) is hard; I'd expect people would have to do a small version of Shamir and van Someren's entropy-based searching."

But not for RSA private keys, see section 2 in the paper.

Clive Robinson • April 12, 2014 5:10 AM

Let's try stepping back from the heat of battle on this for a moment and try to look at it a little dispassionately.

Firstly, the OpenSSL project team is just a handful of people working on it part time with their own resources. Now I don't know what motivates them, but I for one am thankful that they do, as it's not an easy task at the best of times.

Now there has been a question over what some regard as an elementary mistake (not bounds checking correctly), however I don't think they are looking at it dispassionately, and are also using 20-20 hindsight.

The bounds check is not the problem, nor for that matter is the usage of malloc and friends, or for that matter the use of C.

The real problem was the woolly thinking that went into the original specification, and that is what should be kept in mind and really the starting point of any "enquiry".

The first question should be not why the heartbeat was of the format it was, but why was it necessary at all and what were all the options available.

From what I can see it was a kludge in its own right, and its design was likewise a kludge, giving rise to an ill-thought-out specification. All of which really meant that anything that followed was going to break in some way.

This is part of what I have described on a number of occasions as "an artisanal approach of the wheelwright", bereft of any real engineering. And it is something so prevalent in software production it should rightly be called a plague.

As I've pointed out in the past, the artisanal approach might produce a "rustic look", but it also killed a lot of people through its failures. Whilst most of these deaths went unreported as "accidents" or "the driver's fault" when wheels collapsed and those on board were thrown about, the advent of steam changed that. Whilst the annoying whistle of low-pressure steam could be ignored, the explosion of a high-pressure boiler, with bits, pieces and bodies being flung far and wide, could not - it was too newsworthy. Eventually sufficient pressure was put on the English Parliament, and they legislated that boilers, like guns and cannon before them, should not only be "proofed" at manufacture but regularly thereafter, and that boilers and their engines should be properly maintained and certified as such by independent inspectors. Engine making and keeping was turned from a craft into a profession, and science made the difference between the artisan trades of the metal smiths and engineering as we have come to know it.

Interestingly, "regulation" was not, as the naysayers of the time predicted, "the death of industry". It drove up standards, saved lives and reduced wanton destruction, which increased confidence and thus the market for boilers and engines, and so actually brought new life and vigour to the industry.

Whilst I would not argue that software development is identical to artisanal boiler and engine development, the parallels are fairly clear to see.

And the recent "blow up" in the press about the NSA and the other Five Eyes nations is eerily similar to the reporting of engine disasters.

Perhaps it's time to make regulatory legislation to turn artisanal code cutting driven by "management" rather than "consumer" need into a proper professional industry, one which properly addresses customer safety --which includes privacy-- over short-term profit. I suspect the result, if regulated judiciously, would be a significant improvement in the market.

However, the weasel word is "judiciously"; based on the current Five Eyes executives and lawmakers, I suspect any legislation would be anything but judicious.

DB • April 12, 2014 5:16 AM

@ Mr. Pragma

So you're proposing that open source programmers should be paid, and that users should pay for open source software? Interesting. Without holding the software itself hostage to get everyone to pay (i.e. making it in fact closed source) how do you propose to make people pay for their OS?

I would admit that more open source programmers being paid by some charitable foundation would be beneficial, I just don't see how you're going to pass that cost along to the end users with it still being free and open. Openness itself is a kind of freedom, so being free is not a detestable problem to be avoided.

Keep in mind that the word "free" means two different things: free as in no cost, and free as in freedom. This might be where I misunderstood you.

DB • April 12, 2014 5:29 AM

@ Clive Robinson

I'm a bit leery of anything that concludes that we need more "regulatory legislation" to fix all our problems... Is a cruel power-hungry overreaching government a problem or our savior? I'd rather we be self-regulated in the industry than give up even more human rights to these jokers.

yesme • April 12, 2014 6:07 AM

@Clive Robinson

"The bounds check is not the problem, nor for that matter is the usage of malloc and friends, or for that matter the use of C.

The real problem was the woolly thinking that went into the original specification, and that is what should be kept in mind and really the starting point of any "enquiry"."

This particular problem, and the previous disasters, WERE in fact caused by C being a hard-to-read language that is easy to mess around with.

OpenSSL is a very badly written library. I mean, it has #ifdefs running through "||" expressions, through ternary if-then-else statements and through goto labels. Frankly, I don't know how you could write worse code than this that still compiles.

They should have just separated the damn stuff. It's not that hard.

But most of all, a ship needs a captain. (think Linus Torvalds and Theo de Raadt)

Last but not least, the IETF is indeed also to blame for making things way too complex. And the worst thing is that they let it all happen. It's probably a habit of committee decision-making that's actively being influenced by Big Corp and the NSA.

You know, thinking about it, I ask myself how much longer this internet is still maintainable.

The pile of garbage is immense.

Jacob • April 12, 2014 6:32 AM

@DB

You said "I just don't see how you're going to pass that cost along to the end users with it still being free and open".

Annual donation drives, such as Wikipedia runs, are a good source of income. I use Wikipedia regularly, and when I saw their donation drive I donated. I got in response a thank-you letter that gave me a warm fuzzy feeling, and since then I regularly donate once a year.
Evidently, it is not only me. According to AP last year: "Wikipedia has raised $20 million in its annual plea for donations to help expand and improve the Internet's leading encyclopedia."

Now back to OpenSSL: according to Bloomberg, there is a Foundation with a $1M annual budget that supports the 5 programmers who work on the OpenSSL project. I bet that they could do a good code refactoring and critical review with another $1M a year. Let's have them run an annual donation drive. Many users will contribute - me included. Big companies that use their code and donate handsomely will get a prominent mark on a donation hall-of-fame poster page, and even a nice-looking framed certificate (no pun intended) to hang on their corporate lobby wall.

It is doable.

moz • April 12, 2014 6:44 AM

Actually; after thinking about the post above; I have realised I was completely wrong. It should probably read:

"Heartbleed was identified and exploited by GCHQ under our contract with them to spy on US citizens. Damn that's clever. What a good thing we never activated the part about trying to keep the USA secure."

Clive Robinson • April 12, 2014 7:16 AM

@ yesme,

I'm not blaming the tool when the craftsman has limited choice.

But you also need to consider that even type-safe languages are only "so good", and in many cases far from good.

Interestingly, if you hunt around on the net for verification tools that have been run against OpenSSL, either they don't pick up on the lack of a bounds check, or they produce such verbose output, with all sorts of warning messages, that the signal on this problem is, as a submariner once said, "buried in whale song and seal farts".

So if verifier tools specifically designed to hunt out such things cannot reliably find this one, what hope for a compiler that is not? Which then gives rise to other questions about people relying on such tools and thus never gaining the experience to spot the more subtle bugs...

As for the IETF, they have several problems, not least of which is that technological improvement is outstripping human ability to compensate for it. When they started, even 8-bit chips were as rare as hens' teeth. Designs that made sense within the then-current technological constraints got the thumbs up; everything else got the thumbs down. Which means that much that should have been there was not, nor even allowed for. This is the reason why the "functioning" DoD protocols won over the "blue sky" ISO protocols.

However, as you have noted, it's not sustainable; as someone once observed, "there's only so high you can pile the brown stuff before it buries you in a stinking morass."

Clive Robinson • April 12, 2014 7:32 AM

@ DB,

As I said regulation has worked in the past as history shows.

However as I also noted you need to get it right by being suitably "judicious", but with the way the Five-Eye nations legislators currently are I would be very cautious.

And that's a real problem, because history shows us that "free markets" are almost always races to the bottom unless legal or some other regulatory effect --such as a cartel/monopoly-- applies, or they are "faux markets" designed for some alternative reason --such as asset stripping or stealing money-- that, as we see with pyramid schemes and financial instruments, implode with the perps/organisers walking away with the assets/money.

rj07thomas • April 12, 2014 7:42 AM

https://www.cloudflarechallenge.com/heartbleed

Seems Cloudflare has been proved wrong, quite quickly.

Also, I disagree with the US warning to only change passwords once a site is "safe". On an individual scale, the hassle of checking arbitrary sites is unrealistic. Also, the risk of not changing means an account could be compromised from a prior attack's data leak anyway (I can't answer the probability puzzle of whether increased attacks this week vs. increased password changes makes credentials more or less likely to be intercepted).

DB • April 12, 2014 8:01 AM

@ Jacob

I agree... but getting something for free and then loving it so much you donate... is quite different from: "free" is the problem, make everyone pay!! That's the part that threw me off. I've donated to some things too, both money, and time in my area of professional expertise. Smart people study and learn about and find jobs in areas they love, so they are professionals not just clueless stupid hobbyists anyway.

anonymous from switzerland • April 12, 2014 8:42 AM

@Clive Robinson
"I'm not blaming the tool when the craftsman has limited choice.

But you also need to consider that even type-safe languages are only "so good", and in many cases far from good."

I see far fewer ways to leak the private key of a web server using any language that nominally prevents random memory access at runtime, while still being able to plausibly deny having intentionally rather than accidentally created this possibility.

For this reason alone, any language without nominal runtime memory access checks seems to be a no-go for future security software, and, yes, arguably finding a good alternative is not so easy... Maybe the best pragmatic way would even be a modified, more secure variant of C? (Though maybe that is not practical either; hard to say without some evaluation.)

yesme • April 12, 2014 8:44 AM

@Clive Robinson

Most people forget that source code is being read way more times than it's being written.

With that in mind it makes sense to have a language that is easy to read and understand.

I prefer a language that doesn't allow you to write hacks, because everyone knows that hacks are like temporary laws: They don't go away.

And C has lots more problems besides that.

Now think 40 years ahead (C is a little over 40 years old). Do we still want to talk about the nasty features of C? About its speed? About the preprocessor crap?

C has had its time. C++ was a mistake. Let's face it.

We need a language that is unambiguous, easy to understand and does what you think it should do. C is not that, and C++ certainly not.

Mike the goat • April 12, 2014 9:11 AM

Yesme: you wrote...

OpenSSL is a very badly written library. I mean, it has #ifdefs running through "||" expressions, through ternary if-then-else statements and through goto labels. Frankly, I don't know how you could write worse code than this that still compiles.

I nodded in agreement so hard I might have to see a chiropractor.

OpenSSL's src is a mess. CyaSSL mirrors its basic functionality with code that is legible and, importantly, advertised as 20 times smaller than OpenSSL. Of course it is targeted towards the embedded market, but I would rather audit a significantly smaller codebase that a lay sysadmin can likely understand than try to analyze the Pandora's box that is OpenSSL's source. It really is a disgrace.

Feature creep is a terrible thing. Security-critical libraries should focus on their task. At least move redundant or niche-usage extensions and other foolery out of the core, and disable them unless explicitly enabled in the Makefile at compile time.

DB • April 12, 2014 10:01 AM

@ Clive

I realize that "self regulation" in the business world is code for "we don't give a f*** and we'd poison our own grand kids if it made us a buck" (I am thinking of a few examples where this is quite literal)... but open source in general is not quite so business driven, it's got a lot of volunteers. In the volunteer world it means a slightly more altruistic thing since money isn't such a god. It means, let's figure out what works best and do that. Now standards bodies on the other hand, they've become a bit tainted by business interests, yeah. Regulation might be the only way to tame business, but don't stifle the volunteers. The problem is business has a full stranglehold on Congress, so there's no way for it to go right...

Vatos • April 12, 2014 10:26 AM

The exposure of RSA secret keys is clearly expensive, in terms of web site operators having to get new certificates and allowing decryption of traffic from previous sessions.

Perhaps it would be wise for OpenSSL to farm off the use of such keys to a separate process which would do the calculations required and send the results back to the main process.

Any hole which exposed the memory of the main process would therefore be much less likely to reveal the RSA secret key.

Mr. Pragma • April 12, 2014 1:01 PM

Clive Robinson (April 12, 2014 5:10 AM)

The bounds check is not the problem, nor for that matter is the usage of malloc and friends, or for that matter the use of C.

According to the OpenBSD guys, who damn well know a thing or two about security, malloc was a problem or, more precisely, sailing blindly *was* a problem.

And so is using C.

Let me give you another ugly hint. The programming language tells a lot about the mindset. When C was created it helped to *avoid* errors, because it's so much more readable than asm and because the software design is so much closer to the human.
Unfortunately today's C users seem to have forgotten that point.

Yes, Pascal or Modula feel comparatively bureaucratic and tight and uncool or, to put it in today's setting, "unhacky". But programmers generate considerably fewer errors using those uncool languages, and the errors they produce are way easier to spot and to fix.

In the end it's simple. It's about how solid software is, how easy to maintain, and how effective the development is - in that order.
And there C fails big time. Maybe it's cool and fun but it doesn't fu**ing deliver!


DB (April 12, 2014 5:16 AM)

So you're proposing that open source programmers should be paid, and that users should pay for open source software? Interesting. Without holding the software itself hostage to get everyone to pay (i.e. making it in fact closed source) how do you propose to make people pay for their OS?
...

Keep in mind that the word "free" means two different things: free as in no cost, and free as in freedom. This might be where I misunderstood you.

Kindly stop piling biased religious junk on me, will you.

If we had to pay, quite probably a rather modest price, we would still have the "free as in freedom" and, more importantly, we would still have the sources.

Now, choose: would you prefer paying $300 for a binary blob, or $100 for full sources? Well, I wouldn't have to think for a second.
Problem is, we get (and want) everything and give (close to) nothing.
This might look like a good deal to fools but it isn't. As we've seen again and again (like right now).


yesme (April 12, 2014 6:07 AM)

YES! You hit it.


Jacob (April 12, 2014 6:32 AM)

Now back to openSSL: according to Bloomberg, there is a Foundation with a $1M annual budget that supports the 5 programmers which work on the openSSL project. I bet that they can do a good code refactoring and critical review with another $1M a year.

Well, money is just one part. Kindly note what I've written above about languages and attitudes.
As a rule of thumb you can assume that C code will only rarely be refactored or even just properly maintained.
For two reasons:
a) software quality doesn't mean too much for most C programmers (well, those who can choose the language for a project). If it did, they wouldn't use C in the first place. Furthermore, it usually is *very* difficult to make any non-negligible changes to C code, for many reasons, one of them being loads of side effects.
There are a few exceptions, like Linus and Theo (OpenBSD), who are strong and accepted leaders and almost anal about code quality.

b) Besides some small corrections, it is so troublesome and work-intensive to correct C code that quite often projects (or parts thereof) are simply completely rewritten. It's easier, cheaper and more promising.

What the money could do is provide for paid people to write docs and to properly test and audit code.

But there's still the other ugly problem, language. And for that there is only one way (and I'm inclined to bet that it won't be taken) and that is using a language that is up to the task.


Clive Robinson (April 12, 2014 7:16 AM)

I'm not blaiming the tool when the craftsman has limited choice.

Pardon me but from what you write it looks like you are on the theoretical side and lack experience.

For a start, in 99.9% of cases OS projects *can* choose their language.

And you are completely mistaken re. "DoD" and bureaucracy, too.

The fact is that languages of the Wirth, Meyer, Ichbiah/Tucker lines have proven again and again to be highly efficient, very well maintainable, and verifiable to a large degree if not fully.
The fact is also that those languages evolved over decades and there is, to name an example, Ada 2012 after Ada 2005 and Ada 95 and finally Ada 83. Similarly there is Object Pascal and even "old" Modula is still far more modern and up to date than C.

As for verification tools: how the hell do you want to properly verify code that is even hard to parse? For verification one needs at least an educated guess about intention - which is all but absent in C code.

---

And again, to name just one example: the best we can do today to properly and securely store passwords is scrypt and bcrypt. Both have been implemented in C, and for both there are implementations in Ruby, Python, Perl, Java and whatnot - but not in one single trustworthy language that lends itself to secure implementations.

Think about that.

Mr. Pragma • April 12, 2014 1:17 PM

Oh and btw:

A youngster (co-)designed and implemented heartbeat and an experienced man checked it and signed it off.

There's a reason the error wasn't spotted.

Rilind Shamku • April 12, 2014 2:05 PM

This is shocking! A very trivial missing length check caused all this mayhem. Anyone who has read a few C security books would have been able to avoid this fundamental error. This is not a C issue; it is definitely a human error. C was designed for efficiency and to give the programmer more power, especially around memory allocation. I cannot believe that a project as crucial to the internet as OpenSSL would let this fly by without checks. I understand that the project is underfunded (ridiculous - obviously the giant internet companies don't care about investing in security), etc., but still, a simple source code analysis session would have been able to find this coding error. This leads me to believe that there might be more trivial errors like this one in the code. So with all that said, I believe it is crucial that we spend the same amount of time we use to develop code on developing source code analysis tools to check our code. Source code analysis is the first step of any code review. I hope this is the only major vulnerability in OpenSSL.

tom • April 12, 2014 2:18 PM

The OpenSSL Software "Foundation" is disingenuous in calling itself a foundation when in fact it is a shell for an ordinary for-profit commercial consulting company. At least the phony Bitcoin 'Foundation' is a 501(c)(4) trade association forced to file an annual Form 990 disclosure with the IRS (which is available online).

I've personally started and run a legitimate 501(c)(3) charitable organization for the past 17 years. It involves a one-time fee of $1200 for the initial IRS approval letter, about 10 hours a year for signing checks and another 4 hours for the annual reporting.

So the OpenSSL Software "Foundation" is being really economical with the truth: "We looked into it and concluded that 501(c)(3) status would require more of an investment in time and money than we can justify at present."

They disclose support only from 3 companies (Qualys, opengear, and PSW Group) but acknowledge "we ask permission to identify sponsors and that some sponsors we consider eligible for inclusion here have requested to remain anonymous."

A mission-critical piece of internet infrastructure paid for by anonymous parties which might include intel and hacker firms? Does the OpenSSL SF really have the capacity to vet these parties, or is it more like Experian 'vetting' detectives from Vietnam?

A million bucks a year for four core people and seven programmers who are 'active contributors'? That's a sweet deal for part-timers: $150k a year. Volunteer a little on SSL to get funneled juicy consulting gigs on SSL -- not my idea of altruistic public service. Might those gig providers include BND, GCHQ, NSA or one of their many front groups or security industry allies?

https://www.openssl.org/support/donations.html

The sole person to review Mr. Seggelmann's wooly protocol and sloppy code (as characterized above) seems to have been Dr. Stephen Henson of the UK, reportedly the only member of the OpenSSL core team who works full-time on the project.

His personal website is whimsical, unprofessional and uninformative: http://www.drh-consultancy.demon.co.uk/

No one seems to have asked Henson to describe the code checks he did on Seggelmann's commit or what he would do differently today. As a practical matter (no one else was paid to do it), Henson may have been the sole code reviewer on many hundreds of OpenSSL commits.

What is the motivation anyway to spend extra hours on code arcana when you're on a fixed annual salary? Well, the motivation for in-depth code review might be to find some backdoor bugs, approve the commit, and sell the bugs on -- it's only human.

I'm not alleging Henson or Seggelmann did anything dishonorable in the slightest. It's the overall structure I'm unhappy with, not the individuals: we can't base the whole of internet security on two unknown guys who have carved out sweet spots for themselves with great potential for abuse.

"Please note that the OpenSSL Software Foundation (OSF) is incorporated in the United States as a regular for-profit corporation. It does not qualify as a non-profit, charitable organisation under Section 501(c)(3) of the U.S. Internal Revenue Code. We looked into it and concluded that 501(c)(3) status would require more of an investment in time and money than we can justify at present. This means that, for individuals within the U.S., donations to the OSF are not tax-deductible. Corporate donations can of course be written off as a business expense."

Donation? I think I'll pass until there's a really different business model.

rj07thomas • April 12, 2014 2:40 PM

Wowser. I was just sitting down to watch an online movie and thought, hang on a minute, what about heartbleed?

Did an online chat with company (no name being provided) who apparently don't even know what heartbleed or OpenSSL is. I switched to watching "Gravity" on Blinkbox...

L • April 12, 2014 3:02 PM

I read a lot of people here saying that C/C++ programming is bad for security.

I agree for C, since it's hard to read and very verbose.

C++, and C++11 especially, on the other hand, can give you the low-level access you need for performance and the high-level, memory-safe constructs.

I don't get why so many people are against C++. Personally I do not know of security-oriented software written in C++, it's all C or some other safer/stricter language where performance is not a problem.

Also, we are talking about software written back when static analyzers were unusable, when a compiler warning could mean that the compiler was wrong...

Start using clang, -Weverything and static analyzers, and you almost won't even need things like valgrind, as long as you are not trying to do black magic.

C is long and hard, C++(11) can be easily modularized, checked, made safe and quick.

Sure, you might need to restrain a little bit the developers, but every project has coding guidelines. And that's what a maintainer is for.

C++(11) is *NOT* on the same level as C in safety, and saying so shows you do not know the language.

yesme • April 12, 2014 3:26 PM

@L

"C++, and C++11 especially, on the other hand, can give you the low-level you need for performance, and the high-level, memory-safe version."

The funny thing is that I expected a reaction like this.

This is what I said earlier:

"We need a language that is unambiguous, easy to understand and does what you think it should do. C is not that, and C++ certainly not."

The problem is that in C++ you can't even guarantee that 1 + 1 = 2. You have to look at the disassembled code.

And about micro-optimisations: do you think your micro-optimisation will even matter in 10 years? The OpenSSL bug was also possible because of a ridiculous malloc reimplementation that these guys thought would be helpful 10 years ago. (Here we have it again: temporary code.)

Finally, C++11 is only C++11 if all the required libraries are C++11. If they contain old C++ code, you still have to know the old C++. The problem is and always will be backwards compatibility.

So C++ is a crappy language, made worse with every release because it is still backwards compatible. It's a bolted on language.

Larry Seltzer • April 12, 2014 4:25 PM

As long as so many will be revoking and reissuing certificates, perhaps they could move to SHA2 at least?

Benni • April 12, 2014 4:57 PM

On the EFF Facebook page someone tipped me off to this old but interesting discussion of how the heartbeat standard was developed:

https://www.ietf.org/mail-archive/web/tls/current/msg08013.html

I would need more time to figure out exactly what they are talking about, but what they are discussing here does not seem very responsible to me.

This seems to contain discussions of draft work for the TLS standard. But they seem not to be guided by any security concerns when specifying payload, heartbeat and padding lengths...

Nick P • April 12, 2014 4:57 PM

@ L

The problem for C++ is the same as for C: the language is hard to read, hard to compile correctly, and provides plenty of opportunities for disaster.

The D language improves a lot on it. I think it deserves a mention.

tom • April 12, 2014 5:49 PM

I'm still mulling over the fact that Neel Mehta of Google discovered Heartbleed, apparently as part of assigned team duties. He has a rarely updated twitter page https://twitter.com/neelmehta; Google has not allowed interviews.

This says to me that Google doesn't trust OpenSSL Software in the slightest and is paying for its own top to bottom code review of the whole security software enterprise.

If so, this was perhaps driven by NSA double-crossing them with wholesale extra-legal fiber optic theft from private leased lines which cost Google a bundle to remedy at their data centers. Not to mention the intangible costs of lost trust and its re-establishment. NSA became the top threat to their entire business model, stealing data that normally is sold.

While we can appreciate this bug being disclosed, we have no assurance that Google is reporting all the bad bugs they are presumably discovering. For negotiating traction ("back off, you *ssholes"), they may be holding back some that NSA is really using on a large scale.

As someone noted above, Google is in a position to taint traffic tags to establish whether NSA/GCHQ is still stealing, even without knowing explicitly how they are doing it. (If you cannot constantly monitor agreement compliance with NSA/GCHQ, you won't get any.) In that event, the SSL back doors that NSA really cares about might just surface as new CVEs.

In this scenario, Heartbleed was just a shot across the bows.

*/*/*/*

How about Codenomicon announcing the same bug 3 hours later, simultaneously naming it, designing the catchy icon, and releasing a dedicated web page? A coincidence -- the same day, for a bug that was out there 828 days: a 3-hour window out of 19,872 hours?

Thus the options here are: independent discovery, yet somehow with shared communication; Mehta managing code review by an outside contractor; a large informal consortium of companies cooperatively checking security software, with each component assigned to a pair; or a German government employee like Seggelmann serving as a conduit between Google and Codenomicon.

*/*/*/*

I'm also mulling over Seggelmann causing billions of dollars of global damage but being entirely unapologetic, blowing it off as just one of those inevitable little things.

He evidently feels safe in the second year of his IT security job with Deutsche Telekom -- where the German govt retained a 35% controlling interest -- the company didn't fire him for crappy code but instead bragged about his work yesterday on the corporate front page. Marketing the trust. http://www.telekom.com/corporate-responsibility/security/229078

Recall that a week earlier the German govt announced a no-holds-barred adversarial sigint attack on the US, after the failure of negotiations to become a genuine ally. The continuing NSA intransigence on monitoring the 134 top German officials below Merkel (who got a temporary reprieve but no deletion of the 300 product reports on her) did not sit well with them.

We can thus speculate that Seggelmann originally introduced the bug and leveraged it into a top security job with German telecommunications, with the govt then offering it as a bone to NSA.

After getting slapped down (like Google) in negotiations with NSA, the German govt then opted to send a message, third-partying the bug for disclosure to a Finnish security company with which it does a lot of business. Google has had a lot of regulatory problems in Germany and a common enemy in the US security state, so it would have reasons to join the project.

The coincidence in bug release timing is thus part of the message: 'we (Google and the German security state) have even more painful leaks that can be disclosed if NSA won't come to the table.'

OpenSSL is riddled with mistakes, used by top players for barter. Heartbleed was just a first shot across the bows.

Benni • April 12, 2014 6:00 PM

@Tom:

"They disclose support only from 3 companies (Qualys, opengear, and PSW Group) but acknowledge "we ask permission to identify sponsors and that some sponsors we consider eligible for inclusion here have requested to remain anonymous."

A million bucks a year for four core people and seven programmers who are 'active contributors'? That's a sweet deal for part-timers, $150k a year. Volunteer a little on SSL to get funneled juicy consulting gigs on SSL -- not my idea of altruistic public service. Might those gig providers include BND, GCHQ, NSA or one of their many front groups or security industry allies?

https://www.openssl.org/support/donations.html


"Please note that the OpenSSL Software Foundation (OSF) is incorporated in the United States as a regular for-profit corporation. It does not qualify as a non-profit, charitable organisation under Section 501(c)(3) of the U.S. Internal Revenue Code."

Your comment worried me very much.
There are so many German programmers. On OpenSSL I would have expected a broad international team, but there are almost no foreigners.


Here is an old article in DER SPIEGEL. It is from 1996:
http://cryptome.org/jya/cryptoa2.htm

It describes how BND and NSA worked together to deliberately weaken the cryptographic devices of Crypto AG. The NSA adviser who explained how to weaken them had formerly advised Motorola. While that may be interesting if you have a Motorola crypto device, the article says that the BND practically owned the entire company!


"But a big part of the shares were owned by German owners in changing constellations. Eugen Freiberger, who headed the managing board in 1982 and resided in Munich, owned all but 6 of the 6,000 shares of Crypto AG. Josef Bauer, who was elected to the managing board in 1970, now states that he, as an authorized tax agent of the Muenchner Treuhandgesellschaft KPMG [Munich trust company], worked due to a "mandate of the Siemens AG". When Crypto AG could no longer escape the news headlines, an insider said, the German shareholders parted with the high-explosive shares.

Some of the changing managers of Crypto AG had worked for Siemens before. Rumors saying that the German secret service BND was hiding behind this engagement were strongly denied by Crypto AG.

But on the other hand it appeared as if the German service had a suspiciously great interest in the prosperity of the Swiss company. In October 1970 a secret meeting of the BND discussed "how the Swiss company Graettner could be guided nearer to the Crypto AG or could even be incorporated into the Crypto AG." Additionally the service considered how "the Swedish company Ericsson could be influenced through Siemens to terminate its own cryptographic business."

The secret services obviously have a great interest in directing the trade in encryption devices into orderly tracks.

Depending on the projected usage area, the manipulations of the cryptographic devices were more or less subtle, said Polzer. Some buyers only got simplified code technology, according to the motto "for these customers that is sufficient, they don't need such good stuff."

In more delicate cases the specialists reached deeper into the cryptographic trick box: the machines prepared in this way enriched the encrypted text with "auxiliary information" that allowed anyone who knew about this addition to reconstruct the original key. The result was the same: what looked like impenetrable secret code to the users of the Crypto machines, who acted in good faith, was readable with no more than a finger exercise for the informed listener.

"In the industry everybody knows how such affairs are dealt with," said Polzer, a former colleague of Buehler. "Of course such devices protect against interception by unauthorized third parties, as stated in the prospectus. But the interesting question is: who is the authorized fourth party?""

It would be very surprising if the activities of the German secret service were restricted to crypto hardware. If it manipulated crypto hardware, it can be assumed it is also interested in crypto software.

And here we have an OpenSSL foundation consisting almost entirely of Germans, and financed by groups or people who want to remain anonymous!

That sounds strange. I mean, if an ordinary company wants to strengthen ssl encryption, why should it want to remain anonymous?

MdB Ströbele of the Green Party is on the parliamentary control commission and is currently asking tough questions.

Someone should message him that it should be investigated whether the BND has an involvement in OpenSSL and knew about Heartbleed.

William Payne • April 12, 2014 7:02 PM

It is nice to see the emphasis on the programming omission that made heartbleed possible. It is also nice to see the discussion encompassing issues like static analysis tools and code review. Hopefully we can start to raise awareness of these tools and techniques in the public consciousness, as well as to spread the message about just how much effort it takes to provide any sort of meaningful quality assurance for real-world software.

....Says the software / algorithms test guy trying to shore up institutional support for his discipline ...

:-)

DB • April 12, 2014 8:29 PM

@anon

"Reverse Heartbleed" has been mentioned as also possible since the beginning, just not with that fancy name and not with that much analysis yet. Thanks for the URL.

Secret Police • April 12, 2014 8:43 PM

This is a good talk by Poul-Henning Kamp (FreeBSD) @ FOSDEM '14

- he covers how open source projects are sabotaged
- how discussion is distracted purposely
- how NSA shills can sneak in patches
- how NSA can shutdown privacy projects with patent trolling
- how Skype was likely bought by the NSA thru proxy
- how totally screwed OpenSSL is
- how totally screwed Ipv6 is
- how totally screwed DNSSEC is

http://youtu.be/fwcl17Q0bpk

OpenSSL is like, the crown jewel of sabotaged standard libraries.

Nick P • April 12, 2014 9:07 PM

@ Secret Police

"how NSA can shutdown privacy projects with patent trolling"

I was a step ahead on this one. I posted here that I was focusing my efforts on building a secure hardware/software architecture that only used 20+ year old tech. I said they'd use patents to destroy whatever I create. Good news is that permutations of old tech can produce highly secure and even usable systems.

Secret Police • April 12, 2014 10:52 PM

@Nick P

Go on Alibaba and find feature phones. Create a SIM overlay that uses the SIM toolkit to encrypt voice and SMS, and acts as a firewall to block commands to the SIM like stealth type-0 SMS tracking and OTA. Remove the baseband, attach it to the back of the phone cover, and code it with the Osmocom open GSM stack or something. :P Though I suspect Nortel or old BlackBerry patents have that covered.

Didn't know it would only cost them

Secret Police • April 12, 2014 10:54 PM

Der, html filtered.
Didn't know it would only cost them under $30k to block all future privacy software with patents

Fly_on_wall • April 12, 2014 10:55 PM

Many networks and ISPs have the ability to spy on HTTPS using a proxy and an extra certificate to make the connection look secure. Microsoft has a handy tool to do this, and there are other companies that provide this service. Many connections, if tested with a regular browser to see whether traffic is being sent via an SSL proxy, will come up positive.

How do you usually check this? By checking whether the SHA1 fingerprint for that site matches what you are being served in your browser. If you check the fingerprint over Tor it matches; however, using Firefox without Tor, for example (which I was testing with), the SHA1 fingerprint does not match for that connection. That indicates a proxy between your connection (perhaps your ISP) and the internet. Such a proxy could log all traffic for later analysis and filter it for keywords and internet addresses.

Buck • April 12, 2014 11:04 PM

That the D language is itself so commonly used for exploitation should probably raise some red flags on its own... That it's seen so much covert use suggests a disaster still hiding in the shadows...

Nick P • April 12, 2014 11:15 PM

@ Buck

What the D language is commonly used for says nothing about its exploitability. That it's an immature language and not primarily focused on security are risk factors. It would be something I'd build on or try to improve rather than use right now. Ada with safety checks on is my primary recommendation. I'm working on a special compiler design that gets around *its* issues, while not actually being an Ada compiler. (What a paradox, eh?)

Nick P • April 12, 2014 11:26 PM

@ Secret Police

That won't do. I already have several abstract secure-phone designs. There's plenty of custom hardware in there, needed to get rid of most threats. The project would cost a LOT of money with no guarantee of the end result. Meanwhile, I just continue thinking of cellphones as entertainment devices that also pass messages (e.g. calls) through crowds of gossiping middlemen. If I later buy a Blackphone or a Cryptophone, I'll think the same thing, except there are fewer gossiping middlemen. An encrypted cell phone that's safe against TLAs isn't practical right now unless wealthy individuals or organizations decide to fund it at a loss.

However, such a phone is doable if you change "cell" to "mobile" or "fixed-line." I've already posted here an abstract design, with high design assurance, of a briefcase-sized device for that. A rework by pros would be needed for high implementation assurance. The resulting system is a few hundred to a few grand in hardware, depending on the details. And both parties must use it, and use it right. Yet how many potential buyers would want to lug around a suitcase of communications gear for the few calls it's used for?

Security engineering: the engineering discipline that is only pleasing to masochists. ;)

Mr. Pragma • April 12, 2014 11:30 PM

Buck (April 12, 2014 11:04 PM)

I think there is a simple explanation for that.

D is something like C++ done right. It tremendously enhances programming efficiency and it's still considered cool. If I came from a C/C++ background and were not willing to learn a different language/paradigm, yet wanted to gain efficiency, D is what I'd use.

At the same time, D still has many quirks, and the tools (compilers, debuggers, etc.) as well as D's library are still somewhat limited and/or in flux. Which leads corporate users not to use it but rather to wait (or forget it).

Of course, this also translates to D a) being used to create lots of lousy code more efficiently and b) sooner or later looking very attractive to corporate customers ("efficiency -> lower cost") feeding back and adding to a).

Actually I consider D outright dangerous, because it offers - and advertises - design-by-contract features which, of course, will attract lots of users ("Cool! Hacking in C and enjoying built-in security") who quite soon will discover D's powerful preprocessor (actually more like a dynamic compile-time engine) and other major deficiencies considered great by D people.
In other words: D risks becoming a more potent disaster tool with a nice DbC label on it.

It's about time that people, at least in sensitive areas, understood that readability is way more important than ease of hacking.

Just look up average response times ("response" meaning problem solved) for bugs. C/C++ are *major* culprits in those shocking time frames.

strangebedfellow • April 12, 2014 11:50 PM

Many may expect the Tor Browser Bundle to have an update after the latest revelations, and lo and behold, there is a new version, 3.5.4.

Upon launching, the new Tor Browser Bundle 3.5.4 immediately tries to establish a couple of new connections: one an SSH connection, and another to IP 213.163.64.74 that Malwarebytes Anti-Malware identifies as malicious. This may be a false positive; however, the previous Tor Browser Bundle 3.5.3 does not attempt to establish this connection, nor the SSH connection. Now this may just be some monitoring the contributors are doing to check this new release of 3.5.4, plus a connection to a new directory server with new authority key certificates, as these connections happen immediately before Tor asks for relay descriptors and establishes a Tor circuit.

Here's the rub, may Tor Browser Bundle 3.5.3 and previous versions be exposed to an SSL vulnerability and leak information, and might Tor Browser Bundle 3.5.4 be being monitored via a new trick?

One may want to be a little wary about passing any sensitive information over the internet for some time, until you can put some faith in the chain of trust.

Buck • April 13, 2014 12:02 AM

I may have been misunderstood here...
I'm not really trying to say that D is inherently 'bad'... but more that the advanced exploiters themselves seem to be distancing their own code from traditional C... Is there a reason for that? Probably ;-)
I think most of us can agree that technology is essentially neutral, while its application determines intent...
I guess what I'm trying to say is that blackhats are more flexible than the whitehats. No need to constantly patch stuff that depends on nobody else knowing about it :-\

hiddenservice • April 13, 2014 12:11 AM

Notes on Tor Browser Bundle 3.5.3

Hidden services: Tor hidden services might leak their long-term hidden service identity keys to their guard relays. Like the last big OpenSSL bug, this shouldn't allow an attacker to identify the location of the hidden service [edit: if it's your entry guard that extracted your key, they know where they got it from]. Also, an attacker who knows the hidden service identity key can impersonate the hidden service. Best practice would be to move to a new hidden-service address at your convenience.

Directory authorities: In addition to the keys listed in the "relays and bridges" section above, Tor directory authorities might leak their medium-term authority signing keys. Once you've updated your OpenSSL package, you should generate a new signing key. Long-term directory authority identity keys are offline so should not be affected (whew). More tricky is that clients have your relay identity key hard-coded, so please don't rotate that yet. We'll see how this unfolds and try to think of a good solution there.

There is a new 3.6-beta2 release you could try instead of 3.5.4, but be warned that it needs much more testing.

Dlang • April 13, 2014 12:16 AM

The language is irrelevant, as these are crypto libraries and standards that are purposely designed to be crippled, obfuscated, and misleading. Just look at some of the TLS working group's chat logs:

I may take some talking to convince DNSSEC library implementors to provide functions to verify DNSSEC records provided through a (standardized) flat memory buffer, rather than having it collect those records itself

Nothing could go wrong. It's not as if forming an illegal address and overflowing the buffer is dead simple with a flat memory model or anything. Let's cram this into the library!

Dlang • April 13, 2014 12:24 AM

Damnit allowed HTML tags ruining my cut + paste.

[20:59:26] (mrex) I may take some talking to convince DNSSEC library implementors to provide functions to verify DNSSEC records provided through a (standardized) flat memory buffer, rather than having it collect those records itself

Anyway, go through the TLS working group logs and you'll find all sorts of incompetence. All these standards should be burned, and competent people like D. J. Bernstein appointed instead of industry hacks with close connections to US intel agencies.

Mr. Pragma • April 13, 2014 12:32 AM

Buck (April 13, 2014 12:02 AM)

"I guess what I'm trying to say is that blackhats are more flexible than the whitehats. No need to constantly patch stuff that depends on nobody else knowing about it :-\"

Well, their jobs are usually quite different.

While the white hats usually use configurable tools or do some scripting, the black hats write lots of code. (Disclaimer: some of what I say might be not 100% correct; it's been a while since I played with D.)

Where in C I need to care about details - for instance, allocating the necessary bytes on the heap via malloc, keeping track of them, making sure I don't overrun them, and finally freeing them - in D I have a simple new operation and a readily available string type.

If you look at C code you'll find a tremendous amount of housekeeping: using (and managing) libraries for rather mundane tasks, insane error handling with funny nesting in it, and so on. With D you can suddenly just hack away and concentrate on the job at hand.

Now, I don't know too much about black hats, but I strongly assume that time is quite often of the essence (you just heard of that 0-day and want to be the first to exploit it...), so D's much-increased efficiency comes in more than handy. It feels very close to C; I remember hacking together a small hand-coded parser (there's a lib too, IIRC) within days. *Of course* that's very attractive for experienced C programmers looking for decisively enhanced efficiency.

But then, the really interesting point for us here is how to keep them away, how to avoid exploitable code in the first place. And in that regard all C successors and friends up to and including java will keep us in the losing corner.

With hardly more than the kernels (Linux, *BSD, *Solaris) and a few other pieces of software not more or less rotten, and pretty much all software written in C and its ugly descendants, exploits are not a surprise - every day without a new exploit *is*.

Mr. Pragma • April 13, 2014 12:39 AM

Dlang

Well, I don't see it as "either or" but rather as "both". Lousy standards, made at least in part by people with dubious motivations, are implemented in C/C++ and toys.
(I say toys because while, for instance, Python is a very useful tool, *any and every* language without static typing - and preferably being paranoid about it - is but a toy in terms of safe security tools.)


Btw:

@Secret Police

Thanks for that link. Poul rubbed our noses right in the dirt, and quite probably his funny what-if scenario is way closer to reality than we'd like to believe.

fourEYES • April 13, 2014 12:56 AM

The beta version, Tor Browser 3.6-beta-2, checks out and does not launch any strange connections; the SHA1 fingerprint for the site matches, and the signature of the file matched when downloaded.

https://www.torproject.org/projects/torbrowser.html.en#downloads-beta

If only embedded hardware were so easy to fix, and smartphone and tablet manufacturers rolled out timely updates for the huge number of security flaws present in most. I completely avoid smartphones, but they are a very handy tool for remote network penetration testing and also for many nefarious activities practiced by some individuals. I imagine smartphone manufacturers and the software providers will update their client software and roll out patches sometime long after the next really bad exploit or bug is discovered, leaving such devices always vulnerable.

Thoth • April 13, 2014 1:06 AM

Anyone interested in launching a lightweight and concise crypto library that serves only a handful of algorithms, including a "stupid mode" (meaning the defaults are made secure - like using CBC mode by default - for programmers who don't know crypto)? The requirement would also be to make the library modular, so that if someone only wants the AES-CBC portion, they can compile just that portion. I am thinking the set of supported algorithms should be very strict and small (Blowfish, Rijndael, Twofish and Serpent) with modes (ECB, CBC for now) and, of course, no C/C++ allowed. Probably start off with Java while referencing the BouncyCastle library and other libraries. Documentation and consistency are highly sought after post-Heartbleed.

anonymous • April 13, 2014 1:32 AM

@Clive Robinson
"Now there has been a question over what some regard as an elementry mistake (not bounds checking correctly) however I don't think they are looking at it dispassionatly and are also using 20-20 Hindsight."

I agree on that. It is very difficult to find such bugs by reviewing C code (certainly C code written the way C software typically was a decade ago, like OpenSSL). A few years ago, I spent several weeks finding a bug that only showed up occasionally in production, in some relatively complex C server-side software. We first tried logging more when it happened and enabled core dumps, but for specific reasons the logs were not written out and the cores helped little, which is why we spent maybe more time than anticipated doing static code review. It really took two people, looking at the same code over and over again (on and off) for several weeks, to finally find the bug, even though the corresponding functionality had a relatively well-structured design compared to the rest of the code around it. So, yes, in general, unless maybe the NSA etc. has much more advanced tools that can find such things automatically by static code analysis, there is a good chance that nobody can reliably detect even issues that look so obvious in hindsight. In that sense, C's opacity may even help to protect.

Normally, I would just say "shit happens" with regard to the strange design of the heartbeat and the bug in its implementation, and consider it quite possible that it would remain unnoticed for a long time, even by the largest organisations that look for such issues as their core business and have lots of money.

In other words, in dubio pro reo. However, the issue should be investigated. To me personally, this "smells" a lot like a planted backdoor, just a gut feeling, but it all looks so smooth and plausibly deniable that it might overall simply be too improbable to be real.

anonymousApril 13, 2014 2:12 AM


Many Devices Will Never Be Patched to Fix Heartbleed Bug

http://www.technologyreview.com/news/526451/many-devices-will-never-be-patched-to-fix-heartbleed-bug/

The good news is that devices released before 2011 won't contain the bug, unless they were patched with updated firmware that introduced it. Linksys posted that their routers are unaffected, but Cisco and Juniper have affected devices for which they are releasing patched firmware. Many other router manufacturers haven't commented on their forums about their devices. Even if some of these manufacturers do eventually comment or release updated firmware, it's unlikely many home users will update their devices anyway.

OpenSSL is used for remote administration and VPN in a large number of router/modem/firewall devices. You can and should disable remote administration if it isn't already disabled (home routers often ship with remote administration turned off by default, but it's worth checking), then reboot your device.

WaelApril 13, 2014 2:38 AM

@Nick P,
Speaking of the "C" and "D" programming languages, I'd be interested to read your take on the "E" programming language and capability-based operating systems...
http://erights.org/index.html#2levels
http://www.eros-os.org/eros.html
http://www.cs.cornell.edu/courses/cs513/2005fa/L08.html
Also, you may want to consider adding ABAC; Attribute Based Access Control
http://csrc.nist.gov/projects/abac/
Perhaps that would make your design more secure? I think we talked about this briefly a while back...

yesmeApril 13, 2014 3:06 AM

@Mr. Pragma

"D is something like C++ done the right way."

... No sorry. You can't do C++ the right way.

Just think about simplicity. Until now the only "new" language that really impressed me with simplicity is Go.

@Thoth

The problem with modular approaches is that both ends of the line need to have *all* the crypto stuff. Why not specify two modes: one military grade, that takes all the energy in the galaxy and billions of years to crack, for banking and that kind of thing, and one for ordinary use that would take 100 years with all the computers in the world to crack. You know, simple stuff, for ordinary chatting and Facebook.

If you have only 1 or 2 predefined modes, with predefined settings too (AES256, SHA2, etc.), it would simplify not only the protocols but also the libraries that implement them.

In short, I am not really convinced that modularity is a good thing in security software.

itgrrlApril 13, 2014 3:11 AM

It's interesting that the NSA have denied having prior knowledge of Heartbleed. They seem to prefer (at the moment) to claim incompetence rather than malice. If the NSA truly had no prior knowledge of Heartbleed, then it would seem to be failing *both* of its stated missions. Which then raises the question... why continue to fund it?

LApril 13, 2014 3:26 AM

The D language sounds nice, but it's still way too immature.

I still believe that C++11 is the way to go if you start a new project now, but you actually have to learn the language again. If you think it is just a little library update, you're definitely wrong; design and programming in general get much simpler and more straightforward.

Of course you can do bad things in C++11. If you use it like the people on gtk or gcc do, as if it were just "C with classes", you are obviously going to fail, because you are using a language you don't know.

You can do bad things in all languages. The problem with C++ is that you really can do anything. That's why a maintainer who is knowledgeable in the language is needed, so that people do not write plain C or overload all the operators "just because"...

If you can find good coding guidelines then C++ is way safer than C.

It's the same in all languages. Without the right coding guidelines you can write unreadable code in *any* language; just think about formatting, declarations...

Any other language is either too esoteric, in which case people won't help you, or too immature, in which case it's not safe to use.

So IMHO, C++11 is the only sane choice for a new project right now if you do not want a big runtime env, a VM or interpreted code.

Anyway, I'm coding an authentication protocol in C++11, we'll see how it turns out :)

MirarApril 13, 2014 3:43 AM

@Dave Walker: I've heard complaints from people using OpenSSL that it causes heaps of problems with, for instance, Valgrind (memory leaks etc.). I find it quite possible that static analysis tools will find a lot of problems in the OpenSSL project...

yesmeApril 13, 2014 3:55 AM

@L

Please take a look at this code sample from OpenSSL. Do you trust C++ with these guys?

Believe me, you can't write this kind of code in Go. You still can (and eventually will) write this kind of code in C++. You need a language where you *can't* take shortcuts, because in ten years' time your code will be full of them.

Let me put it differently: name me one old, large C++ library that isn't full of crap.

That is the reality.

Edsger Dijkstra once said: "The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague."

Compare this quote with the code sample and then think about C++.

just wonderingApril 13, 2014 5:19 AM

What about modern, high-performance Java networking frameworks, e.g. Apache Mina or Netty, as an alternative to C/OpenSSL?

Regarding the sandbox, i.e. imposing restrictions beyond the privileges of the running process, Java has failed quite a bit. But otherwise? Is anything actually worse in a VM running with the same privileges than in a process written in C? Performance might nowadays be acceptable, or maybe even superior, depending on the application. And lots of libraries with mostly good standard structure (unit tests, javadoc, ...) are available in Java, so it's clearly more than an academic niche alternative.

And maybe today more open-source developers are interested in, and able to, contribute in Java than in C? Not a panacea, but maybe a robust step forward in terms of security?

AlexTApril 13, 2014 5:44 AM

Regarding the NSA debate, I don't really expect them to warn about such juicy exploits, no matter what they might claim.

That being said, could we propose that whenever they identify an exploit they set up some monitoring (including honeypots), and if anyone _else_ shows up exploiting it, they promptly warn whoever is concerned, possibly through side channels (such as M. Mehta)?

WApril 13, 2014 5:58 AM

"The Federal government relies on OpenSSL to protect the privacy of users of government websites and other online services. This Administration takes seriously its responsibility to help maintain an open, interoperable, secure and reliable Internet."
If this were true, surely the government would now invest a little money to review the OpenSSL code and fix all the vulnerabilities.

DlangApril 13, 2014 6:40 AM

I think it's pointless to put hopes in functional Java, Rust, Go and other 'safe' languages to cure OpenSSL. The real problem is that the entire infrastructure, from standard libraries to essential protocols, is, and has always been, sabotaged. IETF working groups are filled with shady state actors ensuring we maintain a 1990s standard of insecurity and centralization. They are still trying to get MtE block ciphersuites to work and refuse to acknowledge that it's a design flaw. They are still pushing DANE and DNSSEC, a ridiculous and almost laughably bad solution to PKI/CAs. They still create needlessly complex standards nobody can safely implement in any language.

Here's a good example, this was drafted only last month http://datatracker.ietf.org/doc/draft-ietf-tls-encrypt-then-mac/

At this pace we should have TLS/OpenSSL fixed by 2025


Nick PApril 13, 2014 11:00 AM

@ Buck

I see what you're saying. Yeah, they're quite flexible. Flexible to the point that one of them has the distinction of pushing more Scheme code (as spyware) than anyone else. Malware authors also do more self-modifying code, as it's necessary for AV bypass. They also tend to work right in the abstraction gaps between running code and the C/C++ language, gaps that most developers neither focus on nor really understand. I actually even learned about an ultralightweight OOP-in-C approach from one of them. There are definitely things to learn from their choices, as the nature of the work sometimes forces them to adapt with exotic or clever approaches to problem-solving.

@ Dlang

"The language is irrelevant as these are crypto libraries and standards that are purposely designed to be crippled, obfuscated, and misleading. Just look at some of the TLS working group's chat logs"

I agree that a garbage standard can lead to data compromise regardless of how it's implemented. Yet using a safe language *still* has benefits: an attacker using a memory violation to control your machine is *much* worse than someone recovering plaintext off an SSL connection. The language and/or runtime is an important security layer that prevents and contains certain types of damage. And if the subversion targets the implementation, a safe language might keep the standard safe despite the subversion. Each link in the security chain must be strong.

@ Thoth

"Anyone interested in launching a lightweight and concise crypto library that serves only a handful of algorithms and includes a "stupid mode" (meaning the defaults are made secure, like using CBC mode by default, for programmers who don't know crypto)?"

It's called NaCl, and it's made by the guy mentioned in Dlang's request. So you both win. Bernstein is also working on middleware like MinimaLT at the Ethos project.

@ anonymous

"So, yes, in general, unless maybe the NSA etc. has much more advanced tools that can find such things automatically by static code analysis"

There are actually a ton of analysis tools and a ton of errors they can catch. Talented white and black hats tend to use the better ones. Many memory violations and pointer issues can be detected with static or dynamic analysis tools. It just takes money for the large staff needed to run through all the false positives looking for the real bug. NSA and their "offensive computing" contractors have that. So any process aiming to create software that resists such an organization must use every tool at its disposal to prevent exploits like this.

@ yesme

"Believe me, you can't write this kind of code in Go. You still can (and eventually will) write this kind of code in C++. You need a language where you *can't* take shortcuts, because in ten years' time your code will be full of them."

That's actually a good point. Either the language or the culture needs to be designed to minimize hacks.

@ Wael

re E

It's an interesting language that was used in several practical apps, such as chat and a web browser. The system allows one to concisely write distributed apps with good security properties from the start. The drawback is that it's implemented on top of the most-attacked runtime in the world. That's essentially the same drawback as any of the safer languages I mentioned: they each give you better odds of writing a robust app, and each can fail because they must eventually link to and run on unsafe constructions. Which leads to the next topic.

re capability based systems

I've posted those links here before. ;) My take is that capabilities are a powerful abstraction. Like tags, they seem to allow a single, highly assured mechanism to enforce a wide range of security policies. As far as the EROS link goes, click on "The KeyKOS System" to see how it was done in production, and KeySAFE, which extended it for Orange Book trusted-system requirements. EROS was a clean-slate version of KeyKOS for x86. It had nice networking and GUI components, too. Doing many highly assured software designs over the years, I found that the low-level architecture always fought against my efforts. For some reason it never dawned on me that maybe I should just throw out the existing architecture and build a better one. If you need to narrow it down for time purposes, the Hydra, CAP, Intel, and System/38 designs were the most promising. System/38 proved a smart bet, as it became the AS/400 and hence the only successful one.

I think capability machines have the potential to solve our problems, as they produce inherently more secure hardware that maps more easily to safe languages. Several such machines were programmed exclusively in higher-level, typed languages. An efficient, versatile mechanism is needed at the hardware level, and one of these machines might be the key to discovering it. Ideally, the chosen language will be easy to learn, or one already adopted. Alternatively, we can do it CLR- or VMS-style, where certain low-level aspects are standardized so many languages can seamlessly interoperate. For such reasons, my designs keep bouncing back and forth between tagged and capability-style architectures. One implements both, with the capabilities managed in software while tags ensure the machine knows the difference. At least one system in the linked book did that, but mine expands on it. I'm also trying to integrate a secure I/O processor, as most capability machines needed I/O offloading, with the added requirement that the I/O system transparently creates, removes, and enforces the tags/capabilities during device operations. That should prevent bypasses while unifying the CPU and I/O security model.

Which of the capability machines interests you the most? And which approach do you think would be easiest to implement, esp by cheaper engineers? ;)

BuckApril 13, 2014 12:07 PM

Ha! Fantastic interview :-D Thanks for the link!

It was funny. It really showed me the power of gradualism. It's hard to get people to do something bad all in one big jump, but if you can cut it up into small enough pieces, you can get people to do almost anything.

David MApril 13, 2014 1:45 PM

I see tons of web postings stating that new certificates are being issued, yet no one clearly states that they FIRST generated a new private/public key pair for those new certificates.

How can a typical end user know for sure that the keys were regenerated, short of calling the affected site and asking?

Without positive knowledge that the keys were regenerated, one cannot fully trust that the SSL connection is secure even after a new cert is in use...

Clive RobinsonApril 13, 2014 1:57 PM

@ William Payne,

    It is nice to see the emphasis on the programming omission that made heartbleed possible. It is also nice to see the discussion encompassing issues like static analysis tools and code review.

Firstly, as I've already indicated, this is not where we should be looking for the root cause of the problem. People should actually be looking at the requirement for the heartbeat in the first place (questionable design) and the ludicrous nature of the solution (the heartbeat specification).

Nobody at the IETF took a step back and said "WTF are we doing, and WTF does it need such a ridiculous solution?"

Supposedly the payload "length" issue is to do with preventing two or more heartbeat responses getting confused... A thirty-two-bit (i.e. 4-byte) fixed field would solve that issue; why anyone would think 64 KBytes (enough to encode a number up to 2^524288) was needed in this lifetime or the next thousand or so beats me on the common-sense factor...
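The missing check itself is tiny. Here is a minimal sketch of the difference between trusting the sender's length field and validating it; the `record` struct and function names are made up for illustration, not OpenSSL's actual types, though the "silently drop malformed requests" behaviour mirrors the spirit of the real fix:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical heartbeat-like record: a claimed payload length
   (copied from the wire) plus the bytes actually received. */
struct record {
    size_t claimed_len;          /* length field the peer sent      */
    size_t actual_len;           /* bytes really present in buffer  */
    const unsigned char *payload;
};

/* Vulnerable pattern (do NOT do this):
   memcpy(out, rec->payload, rec->claimed_len);
   -- echoes back claimed_len bytes, reading past the buffer. */

/* Fixed pattern: refuse to respond unless the claimed length fits
   inside what was actually received. */
unsigned char *heartbeat_response(const struct record *rec, size_t *out_len)
{
    if (rec->claimed_len > rec->actual_len)
        return NULL;                  /* silently drop bogus request */

    unsigned char *out = malloc(rec->claimed_len ? rec->claimed_len : 1);
    if (out == NULL)
        return NULL;
    memcpy(out, rec->payload, rec->claimed_len);
    *out_len = rec->claimed_len;
    return out;
}
```

With this check, a request claiming 65535 bytes while carrying 5 is simply dropped rather than answered with 64 KB of whatever happens to sit next to the buffer.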

But that aside, there is no such thing as a safe/secure programming language that would be of real use. This was known long before the first electrical/electronic programmable stored-program machine was ever built.

Likewise Turing's halting problem should tell you why any kind of "code analysis" system will always fail to catch the more interesting bugs.

And then there's the "dependency problem": why should a code cutter even learn to program in a defensive manner to produce safe/secure code, when an analyser can just be "run over it" and you then "fix the warning messages"...

It's only taken two generations of "mobile phone" users for serious "dependency problems" with them to become very real-world issues. It's fairly clear that many modern teenagers cannot survive without the level of "connectivity" they provide. But two generations back, people easily coped without mobiles, by actually knowing how things could go wrong, planning ahead to avoid the problems, and then thinking on their feet sensibly when unexpected problems occurred.

The problem with analysis tools is that it won't take long for code cutters to become hooked and dependent, and then incapable of responding to the problems the analysis tools don't pick up...

DBApril 13, 2014 2:08 PM

@ Clive Robinson

The fact that any tool can be abused is no reason to refuse to use it properly.

Clive RobinsonApril 13, 2014 2:23 PM

@ Mr. Pragma,

I said,

    The bounds check is not the problem, nor for that matter is the usage of malloc and friends, or for that matter the use of C

And you responded with,

    According to the OpenBSD guys, who damn know a thing or two about security, malloc was a problem or, more precisely, sailing blindly *was* a problem.

It's a shame that you did not go on to quote my paragraph that followed.

Just to reiterate: the "problem" is the "need for a heartbeat in the first place", which resulted in a totally "off the chart" specification for the heartbeat. One 20,000ft view of that protocol should have been raising "WTF questions".

Irrespective of what the OpenBSD guys thought of the "custom malloc", I've already explained on this blog that a security leak would have occurred with the traditional malloc and free as well, and that a programmer with any security experience would have been zeroing critical memory immediately after use, and most certainly before freeing it.
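The zeroing described above is easy to get wrong in C, because an ordinary memset() just before free() looks like a dead store to the compiler and can be optimised away. A common hedge, sketched below for platforms without a guaranteed primitive like OpenBSD's explicit_bzero(), is to call memset through a volatile function pointer the optimiser cannot see through (the `wipe` name is made up for this example):

```c
#include <string.h>

/* Volatile function pointer: the compiler cannot prove at compile
   time that it still points at memset, so the "dead" store before
   free() survives optimisation. */
static void *(*const volatile secure_memset)(void *, int, size_t) = memset;

/* Zero a buffer holding key material or other secrets. Call this
   immediately after use and certainly before free(). */
void wipe(void *buf, size_t len)
{
    if (buf != NULL && len > 0)
        secure_memset(buf, 0, len);
}
```

Usage would be `wipe(key, keylen); free(key);` so that even if the allocator later hands the same block to an attacker-reachable buffer, there is nothing left to leak.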

I've also mentioned that using a secondary process to isolate KeyMat and other sensitive information has been standard practice in one way or another.

So I stick by what I've said in those two paragraphs and other postings prior to yours. And by the looks of it, other people are taking a step back from pointless programming-language discussions and starting to shine a light on what the OpenSSL team and the IETF were up to, and it's raising some interesting issues.

Clive RobinsonApril 13, 2014 2:40 PM

@ DB,

    The fact that any tool can be abused is no reason to refuse to use it properly.

Whilst I agree with the sentiment, it does raise an almost philosophical question as to what "properly" means.

For instance, I can cut a mortice and tenon with a mallet and chisel, but I know that carving a statue with the same tools is beyond my current abilities. Further, although I have "an engineer's eye and touch", I don't have "an artist's eye and touch" and don't see/feel the "hidden grain within". Thus, whilst I would very much love the latter's eye and ability, I'm aware of my --current-- limitation. So do I stick with what I can do, or do I strive for better? And if I strive, what happens to the substandard pieces I produce along the hopefully successful journey?

Mr. PragmaApril 13, 2014 2:49 PM

yesme (April 13, 2014 3:06 AM)

'"D is something like C++ done the right way."

... No sorry. You can't do C++ the right way.'

Yes and no. The very concept of C++ is rotten and misguided. But D at least does it (mostly) right (mainly by not merely gluing object stuff onto C) and adds some smart ideas.

"Just think about simplicity. Until now the only "new" language that really impressed me with simplicity is Go."

I had a good look at Go. Let me put it this way: I expect some well-designed code in certain segments (those important to Google) and lots of problem-riddled hacks.

"The problem with modular approaches is ...
If you have only 1 or 2 predefined modes, with also predefined settings (AES256, SHA2, etc), not only would it simplify the protocols but also the libraries that implement these protocols"

And you would severely limit users and hamper adoption of good new algorithms.

The problem isn't modules, neither as in many algorithms and modes (although one might want to clean out some) nor as in software modules.
The problem (as far as software is concerned) is the utterly lousy design and implementation. Just look at the openssl code.
What p*sses me off particularly is that it seems some of that abomination did not "happen" or "creep in" but was actually intentional, along the beloved C/C++ line of "real code *should* look obfuscated!"

It's also about time to grow up and see the reality. Which is: a language feature that can be abused *will* be abused.


L (April 13, 2014 3:26 AM)

You basically bring up the usual arguments like "You can do bad things with all languages."
Yes. But there are languages strongly inviting you to design and code properly, and there are those, like C/C++, that invite you to spill the contents of a drunk youngster's nightmare on crack into code.

"If you can find good coding guidelines then C++ is way safer than C."

Sorry, no. Frankly, the only acceptable guideline for C/C++ is to not use it if it can be avoided. Period.


just wondering (April 13, 2014 5:19 AM)

Sorry to pick you (there are others, too).

Forget it! Java being a language for safe code is a fairy tale.
For one, it repeats many errors from C/C++ (like the notation). More importantly, Java is a bureaucratic nightmare, drawing way too much attention to its own quirks and needs; programmers using Java tend to design around or along Java's weirdness (and it's largely in the grip of one company). Don't get me wrong; one can do a lot of funny proggies in Java, but here we're talking security.

A language should work *for* the developer and *for* sound implementations. When it hampers you, that should be for the sake of sound programming, and not because the language (or its designers) have diva allures.

More importantly, it's about time to understand that code is written only once but read (and hopefully understood!) many times.

With C/C++ code we just experienced what happens, or doesn't happen, when it's read the fu**ing first time by the "controller". If OpenSSL had been written in, say, Pascal (don't worry, I don't like it either), chances are that error would have been spotted at sign-off.

Now multiply that problem by a big number of your choice and you have the situation 6 months or 3 years later, when another person reads that code and is supposed to fix a bug, preferably without creating 3 more.


Dlang (April 13, 2014 6:40 AM)

Yes, quite probably the algorithms (rather than only the implementation) should be audited too. Preferably including people like Schneier and Bernstein, who supposedly didn't sell out to governments.

But there's still the question of proper implementation and, linked to that, the question of language.


Nick P (April 13, 2014 11:00 AM)

"Either the language or the culture need to be designed to minimize hacks."

Good to see someone who is plus/minus looking in the right direction.

Actually I'm convinced that those two, language and culture, are quite closely related.

"It's fun" is a statement frequently heard in C (and somewhat less C++) circles. They even put it on slides. Fun seems to be *the* motivation. I don't think that's happenstance.

Let's look closer at your statement:

- static typing, with casting frowned upon unless absolutely necessary: a good thing to have. It makes developers put more weight on proper design rather than hacking away.
Formal verifiability is a major plus.

- readability: possibly *the* indicator of grown-up professionals. College boys just hack away; there's no tomorrow. Professionals know, accept, value, and prepare for code being well readable.

- modularity: having real modularity (not the header-file plague) and mechanisms for proper interface design supports reusability and maintainability, and at the same time offers a painless way to include new algorithms/protocols.

C/C++ is not that. And Java might sound somewhat like it, but it is more of a tricky promise than a solution.


There's more, but I think one can already see a line.

Mr. PragmaApril 13, 2014 3:03 PM

Clive Robinson (April 13, 2014 2:23 PM)

I think you took my statement harsher than it was meant. I merely opposed the "malloc wasn't a problem neither" part.

Also kindly note that, while I do concentrate on the language issue, I *did* state (in reply to someone here) that the protocol design/standards bodies and other issues are problematic, too. Elaborating on that, however, I leave to others more knowledgeable in that area than I am. I prefer to stick to issues I know something about.

As for the static analyzers, I largely concur with you. I also see the danger of (particularly) youngsters, the "it's fun!" hackers, letting go of what little safety consideration they might have because, oh well, there are cool tools to take care of it.

On the other hand, I've painfully learned to become a fan of testing, be it unit testing, static analysis, or other forms. In particular, I feel we need better formal verification for critical areas.

Sorry to bother you, but I'm afraid all this touches the language issue, too. There is a reason that e.g. Ada is quite well verifiable while C is not (and is, in fact, even hard to parse).

SomebodyApril 13, 2014 4:10 PM

@Clive Robinson: Likewise Turing's halting problem should tell you why any kind of "code analysis" system will always fail to catch the more interesting bugs.

This is a misuse of the halting problem. The halting problem says you can't divide all programs into two classes: those that can be proved to halt and those that can be proved not to halt.

But you can divide programs into two classes: those that have been proved to halt and those that have not been proved to halt.

If you need your program to halt, you should accept only programs that have been proven to halt, not merely reject programs that have been proven not to halt. Sure, you'll reject some programs that do halt, but so what?
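That asymmetry is easy to illustrate with a toy example (hypothetical code, nothing to do with OpenSSL): a loop with an explicit decreasing bound that a simple termination checker can accept, next to a data-dependent loop that happens to halt but which a conservative checker would have to reject:

```c
/* Provably halts: n strictly decreases toward 0 on every iteration,
   a termination argument a checker can verify mechanically. */
int sum_first(const int *a, int n)
{
    int s = 0;
    while (n > 0) {
        s += a[--n];
    }
    return s;
}

/* Halts for well-formed input (a -1 sentinel is present), but
   termination depends on the data, not the code. A checker that
   accepts only provable termination must reject this function,
   even though it halts in practice -- exactly the policy of
   rejecting some halting programs described above. */
int find_sentinel(const int *a)
{
    int i = 0;
    while (a[i] != -1)
        i++;
    return i;
}
```

The first function is "rejected some programs that halt" territory only if the checker is very weak; the second shows why a sound checker can never accept everything that happens to halt.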

In the context of code reviews, code that is hard to read is easy to review: you reject it and tell the author to do better next time. Unfortunately, in most cases code reviews (and static analysis) are used to find bad code, rather than to find good code.

SkepticalApril 13, 2014 4:22 PM

Let me ask a very uninformed technical question that is likely only to show my ignorance.

Would it not be an easier solution to build prevention of such errors, and protection of information that a process designates as requiring extra protection against unintentional access, into the operating system? I'm told that C++ is a highly popular language with a huge number of useful libraries, so it seems unlikely that it would be dropped.

DBApril 13, 2014 4:44 PM

Alright.... all you nay-sayers who say, "oh, but we can't do X security measure [even though that would be a big improvement], because it doesn't solve Y and Z [which also need to be solved, but are unrelated]..." fuck off and stop trying to derail improvements, you NSA tool!

No one tool solves everything, obviously, but every tool is useful for what it's intended and designed to do. So... USE THEM ALREADY... Use all of them. If you're unable to use every tool today, use what you can today, then pick up and learn one more tomorrow, and another the day after. This is not something that will be solved overnight; it's a process. Start going down this road. It takes effort; put it in.

Use code analysis tools to improve and verify code quality.

Use improved languages that enforce better security models.

Invent new hardware designs and software methodologies that improve security too.

Write tests, aim for 100% code path coverage in your tests, try to test edge cases.

Use good old fashioned self discipline to think about what's readable and what's not, what's more safe and what's less.

Never stop refactoring and improving things.

Never stop pestering your politicians either.

Never stop educating the public either.

Once you've finished all these goals, make new ones. Set the bar higher yet again. Keep doing this... forever! don't stop.

Mr. PragmaApril 13, 2014 4:55 PM

Skeptical (April 13, 2014 4:22 PM)

The problem is a little more complex.

For one, putting security into the OS doesn't have only nice sides. It also, for instance, brings complexity with it (for the user, too).
Furthermore, in the end every language is translated into executable code, and there is only so much an OS can put up against that.

Probably more importantly, you can divide attacks (very roughly) into two quite different groups:

- "hackers", i.e. people who attack systems with malicious intent and for a (usually money-related) purpose.
Those typically act "commercially reasonably"; that is, they create tools that address a very large share of the systems out there (Windows, routers, ...).

- states/governments which usually can gain control of systems, networks, etc. in more than one way and they are usually very well funded.

"Protecting against attacks" therefore must address very different situations and scenarios. The Heartbleed problem is particularly bad because it kind of connects the scenarios.

When protecting against governments, one must assume that some agents have physical access to the system (or network, or ...) and thus can circumvent most standard security measures.

This is in part what cryptography is about. An example would be agents physically taking away a server but, given good protection, still not getting at the confidential information.

Now OpenSSL is "everybody's crypto protection".

In order to achieve a better situation the whole chain must be verified and implemented properly.

As someone here correctly stated that would begin by looking closely at committees and standards bodies as well as the standards and protocols themselves and, last but not least, proper implementation as well as trustworthy OSs.

Unfortunately one of your remarks is sadly correct. Rather than adequacy, the criteria actually applied usually boil down to convenience, what's widespread, what everyone else does/uses, and even questions of comfort.

At the same time, those trying to create better alternatives have a hard time, because they usually don't have gazillion-dollar budgets or open or covert control of the relevant bodies. Even worse, it's hard to verify that the alternatives aren't tainted in the first place.

In that situation one can either give up (or "trust the system" which is just another form of giving up) or start modestly. Small - and by no means sufficient - starting points could be to carefully choose algorithms, to choose a good (albeit that's relative) OS, to configure that OS properly, and to choose a language that supports safe programming rather than favouring unsound hacking.

Unfortunately that whole matter is so complex that even experts in one area make sad mistakes in another, e.g. an excellent cryptographer and generally bright mind like Bernstein coding in C and Go.

FigureitoutApril 13, 2014 5:17 PM

National Insecurity
--People are trying to be practical and solve problems that they themselves can solve. It's been stated many times that standards/protocols suck (no other word); but who's ready to take on that responsibility here? Likewise fabbing hardware where you don't have hidden radios or malicious circuits that allow code to physically attack and destroy parts of your machine (or just overwrite a HDD). It's a helpless feeling when you can't alter a chip w/o destroying it.

Are we ready to radically alter standards/protocols? Who's going to step up and take the reins and start kicking the current people out? (And not be another shill/sellout). Big questions; and it needs to be an older person (and not a politician, someone w/ real knowledge) w/ experience, not some rookie.

Nick PApril 13, 2014 5:47 PM

@ Mr. Pragma

So your hypothesis is that they're going for languages that give them plenty of freedom to hack away. I think this might help rather than hurt, in that plenty of languages have this freedom and are still safer. Most are scripting languages, although LISP also comes to mind.

The trick with them is that they are value typed instead of type declared. You can quickly crank out code with runtime figuring out types. Additionally, they make it easy to get strings and control flow right without programmer headaches.

So, the best route might be to design a systems language (or enhance existing one) with same principles. Make it fun and easy to code in, with type controls optional on per module or function basis. And constructs good enough for real work. My Ada+LISP project intends to do that if I get it going: prototyping and ultra-productive coding in a LISP subset, autogeneration of Ada/SPARK for static checking and/or final release. I'd probably recommend Python + Java/c#/Go for others as more popular.
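
The value-typing convenience described above can be sketched in Python (the variable names here are invented for illustration): the runtime derives each type from the value, and strings are bounds-checked objects rather than raw memory.

```python
# Value typing: the runtime infers each variable's type from its value,
# so no declarations are needed -- the "crank out code quickly" property.
x = 5          # inferred as int from the literal
s = "heart"    # inferred as str

print(type(x).__name__)   # -> int
print(s + "bleed")        # string handling never overruns a buffer

# Out-of-range access fails loudly with an exception, instead of
# silently reading adjacent memory as an unchecked C read would.
try:
    s[100]
except IndexError:
    print("caught")
```

The same program in C would need explicit types and manual length bookkeeping; here the runtime carries that burden, which is exactly the trade-off (safety and speed of writing versus runtime cost) discussed above.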

Mr. PragmaApril 13, 2014 6:27 PM

Nick P (April 13, 2014 5:47 PM)

"So your hypothesis is that they're going for languages that give them plenty of freedom to hack away."

Yes. Mainly because of laziness. "My friend develops in C/C++/java, too", "That's what I'm used to", etc.

I'm getting increasingly suspicious, however, of government entities having a pro-active desire in C/C++/java.

After all, they want to introduce weaknesses, yet have plausible deniability and very low risk of their manipulations being discovered.

Playing Poul (HK) for a moment ... if I were at nsa I would have my team find out precisely the kinds of errors that Coverity and other often-used analyzers do *not* find. Sure enough, in C/C++ the outcome would be satisfying. And it would provide a strong hint how we (nsa, ...) can get our poison into OSs, cryptography, etc.

As for your plans with Ada and SPARK, I am obviously happy. As far as prototyping in a scripting language is concerned, I'm not so sure. O.K., for a quick prototype Python or Lisp might be fine, but then why not use a typed scripting language right away? What's so unacceptable or difficult in specifying types? After all, data is a vital part of prototyping in crypto.

George H.H. MitchellApril 13, 2014 6:46 PM

You describe the NSA's recent statement as "word-game free." Close, but not quite:

NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report.

Which "private sector cybersecurity report"? Do we really know when the earliest one was?

SkepticalApril 13, 2014 7:18 PM

Hmmm... I appreciate the reply, Pragma. And I see what you're saying.

I tend to look at this from an accident causation vantage (and with enormous technical ignorance of programming).

My underlying intuition is that one of the enabling factors here is the fact that a procedure that is not being written with the purpose of handling especially sensitive information can nonetheless access that information.

Given a large number of lines of code, I would expect some procedures to be written less carefully, and with less attention, than others. This is especially the case given the lack of regulation and control over a programmer's environment. Someone can program with little sleep, late at night, with divided attention, i.e. under all conditions that favor human error.

Since programmers are human beings, they will bring varying amounts of attention and care to the writing of different procedures.

Moreover, different procedures will attract different levels of care and attention when being checked.

Procedures that are written with the purpose of handling and manipulating especially sensitive information will likely trigger more attention and care than other procedures.

While there is much that can be done to maximize the percentage of errors caught, and minimize the number of errors written, it seems prudent to enable a programmer to limit how certain information may be accessed.

In other words, a programmer should be able to make it hard to access certain information, easily. And this will in turn make unintentional capability to access certain information harder to achieve when writing procedures that need not access that information.

Is this not a feature enforceable (and surely already enabled) by operating systems, which can apply regardless of how an executable file was created? It seemed so obvious that I hesitated to even ask the question.

One additional technical question:

When code is reviewed in this type of production, is there commonly a checklist which indicates what was checked and was not checked? I don't expect that any such checklist would be comprehensive, but any checklist that hit some important notes would reduce error and increase accountability.

Mr. PragmaApril 13, 2014 7:54 PM

Skeptical (April 13, 2014 7:18 PM)

"When code is reviewed in this type of production, is there commonly a checklist which indicates what was checked and was not checked? I don't expect that any such checklist would be comprehensive, but any checklist that hit some important notes would reduce error and increase accountability."

Oh well, that's a wild animal. It ranges from "Huh? Nope" to paranoid anal obsession; the latter quite rare, the former shockingly widespread.

In the vast majority of cases the only measure is "The compiler grokked it and it seems to work, so it's OK". (Note: a compiler is (somewhat simplified) a program that takes source code (the stuff written by programmers) and transforms it into something executable (stuff that runs, vulgo "program").)

There are many methods and tools out there to test code, basically none of them capable of finding all errors, and most of them able to find only the more or less common and/or obvious bugs ("errors").

As a rule of thumb those tools are almost never used for prevention but rather for cure ("We leak memory", or "Around 0:00 our code crashes, so we must find the bug").

And then there is a plethora of code management tools that have basically two capabilities: a) they allow you to manage code (changes, docs, versions, etc.) and b) them being employed tends to slightly push benevolent developers to generally code somewhat more responsibly.

In the end, you got that right, it depends on the single human for the largest part. As a trend one might say that responsible developers (like Prof. Bernstein) act as grown-up engineers with a somewhat anal attitude towards correctness and quality (which is a *good* thing), while particularly younger developers tend to be more on the hacking side.
In the end, good developers will produce good code even with poor tools while hackers will find a way around whatever discipline is put in front of them. If it's about the 4.321st PHP template engine one couldn't care less, if, however, it's about openssl it quickly turns into a nightmare.

Unfortunately that whole thing gets even more complicated when considering the human factor (which is rarely done but decisive). For instance, while software development is clearly engineering there are also aspects of art and many very human aspects like "being in the flow" (of coding).
Most companies (if they care at all) tend to dictate an assortment of tools (source management, style guidelines and checkers, analyzers, etc.), but to a degree that's akin to handing out expensive weapons to the guys you happen to have; a few great developers will "fly" and produce excellent code, a few bad apples will fight against the "restraint", and the majority will be the same rather average shooters they were before, albeit with an expensive weapon.

Very pragmatically speaking, C is the most used language (in the wild) for basically 2 reasons: it feels cool, and pretty much everyone uses it (which also means there are loads of libraries and tools), the latter carrying far more sociological weight than one would like to think (hint: what do you think is by far the most prevalent motivation to produce open source code?)

WaelApril 13, 2014 8:03 PM

@Nick P,

I looked at capability-based operating systems a few years ago. They had their problems as well because of the need to involve a human element. As for HW, if you think of a capability as a token or a crypto-proof an entity presents to gain access to an API or a privileged system call, then HW has to be supportive in enforcing the control. I agree with your take on "E". Which is cheaper... Hmm... The simplest system that delivers the designed functionality, I guess :)

yesmeApril 13, 2014 9:43 PM

@ made maid made

"Since Tor is being mentioned here, the current version of Tails is safe..."

At least until the next news item.

yesmeApril 13, 2014 9:47 PM

@ Figureitout • April 13, 2014 5:49 PM

Actually just came across an article: "Why I quit writing internet standards"

I only have one word: IPSec.

Nick PApril 13, 2014 9:51 PM

@ Mr Pragma

"I'm getting increasingly suspicious, however, of government entities having a pro-active desire in C/C++/java.

After all, they want to introduce weaknesses, yet have plausible deniability and very low risk of their manipulations being discovered.

Playing Poul (HK) for a moment ... if I were at nsa I would have my team find out precisely the kinds of errors that Coverity and other often-used analyzers do *not* find. Sure enough, in C/C++ the outcome would be satisfying. And it would provide a strong hint how we (nsa, ...) can get our poison into OSs, cryptography, etc."

All quite possible and all the more reason to avoid such languages as much as possible.

"O.K. for a quick prototype Python or Lisp might be fine but then, why not use a typed scripting language right away? What's so inacceptable or difficult in specifying types? After all data is a vital part of prototyping in crypto."

Python and LISP are strongly typed, though, and that's the trick to them. They use the value to determine the type. LISP is more permissive as it's designed for ultra-flexible use. Yet, the popular scripting languages are all quite type and memory safe. The problem is they have complex runtimes, no safe ways of manual memory management, and a performance hit. So, my language proposal was one that's more like our systems programming languages but with compiler handling as much as possible. Then, compiler hints, types, unsafe modules, etc can be used to further improve performance where necessary.

Far as my LISP-to-Ada thing goes, the idea is to leverage LISP's exploratory programming ability. In it, you can throw apps together a function at a time and even compile a single function. There's always a live instance running in the background, so testing is as easy as calling the function through the REPL. I can write a bunch of code, test it, refactor it, etc. without pausing for a compiler. This maintains my mental flow. When I'm ready, I can issue a command to translate it to Ada and run it through its rigorous compiler. I used to do this with BASIC and C++ to great effect. I also get real macros in LISP, which have so many nice uses I couldn't begin to list them.

Thing is, neither Ada nor LISP have had much uptake, so I was just naming off other popular languages with similar properties that might get uptake. Python, particularly, has been written in itself and also extended for high performance code (Cython). LISPs are still faster and more powerful, though, along with being an abstract syntax tree that's easy to parse. Good for certified compilation, as well.

Clive RobinsonApril 13, 2014 11:37 PM

@ Nick P,

You forgot to mention the "P" word much beloved by management ;-)

Over the years it's become accepted that "low level" languages are "less productive" than high level ones.

On *nix the tradition was "Prototype with sh, Production with C", with LISP hiding very productively in Emacs...

And back in the "CLI only" days hardware resource issues made "Production with C" sensible, whilst the better-than-RAD "Prototype with sh" (or equiv) still holds true.

Back in the "CLI only" days the *nix "3 layer" rule also made sense (and still does with most humans).

Since the old CLI days graphical interfaces shook things up for a while, but in most cases the "3 layer" rule can still be seen, though it tends to be horizontal these days, with browser UIs talking to middleware that talks to very non-graphical back ends (with 3 layers in each ;-).

Whilst human abilities (contrary to what some claim) have not changed, hardware resources really are no longer a constraint on most programmers, and in fact it's increasingly the OS becoming the "bottleneck".

Thus, as I've said before, why not have general application programmers work with very high level safe/secure scripting, leaving the development of the scripting apps/widgets they need to those who can program safely/securely in low level languages such as asm/C.

Thus LISP, or preferably something "higher", should be the "application" developer's tool of choice, with its "P word" benefits that management craves, and with low level C/asm being the languages of necessity for those who develop the system/OS and the high level languages.

The reality we currently have is that we "fudge" the issue: we use low level languages to fake high level scripting by means of C/C++ and massive labyrinthine libs of often questionable code with incomprehensible interfaces, usually arrived at by committee with all its vested-interest failings. Application programmers then want to "build it all", often giving seven or more layers that even they cannot really "see in their minds", and we "act surprised" when systems built this way fail to give either safety or security....

Mr. PragmaApril 14, 2014 12:18 AM

Nick P.

I know what you mean, type deduction. Seeing "myvar := 5" the compiler is supposed to deduce that obviously(?) you want an integer.

Again, we're discussing security related or critical software.

And there is a golden rule from which I'm not willing to deviate a single millimeter: tell what you want, and tell it as precisely and unmistakably as at all possible. Period.
Not least for formal verification purposes.
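
The contrast between deduction and explicit specification can be sketched in Python; the function and constraint below are invented for illustration, and Python's annotations are not runtime-enforced, so the range check is written out by hand (in Ada this would be a compiler-checked range type).

```python
# Type deduction: the compiler/runtime guesses from the value.
counter = 5                      # deduced as a general integer; intent unstated

# Explicit specification: say precisely what you mean, e.g. that a value
# must fit an unsigned 16-bit length field. The annotation documents the
# intent; the check enforces it, since Python won't.
def make_length(n: int) -> int:
    if not (0 <= n <= 0xFFFF):
        raise ValueError("length must fit in 16 bits")
    return n

print(make_length(5))    # within range: accepted
try:
    make_length(70000)   # out of range: rejected, loudly
except ValueError:
    print("rejected")
```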

As for GUI stuff, PHP templates or the like, I couldn't care less. If anyone feels it's a great idea to develop in PHP or whatever, great (I won't use his sh*t anyway).

If it's about prototyping some algorithm, yeah, maybe that could be done in Python, Lisp, Basic, whatever. Although, frankly, I fail to see the beauty of it. Why not prototype my algorithm with fully specified data right away? After all, more often than not the data *are* relevant. And I can still cut corners for a quick shot, for instance by not cutting my types down to a certain range, or by ignoring housekeeping, logging, and all the usual stuff. But that is me and my opinion; yours may vary, and as long as the final code is in Modula, Oberon, Ada or similar I see no major problem.


I'm still somewhat haunted by the wake up call re. standards bodies and that whole shebang.

I'm not done thinking (are we ever) but right now my point of view is something like this:
Let's assume that a miracle happens and tomorrow morning we get proper standards and algorithms that are properly designed, properly verified, etc.

Then what? Then we adapt openssl? Ridiculous! If we tried we'd quite certainly introduce a sh*tload of new bugs and problems using C/C++ or, yuck, java.
When Bernstein's Salsa got introduced into the crypto libs, thank God, they quite probably just stole his reference implementation and introduced new bugs "only" in the library wrappings ...

Clive RobinsonApril 14, 2014 12:28 AM

@ Skeptical,

There is a dirty little secret when it comes to "code review" and it's that it's neither "sexy" nor "productive" and you are always on the wrong end of the stick as the reviewer.

That is, management doesn't want "their brightest and best" code cutters being unproductive doing code review, and most code cutters have the view that code review is not good for the mojo etc., and view code reviewers as a hindrance at best.

Which has caused, and in some places still causes, a strange work dynamic. Management generally wants code out the door on day X, which is frequently way sooner than it should be. So code cutters are under pressure to deliver code that "sort of works" (with patches to follow). The unfortunate code reviewers get this pile of "regurgitated dog food" to wade through as late as the code cutter can get away with, and thus you can be assured that the reviewers are "not going to smell of roses at the end of the day"; and if they try to clean the mess up, the code cutter just vomits up worse and management starts jumping up and down about schedules, often on the reviewer's head.

More enlightened practitioners have seen why this might not be effective and thus have developed various "work flows" to reduce or supposedly stop such problems. In essence these revolve around what might be thought of as a "buddy system" where a small team of programmers work together and review each other's code as they cut it... Yup, it develops other strange work dynamics, some even more pathological than the older process it's trying to replace. If you want to know more, look up the aptly named "Scrum" software development model on Wikipedia [1], and as you read keep asking yourself "how can this go pear shaped" or "how would personality type XXX on the team affect this", then ask yourself about "groupthink" and external conflict issues and how they could become pathological (remember, programmers generally don't believe that "the customer is right", nor do Marketing or often management in larger organisations).

[1] http://en.m.wikipedia.org/wiki/Scrum_(software_development)

AnuraApril 14, 2014 4:02 AM

@Clive Robinson

When it comes to code review, I think people can easily get into the habit of skimming and not paying close attention for many of the reasons you mentioned. Pair Programming is a good way to solve this, as you have one person reviewing every single line as it is typed. On top of this, you have two people discussing the best way to approach the problem. This doesn't really work for independent developers, however.

Personally, I think automated unit testing is probably one of the best tools available for finding and preventing bugs. There are two main questions I think you should ask when writing unit tests: "What should I expect when this code is correct?" and (more relevant to the point) "Are there any invalid inputs?" In test-driven development, you write your interface and tests first, verify that your tests fail, and then write your code to make them pass. In an ideal world, your tests will cover all of the specs, and code will do only what is necessary to pass the tests.

After you have completed writing unit tests and code, I think it's prudent to run the tests while walking through the code in a debugger, making sure every possible code path is covered by the tests, and step through each line while asking the two questions mentioned above, making sure the unit tests are adequate. I think this is what the code reviewer should be doing (the programmer should do it at least once, and the reviewer should do it at least once).
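
The two questions above can be sketched with Python's unittest module; the function under test and its spec are invented for illustration (a length calculation with a minimum-size constraint).

```python
import unittest

# Hypothetical spec: payload_length(total) returns the echo-payload size
# for a record of `total` bytes, and must reject totals too small to
# contain the 19 bytes of header and padding.
def payload_length(total):
    if total < 19:
        raise ValueError("record too short")
    return total - 19

class PayloadLengthTests(unittest.TestCase):
    # "What should I expect when this code is correct?"
    def test_valid_input(self):
        self.assertEqual(payload_length(24), 5)

    # "Are there any invalid inputs?"
    def test_invalid_input(self):
        with self.assertRaises(ValueError):
            payload_length(10)

# Run the suite explicitly (in TDD you would write these tests first,
# watch them fail, then write payload_length to make them pass).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(PayloadLengthTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```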

SkepticalApril 14, 2014 4:37 AM

Thanks Clive & Pragma for the excellent and helpful replies

Let me ask another question:

Is there a set of "best practices" guidelines governing the engineering and testing of software that will handle sensitive information?

If there is, then is there a body that will certify a product if the engineers have followed best practices, and that will refuse to certify a product if the engineers have not?

Part of the problem seems to be lack of easy transparency concerning how a given item of software was created. If it were widely known that a given instance of Self_Described_Ultra_Secure_Software did not follow a set of best practices in design, creation, and testing, many would be disinclined to adopt it.

Personally, I'd like to know more than "x number of people reviewed the code and decided it was ok, and person y wrote the code, and this program is used by z number of people."

I'd like to know if data in the software is appropriately segregated and impossible-to-unintentionally-escape limits are placed by default, easily, in a hard-to-f***-up-and-if-you-do-it's-a-screamingly-loud-f***-up fashion, on segments of code that do not need to access sensitive data.

I'd like to know that a review process requires checklists for checking known types of vulnerabilities, and that these checklists are submitted when an individual completes a review of proposed code.

I'd like to know that the entire set of best practices, which I lack any expertise whatsoever to enumerate, has been followed in the production of a given program.

And I'd like to know all these things simply by checking to see whether a program has received certification from an organization.

The certification need not be expensive to obtain, and for open source could involve simply documenting the adherence to best practices as one goes along (e.g. checklists submitted as part of a code commit).

It would be a big step forward from "it's open source and everyone uses it, so it's probably okay."

One other quick point and then I'll stop interrupting the thread.

Again emphasizing my technical ignorance as a disclaimer, it is astounding to me that sensitive information can easily be accessible, unintentionally, from procedures that need not touch such information. To me this is like designing an aircraft in which the programmable coffee-machine can access the data used by the flight computer. Personally, I don't just want the coffee-machine programmers to be really safe in their programming; I want the coffee-machine isolated from the flight computer data so that if an enterprising programmer finds a hack that gets us coffee 5 seconds faster, I don't even have to consider whether this could impact pilot performance beyond the availability of a hot beverage.

Is there a deep, sound reason for the lack of such separation that I'm missing in my technical ignorance?

Derek P. MooreApril 14, 2014 9:14 AM

Bruce,

There remains a known-plaintext attack vector in the "fixed" Heartbeat code, as I found when I decided to engage in a code review of the matter on Friday.

The Heartbeat doubles as an echo service. The length field of the echo payload can always be determined; as implemented, OpenSSL always uses the minimum random padding of 16 bytes. That's 1 byte for message type, 2 bytes for length of payload, plus 16 bytes of random padding, equals 19 bytes; subtracting 19 from the length of the ciphertext in an intercepted Heartbeat packet gives you the length of the payload. Knowing the length of the payload tells you what region of the ciphertext commits the cardinal cryptologic sin of encrypting the same message twice with the same key.

If interception is occurring in both directions, 3 of 19 bytes are known (message type & length prefix). If interception is only in one direction, 2 of 19 bytes are known. What is also known by this information is the twice-encrypted echo message.

Heartbeat RFC should be revoked at best, or implementations should always use a randomized length of random padding at worst, even when not engaging in Path MTU Discovery.
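
The arithmetic above can be checked with a small sketch, assuming (as the comment states) the minimum 16-byte padding and a mode in which ciphertext length equals plaintext length:

```python
# Heartbeat plaintext layout per RFC 6520:
#   1 byte type + 2 bytes payload length + payload + >=16 bytes padding
TYPE_LEN = 1
LENGTH_FIELD_LEN = 2
MIN_PADDING = 16
OVERHEAD = TYPE_LEN + LENGTH_FIELD_LEN + MIN_PADDING  # = 19 bytes

def inferred_payload_length(ciphertext_len):
    """Payload length an eavesdropper can infer, assuming the
    implementation always uses the minimum padding and the cipher
    preserves plaintext length."""
    return ciphertext_len - OVERHEAD

# Example: a 50-byte heartbeat record implies a 31-byte echo payload.
print(inferred_payload_length(50))  # -> 31
```

Randomizing the padding length, as the comment suggests, would break the fixed 19-byte overhead this inference relies on.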

Nick PApril 14, 2014 9:29 AM

@ Mr. Pragma

Funny thing is, the INFOSEC community has even compromised with them by giving them safe[r] C and typed assembly (check that out). They can keep it pretty low level, make porting easier, and avoid many of the most dangerous errors. Most won't even do *that* when I tell them about such tools. Both closed and open source INFOSEC development are very broken, with few exceptions.

@ Clive

"Over the years it's become accepted that "low level" languages are "less productive" than high level ones."

There's been empirical evidence of this. The results showed more about the specific features of languages than about how low or high level they were. Anything that saves programmer thinking, typing or testing time is A Good Thing so long as it doesn't cause other problems.

"Since the old CLI days graphical interfaces shook things up for a while but in most cases the "3 layer" rule can still be seen though it tends to be horizontal these days with browser UI's talking to middle ware that talks to very non graphical back ends (with 3layers in each ;-)."

You forgot the 4GL's and CASE tools that dominated between CLI and GUI, while popping up from time to time. They were used to great effect in the past with its ugly 3GL's. Their creators even assured us they would replace all 3GL's in "the near future" and eliminate 90+% of all coding. Boy, am I glad they did that. ;)

"Thus as I've said befor why not have general application programers work with very high level safe/secure scripting leaving the development of the scripting apps/widgets they need to those who can programe safely/securely in low level languages such as asm/C."

It makes sense. It's *sort of* what happened in Java world. The end result has been a mess so the idea needs to be refined a bit.

"with low level C/asm being the languages of necessity for those who develop the system/OS and high level languages."

I don't think that's even necessary. So many projects, not to mention 1960's technology, showed higher level languages can program OS's. There has to be unsafe code in it, for sure. However, a few projects have shown it's only the tiniest bit right where memory, CPU state, and I/O resources are handled. That bit can be wrapped behind typed interfaces where the rest of the code experiences *some* safety benefit. C and asm largely don't have that so I recommend these better programmers you mention avoid them as much as possible for low level development.

I see something along these lines: app programming languages -> a language like Ada or Wirth's -> typed assembler -> assembler. Remember that C enforces a certain structure that's incompatible with some languages, even causing vulnerabilities in safe ones (eg Ada + C). Might as well take a safe systems language down to the bare metal by disabling safety features in just that module & using inline typed/untyped assembler for most primitive/performing stuff. That's my model, anyway. The first part of this is typical in Ada development. The second is my addition based on progress in type systems and verification for assembly languages.

"The reality we currently have is we "fudge" the issue we use low level languages to fake high level scripting by the use of C/C++ and masive labarynthine libs of often questionable code with incomprehensable interfaces usualy arived at by commity with all it's vested interest failings. "

And that's the rest of the world's model. :(

Clive RobinsonApril 14, 2014 11:40 AM

@ Skeptical,

First a word of caution about "best practices": very often they are nothing more than the "myths and juju" you would find in cargo cults, and the fact that lawyers bandy them about in civil cases makes them even less reputable... so beware.

You should always look for the science behind them, failing that the mathematics, and in either case check both the axioms and measurands behind them. Often with "best practice" these are missing / wrong / inappropriate / untested, or some combination thereof.

As an example, an institute produced an annual "best practice" "Top Ten" list of recommendations. It was at one time drawn from a self-selected list of organisations who "claimed" not to have been breached during the previous year. The institute found the top ten things they had in common and said that was "best practice". Do you see the axioms and what's wrong with them? And can you see any maths that might be of use, let alone science?

The other problem when it comes to the science of security --and there is actually quite a bit-- is that many governments treat the information as classified. History tells us that "long term" this does not work; however, in the short term it's worth its weight in gold several times over.

The problem with classifying such information is not just that only the "chosen few" on your side benefit whilst the rest of your nation loses; it also "chills" talking about it, and this stops research. The thing about open research is 'it lifts all boats' irrespective of whether they are "saint, sinner or both", and as more tend to saint than sin, the overall result is a benefit, not a loss, to society.

It's open research that is pushing the worst of the "best practice" lists back to the "dark ages where snake oil was prized, and fakes and quack remedies abounded", but it's difficult to keep the light burning when powerful vested interests don't want the sterilising effect of the light of day shining in on their faux markets. Thus you can see the problem when one vested interest is a government.

This also raises the specter of "conflict of interest" when the test houses are not fully independent of the government, its agencies, or their influence; the NSA-NIST arrangement might or might not have been benign, but the problem is it was not open, thus it's suspect. Also some agencies won't test or approve some manufacturers' equipment; the reasons for this are unclear and frankly verge on good old-fashioned "restrictive trade practices", which are technically in breach of international treaties. It's why you have the ludicrous situation of one Five Eyes nation (US) "condemning" Chinese companies and another praising them (UK). If the processes were fully open, independent, and based on science and mathematics, not "best practice", then that situation could not arise. The equipment could be verifiably tested, and would pass testing or it would not, with the cause of failure clearly identified and demonstrable.

Undortunatly for "science" to be real you need tests that follow the "Scientific model" of Bacon and Newton not Aristotal and Descartes. Whilst their is some science in the technology and some methods of information processing in much there is not.
The reason for this failing is lack of rational and suitable for scientific reasoning measurands. Which unfortunatly has a significant impact on Security, where much reliance is placed on the uncertain mathmatics of probability.

That is not to say security is lacking in method; in fact much is based on what could be termed "common sense reasoning". However there is a significant problem with "common sense": it's usually based on observations of our tangible physical world, ruled by finite energy/mass and the speed of light, and thus comes with a large number of hidden assumptions. Unfortunately many of these assumptions just don't apply in the intangible, non-physical information universe, which could well be infinite and not constrained by either energy or matter except when being communicated, stored or processed by being impressed on physical objects.

The assumptions of the physical world give rise to notions such as "physicality", "locality", and "resource constraint", which can currently be seen not to apply to the likes of malware. Yet many still reach for insurance-style actuarial tables to do their risk analysis, or rely on physical security methods where those can be shown to be inappropriate.

Thus many of the problems with security can be shown to be due to reasoning based on inappropriate or false assumptions. Which, as I'm sure you will realise, makes for "interesting times" and renders any kind of certificate near worthless.

But the problem gets worse. You mention "check lists"; the problem with these is that they only work against "known knowns", a very limited number of "known unknowns", and, by chance, some "unknown unknowns", which are by far the largest set of vulnerability instantiations and classes of vulnerabilities. Thus check lists, while very useful in some areas, are in others a case of "fighting the last war with the war before's tactics", and completely inappropriate for fighting the next war, where the adversary is well versed in the weak points of those tactics and how to exploit them.

As for the in-air coffee machine issue you hypothesize about, it's actually a very real problem, and I've mentioned it before. Isolating systems in any kind of environment is at best difficult, at worst impossible. To see why: both the coffee machine and the flight system are, due to the closed environment, "coupled". It's illustrated by the old question about the weight of an aircraft full of hummingbirds in flight versus all perched. The mere presence of one system affects the other. You as an engineer have to recognise all the methods by which they are coupled, then identify the bandwidth and attenuation of each coupling channel, then reason out how to mitigate each channel, and to what extent, by reducing its bandwidth and increasing its attenuation. For example, one coupling method is via the power supply. Even if you choose to use separate fuses and wiring busses, you get back to the power source. Even separate generators run on separate engines are still coupled by the flight dynamics of the aircraft, which affect both engines. Similar reasoning applies to mechanical vibration and the accompanying "sound", and even to gravity via the balance of the aircraft.

However, aircraft generally only earn money when in the air, so anything that increases the effective cargo or reduces the ground time will be used. Thus it's desirable to have aircraft systems talk to ground systems whilst in flight. One way of reducing cost and weight is to couple systems together the same way office PCs get connected to a LAN with a common "gateway" to a WAN and hence the rest of the world. In theory it would be possible for the coffee machine to indicate to a ground system that it was low on sugar. That information, once outside the aircraft, interacts with inventory systems, which in turn interact with each other, which could change the flight manifest, causing changes in cargo and weight information that would need to be communicated back to the flight system. So, even though it's indirect, the coffee machine does in theory affect the flight system. And it's also true in practice. Having identified a valid coupling, the question becomes one of cause and effect, and potentially effect mitigation. That is, if the coffee machine orders 1000 grams of sugar, you do not want the flight system to be told 1 kg but the inventory system actually loading 1 tonne, i.e. 1000 kg (yes, such mistakes have happened with fuel and conversion from one measurement system to another). But, as I noted, both systems inside the aircraft also share a common communications network, so there is potentially direct coupling, and thus invalid messages generated by one system could be treated as valid by the other; people have demonstrated that such problems are possible on test systems at hacker conferences. What is not known is whether anybody has dared test it on actual aircraft systems, and if so to what level. I personally suspect few have dared try it to the required level to verify one way or the other, simply because I have experience as an engineer and I've seen what engineers do to get systems working.
That is how vulnerabilities in "test harnesses" get left behind for the likes of maintenance / repairs / fault finding, and how they can completely breach safety / security. If you want an example, think back to CarrierIQ...


Clive Robinson • April 14, 2014 12:27 PM

@ Anura,

    Pair Programming is a good way to solve this, as you have one person reviewing every single line as it is typed. On top of this, you have two people discussing the best way to approach the problem. This doesn't really work for independent developers, however.

Yes, "pair programming" is a "buddy system" by another name (I'll leave Shakespeare to do the "smell test" ;-)

However, it needs a minimum of two people, as you note, but they also need to be broadly comparable in skills/ability. Too often I've seen managers and team leaders misuse it as a "sitting next to Nellie" system to "productively" train up a junior code cutter without using formal training.

It also needs a non-adversarial environment, where people cannot misuse it to gain pecuniary or other benefit at their "buddy's" expense.

I'm hoping that team leaders and the management above them will learn from LEAN methods that adversarial environments really don't work. However, half a thousand years of employment history, which gave rise to (amongst other things) revolution, unions, workers' rights organisations and political parties, suggests an uphill struggle against the less intelligent psychopaths climbing the management tree.

As for code analysers and similar, I welcome them as aids to productivity, but not as crutches for poor ability or learning. As I've said elsewhere, these tools are far from perfect, and sometimes their output needs further mental work, with considerable background skill, to interpret correctly and get the best from.

I know many management types want "monkey see, monkey do" programming systems, not just to improve productivity but to drive worker cost down (history shows this with weavers etc.). However, they fail to realise that such systems actually increase the overall "product life" costs, to the point of bankrupting themselves. In many respects it's a "slash and burn" methodology that leads quickly to a permanently damaged ecosystem, which, in their blindness for short-term profit, they fail to realise beggars them as much as those they chose to exploit.

For some reason management often forget that their workers are also their customers, either directly or indirectly, and if the workers cannot afford to buy the product, who else can or will...

Mr. Pragma • April 14, 2014 1:52 PM

Skeptical (April 14, 2014 4:37 AM)

Well that comes somewhat down to the holy grail question.

At first sight (and the way you put the question): no, it can't be done. Because even putting security police next to everyone at the development site won't cut it.

But then, one doesn't need to. What *can* be done is:

- Verify algorithms
Actually that's widely done in cryptography in the form of peer reviews, cracking attempts, etc.

- Verify design (in real-world software there's more than your wonderful algorithm; there are processors, busses, networks, an OS...)
A "funny" example that's hammering our heads right now: what if your OS isn't trustworthy at all?
So design has to ask many more questions than how to implement that algorithm. That's also a, probably *the*, point of concern in openssl and the likes.

- Verify tools
Obviously it makes a difference whether you use, e.g., Ada and paranoically, anally tested compilers, or John Doe's funny new super-hyper-cool language.

- Verify product
That's not only "it doesn't crash, so it's fine". It's also not just "we used unit tests and they all passed". It's bombarding your product with uncommon, senseless, garbage, and even outright evil-minded input, data, and environments.

- Verify your verification and tools
Are your unit tests friendly small-town police routines, or do they include evil-spirited input, data, etc.? *Very* often unit tests are used only along the lines of "does the tested stuff do what it's supposed to do?". That's next to worthless. Unit tests should attack, shake, and rattle your stuff.
Note: The earlier a problem is spotted the cheaper (and usually easier) it is to solve.
Are your assumptions correct?
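A minimal sketch of what such an attacking unit test can look like: instead of only checking the happy path, hammer a parser with random garbage and assert that it only ever rejects cleanly. The parser name and format here are hypothetical, chosen to echo the Heartbleed pattern of a self-declared length field.

```python
import os
import random

def parse_length_prefixed(record: bytes) -> bytes:
    """Parse a record of the form [2-byte big-endian length][payload].

    Returns the payload, or raises ValueError on malformed input.
    """
    if len(record) < 2:
        raise ValueError("record too short for length field")
    claimed = int.from_bytes(record[:2], "big")
    payload = record[2:]
    if claimed != len(payload):  # the check Heartbleed famously lacked
        raise ValueError("claimed length does not match actual payload")
    return payload

# "Shake and rattle": feed the parser random garbage and make sure it
# either succeeds consistently or rejects with ValueError -- never
# crashes with an unexpected exception, never over-reads.
random.seed(0)
for _ in range(10_000):
    blob = os.urandom(random.randrange(0, 64))
    try:
        out = parse_length_prefixed(blob)
        assert len(out) == int.from_bytes(blob[:2], "big")
    except ValueError:
        pass  # clean rejection is fine
```

The point is that the loop tests the parser's behaviour on input nobody "intended", which is exactly where the interesting bugs live.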

So, if you want to have something to look at, an indicator of reliability, check the *chain*. And don't be stubborn! Example: companies just love to have some other large and well-established organization check or verify their software (actually they want, and pay for, a nice seal saying "Tested, worked perfectly, buy!").
Better: (at least additionally) pick up some guys at your local university and ask them clearly to shake, rattle, and attack your software. And pay them well. Use their creativity; make use of those youngsters' readiness to do shit with your software. They can't give you a nice seal, but they can give you a nice amount of confidence.

Hint: You can do that as buyer, too.

Or, more generally and philosophically:
Companies tend to be quite organized and formalized, and to use the services of other organized and formalized organisations. The problem is, the guys attacking your product are not, and neither are nature and life. So, if you want proper testing you need a high amount of disorder, creativity, chaos.

Oh, and btw: if at all possible, do *not only* have your designers and developers write their own test cases. Not because of mistrust, but because they are humans. They should write some tests, but not all of them. Neither should they write tests mutually (dev A for dev B).

Mr. Pragma • April 14, 2014 2:18 PM

Nick P (April 14, 2014 9:29 AM)

I know about Cyclone. And while I think that C and asm should be avoided wherever possible, I'm aware that some low-level stuff sometimes actually needs to be coded in assembler. And yes, I take the liberty of calling C an assembler too, because it *is* a meta-assembler, or at least that's how it should be seen and used (using C as a meta-assembler is about the only way I see where using C is actually a good thing to do).

And therefore I'm not at all opposed to that. In fact I value Cyclone and typed assembly, if used properly and as little as necessary.

relaxing... • April 14, 2014 3:42 PM

From the Official Statement:
"If the Federal government, including the intelligence community, had discovered this vulnerability [Heartbleed] prior to last week, it would have been disclosed to the community responsible for OpenSSL."

Winston Churchill:
"You can always count on Americans to do the right thing - after they've tried everything else."

me:
"Exploits like Heartbleed that almost everybody - as opposed to only the USA/NSA (due to its privileged position in the internet, computing power, etc.) - can effectively put to use are in the interest of the USA to be made public, because this protects US businesses and people better; especially it protects global business of US companies better, because it restores trust in global electronic business. Hopefully it will also be possible to "convince" the USA/NSA to change policies regarding some other aspects of electronic data handling in time. To the people of the World: You have the Power! - Oops, sorry, your money has the power, it can buy the USA/NSA, make it do (almost) anything you want - this is the country where "free" means both free literally and in terms of money - the USA is foremost a Plutocracy, never forget that..."

Nick P • April 14, 2014 9:22 PM

@ Skeptical

re security evaluations

I thought your question was important enough to merit an essay on the topic. I posted it in the squid thread here.

BackToThePoint • April 14, 2014 10:01 PM

At least 4 examples of SSL private keys being obtained were reported by security sources yesterday, and we might assume there are many others not being reported by the people who obtained them. Huge amounts of data are being fired at servers around the world, along with record amounts of NTP reflection attacks and other DDoS attacks. Many providers' front- and back-end services are being affected intermittently, along with outages in some areas. Servers are rebooting all over the place, while users attempt to update passwords and providers assess their exposure.

We are seeing an escalating cyber arms race in reaction to a "Wild West" environment. Some no longer feel secure just wearing their firewall/AV vest and are increasingly turning to attack-mitigation services; others are "tooling up"; and some still stumble around saying they don't need firewalls and AV.

reflecting • April 14, 2014 10:57 PM

Heartbleed used to steal Canadian taxpayer data.

http://www.itnews.com.au/News/382816,heartbleed-used-to-steal-canadian-taxpayer-data.aspx

Combine this threat with banking Trojans like Zeus and Gameover, which use SSL to encrypt traffic, making them difficult to detect, and which can log keystrokes, steal credentials and launch DDoS attacks, and you get some idea of the sophistication of recent malicious code developments. The latest Trojans can be delivered not only through email but also served via a web page as an Internet Explorer document, and can fool detection systems by using an apparently legitimate digital signature.

Many modern antivirus suites and firewalls will allow a file with a digital signature to automatically connect to the internet if it appears legitimate.
Not only may users create an exception for a certificate they think is legitimate; your security software may allow files with digital signatures that appear legitimate to launch a variety of unseen actions and attacks.

Software publishers need to strongly protect the secrecy of the cryptographic keys used to create each signature, and strongly enforce the signing authorisation process, typically using hardware security modules (HSMs), which create a tamper-resistant environment for managing and using keys. Without an HSM, keys and processes are subject to a host of attacks, since they can be 'seen' in the processor's memory and easily copied or modified.

Autolykos • April 15, 2014 4:24 AM

@Mr. Pragma:
I don't think introducing monetary rewards into Open Source development would be healthy or good for quality. Usually, people who do something because they *want* to will be far more motivated and diligent than anything you can achieve by paying them (even quite unreasonable amounts). And paying motivated guys will shift their thinking (unless they're extremely disciplined). Once they see it as business, they will start to optimize payment per invested time, which results in lazy, half-baked code and shipping too early (only now it's also written by amateurs).
If you want good, reliable Open Source software, you should pay for reviews and audits instead (which is usually pretty non-fun anyway, so it's hard to find skilled and motivated people who'd do it pro bono).
You can't really blame a bunch of hobby programmers for occasionally writing bad code (motivation alone doesn't cut it), especially if you took it for free. But you can blame a company with a budget rivaling that of small nations for using that code in security-critical places without ever having it reviewed properly.

Kevin Lyda • April 15, 2014 8:02 AM

Regarding https://tools.ietf.org/html/rfc6520 , isn't it a problem that the response must contain a copy of the data sent? Isn't that a possible attack vector?

From RFC 6520:
"""
When a HeartbeatRequest message is received and sending a
HeartbeatResponse is not prohibited as described elsewhere in this
document, the receiver MUST send a corresponding HeartbeatResponse
message carrying an exact copy of the payload of the received
HeartbeatRequest.
"""

For cryptanalysis, isn't knowing that the same text is being encrypted by the same key a useful thing to know? I'm not clear whether TLS uses the same key to encrypt in each direction, of course, but if it does this seems like a problem.

name.withheld.for.obvious.reasons • April 15, 2014 1:27 PM

Does anyone have an idea of how many printers are vulnerable? A number of orgs leave printers forward-facing for off-site IPP printing, some with a VPN (using TLS/SSL gateways) as a precaution, but I am sure there are a number of devices that are not in a DMZ or layered away from production systems. What could be the exposure/risk matrix in this situation?

Nick P • April 15, 2014 2:00 PM

@ name.withheld

The printers all top out at EAL4, and are usually connected to a device or network maxed at EAL4. So, by definition, they're all vulnerable. Just got to identify the vulnerabilities. ;)

Mr. Pragma • April 15, 2014 2:32 PM

Autolykos (April 15, 2014 4:24 AM)

You are right, at least to a large degree. But: that's not what I suggested.

Funny that most take it that way, but "paying for OS software" is NOT identical to "paying the developers".

To name just two examples of things that rarely get done properly in OS projects, usually either because they're considered boring or because they're forbiddingly expensive:

- code audits, security evaluation

- documentation

There are more but I think those two rather striking examples will do for the moment.

Mike Amling • April 15, 2014 5:15 PM

"I'm not clear if TLS is using the same key to encrypt in each direction"

It doesn't.
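For context: TLS 1.2 (RFC 5246, section 6.3) derives a single key_block from the master secret and then partitions it into separate secrets for each direction, so client and server never encrypt under the same key. A sketch of that partitioning, with sizes chosen for a hypothetical AES-128-CBC / HMAC-SHA1 cipher suite:

```python
def partition_key_block(key_block: bytes,
                        mac_len: int = 20,   # HMAC-SHA1 MAC key
                        key_len: int = 16,   # AES-128 key
                        iv_len: int = 16):   # CBC IV
    """Split a TLS 1.2 key_block into per-direction secrets (RFC 5246 s6.3).

    The fields are carved off in the order the RFC specifies; note that
    client and server each get their *own* write key and MAC key.
    """
    need = 2 * (mac_len + key_len + iv_len)
    if len(key_block) < need:
        raise ValueError("key_block too short")
    fields, off = {}, 0
    for name, size in [("client_write_MAC_key", mac_len),
                       ("server_write_MAC_key", mac_len),
                       ("client_write_key", key_len),
                       ("server_write_key", key_len),
                       ("client_write_IV", iv_len),
                       ("server_write_IV", iv_len)]:
        fields[name] = key_block[off:off + size]
        off += size
    return fields

# With any realistic (pseudorandom) key_block the two directions differ:
keys = partition_key_block(bytes(range(104)))
assert keys["client_write_key"] != keys["server_write_key"]
```

So even though the heartbeat response echoes the request byte for byte, the echo is encrypted under a different key than the request was.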

Anura • April 16, 2014 12:10 PM

@Kevin Lyda

    Regarding https://tools.ietf.org/html/rfc6520 , isn't it a problem that the response must contain a copy of the data sent? Isn't that a possible attack vector?
    For cryptanalysis, isn't knowing that the same text is being encrypted by the same key a useful thing to know? I'm not clear if TLS is using the same key to encrypt in each direction of course, but if it is this seems like a problem.

Even if it were the same key, at best it falls under the realm of a known-plaintext attack (but the plaintext isn't actually known to the attacker, so it's not even that). Ciphers are designed to be secure against known- and chosen-plaintext attacks with ridiculously large numbers of plaintexts.

The big problem is that it's completely pointless to allow 65535 bytes of data. I don't see any reason not to just hardcode the damned thing to one byte (justifying the need for a heartbeat in the first place is another matter entirely).
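A toy illustration of why that unchecked 16-bit length field matters, with process memory simulated as a flat byte string (the names and the "memory" contents here are made up; the real bug was in OpenSSL's C heartbeat handler, which trusted the claimed length when copying the echo payload):

```python
# Hypothetical process memory: the 4-byte heartbeat payload "ping"
# happens to sit next to sensitive data.
SECRET_HEAP = b"...ping...PASSWORD=hunter2...other data..."

def heartbeat_vulnerable(memory: bytes, payload_start: int,
                         actual_len: int, claimed_len: int) -> bytes:
    # Trusts the attacker-supplied claimed_len: reads past the payload,
    # leaking whatever happens to be adjacent in memory.
    return memory[payload_start:payload_start + claimed_len]

def heartbeat_fixed(memory: bytes, payload_start: int,
                    actual_len: int, claimed_len: int) -> bytes:
    # Post-patch behaviour: discard records whose claimed payload
    # length exceeds what actually arrived.
    if claimed_len > actual_len:
        raise ValueError("heartbeat payload length exceeds record length")
    return memory[payload_start:payload_start + claimed_len]

# Attacker sends a 4-byte payload but claims 40 bytes:
leak = heartbeat_vulnerable(SECRET_HEAP, 3, 4, 40)
assert b"PASSWORD" in leak   # adjacent memory leaks out in the echo
```

Hardcoding the payload to one byte, as suggested above, would have made the over-read impossible by construction rather than relying on a bounds check being present.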

NotAmused • April 17, 2014 3:13 AM

There are many web services for performing Heartbleed tests online (on one's own or others' websites), but I would like to know whether there are also offline tools available for internal tests (intranet web servers).
Ideally these would be open source tools, because I would like to compile them myself to be sure there are no unwanted features added.

Do you have any suggestions?

nsa word games • April 17, 2014 2:11 PM

Bruce,

You say the NSA statement is word-game free, but it's not possible to trust that. We already know that the NSA redefines terms so that things which sound word-game free really aren't. For example, when they said they weren't collecting data, it later turned out that they defined "collect" to include review as part of a court case, and only if a reviewer remembered to check the checkbox saying the data was reviewed.

Also, as someone else points out, which private report did the NSA first learn about Heartbleed from, and on what date? Was it at the same time OpenSSL learned about it? Or from a private contractor working for the NSA who discovered it and reported it privately?

You simply *cannot* trust anything the NSA says. They are proven and very inventive liars.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient Systems, Inc.