Comments

Daddy Warbucks • April 18, 2014 7:55 AM

Heartbleed is getting very expensive. Just the cost of issuing new certificates is surprisingly high.
http://www.wired.com/2014/04/cost-of-heartbleed/
The programmers who made the heartbeat coding error: how much were they paid? No security audit on their work? And yet billions of dollars in transactions depend on OpenSSL.

maxCohen • April 18, 2014 7:57 AM

@Daddy Warbucks My hope is that more security audits actually come from this. Like with GPG.

Putin • April 18, 2014 9:18 AM

I am sure that the NSA has been doing automated security audits on open source software for a while. After all, some parts of how this bug was found sound rather vague...

LJoris • April 18, 2014 9:25 AM

After all these years, I still cannot believe such bugs can actually happen and only surface after so many years. Isn't this what peer review should be about?

I'm under the distinct impression there's too direct a connection from developer to end-user, with little in-between validation happening. Examples have become too numerous lately, or rather over the last few years.

The open-source principles and key values still stand strong; they just need a little support in the back.

Benni • April 18, 2014 9:49 AM

One may expect more bugs like this from a library that is coded like this:
https://lobste.rs/s/3utipo/openbsd_has_started_a_massive_strip-down_and_cleanup_of_openssl/comments/fkwgqw

The comments from these brave OpenBSD developers who are now auditing OpenSSL speak for themselves:


http://freshbsd.org/search?project=openbsd&q=file.name:libssl

Do not feed RSA private key information to the random subsystem as entropy. It might be fed to a pluggable random subsystem.... What were they thinking?!

ok guenther

Remove support for big-endian i386 and amd64.

Before someone suggests the OpenSSL people are junkies, here is what they mention about this:
/* Most will argue that x86_64 is always little-endian. Well,
* yes, but then we have stratus.com who has modified gcc to
* "emulate" big-endian on x86. Is there evidence that they
* [or somebody else] won't do same for x86_64? Naturally no.
* And this line is waiting ready for that brave soul:-) */
So, yes, they are on drugs. But they are not alone, the stratus.com people are, too.


- Why do we hide from the OpenSSL police, dad?
- Because they're not like us, son. They use macros to wrap stdio routines,
for an undocumented (OPENSSL_USE_APPLINK) use case, which only serves to
obfuscate the code.

My two cents • April 18, 2014 12:03 PM

"After all these years, i still cannot believe such bugs can actually happen and only surface after so many years. Isn't this what peer-review should be about. "

The real lesson here, as it was in the past, is that peer review of open source software is oversold. There are two major problems with open source software for mission-critical programming (and OpenSSL is mission critical). The first is that the more complex the programming required, the fewer people exist who are actually competent to write and review it. The second problem is that, of those who are actually competent to program and review it, there is a significant opportunity cost to doing so, because by definition they are not getting paid for it. (To be precise, any compensation is not inherent in the model, as it is in a for-profit model.) Look at it this way: do the best surgeons in the world work for free? Most do not. Some of them might occasionally volunteer with Doctors Without Borders or something similar, but only in their free time. The result is that the "thousand pairs of eyes" that is supposed to exist in theory often amounts to no more than a handful of people in practice.

The point that I am making here is not to be against open source. There is a cogent argument that it remains a viable model. But I do think that some members of the public have unrealistic expectations of what it can accomplish. A bug in Open Office is not expensive to the users, generally speaking. But as we have seen a bug in OpenSSL has turned out to be expensive. In my view the greatest argument in favor of open source isn't the fact that Heartbleed exists. The greatest single argument in favor of open source is that there are actually so few major bugs. But let's get rid of this idea that open source is going to produce perfect software--not gonna happen.

Colorado99 • April 18, 2014 1:52 PM

The OpenBSD developers are ripping out the cruft and crud from OpenSSL. Check out opensslrampage for the hilarious comments and see the truly idiotic things the OpenSSL people thought were OK.

While you're there, donate to OpenBSD to support their work in making the Internet safer for all of us.

Winter • April 18, 2014 1:53 PM

"The second problem is that of those who are actually competent to program and review it there is a significant opportunity cost to doing so because by definition they are not getting paid for it."

This holds for all Open Source. I think you are wrong on this account. We see the people from OpenBSD auditing OpenSSL. And they already write a kernel/OS for free.

And that student who showed that the Windows binary of TrueCrypt was bitwise identical to the compiled source will not be without a job, I think.

There must be a "better" reason why OpenSSL was not audited.

quixote • April 18, 2014 5:38 PM

As to why OpenSSL could have problems: there are five (7?) regular OpenSSL developers. There are thousands of companies and government agencies that use OpenSSL. Oddly enough, very few of them seemed to feel that they needed to give anything back for this crucial code, neither money nor bug searches nor code development.

The question is not why OpenSSL had this problem. The question is why problems haven't been much bigger and more common and worse.

Thoth • April 18, 2014 7:58 PM

The problem is with the programmers. Using C/C++, or any other language that lets programmers forget about the consequences of bad memory management (buffer overflows, dangling pointers, ...), is one of the key factors behind the Heartbleed problem.

Human error is also one of the biggest contributing factors to security holes.

Ambiguity and needless choice in the SSL/TLS protocol are another likely culprit. Regarding keep-alive protocols: why do I need to send the length of the message to the server/client when I could simply grab the message and throw it back? It is only a simple keep-alive message, where all you need to do is grab the incoming message and send it back. If the message needs checking, a hash checksum would have sufficed (and is a much better way than checking the message length) to verify the keep-alive message.
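The length-field problem described above is the essence of the bug, and can be seen in a miniature sketch: the peer supplies both a payload and a claimed payload length, and the vulnerable code trusts the claim. (This is a simplified illustration with invented struct and field names, not the actual OpenSSL code.)

```c
#include <stddef.h>
#include <string.h>

/* Toy model of a heartbeat record: the sender's claimed length travels
 * separately from the bytes that actually arrived. */
struct heartbeat {
    size_t claimed_len;          /* attacker-controlled length field */
    unsigned char payload[64];   /* bytes actually received */
    size_t received_len;         /* how many bytes really arrived */
};

/* Vulnerable pattern: trusts claimed_len, so a lying peer makes the
 * memcpy read past the payload into adjacent memory. */
static size_t echo_vulnerable(const struct heartbeat *hb, unsigned char *out) {
    memcpy(out, hb->payload, hb->claimed_len);  /* possible over-read */
    return hb->claimed_len;
}

/* Fixed pattern: bound the copy by what was actually received and
 * silently drop nonconforming records. */
static size_t echo_fixed(const struct heartbeat *hb, unsigned char *out) {
    if (hb->claimed_len > hb->received_len)
        return 0;                               /* reject the lie */
    memcpy(out, hb->payload, hb->claimed_len);
    return hb->claimed_len;
}
```

The actual OpenSSL fix is equivalent in spirit: it checks the claimed payload length against the real record length and silently discards heartbeats that fail the check.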

East Germany • April 18, 2014 8:13 PM

The OpenBSD rewrite of OpenSSL has found dozens of undocumented defaults hidden in wrappers so far. They also found that private RSA keys were used to increase entropy because of feeble operating-system PRNGs. It was also discovered that the server/box timestamp is used as extremely feeble entropy. That whole library is a mess.

It should be noted that the OpenBSD audit/rewrite is completely OS-kernel centric, so much of what they've done would be insecure if ported to another platform like the Linux kernel. Each OS platform should be writing its own OpenSSL implementation (and hopefully can find crypto engineers to do this correctly), which is one of the reasons why OpenSSL is so bad and confusing: they are attempting to make it an independent library, but they aren't crypto engineers nor are they kernel security engineers, so everything they've done is wrong. A good example was the wrapper around malloc/free, where freed heap memory was recycled within the application itself instead of being returned to the protected OS malloc, which is a big part of why Heartbleed could leak so much sensitive data.
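The recycling-allocator pattern criticized above can be shown in miniature. This is a toy sketch of the freelist idea, with invented names, not OpenSSL's actual code: freed buffers go on an internal list and are handed out again without being cleared or returned to the system allocator, so stale secrets survive into the next allocation.

```c
#include <stdlib.h>
#include <string.h>
#include <stddef.h>

#define BUF_SZ 64

/* Internal freelist node: freed buffers are chained here instead of
 * going back to the (possibly hardened) system malloc. */
struct fl_node { struct fl_node *next; unsigned char data[BUF_SZ]; };
static struct fl_node *freelist = NULL;

static unsigned char *fl_malloc(void) {
    if (freelist) {                  /* recycle: stale contents intact */
        struct fl_node *n = freelist;
        freelist = n->next;
        return n->data;
    }
    struct fl_node *n = malloc(sizeof *n);
    return n ? n->data : NULL;
}

static void fl_free(unsigned char *p) {
    /* Recover the node from the data pointer. */
    struct fl_node *n = (struct fl_node *)(p - offsetof(struct fl_node, data));
    n->next = freelist;              /* no scrubbing, no return to the OS */
    freelist = n;
}
```

Any over-read in code served by such an allocator sees whatever the previous user of the buffer left behind, and OS-level mitigations like guard pages or free-time scrubbing never get a chance to help.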

Dave Korn • April 20, 2014 7:24 AM

Do not feed RSA private key information to the random subsystem as entropy. It might be fed to a pluggable random subsystem.... What were they thinking?!

They were presumably thinking that anyone with sufficient admin rights to install a malicious random subsystem could far more simply attach a debugger to the process and just read the keys straight out of RAM, without even having to figure out how to trigger the rare error condition that would lead to that codepath being executed. They'd already be on the other side of the airtight hatchway, to use a Chen-ism.
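The pluggable-subsystem worry can be made concrete with a toy sketch. The interface below is invented for illustration, not OpenSSL's actual RAND API: the point is only that whatever gets fed to the random subsystem as "entropy" reaches the active engine verbatim.

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical pluggable-RNG interface: callers hand entropy to
 * whichever engine is currently installed. */
typedef struct {
    void (*add_entropy)(const unsigned char *buf, size_t len);
} rng_engine;

static unsigned char captured[64];
static size_t captured_len;

/* A hostile replacement engine simply records its "entropy" input. */
static void evil_add_entropy(const unsigned char *buf, size_t len) {
    if (len > sizeof captured) len = sizeof captured;
    memcpy(captured, buf, len);
    captured_len = len;
}

static rng_engine active_engine = { evil_add_entropy };

/* The criticized pattern: on some code path, private-key bytes are
 * mixed into the RNG "for extra entropy". */
static void seed_with_private_key(const unsigned char *key, size_t len) {
    active_engine.add_entropy(key, len);
}
```

As the comment above argues, an attacker able to install such an engine likely has easier options; the sketch only shows why the data flow itself made the auditors uneasy.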

Chris Abbott • April 21, 2014 1:51 AM

@My two cents:

You hit the nail on the head. There's a HUGE difference between "It's open source so it's auditable" and "It's open source therefore it's obviously been audited and completely safe". People blur these together. I've mentioned this before. Open source is great for the first reason I mentioned above, no security through obscurity. However, you're absolutely right about the fact that the thousands of eyes actually aren't. We can learn from Heartbleed. Don't trust anything blindly. Too many people have done that for far too long. Open source and proprietary software have one major thing in common: they are created by human beings. We all bleed, slip on ice, pee, accidentally make embarrassing comments, etc. We can do our best to reduce errors but we can't eliminate them. Even if you're the best IT Professional/Cryptographer/Security Expert/Whatever, none of us are God. We must keep that in mind...

DB • April 21, 2014 2:16 AM

I've been doing a lot of promoting open source over closed source, you can see the history of my posts on this blog for evidence. However, I've always tried extremely hard to be crystal clear that open source is POSSIBLE to be audited, NOT that it's guaranteed to have been audited. The thing is, CLOSED source is much harder and/or impossible to audit, so it's definitely guaranteed NOT SAFE. Just because "open" is the opposite of "closed" doesn't mean that it's the opposite of "not safe" though. It just means it's at least POSSIBLE to become safe, someday. And in that sense, it's on the road to safer already, no matter how bad it is or how many bugs are found.

Nick P • April 21, 2014 12:46 PM

@ DB

"The thing is, CLOSED source is much harder and/or impossible to audit, so it's definitely guaranteed NOT SAFE."

It's actually easy: an independent party under NDA evaluates it, publishes the evaluation report, and includes a hash of what was evaluated. The more mutually suspicious parties the merrier. Remember that the products achieving the highest security ratings through the most rigorous evaluations ever done (A1/E6/EAL7) were closed source products. So, it's not only doable: only closed source products have done it. ;)

If that seems odd, it's probably because the evaluations are expensive and require a meticulous (ie unpopular) development process to achieve. Developing high assurance, certified code for millions of dollars doesn't motivate one to hand it out for free or let everyone see it to make knockoffs. The cost must be recovered, along with money made to continue development and certify (ie vet) the upgrades.

Open source, in theory, could be the best thing for high assurance. In practice, both models tend to avoid anything high assurance in favor of quick, easy development. One does this for profit, the other for pleasure. And the few that do high assurance do it for business reasons and license the resulting code, hence it being closed. This is why Bell [of Bell-LaPadula] said the only route to high assurance infrastructure would be "selfless acts of security" by the big companies & governments needing the assurance. It has to be paid for, built by pros, maintained, integrated into existing infrastructure, and promoted for adoption. And whoever does all this will operate at a loss continuously.

And placing a bet on open source is risky because I've posted *every single project* here that made any progress. Those that can lead to secure and subversion-resistant development I could probably name off in one paragraph. That's despite hundreds of thousands to a million open source projects existing. It seems obvious to me that the open source model won't work in high assurance because, despite plenty of demand & OSS activity, it's never produced a high assurance system to completion. Never. Not once.

At least there are some focusing on good quality, security, design, or architecture. Many did or are putting in solid effort. I'll give them that. None meet minimal requirements for high assurance. The few that come close were sponsored by companies/governments and created with the aid of professionals. So, my status quo still stands.

Nick P • April 21, 2014 4:11 PM

@ DB

A very important question whose answer varies with location and requirements. It's typically a private evaluation lab and at least one government agency's pen testers. If said agency uses it internally, it probably does what the evaluator said it does. And it might be backdoored for that govt. If the govt restricts its availability, it's probably secure to the point that they don't want opponents getting it.

In the US, that was SCOMP, GEMSOS, and Boeing SNS. Today, the rule no longer applies, as the restrictions were relaxed and they want more subversion options. So, the model I posted here previously was to make the review team composed of companies and countries unlikely to cooperate on a backdoor. I'll also add right now that the product can be integrated profitably into the operations of big players in those countries to reduce the likelihood they block it.

Of course, the question you ask applies equally to open source software, as most users aren't qualified to review it. And it tends to depend on languages, libraries, OSes, etc. that invite plenty of code injections.

DB • April 21, 2014 6:58 PM

@ Nick P

Reviewing from opposing entities that are unlikely to cooperate is an interesting idea.

The problem with the closed source model is that, suppose I don't trust any of those sources and I want to review it myself, then the NDA might become too high of a bar, not only for me, but for a lot of security professionals... I am a bit untrusting of your claim that open source has so much less review than closed source. Sure some of it does, even much of it... but all of it? even all the popular stuff? by default? hmm...

I've worked on both closed source and open source software projects myself, and frankly, they both usually looked like the same nasty crappy code to me... market forces are always causing companies to take shortcuts and rush stuff out before it's ready, it's basically a business requirement to stay afloat in almost all software companies as far as I can tell. The difference is, as an end user using the crap who happens to also be an expert in the field, there's no way I can fix the closed source one, but if I care enough, I CAN fix the open source one... eventually, at least. Additionally, when it's a labor of love, and no market force is coercing me (on pain of being fired) to rush it out before it's ready, I can take the time to "do it right" if I want to. The key here is "if I want to" which also goes along with "if I know how" and "if I care enough"... not all open source project leaders do this, but some do, I know of some like this. This "market forcing crap before it's ready" vs "labor of love" is what convinces me that "open" models are in general better than "closed" ones. Not always, obviously, just as a general matter of principle in my experience so far.

You speak of "high assurance" as if it's a specific kind of technique or methodology that nobody would ever do normally because it's the most un-fun thing in the universe... There isn't a way to do things in a much more "safe" way that isn't so terrible? I mean, real security, not just piling on paperwork and certifications and spending money. Even simple things like Test Driven Development (TDD) and separation of concerns and writing code for readability are not so bad, once you get used to them... and actually do save you a lot of time and hair pulling in the end... Surely less hair pulling is more fun? :)

name.withheld.for.obvious.reasons • April 21, 2014 7:48 PM

Personally, I white-list certificate(s) and the issuing CA as I check the CRL status. No need to white-list root CAs that I know of--unless I am issuing publicly available certs or want to ensure the certificate path. I don't believe the default behavior matches what it should be. For example (for illustrative purposes only, and considered literary art):

  1. Client contacts host providing x509 service, TLS/SSL
  2. Host replies with x509 header data (initial handshake begins)
  3. Client verifies the veracity of the x509 data:
  4. a) Check Validity, Fingerprint, and CRL Status/Info
     b) Verify Certificate Path/Chain
     c) Hop Count (verify host/client network/protocol delay/latency)
     d) Test Injection, Crypt/Decrypt (verify encapsulation)
        NOTE: if I were a cracker, I would subvert clients by providing
        valid return codes all the time--kind of a modified goto fail.
  5. Negotiate connection/channel protocol, bind connection
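Step 4a's fingerprint check against a whitelist can be sketched as follows. The fingerprint values here are placeholders, and computing the peer's actual fingerprint (e.g. a SHA-256 digest of the DER-encoded certificate) is assumed to happen elsewhere in the handshake code; the sketch also uses a constant-time comparison so timing doesn't reveal where a mismatch occurred.

```c
#include <stddef.h>

#define FP_LEN 4   /* toy length; a real SHA-256 fingerprint is 32 bytes */

/* Hypothetical pinned-fingerprint whitelist; values are placeholders. */
static const unsigned char pinned[][FP_LEN] = {
    { 0x9f, 0x86, 0xd0, 0x81 },
    { 0xde, 0xad, 0xbe, 0xef },
};

/* Constant-time equality: no early exit on the first differing byte,
 * unlike memcmp. */
static int ct_equal(const unsigned char *a, const unsigned char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}

/* Returns 1 if the peer's fingerprint appears on the whitelist. */
static int is_pinned(const unsigned char *fp, size_t n) {
    if (n != FP_LEN)
        return 0;
    for (size_t i = 0; i < sizeof pinned / sizeof pinned[0]; i++)
        if (ct_equal(fp, pinned[i], n))
            return 1;
    return 0;
}
```

As the NOTE above warns, the caller must treat a failed check as fatal; a client that proceeds on any return code defeats the whole exercise.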

Nick P • April 21, 2014 10:31 PM

@ DB

"The problem with the closed source model is that, suppose I don't trust any of those sources and I want to review it myself, then the NDA might become too high of a bar, not only for me, but for a lot of security professionals... "

I understand the feeling. I've been feeling it a lot more recently.

" I am a bit untrusting of your claim that open source has so much less review than closed source. Sure some of it does, even much of it... but all of it? even all the popular stuff? by default? hmm..."

I wasn't saying OSS had less review. I was saying that it's had fewer high assurance designs, evaluations, etc. Almost all OSS code is the run of the mill variety you see outside open source. "crappy code" as you pointed out. Then, there's some made by good developers with a quality focus. OpenBSD and Bernstein's work stand out there. There's some with good security architecture, such as Dresden's TUDOS and Shapiro's EROS. I can think of very, very few that used an EAL6-7 style development process. Whereas, for closed source, I instantly gave you three that were also put in production use.

"market forces are always causing companies to take shortcuts and rush stuff out before it's ready, it's basically a business requirement to stay afloat in almost all software companies as far as I can tell."

That's *totally* true. I've said it here myself. Lipner said it best. He was worth listening to as he's done it both ways: shipping as much as possible with Microsoft's SDL & doing an A1-class VMM with Karger. This is a problem that can only go away if the model is nonprofit, subsidized, regulated, or subscription (revenue per company instead of per sale). I don't see the problem going away for majority of market anytime soon.

" I CAN fix the open source one... eventually, at least."

Rarely works out in practice for a complex application. It is a possibility, though, with open source so an advantage of it. Remember, though, that source can be shared with commercial users under NDA if needed. Company can also get service revenue by doing fixes, extensions, etc on clients behalf charging reasonable rates. The latter option aids secrecy of source, gets client what they want, and makes developer more money. That said, some model with open source wins on this one.

"You speak of "high assurance" as if it's a specific kind of technique or methodology that nobody would ever do normally because it's the most un-fun thing in the universe... "

If you doubt that, you've never done it. ;) Ok, certain kinds of people are attracted to it & get a perverse sense of enjoyment. (eg moi) There are also newer tools and methods that aren't so bad. The problem is that they're somewhat unproven. If we use "tried and true," it's an extremely detail-oriented process that requires the vetting of every aspect of design, development, config management, documentation, use cases, testing, covert flows, deployment, recovery, etc. (see EAL6-7 and my framework, then compare to OSS project of your choosing) According to Lipner, making one significant feature change on VAX Security Kernel took "several quarters." That it took that long shows you how rigorous and time-consuming their development process was.

So, quite simply, would you rather work at a shop that did code reviews + testing or a shop that did the above process? If you're like most developers (that care), then you will compromise by choosing option 1. As a bonus, you have more hair on your head, better health, and longer lifespan. :)

"and actually do save you a lot of time and hair pulling in the end... Surely less hair pulling is more fun? :)"

I mention hair then see this haha.

"There isn't a way to do things in a much more "safe" way that isn't so terrible? I mean, real security, not just piling on paperwork and certifications and spending money."

It isn't just about spending tons of money & doing paperwork. That's actually the parts of Common Criteria I'd rather ditch. The benefit is in the process that higher EAL's force on you. They absolutely force you to minimize bad stuff while maximizing good stuff. I mean, last thing you want is to put in that much time and money, then be told the product doesn't make the cut. That said, I do prefer private evaluations that focus on the good stuff while trimming the fat of the process.

And remember I'm talking about high assurance: medium assurance that stops plenty riff raff, but not high end attackers, is often possible at reasonable costs. More tools and precedents for that all the time. :)

DB • April 21, 2014 11:41 PM

Seeing that high end attackers have become so pervasive and intrusive, with virtually no practical restraint in sight... we have to somehow develop "higher" assurance techniques that are cheaper, easier, more fun, and can be more widely deployed. It's either that or give up and give in. Heil Obama.

Since so many projects (both open and closed source, actually) don't even HAVE a test suite... at all... (what is that, EAL0?) a REALLY HUGE difference can be made with just that... seriously! The problem is, people get lazy, they want to "write code" and "writing a test" doesn't feel like code to them, so it feels like a waste of time. Then they're under the gun to shortcut, so they skip it. Booo. It's totally the wrong way of looking at it. Writing tests SAVES TIME! It saves TONS of time. Try to add a feature to a pile of spaghetti code that has no test suite, vs one that's neatly structured and well tested. It's like night and day, and the time difference is several orders of magnitude for an otherwise identical project and added feature.

Which brings us to the issue of how to structure code. Makes a very big difference there too. And stop being afraid to refactor it. You've got the tests, refactor already. There's always going to be a way to make it "better" so have some pride in your work, and make it better. Better readability. Better encapsulation. Better separation of concerns. More efficient. Less copypasta. All kinds of things, the list goes on and on.

Then, run code analyzer tools on it... things that warn for overly complex looking code, things that warn for sections of code that aren't exercised by your test suite, automatically run your test suite with every checkin and scold you in front of your peers when you break it, and on and on... More and more of these tools are being invented all the time. Have some pride in your work, keep learning about them, and implementing them.

It's not rocket science. Doing these things will dramatically improve the quality of code... and we need more of them. There are more I haven't thought of yet, or haven't learned about yet, or just forgot to mention here. Even if it's not an arduous hateful EAL7 process, it DRAMATICALLY improves the security of code to have it functioning correctly with far far fewer bugs. And just being forced to think about what "functioning correctly" means also helps tons.

I still know people who write code and still haven't touched a version control system!! I'm like, "what the heck, dude... do you not care?" Apparently not. Well.... start to care I guess when it feeds your children's mouths? I dunno. Maybe look into a different profession too, if improving the way you do things doesn't interest you... :)

And that's the thing: when I write code for pay, I CANNOT write high quality code. I am forced by management to write shoddy code. Always. With no exception in my experience so far. It's the market, it's either that or bankruptcy. At every company I've ever worked for. Fortune 500's, little startups, nonprofits, everything that pays. But when I write for free in my spare time on an open source project, it's totally different. I'm free to create quality works of art with my code.

Sure there's plenty of dross in open source. But it's also possible to create works of art that can never happen in the regular for-pay software world. For those who care, anyway.

Nick P • April 22, 2014 10:26 PM

@ DB

"we have to somehow develop "higher" assurance techniques that are cheaper, easier, more fun, and can be more widely deployed. It's either that or give up and give in. Heil Obama."

Yes. There's ongoing work in this area. Making it work enough that average developer can jump in and produce results is a necessity.

"Since so many projects (both open and closed source, actually) don't even HAVE a test suite... at all... (what is that, EAL0?)"

EAL(-4) if they use signed integers, EAL0 if not.

re Tests, Refactoring, Code analysis, etc.

Yes, it certainly can make a big difference. One tip I can give here is that an objective review process of design, code, tests, etc. produces the largest security ROI. The other things you mention are all good. The focus on test-driven development is a modern thing that benefits quality, but not as much as a strong review process. So, I'd say a good choice of language, safe coding style, reviews of any artifacts created, black box tests, fuzz tests, regression tests, and static/dynamic analysis. This combo doesn't take so much effort. And like you said of many of these things, it can make a "dramatic" difference in quality. The old Cleanroom and Fagan Inspection processes had empirical evidence backing that claim, too, so the facts are on our side. :)
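Of the techniques listed, fuzz testing is especially cheap to bolt on. A minimal sketch, with a hypothetical length-prefixed parser standing in for real protocol code:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical parser: the first byte claims the payload size. The
 * bounds check is exactly the kind of code a fuzz loop should hammer. */
static int parse_record(const unsigned char *buf, size_t len, unsigned char *out) {
    if (len == 0) return -1;
    size_t claimed = buf[0];
    if (claimed > len - 1) return -1;   /* reject a lying length field */
    memcpy(out, buf + 1, claimed);
    return (int)claimed;
}

/* Minimal fuzz loop: random inputs, one invariant checked after every
 * call. Under ASan or valgrind the memcpy itself would also be checked.
 * Returns the number of invariant violations seen. */
static int fuzz(int iterations) {
    unsigned char in[64], out[64];
    int violations = 0;
    srand(42);                          /* deterministic, for repeatability */
    for (int i = 0; i < iterations; i++) {
        size_t len = (size_t)(rand() % (int)sizeof in);
        for (size_t j = 0; j < len; j++)
            in[j] = (unsigned char)rand();
        int r = parse_record(in, len, out);
        /* Invariant: success implies the claim fit inside the input. */
        if (r >= 0 && (size_t)r + 1 > len)
            violations++;
    }
    return violations;
}
```

A loop like this takes minutes to write and, combined with a sanitizer, would have exercised exactly the missing-bounds-check class of bug this thread is about.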

"I still know people who write code and still haven't touched a version control system!! I'm like, "what the heck, dude... do you not care?""

I'll add that it's an explicit requirement even in Orange Book era, it benefits development in so many ways, and if properly set up it can help address problem of malicious developers. The latter is an issue in High Robustness projects & NSA's actions have proven it's especially relevant. Previously, I recommended Aegis and/or OpenCM as good designs for security-centered projects.

"And that's the thing: when I write code for pay, I CANNOT write high quality code. I am forced by management to write shoddy code. Always. "

That's sad and quite believable. In your case, what does that specifically mean? Do you have to use bad constructions? Do they force you to throw something together without testing? Do they ask you to leave the bugs in as they don't care or can fix them for money later? I'm curious as to what happens in your area.

" But when I write for free in my spare time on an open source project, it's totally different. I'm free to create quality works of art with my code."

Truly. It's amazing what I've seen people come up with when they have all the right incentives and time to make it work. It's why the volunteer and open models have so much potential. Yet, they haven't delivered on it by far in INFOSEC. My hypothesis has always been a combo of a lack of knowledge/wisdom transfer to next generation, inertia in the industry, and that finding flaws is more stressful than fun for majority. The problem must be solved or a continuous stream of problems will continue to occur. That simple.

DB • April 23, 2014 4:29 AM

@ Nick P

"That's sad and quite believable. In your case, what does that specifically mean?"

In my case, I usually code in an iterative way... get something working, then refine it, refactor it, etc... I might spend a lot of time making it "better" in ways that don't add ANY new functionality or features or visible improvements... but drastically improve structure, code quality, etc... Basically because my first try was only an experiment. Experiments are messy, because you don't know what you're doing yet. When this is done for pay, the emphasis is on pushing it out the door and start making money the instant something is visible, no time for such "nonsense" as refactoring to structure your code better... and there's no money to pay you to do it later. If something is noticeably broken you can fix that, sure, but nothing under the covers that isn't noticeable from the outside. And this attitude is contagious. If nobody else cares, why should I? And downhill it goes. You pay for it later, when the customer wants a feature added later, but nobody invests in their future, it's all in the here and now. That's why we are the largest debt society ever. And that is actually the term we use for it too, oddly enough... "technical debt"...

"My hypothesis has always been a combo of a lack of knowledge/wisdom transfer to next generation, inertia in the industry, and that finding flaws is more stressful than fun for majority."

I think all of the above are true. But in the business world, it's not just what you're saying from a technical standpoint; it's also that the nontechnical people driving the products and teams need education on it. But even education isn't enough, I fear; I suspect the company would literally go bankrupt from "doing it right"... there's just no way to compete; end users won't pay for a secure, properly done product that costs more than a shoddily done one that otherwise looks the same. Even the process of "bidding" on a job is fundamentally flawed in this respect. The whole system is just a race to the bottom and I see no real solution to it. This is why I hate closed source with such a passion, and why I'm so flabbergasted at people who seem to think the opposite (maybe I should be looking at what companies they work for, since apparently supposedly some companies somewhere somehow have the time to do things right maybe).


Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient Systems, Inc.