Comments

Skeptical April 11, 2014 4:49 PM

Don’t want to push the Heartbleed thread into an NSA thread, so am posting a comment about the Bloomberg story here.

“Two sources familiar with the matter” is a bit tenuous.

However, it’s worth noting the extra details that Michael Riley, the Bloomberg reporter who broke the story, added in an interview on Bloomberg television. He noted that there are 1,000 people hired by the NSA to search through code like OpenSSL for exploitable bugs. Was this a known fact prior to Riley’s statement?

The US Government, surprisingly (seriously), has issued a strong denial.

“Reports that NSA or any other part of the government were aware of the so-called Heartbleed vulnerability before April 2014 are wrong,” Hayden said. “The Federal government was not aware of the recently identified vulnerability in OpenSSL until it was made public in a private sector cybersecurity report.”

See Politico.

The Hayden in the quote is Caitlin Hayden, a spokesperson for the National Security Council, who is not related to Michael Hayden.

A spokesperson for the NSA also issued a denial via Twitter.

The article linked continues:

“If the Federal government, including the intelligence community, had discovered this vulnerability prior to last week, it would have been disclosed to the community responsible for OpenSSL,” she said.

Hayden said the government has “reinvigorated” an established system called the “Vulnerabilities Equities Process,” which brings various federal agencies together to assess what to do about emerging cybersecurity problems. She said that process usually leads to disclosing bugs or dangers in technology, but suggested that in some cases the government might not do so.

“Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities,” Hayden said.

I’m frankly surprised that the US Government commented on the matter at all (I suspect I’m not the only one), and at the speed with which it responded.

Riley has been reporting on cybersecurity issues for some time (at an RSA conference he moderated a panel on state-sponsored malware, which featured among others Jacob Appelbaum), but I wonder if he’s been misled here by sources who represented their conclusions (to be charitable) as more certain than they should have been.

If Riley was in fact deliberately misled, he should expose the attempt himself.

KnottWhittingley April 11, 2014 5:31 PM

Skeptical,

I’m leaving open the possibility that the NSA is lying. I hear spies have lied on a few occasions.

They may be gambling that nobody who really knows will say so for attribution, and they can just deny anything from vague “sources” while they hunt for the leaker(s) who told a reporter something highly classified.

Or maybe it didn’t happen. How would we know?

But if they’re not lying, I think it probably reflects worse on the NSA. I would like to think that among their thousands of employees they have some competent people doing code reviews and basic regression testing of critical infrastructure code like OpenSSL, who would usually catch such an obvious bug right at the interface of a new feature, and tip somebody off somehow.
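For context on why a routine review of the new heartbeat feature could plausibly have caught it: the bug really was that shallow. Here’s a toy sketch of the logic error (Python with hypothetical names; the real code is C inside OpenSSL):

```python
# Toy model of the Heartbleed logic error (hypothetical names). The server
# echoes back as many bytes as the attacker's *claimed* length field says,
# never checking it against the payload actually received.

# Simulated process memory: the 4-byte request sits next to secret heap data.
MEMORY = bytearray(b"ping" + b"SECRET_PRIVATE_KEY_MATERIAL")

def heartbeat_vulnerable(memory, offset, claimed_len):
    # BUG: claimed_len is attacker-controlled and never validated.
    return bytes(memory[offset:offset + claimed_len])

def heartbeat_fixed(memory, offset, payload, claimed_len):
    # The actual fix amounted to one bounds check: silently discard
    # heartbeats whose claimed length exceeds the real payload.
    if claimed_len > len(payload):
        return None
    return bytes(memory[offset:offset + claimed_len])

# Attacker sends a 4-byte payload ("ping") but claims 31 bytes:
assert b"SECRET" in heartbeat_vulnerable(MEMORY, 0, 31)   # leaks adjacent memory
assert heartbeat_fixed(MEMORY, 0, b"ping", 31) is None    # request discarded
```

A single missing comparison between two lengths, sitting right at the boundary of a newly added feature — exactly the kind of thing a code review or a fuzzer pointed at new code tends to find.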

I’d really like to think they arranged for somebody to uncover the bug because they saw that “bad guys” were onto it, and didn’t want to leave us wide open to attacks.

But I guess they were just too busy with their suspicionless surveillance to be doing their job of helping secure crucial basic infrastructure. Sad.

Buck April 11, 2014 6:00 PM

There are already two threads dedicated to Heartbleed here… Can’t we talk about something else? Everyone see the latest from Snowden?
Edward Snowden: US government spied on human rights workers
http://www.theguardian.com/world/2014/apr/08/edwards-snowden-us-government-spied-human-rights-workers

Snowden said he did not believe the NSA was engaged in “nightmare scenarios”, such as the active compilation of a list of homosexuals “to round them up and send them into camps”. But he said that the infrastructure allowing this to happen had been built.

I found it quite interesting that he chose ‘homosexuals’ rather than, hmmmm shall we say… Muslims, Iranians, Russians, religious extremists, dissidents, or any number of other much more likely targets..?

The exiled American spy, however, said the NSA should abandon its electronic surveillance of entire civilian populations. Instead, he said, it should go back to the traditional model of eavesdropping against specific targets, such as “North Korea, terrorists, cyber-actors, or anyone else.”

“or anyone else”!!?? What the hell does that mean? That sounds an awful lot like “surveillance of entire civilian populations” to me… 😉

Snowden also urged members of the Council of Europe to encrypt their personal communications. He said that encryption, used properly, could still withstand “brute force attacks” from powerful spy agencies and others. “Properly implemented algorithms backed up by truly random keys of significant length … all require more energy to decrypt than exists in the universe,” he said.

Still recommending proper encryption as a panacea, I see… ignoring that these council members likely have little to no understanding of the consequences of buggy implementations, malicious standards subversion, and wide-scale, highly efficient end-point compromise infrastructure…
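To be fair to the energy claim itself, the arithmetic behind it does hold up. A back-of-the-envelope sketch (the physical constants are standard; the 3 K temperature, 256-bit key size, and solar-output yardstick are my assumptions):

```python
# Rough check of the "more energy than exists" claim: even at the Landauer
# limit (the thermodynamic floor for flipping one bit at temperature T),
# brute-forcing a 256-bit key costs an absurd amount of energy.
import math

k_B = 1.380649e-23                       # Boltzmann constant, J/K
T = 3.0                                  # roughly the cosmic microwave background, K
joules_per_flip = k_B * T * math.log(2)  # thermodynamic minimum, ~2.9e-23 J

expected_trials = 2 ** 255               # on average, halfway through 2^256 keys
total_joules = expected_trials * joules_per_flip            # ~1.7e54 J

sun_joules_per_year = 3.8e26 * 3.15e7    # solar luminosity (W) x seconds/year
years_of_solar_output = total_joules / sun_joules_per_year  # ~1.4e20 years

# Even charging only ONE bit-flip per key tried (wildly optimistic for the
# attacker), the bill exceeds 10^19 years of the sun's entire output.
assert years_of_solar_output > 1e19
```

Which is exactly why the realistic attacks are the ones listed above — buggy implementations, subverted standards, and end-point compromise — not brute force.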

The international organisation defended its decision to invite Snowden to testify. In a statement on Monday, it said: “Edward Snowden has triggered a massive public debate on privacy in the internet age. We hope to ask him what his revelations mean for ordinary users and how they should protect their privacy and what kind of restrictions Europe should impose on state surveillance.”

Yes, Snowden, you certainly have “triggered a massive public debate on privacy”… Now please, for your own good and for the good of security everywhere, please just stop talking, change your name & appearance, slink back into the shadows, and try to get along with a normal life! 😛 JK!

Human League April 11, 2014 6:01 PM

Highly informative. Note skeptical’s labored attempts on successive threads to make an offhand distinction between NSA and its particular act of sabotage, implying that they’re unrelated topics. The other interesting feature is the command decision to burn up more of skeptical’s residual persona-cred to shore up a denial that’s already drowning in public derision. Cass Sunstein is weeping over his dashed hopes for cognitive infiltration.

Spooks haven’t made an administration look so ridiculous since Nixon.

KnottWhittingley April 11, 2014 6:02 PM

Skeptical:

“two sources familiar with the matter is a bit tenuous.”

True, but if they really are in the know, presumably they’re people with security clearances who’d rather not lose them and be prosecuted for leaking very highly classified information that embarrasses the agency. The reporter’s not likely to say “two guys who did a code review of the NSA’s code for the exploit,” or “the manager overseeing the operation,” or “a sysadmin who peeked at the code,” or even vaguely hint at their job titles or level of seniority, or whether they work at NSA.

Fun hypothesis: it was Greenwald and Poitras, who read it in an NSA project report. They’re in NYC today, where Bloomberg is based—coincidence?—and wanted the NSA to have a chance to lie about it before they published chapter and verse. (NSA would be way less likely to lie about it if they knew it came from people with Snowden docs.)

I don’t think that’s particularly likely, but it would be hilarious. They should do stuff like that, leaking juicy stuff to random reporters on the condition that the reporters don’t reveal their source for a while.

(Maybe somebody should suggest that to them.)

Anonsters April 11, 2014 6:59 PM

More subtly, but perhaps much more nefariously, NSA’s denial of knowledge about Heartbleed includes these words (my emphasis): “Unless there is a clear national security or law enforcement need”. That NSA exploits computer security vulnerabilities in order to benefit domestic law enforcement is new. And profoundly troubling.

Skeptical April 11, 2014 7:01 PM

@Knott: No possibility is closed here for me either. However, the unusual denial, combined with the US Government’s knowledge that if they did know about it, that fact would likely leak, inclines me to suspect the denial is true.

@VariousPseudonyms: As always, your perceptive analysis of structure is striking. I particularly like the idea that I’m trying to separate the NSA from the subject of Heartbleed by posting a comment about the NSA & Heartbleed on an additional thread.

Spaceman Spiff April 11, 2014 7:02 PM

That’s a nice squid! Can’t you put it in your garden? I’m sure the birds will love it! 🙂

various veins April 11, 2014 7:49 PM

Don’t try to shit me, I got buddies down the hall from you. I know the line before you do.

Skeptical April 11, 2014 8:56 PM

Here’s a link to the full statement:

http://icontherecord.tumblr.com/post/82416436703/statement-on-bloomberg-news-story-that-nsa-knew

No word games in this one.

NSA was not aware of the recently identified vulnerability in OpenSSL, the so-called Heartbleed vulnerability, until it was made public in a private sector cybersecurity report. Reports that say otherwise are wrong.

Reports that NSA or any other part of the government were aware of the so-called Heartbleed vulnerability before April 2014 are wrong. The Federal government was not aware of the recently identified vulnerability in OpenSSL until it was made public in a private sector cybersecurity report. The Federal government relies on OpenSSL to protect the privacy of users of government websites and other online services. This Administration takes seriously its responsibility to help maintain an open, interoperable, secure and reliable Internet. If the Federal government, including the intelligence community, had discovered this vulnerability prior to last week, it would have been disclosed to the community responsible for OpenSSL.

@Anonsters: The implication of the statement is that the NSA doesn’t make those decisions on its own. Instead it sounds like several agencies or departments coordinate to do some type of cost/benefit analysis on whether to release a finding of a vulnerability, with what sounds like a rebuttable presumption to release.

I hadn’t heard of the “Vulnerabilities Equities Process” until this statement. Is this new information?

Anonsters April 11, 2014 9:07 PM

@leastSkepticalpersonontheinternet:

No, the implication isn’t merely that it’s a multi-agency process. The implication is that if law enforcement wants, for whatever reason, to exploit a vulnerability, NSA won’t inform the relevant community that the vulnerability exists and needs to be patched.

Also, “with what sounds like a rebuttable presumption to release.” Sure, and there’s a presumption of openness in government underlying FOIA. But hey, it’s the government, so trust them, right, “Skeptical”?

Benni April 11, 2014 9:10 PM

” If the Federal government, including the intelligence community, had discovered this vulnerability prior to last week, it would have been disclosed to the community responsible for OpenSSL.”

I rarely see NSA email addresses in the bug-reporting mechanisms of OpenSSL or other security-critical libraries.

If they really disclose dozens of bugs, can somebody give me links where I can see someone with an NSA email address submitting a report?

KnottWhittingley April 11, 2014 10:07 PM

Skeptical:

No word games in this one.

Yeah, the NSA would never make public statements that don’t mean at all what they seem to clearly mean.

After all, they don’t collect any data on anything Americans do that isn’t relevant to an authorized national security investigation.

Now they say:

Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

If I had a jaundiced view of spies’ honesty, I might suspect that they have such a bias, but it’s not a particularly strong one, and they think they have a clear law enforcement need to be able to penetrate any system at any time, just in case, so in practice that bias, though real, is usually overridden.

Remember, they want to “collect it all” and “master the internet.” You can’t collect it all if you can’t get into all the computers, can you? You never know where the relevant information is, so it’s all relevant by default until you’ve looked at and decided it’s irrelevant.

They have needs, and they need your metadata. They also need your password, or somebody’s password on every server, so that they can break security and see anything they turn out to need to see. (Which, by the first rule, is everything until proven otherwise.)

I’m sure they sincerely don’t want to collect innocent people’s data or innocent people’s passwords, or secret keys or whatever, but hey, they have needs. They always have needs. Needs they got.

“Collect it all” implies infect it all—or if it ain’t fixed, break into it to collect passwords and private keys, so that you can collect what you need when you need it. You never know when they’ll be relevant to and necessary for an investigation, so they’re all relevant and needed until proven otherwise.

That’s the logical implication of the infinitely elastic and expansive meaning of “relevance” in NSA apologetics, isn’t it? How could it not be? Especially since they have refused repeatedly, even under direct congressional questioning, to articulate any principled limit on their need for any and all data just in case.

Private keys are “business records” too, aren’t they?

Or so it might seem to somebody with such a jaundiced view.

And you know, “Skeptical,” I’m pretty sure nobody here thinks you’re an honest broker about any of this stuff, including you.

gullibal April 11, 2014 10:19 PM

What do you mean, skeptical’s the honestest sincerest actual guy I know, on or off the internets. I would totally trust his oopinion on NSA’s criminality or lack thereof.

Gumble April 11, 2014 10:55 PM

Thinking back on the web camera snooping release where the GCHQ was collecting webcam snapshots and sending them to the NSA for inclusion in XKeyscore….wouldn’t the NSA have the world’s largest collection of child pornography in that case considering how many kids show their junk to each other via web cam? And this voyeurism is paid for by British and U.S. tax dollars?

I’m amazed no one has written an Op Ed about this. Might raise some eyebrows if “NSA has world’s largest collection of child porn” was the headline.

Also, NSA folk reading this blog – there are some in the tech community who understand that you want to regain our trust. But many of us have been obsessively following the news regarding NSA snooping and have heard an unfathomable amount of wordplay coming from multiple members of the Intelligence Community in response to Congress. How can we possibly believe your claims of innocence right now, given that we don’t know how you interpret the language you’re using? History has proven your penchant for interpretations that are not “reasonable” or colloquial.

Zersetzung in Silicon Valley April 11, 2014 11:32 PM

Having been the target of zersetzung operations in Silicon Valley for a while, I’ve had some time to think about the reasons for this type of attack.

It seems that part of the objective of constant zersetzung may be to produce a Stockholm syndrome effect, to condition a person through abuse to the point of identifying with their abusers.

On the metaphysical level, the goal may be to make the target define their universe in social rather than natural terms, to produce a relativistic or dialectical type of world view, a kind of Matrix-like experience without absolute references or primary being.

DB April 12, 2014 2:19 AM

lol… so this is the “Skeptical worships the ground the NSA walks on” thread… they’d never tell a lie, just trust him already. 🙂

Wesley Parish April 12, 2014 2:29 AM

law enforcement need

And we all know just how various state and city police depts interpreted that in relation to Occupy Wall Street and the like, don’t we? Macing students carrying out their civic duty of gathering and protesting and the like …

DB April 12, 2014 2:51 AM

To quote the NSA rebuttal:

Hayden said the government has “reinvigorated” an established system called the “Vulnerabilities Equities Process,” which brings various federal agencies together to assess what to do about emerging cybersecurity problems. She said that process usually leads to disclosing bugs or dangers in technology, but suggested that in some cases the government might not do so.

Here’s a lesson in reading between the lines… since this has supposedly now been “reinvigorated”… that implies that they let such a bug reporting system go by the wayside.

The more accurate conclusion, I suspect, is this: many years ago, before 9/11, they would participate in finding/reporting/fixing bugs, but post-9/11 they haven’t done this AT ALL, and have just kept all bugs found by their “1,000”-man team to exploit. Now that they’ve come under such heavy criticism that they’re in a little danger of losing some power, they have “reinvigorated” such a program (i.e. started the ball rolling on paper moments before the denial), so that in the future they’ll give us one or two small crumbs once in a while to placate us.

I reiterate what someone else said here: show me all the giant piles of vulnerabilities disclosed by the NSA prior to today… to disprove me… put up or shut up.

Regarding this:

law enforcement need

haven’t you heard? They actually argued in court that ALL CARS in LA are part of a criminal investigation. They literally feel the need for… well… everything. Need I say more?

Clive Robinson April 12, 2014 3:57 AM

@ Skeptical, and others,

The NSA blanket denial is a “crock of sh1t” and thus should not be used as a basis for either a “for” or “against” argument.

It’s a defensive PR knee-jerk at its worst.

Look at the time difference between the Bloomberg piece and the NSA denial…

Then ask yourself if they had time to check an organisation of its size, especially at a time of year when higher than average numbers of people are on vacation?

The simple answer is no. Then ask the same question but from the posting of the original Heartbleed bug; is the answer really any different?

Probably not. Therefore the blanket denial covering all NSA staff has no basis in factual checking, and is only going to be true by chance, not by proof; thus it should be considered false.

This is a PR101 mistake. You always say “we are not aware”/“we do not believe” before “any of our staff would have done / been involved”, with a “but we will check…”; then, having bought some breathing space, you either kick it into the long grass or find a couple of scapegoats and chuck them under the bus (or appear to do so), before your closing note some time in the future of “lessons have been learned” or “procedures have been updated”.

Which makes me curious as to why such bad PR from the NSA, especially as even the US President has had to very visibly back-pedal for them in the recent past.

As for the rest of it, it smells strongly of that bovine outpouring that the NSA and Executive/DoJ pre-prepare by the truckload for Congress etc., filled with stupid word games.

So at the very least I would assume that the blanket denial is a PR mistake, and the subsequent words are pre-prepared and have been for some time, thus in effect a “stock response” designed for a more specific form of deceit.

Skeptical April 12, 2014 6:25 AM

@Gullible: My analysis is based purely on an assumption that the US Government is acting in a self-interested fashion. And, of course, no evidence we have yet is conclusive.

@Knott: Remember, they want to “collect it all” and “master the internet.” You can’t collect it all if you can’t get into all the computers, can you? You never know where the relevant information is, so it’s all relevant by default until you’ve looked at and decided it’s irrelevant.

I’d hesitate to draw any conclusions from a phrase in some vendor’s PowerPoint presentation. However, there are certainly many questions about how the “Vulnerabilities Equities Process” works. As I said above, I had not heard of that process until now.

I also didn’t think the Bloomberg story was implausible when I first read it, though the reporter put much more emphasis on “why it makes sense that the NSA would know” in the interview of him that I saw than on the reliability of his sources. That raised an eyebrow, but not very much.

Here though I’m really focused on the nature of the denial with respect to Heartbleed. There isn’t any specialized vocabulary in it, and there isn’t any wiggle room. So if you think that the NSA has been playing word games, that makes this denial unusual (it’s unusual in other ways as well).

And you know, “Skeptical,” I’m pretty sure nobody here thinks you’re an honest broker about any of this stuff, including you.

Knott, I think you’re trying to hurt my feelings. I’m completely honest about my views and my reasoning, and I’ve engaged in lengthy discussions about them. Kindly show me the same respect I’ve shown you in our discussions. It’s true that my opinions often do not agree with the accepted wisdom one finds in these discussion threads.

@Clive: Then ask yourself if they had time to check an organisation of it’s size especialy at a time of year when higher than avarage numbers of people are on vacation?

They had ample time (you don’t think they only started checking when Bloomberg published this story, do you?). Also, this is the signals intelligence agency for the US Government, so they’re unlikely to allow vacation scheduling to interfere with capability.

It would require a person at a high level to authorize the release of a statement like this, particularly on a question that implicates whether NSA utilized a method over the past two years, or whether the NSA did not detect a vulnerability. If such a person wants a certain question answered, that answer will likely be sought, found, and reported back with a great sense of urgency.

As to PR from the NSA, over the previous year it has been slow and very cautious. It’s understandable; the NSA doesn’t like PR, and abhors talking about any of this. I fully expected them to give an answer along the lines of GCHQ’s stock response in this case.

So the unambiguous denial – not a “under this authority, we do not…”, nor a “the program named does not…”, but a blunt, zero-room-for-later-evasion, we did not know, period – is really quite surprising.

If they actually did know, then I agree that such a denial would be immensely foolish, particularly if a respected reporter claims to have two sources who say otherwise and are persuasive enough to allow Bloomberg to publish.

Sometimes bureaucracies can produce mistakes like that, even if everyone individually is intelligent, due to errors in communication and fractured authority. So the question is by no means closed. But to comment on something like this likely required the authority of someone high enough to avoid that kind of error.

At present, then, the probability is that the denial is true, but there are still many unknowns, so that’s not a very stable conclusion. Never underestimate the power of bureaucracy to screw up beyond reasonable expectations.

Clive Robinson April 12, 2014 6:45 AM

@ Skeptical,

With respect to “vacation” you are not looking at it the way I am.

For such a denial to be actually valid they would have to ask each and every employee; otherwise it’s an assumption, and thus not factual, and thus not true.

If an employee is off camping on the other side of the world, or even the state, the chances are their mobile etc. will be off to conserve the battery. Likewise if they are off fishing/hunting, or away at a retreat or any other de-stress activity. They would have to wait for the employee to surface.

Thus I don’t think they have asked every employee, and thus that part of the statement is at best a worthless assumption, at worst an outright lie.

The first thing they teach you in PR101 is not to tell lies you can be caught out in, and effectively that’s what that statement is. All a journalist has to say is “have you asked all your employees?”, at which point they either lie again, or look like idiots, or worse.

NobodySpecial April 12, 2014 11:06 AM

So if the NSA has thousands of analysts poring over software looking for security bugs that would threaten the security of the Homeland(tm), then why don’t I remember the last announcement from the NSA that I should fix a bug in open-source lib X, or stop using Microsoft Server Y until a fix is available?

I would have assumed they would be the principal source of security advisories – is there a Twitter feed I’m missing?

KnottWhittingley April 12, 2014 12:32 PM

I would have assumed they would be the principal source of security advisories – is there a Twitter feed I’m missing?

I have no idea what they actually do, but if I was them I wouldn’t be taking credit for finding vulnerabilities.

Assuming that they use a fair fraction of the vulnerabilities for exploits (for a while) before getting them fixed, they wouldn’t want people to know that the NSA knew about them already.

But as I’ve said before, if their denials about Heartbleed are true, that counts against this hypothesis. If they’re systematically reviewing important, widely-used code to find vulnerabilities and use or expose them, they should have trivially found that bug and seen the huge potential for exploiting it.

More generally if the NSA was doing the job we pay them billions and billions a year to do, I’d expect our infrastructure code not to suck quite so much, with so many basic dumbass errors going unnoticed for months or even years.

That should be the focus of some congressional hearings. They’re a military outfit, and they’re leaving our flanks unprotected, on purpose. It’s an enormously expensive boondoggle. Is the NSA unwilling to do its job, or incapable of doing it?

Mike the goat April 12, 2014 1:58 PM

Clive: first things first… You wrote “either kick it into the long grass or find a couple of scapegoats and chuck them under the bus”… I sincerely hope that neither I nor any other of my kind will be subjected to a vehicular accident. Nice to be back. Hope you’re well.

When news of Heartbleed broke, my immediate reaction was that this was an exploit that had been engineered and planted by an intelligence agency, and that even if it wasn’t, they would likely have found it independently (we know they pay hundreds of people to carefully audit commercially used crypto) and made use of it. The “tiny error with big consequences” is their calling card.

Now I am not so sure of the former, and suspect it was indeed just crap coding that wasn’t properly vetted during the sign-off/commit process. Of the latter I am fairly convinced. In fact, I believe one of the documents leaked by Snowden referred to a method of retrieving data from SSLized sessions, but noted that how it works is further classified and that it is used on a targeted basis. It seems pretty clear to me that they had some way of retrieving the private key of the server through some exploit (or perhaps just a court order) – perhaps Heartbleed was their ace?

I agree with you that their response was pretty poor from a PR and propaganda point of view. Their quick denial coupled with weasel words in their “never, not I” statement only made the agency look guilty.

Heartbleed was likely the result of an overzealous young coder’s mistake. Even if the NSA somehow had no knowledge of its existence you can be sure that their TAO has a bucket full of 0days. There has even been the suggestion (which is not entirely without precedent) that the govt was buying sploits online.

It seems anyone who wants privacy and security is an enemy in the eyes of the US govt. Ultimately though their snooping will only lead to higher fences, better software and a public mistrust of anything the government is involved in. No… Wait. That’s already a reality.

Skeptical April 12, 2014 2:23 PM

@Clive: They don’t need to ask every single employee. These guys don’t employ lone wolf hackers, each with their own private store of vulnerabilities they’ve noticed. All of this is likely highly organized. You ask the people within the relevant units whether (a) this vulnerability has been noticed and (b) whether this vulnerability has been exploited. They check relevant records, and talk to relevant people. And if there’s one person you need with knowledge about this who has an absolutely outstanding reason for not being in contact, then you send someone to restore contact. And if restoring contact requires sending a Pave Low into some idyllic wilderness somewhere to find and extract someone from his vacation, then you do that.

Those are questions they would have asked when the vulnerability was first publicized. They’ve had ample time to check.

What’s unusual, to my eyes, is the unqualified denial in response to a pretty vague allegation about whether they’ve known about or used a vulnerability.

Someone did the math on the benefits of issuing the denial and the costs of issuing the denial, and decided to change the usual play that is run here. I don’t think Riley’s two sources, or Riley, expected the opposition.

If my read is right, then the US Intelligence Community has learned the danger of allowing a story, however vaguely supported, to stew in public given the context of so many other leaks.

If Riley’s two sources are people with knowledge of Snowden’s material who perhaps temporarily lacked access to it, had to rely on memory, and gave an unsurprisingly incorrect account, then I would expect those two sources to scramble to publish a story on the NSA knowing about some vulnerability and using it, or to publish a story vaguely describing overall NSA strategy with respect to zero-days (perhaps purchasing them from private security firms). They’ll seek to achieve by innuendo what they could not achieve by facts.

Who knows, perhaps they’ll even offer Riley an implicit quid pro quo for not exposing the mistake they made.

Or… Riley’s sources are right, and the denial is wrong.

I guess we’ll see. Not sufficient evidence to establish a conclusion with high confidence.

Matt from CT April 12, 2014 3:44 PM

“The Federal government was not aware of the recently identified vulnerability in OpenSSL until it was made public in a private sector cybersecurity report.”

If the NSA was not exploiting a simple programming error that could be found by a boring but straightforward code review, it represents a colossal failure of the intelligence community.

Since this is the NSA, and they parse their words with extreme care and precision, my translation is simple:

“We were receiving valuable intelligence from a friendly nation, and were told if we wanted to keep receiving it, to not look for vulnerabilities in OpenSSL.”

Figureitout April 12, 2014 4:21 PM

“NASA Back Up Computer Not Responding to Commands”:

http://www.nasa.gov/content/back-up-computer-not-responding-to-commands/#.U0mnrlf6-zZ

A little googling led to a mildly interesting article describing Honeywell “MDMs” being used for the ISS back in 1998 (386SX Intel processor), and some other familiar names (VAX, Sun SPARC). (Use script/ad blocking here to avoid annoyance):

http://www.militaryaerospace.com/articles/print/volume-9/issue-4/news/honeywell-engineers-enhance-space-station-computers.html

Hopefully they are able to state what went wrong…

anon April 12, 2014 5:20 PM

As I am going through my certificates, frantically unchecking all options but “identify websites”, I started thinking about that map of NSA-pwned servers all over the world, and looking again at my cert authorities.

They don’t match up, right…..?

Anura April 12, 2014 6:33 PM

Kind of random, but:

Galois/Counter Mode has always bugged me due to the linearity of the algorithm. It’s fast, but its security caveats can be problematic while also being easily reducible. Problem one is the weak keys: obviously, if the hash key is 0 or 1, it significantly weakens the algorithm, however unlikely that is. For a cost of one bit of key, we can always set the high bit to 1, significantly reducing the possibility of weak keys. Of course, now we’ve reduced the keyspace by one bit, which isn’t ideal.

Another problem is that the linearity of the algorithm also makes it open to exploits. In the algorithm, E(IV|0x00000000) (E_0) is XORed into the hash, which means you don’t have to find the entire hash, just figure out what the difference in the hash would be, in order to exploit it. Because of the linearity this can be easy to accomplish. A way around this is that instead of XORing E_0 into the hash, we can encrypt the hash directly, making it significantly harder to attack.

Of course, now we have opened another (potential) problem: the MAC is no longer dependent on the IV. One way around that is to initialize the hash state to E_0. However, for a very small extra cost, we can modify the algorithm slightly to reduce its linearity. Instead of initializing with E_0, we can XOR it into every block of the hash; then, instead of XORing each block into the state, we can add it modulo 2^128 (or larger if you want to extend it to a larger block size). This makes the algorithm itself nonlinear, making it significantly more difficult to run attacks against, at only a very small performance cost. I believe this also makes up for the lost key bit from above.
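The linearity being described can be made concrete with a toy GHASH in Python. This is a sketch for illustration only: `gf_mult` follows the shift-and-reduce multiply from NIST SP 800-38D, while `ghash_additive` is just one possible reading of the proposal above, not a vetted construction.

```python
# Toy GHASH over GF(2^128) to make the linearity concrete.
R = 0xE1000000000000000000000000000000  # GCM reduction constant

def gf_mult(x: int, y: int) -> int:
    """Shift-and-reduce multiply in GF(2^128), per NIST SP 800-38D."""
    z, v = 0, x
    for i in range(128):
        if (y >> (127 - i)) & 1:
            z ^= v
        v = (v >> 1) ^ R if v & 1 else v >> 1
    return z

def ghash(h: int, blocks: list) -> int:
    """Standard GHASH: y <- (y XOR block) * h, linear in the blocks."""
    y = 0
    for blk in blocks:
        y = gf_mult(y ^ blk, h)
    return y

def ghash_additive(h: int, e0: int, blocks: list) -> int:
    """One reading of the proposal above (hypothetical): XOR E_0 into
    every block and accumulate with addition mod 2^128 instead of XOR,
    so carry propagation breaks the GF(2) linearity."""
    y = 0
    for blk in blocks:
        y = gf_mult((y + (blk ^ e0)) % (1 << 128), h)
    return y

h = 0x0123456789ABCDEF0123456789ABCDEF
a, b, d = 0xDEADBEEF << 64, 0xCAFEBABE, 1 << 77

# Linearity: flipping bits d in one block shifts the standard tag by a
# value that depends only on d and h -- never on the message contents.
assert ghash(h, [a ^ d, b]) ^ ghash(h, [a, b]) == ghash(h, [d, 0])
```

Because the tag shift is message-independent, the E_0 mask is all that stands between bit-flipping and forgery in the standard scheme; the additive variant destroys this clean identity, though, as the comment says, it would need real cryptanalysis before anyone should rely on it.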

Anyway, more of just thoughts than a proposal… Needs a paper, peer review, cryptanalysis… Instead I’m going to drink beer.

mike~acker April 13, 2014 7:08 AM

Secure Computing in a Compromised World

It’s not a State Secret: much of our Personally Identifiable Information (“PII”) has been leaked, hacked, sold, or otherwise distributed to most anyone interested, including disreputable re-sellers.

If we accept that as an existing condition, what sort of response might we make now?

The answer lies in the proper authentication of transactions.

Any miscreant may have my PII — or yours — or — untold thousands of files. This is the reality we must all live with today.

Which leads me to a key sentence in the testimony of Whitfield Diffie on behalf of NewEgg in NewEgg Supply Co v TQP Holdings: *reference(1)

quote:
There was one other big need: proving authenticity.
“The receiver of the document can come into court with the signed document and prove to a judge that the document is legitimate,” he said. “That person can recognize the signature but could not have created the signature.”
:unquote

The CRITICAL POINT is well stated by Mr. Diffie here: a signature must be such that it can be authenticated — but not forged.

PGP signatures — are one answer. A miscreant might have your Social Security number, your date of birth and your dog’s name — but he would not be able to file a 1040 with the IRS or make charges to your credit card or log into your Credit Union — if proper use of PGP — including trust models — were common practice. *note(2)

Proper use of PGP should be taught in school, especially the procedures for establishing trust models for keys and protecting public keys from tampering, all of which is covered beautifully in Phil Zimmermann’s original essay. *reference(3)

All of us should have PGP installed, have our own public/private key pair, and maintain a trust model in our keyring.

This represents a significant change in computing practices. Many of us see the need for change, while many are unfortunately resigned to thinking hacking is inevitable.

Change is in the wind, though. *reference(4)

*reference(1): Whitfield Diffie
http://arstechnica.com/tech-policy/2013/11/newegg-trial-crypto-legend-diffie-takes-the-stand-to-knock-out-patent/

*note(2): Secure operating software is required. Like PGP, this is available but not commonly used.

*reference(3): Phil Zimmermann
http://www.pa.msu.edu/reference/pgpdoc1.html

*reference(4) power of FTC to sue companies for poor security practice:
http://www.zdnet.com/judge-enhances-ftcs-power-to-sue-over-security-breaches-7000028357/

Benni April 13, 2014 1:25 PM

By the way, did you notice that OpenSSL has this strange financing method?

https://www.openssl.org/support/acknowledgments.html

“Please note that we ask permission to identify sponsors and that some sponsors we consider eligible for inclusion here have requested to remain anonymous.”

They disclose only 3 sponsors. But why would a sponsor of a security library want to be anonymous? Well, in the case of RSA, the sponsor NSA certainly had an interest in being anonymous.

The openssl foundation writes:

https://www.openssl.org/support/consulting.html

“Does your company use the OpenSSL toolkit and need some help porting it to a new platform? Do you need a new feature added? Are you developing new cryptographic functionality for your product?”

To every secret service, this must be music to its ears. They can donate anonymously and get “features” into OpenSSL, similar to the situation with RSA.

The developers are mainly Germans. A lead developer of OpenSSL works near Munich (in Dachau). If he gets on a suburban train, he is at the headquarters of the German secret service BND in Pullach within 20 minutes. Given that the BND is, according to Spiegel, a major shareholder of Crypto AG, and apparently has decades of experience in weakening crypto hardware, it would be naive to assume the BND would not even try to influence OpenSSL.

The openssl foundation claims:

https://www.openssl.org/support/donations.html

“Please note that the OpenSSL Software
Foundation (OSF) is incorporated in the United States as a regular for-profit corporation. It does not qualify as a non-profit, charitable organisation under Section 501(c)(3) of the U.S. Internal Revenue Code. We looked into it and concluded that 501(c)(3) status would require more of an investment in time and money than we can justify at present. This means that, for individuals within the U.S., donations to the OSF are not tax-deductible. Corporate donations can of course be written off as a business expense.”

To me this argument looks quite nonsensical. Even the smallest Linux distro can register itself as a non-profit organisation. But perhaps that does not hold if you want 50,000 a year from “Platinum” sponsors.

According to http://www.nytimes.com/interactive/2013/09/05/us/documents-reveal-nsa-campaign-against-encryption.html?_r=0

NSA says: “The various types of security
covered by BULLRUN include, but are not limited to, TLS/SSL, https (e.g. webmail), SSH, encrypted chat, VPNs and encrypted VOIP. The specific instances of these technologies that can be exploited will be published in a separate Annexe (available to BULLRUN indoctrinated staff).”

So, who are the anonymous sponsors of openssl?

Is the German secret service BND on the list of sponsors?

To rule this out, the OpenSSL foundation should at least declare that it does not take money from any intelligence agency whatsoever.

Sad Clown April 13, 2014 2:03 PM

@Benni

Good post!! Scary!! Who hasn’t sold out? Why weren’t concerns about OpenSSL expressed earlier?

For example, from the FAQ at the OpenSSL site you linked to:

7. How do I check the authenticity of the OpenSSL distribution?

We provide MD5 digests and ASC signatures of each tarball. Use MD5 to check that a tarball from a mirror site is identical:

md5sum TARBALL | awk '{print $1;}' | cmp - TARBALL.md5

Isn’t md5 totally broken and useless for this purpose against an attacker with any resources or skills? FOR YEARS?
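The worry can be framed this way: an MD5 digest fetched from the same mirror as the tarball only detects accidental corruption, since an attacker who controls the mirror can replace both files, and MD5 collisions have been practical since the mid-2000s. A minimal hedged sketch of the verification step (the function name and parameters are mine, for illustration; the real fix is a collision-resistant hash whose expected value arrives over an authenticated channel, e.g. a verified PGP signature):

```python
import hashlib

def digest_file(path: str, algo: str = "sha256", chunk: int = 1 << 16) -> str:
    """Hash a file in streaming chunks. Swapping "md5" for "sha256" is
    trivial; the hard part is obtaining the expected digest from a
    channel the mirror operator cannot tamper with."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()
```

Comparing `digest_file("openssl.tar.gz")` against a digest published only on the mirror buys nothing against an active attacker, whatever the hash; that is the structural problem with the FAQ’s advice, over and above MD5’s brokenness.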

Benni April 13, 2014 2:15 PM

A translation of the Spiegel article

http://www.spiegel.de/spiegel/print/d-9088423.html

in which it is said that major crypto hardware manufacturers are influenced by the BND, is here:

http://cryptome.org/jya/cryptoa2.htm

Given that they pulled off things like the account below in the past, it would be completely naive to think they would have no interest in OpenSSL when a lead developer (http://engelschall.com/curriculum-vitae.pdf) lives just a few S-Bahn stations, i.e. 20 minutes, away from BND headquarters:

“Eugen Freiberger, who is the head of the managing board in 1982 and resides in Munich, owns all but 6 of the 6,000 shares of Crypto AG. Josef Bauer, who was elected into managing board in 1970, now states that he, as an authorized tax agent of the Muenchner Treuhandgesellschaft KPMG [Munich trust company], worked due to a “mandate of the Siemens AG”. When the Crypto AG could no longer escape the news headlines, an insider said, the German shareholders parted with the high-explosive share.

Some of the changing managers of Crypto AG did work for Siemens before. Rumors, saying that the German secret service BND was hiding behind this engagement, were strongly denied by Crypto AG.

But on the other hand it appeared as if the German service had a suspiciously great interest in the prosperity of the Swiss company. In October 1970 a secret meeting of the BND discussed, “how the Swiss company Graettner could be guided nearer to the Crypto AG or could even be incorporated with the Crypto AG.” Additionally the service considered, how “the Swedish company Ericsson could be influenced through Siemens to terminate its own cryptographic business.”

The secret man have obviously a great interest to direct the trading of encryption devices into ordered tracks. Ernst Polzer*, a former employee of Crypto AG, reported that he had to coordinate his developments with “people from Bad Godesberg”. This was the residence of the “central office for encryption affairs” of the BND, and the service instructed Crypto AG what algorithms to use to create the codes. (* name changed by the editor)

Depending on the projected usage area, the manipulation of the cryptographic devices was more or less subtle, said Polzer. Some buyers only got simplified code technology, according to the motto “for these customers that is sufficient, they don’t need such good stuff.”

In more delicate cases the specialists reached deeper into the cryptographic trick box: the machines prepared in this way enriched the encrypted text with “auxiliary information” that allowed all who knew this addition to reconstruct the original key. The result was the same: what looked like impenetrable secret code to the users of the Crypto machines, who acted in good faith, was readable with no more than a finger exercise for the informed listener.”

Benni April 13, 2014 2:39 PM

I think, given that these OpenSSL people live in close proximity to BND headquarters, and given that the BND “have obviously a great interest to direct the trading of encryption devices into ordered tracks” and that the BND “service instructed Crypto AG what algorithms to use to create the codes.”

And given that the security coverage of OpenSSL is like this: http://www.nytimes.com/interactive/2013/09/05/us/documents-reveal-nsa-campaign-against-encryption.html?_r=0 “The various types of security covered by BULLRUN include, but are not limited to, TLS/SSL, https (e.g. webmail), SSH, encrypted chat, VPNs and encrypted VOIP.”

You can pretty much bet that openssl has BND involvements, if it is not entirely run by BND.

Skeptical April 13, 2014 3:09 PM

I have to say that I’m a bit surprised at the number of people who say it is implausible that the NSA did not know of the bug. There are a very large number of organizations in the world, including criminal organizations, with great interest in finding vulnerabilities such as Heartbleed. Yet thus far it appears that either none, or extremely few, knew of it. If the vulnerability were quite as obvious as some claim, then I would expect many more organizations to have discovered it.

Since that is not the case, there must be another explanation. Perhaps hand-checking every inch of code isn’t in the NSA’s, or any other interested organization’s, budget.

The Vulnerabilities Equities Process could be exactly what should happen when a vulnerability comes to light. Multiple departments, with different interests, discuss whether the vulnerability should be disclosed. This would help ensure that all factors, not merely the potential intelligence gains from exploitation, are taken into account.

Of course, who knows whether it actually works that way. The description given of the Vulnerabilities Equities Process is all too brief. I don’t think there’d be any harm if the US Government were to disclose a bit more about the process.

Clive Robinson April 13, 2014 3:14 PM

@ Mike the goat,

Rest assured I won’t be chucking any goats under anything any time soon; I need them to keep the weeds/grass down 🙂

With regards,

    When news of Heartbleed broke my immediate reaction was that this was an exploit that had been engineered and planted by an intelligence agency and that even if it wasn’t – they would have likely found it (we know they pay hundreds of people to carefully audit commercially used crypto) independently and made use of it. The “tiny error with big consequences” is their calling card.

Even if this occurrence was a coding error, have a look at the specification/protocol.

It’s asking for trouble: allowing a potential adversary to kick 64K of “payload” onto your machine is a security nightmare, especially when the rationale appears to be “have a unique identifier” just in case an earlier heartbeat response is still trundling across the network… it’s effectively an invite to get a worm on board.
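The over-read being described can be sketched in a few lines. This toy (my own illustration of the bug class, not OpenSSL’s actual code) models process memory as a flat byte string and follows the RFC 6520 heartbeat layout: a 1-byte type, a 2-byte claimed payload length, the payload, and at least 16 bytes of padding.

```python
import struct

HEARTBEAT_REQUEST = 1
MIN_PAD = 16  # RFC 6520 minimum padding

def echo_heartbeat(memory: bytes, rec_off: int, rec_len: int,
                   check_bounds: bool):
    """Echo a heartbeat payload out of a flat 'process memory' buffer.
    With check_bounds=False this trusts the sender's claimed length,
    i.e. the Heartbleed-style over-read into adjacent memory."""
    mtype, claimed = struct.unpack_from("!BH", memory, rec_off)
    if mtype != HEARTBEAT_REQUEST:
        return None
    if check_bounds and 1 + 2 + claimed + MIN_PAD > rec_len:
        return None  # RFC 6520: silently discard an inconsistent message
    start = rec_off + 3
    return memory[start:start + claimed]  # may run past the record

# A record carrying 2 real payload bytes but claiming 64:
record = struct.pack("!BH", HEARTBEAT_REQUEST, 64) + b"hi" + b"\x00" * MIN_PAD
memory = record + b"-----TOPSECRET-----" * 4  # adjacent heap contents

leaked = echo_heartbeat(memory, 0, len(record), check_bounds=False)
safe = echo_heartbeat(memory, 0, len(record), check_bounds=True)
# leaked contains bytes of the adjacent "secret" region; safe is None
```

The one-line bounds check is all that separates the two behaviors, which is the point: a spec that lets the peer name its own copy length leaves correctness hanging on exactly this kind of easily omitted comparison.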

I would have to see very very extensive and well validated reasoning to let that one across my desk and then for it to have some serious health warnings plastered all over it.

So I’ll stick my neck out a little and make an assumption: the Ed Snowden revelations indicate the NSA likes to “finesse” and also wants “total ownership” of the Internet. Thus it follows that the IETF, like NIST, is a target on the “hit list” for finessing.

In some respects finessing is like fly fishing for salmon: you know they are not feeding on the way upstream; your hook/bait is really there on the “off chance” one will get hooked.

If I wanted to potentially open up SSL/TLS, a protocol like that of the heartbeat is going to cause problems unless handled by someone with experience. So there is a reasonable probability of “getting a bite” of some kind.

Thus it is possible it’s both the NSA finessing the IETF and a “kid programmer” mistake in the coding, especially when you consider how many ways it could be got wrong.

Buck April 13, 2014 3:40 PM

@mike~acker

That regulatory ruling almost flew in under my radar, and I believe it probably deserves more prominent mention than an itty-bitty footnote!

U.S. FTC can sue hotel group over poor data security, court rules
http://www.reuters.com/article/2014/04/07/us-wyndham-world-ftc-idUSBREA361VP20140407

FTC Chairwoman Edith Ramirez said she was “pleased that the court has recognized the FTC’s authority to hold companies accountable for safeguarding consumer data.”

While I suspect this decision will be overruled very shortly (at the very least, the FTC will have to publish a detailed list of proper security practices – good luck with that! ;-), in the meantime the FTC seemingly has the power to sue nearly any company it has a beef with…

Coyne Tibbets April 13, 2014 4:22 PM

@Skeptical
“If the Federal government, including the intelligence community, had discovered this vulnerability prior to last week, it would have been disclosed to the community responsible for OpenSSL.”

Oh sure, supposedly not much wiggle room. But the three-letter agencies have been nothing if not clever in their wording. Two things about the above:

I am amused by the separatist portion of the statement, to wit, “…Federal Government, including the intelligence community…” So let’s see, the intelligence community no longer regards itself as part of the federal government, or maybe it’s just concerned that some people might think such, so apparently it needs to make it clear that, “Yes, Virginia, we are part of the Federal Government. Really. Honestly.”

And then there’s that blithe, “…would have been disclosed…” Note the careful “past tense”, which really isn’t a past tense. In common parlance the phrase, “…would have been…” also has use as a defensive promise. For example, imagine someone speaking to the boss about some money they stole from the petty cash, “It would have been returned, just as soon as I got my next paycheck.” Not really past tense, in that usage, is it? More like a promise, and worse, a promise that may or may not be kept; especially if the boss never discovered the theft.

Yes, I’m sure if the intelligence community knew about the OpenSSL exploit then it would eventually have been disclosed, but when would that have been exactly? In 10 years? 20 years? 1,500 years? I’m sure they want us to think it’s past tense, as in, “…would [already] have been…”, but then that past tense is contingent on an admission that it was discovered. So, again, it doesn’t necessarily say anything like it purports to mean.

I’m sure the NSA/FBI/CIA/DOJ/DHS/TSA/DIA/…and on and on, would regard my analysis as unfair. But we have seen nothing but weasel words from those agencies, all of them, for decades. We need a better reason to believe that this is not weasel worded besides, “Don’t worry, they’re being straightforward this time.”

@Sad Clown
Yes, they’re using MD5, which cryptographers concluded was unsafe for cryptographic purposes back in 2008. They also offer SHA1, which was suspect as of 2010 and which the government itself ruled should no longer be used as of last year. And a GnuPG signature… and from what I read on Wikipedia, it sounds like GnuPG signatures have weaknesses as well.

It concerns me greatly that a core cryptographic library can’t seem to secure its source with cryptographically secure algorithms, several of which are publicly available. Maybe the industry should take a long, hard look at whether OpenSSL is merely incompetent, because ostensible incompetence can cover something much more insidious. At the very least, I think cryptographers should take a long, detailed look at a sequestered and properly protected copy of the OpenSSL source.

Virtual Man April 13, 2014 5:08 PM

The following analogy may be helpful in understanding what total information dominance entails.

Virtual machine technology enables several different operating systems to run on the same machine at the same time. Each OS thinks it has control of the hardware and acts accordingly. A hypervisor intercepts these actions and allocates the actual hardware to share resources across all isolated processes.

It appears that the NSA grand strategy is to virtualize human beings.

They can isolate every individual in a separate psychological bubble with each believing that they are in touch with reality, except that there is actually a man in the middle to produce that illusion, intercepting communications and shaping perceptions.

In order for this program to work, the NSA must position itself to deny access to universal truths. Conceptual knowledge as such would therefore be targeted for erasure, reducing a person’s horizon of awareness to only include concretes. People will become more event-driven rather than goal-directed as they become more virtualized. Does that sound familiar?

National Insecurity April 13, 2014 5:35 PM

Of course they knew about it. OpenBSD devs dropped in exploit mitigations to expose bugs and found memory-leak potential. Then OpenSSL created a wrapper around free and malloc so the app dumps memory to itself and not to the protected OS malloc, reversing their mitigation.
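The allocator point is easy to model. The toy below is a sketch of the general pattern being described, not OpenSSL’s actual freelist code: a caching wrapper recycles freed buffers without wiping them, which is exactly what defeats a hardened system malloc that would otherwise junk-fill or unmap freed pages.

```python
class FreelistAllocator:
    """Toy caching allocator: free() parks buffers on an internal list
    and alloc() hands them back uncleared, so stale secrets survive
    into the next allocation."""

    def __init__(self) -> None:
        self._freelist = []

    def alloc(self, size: int) -> bytearray:
        for i, buf in enumerate(self._freelist):
            if len(buf) >= size:
                return self._freelist.pop(i)  # old contents left intact
        return bytearray(size)

    def free(self, buf: bytearray) -> None:
        # A hardened system malloc might junk-fill or unmap here;
        # caching the buffer for speed is what reverses that mitigation.
        self._freelist.append(buf)

heap = FreelistAllocator()
a = heap.alloc(32)
a[:6] = b"secret"       # e.g. key material written into the buffer
heap.free(a)
b2 = heap.alloc(16)     # recycled: same 32-byte buffer, never wiped
# bytes(b2[:6]) is still b"secret"
```

Combine that with an over-read like Heartbleed and the leaked bytes are whatever sensitive data the application itself last freed, regardless of how paranoid the underlying OS allocator is.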

Matthew Green also wrote on his blog in January OpenSSL seemed prone to a remote memory dump. Every Intel agency probably knew about this.

Just wait until DNSSEC and requests through UDP are answered first by the NSA instead of the secure server you are trying to connect to. Or wait until your https traffic is routed to a ‘trusted’ proxy at AT&T to be decrypted for BS optimization reasons under HTTP/2.0

EXTINCTDINOSAUR April 13, 2014 8:29 PM

Unlike the idiot savants of NSA, the skips get some education with their training. So they know enough to worry about the consequences of covert network sabotage for international comity and rule of law. Here they’re thinking about it, partly in typical military lawfare terms of What-can-we-get-away-with? but partly in principled terms alien to the US government.

http://www.au.af.mil/au/awc/awcgate/ndu/iwil/iwilchapter2.htm

By contrast, DoD scrapes the bottom of the barrel for its lawyers and indoctrinates them to think that jus cogens is red tape and that the sanctified absolute rule of law A/RES/36/103 is bullshit.

yesme April 13, 2014 9:38 PM

@ EXTINCTDINOSAUR

“Unlike the idiot savants of NSA…”

Do you think the NSA hires idiot savants?

You know, lots of programmers have some sort of autism, and engineers too. Isaac Newton himself probably had autism. Bill Gates and Linus Torvalds have autism.

EXTINCTDINOSAURS April 13, 2014 9:57 PM

Yup, and Einstein. Difference is, autistics such as Einstein have ethics and decency. They don’t end up sniffing panties for NSA.

Benni April 13, 2014 10:21 PM

The Chinese, or someone with an army of trojans containing Chinese characters, is attacking the space research institute of Germany:

http://www.spiegel.de/netzwelt/web/dlr-mit-trojanern-von-geheimdienst-ausgespaeht-a-964099.html

They seem to infect all operating systems of researchers and admins.

If it is the Chinese, then they lack character, because the Chinese and Germans worked together, with the Germans having an experiment on the Chinese spacecraft Shenzhou 8.

One problem with hacking attacks from China may be that China simply has many people. Therefore there are many who may hack privately, and a Chinese hacker who hacks Xi Jinping almost surely gets a death sentence. So Chinese hackers are more likely to hack into foreign computers.

If the Chinese government or the Chinese military is behind this, then shame on them. Copying designs of the Ariane 5 is not good behavior.

But it could also be the NSA, who saw the German cooperation with the Chinese as “suspicious” and now wants its full take of the German space centre, with Chinese characters in trojans as a disguise. We do not know yet.

April 13, 2014 10:41 PM

@benni Right, a German-Chinese cooperative endeavor, can’t have that: timesofindia.indiatimes.com/world/europe/The-new-silk-road-A-rail-link-from-Chinas-factories-to-heart-of-Europe/articleshow/32913440.cms Peaceful development threatens the US vital interest of great-power confrontation, doesn’t it?

Clive Robinson April 14, 2014 1:28 AM

@ Benni,

With respect to “China-Germany” and hacking for rocket technology.

I was chatting to a South Korean I know about this, and they say that it’s just as likely to be the North Koreans.

Apparently the NKs are sniffing around all rocket technology as well as “drone tech”. This came out in a very recent spat between the North and South. The SK premier has been in Europe and was discussing concerns about NK nuke power stations, the downing of three drones in SK, and a possible SK-NK reunification in the future.

The NKs said “The South will pay dearly for those words”; apparently about the only thing to come out of SK-NK talks earlier this year was a “do not slander” agreement that the NKs consider the SKs to have breached. Prior to that agreement the SKs had repeatedly claimed the NKs are not just hacking SK but are carrying out various types of “electronic warfare”, including interfering with GPS and aircraft navigation systems (i.e. what some suspect might have happened to MH370).

Mike the goat April 14, 2014 4:37 AM

Clive: glad to hear it! 🙂 I agree with you wholeheartedly. As for the heartbeat extension… I won’t ask any questions – it is OpenSSL so nothing surprises me. To me the whole damn thing reeks of subversion but this may be normalcy bias since we are so used to everything being revealed as NSA.

This is bad bad bad for the Internet. The fact that we are even this suspicious makes me angry. We should be able to send our data in the clear knowing that the constitution will protect us from unwarranted surveillance. Unfortunately these days it is a “catch it all and sort it out later” philosophy. I would hate to think about the amount of data and the storage issues they face.

As a protest I PGP my email and chide those who don’t follow this simple rule. Hell, I even talk to my family about their pet cat in PGP’d email. Same deal: we use ZRTP for our VoIP. But I feel like this really is only a protest and doesn’t actually solve anything…

Pretend the crypto is good, the program does all the right things, and your machine or OS is not already owned… pretend that the govt agencies can’t decrypt a thing. You may have saved yourself from eavesdropping (and obviously drawn attention to yourself if you are not dressing it up as other traffic or tunneling it), but what of all those who aren’t as technically competent?

The solution is not to make encryption “easier” but to ensure that no govt can do this to its people. There should be people in the streets protesting as we speak but there is barely a whisper. The NSA (and their UK and AU counterparts) are criminal organizations operating outside international and local laws.. But nobody seems to care.

hats_on_spam April 14, 2014 11:33 AM

@Clive,

Ignore all that story timestamp stuff. You don’t understand how producing these stories works.

The writer spends more time vetting the story with the media property’s lawyers and editorial, and the subject of the story itself than she/he does writing the story.

Some people representing the NSA in that story know full well the story will appear, most of the facts in the story, and probably the day it will appear at least days in advance of it being published. This is how the corporate media factory works.

tom April 14, 2014 1:21 PM

New details added today. Stuff in [brackets] I added for clarity.

Trove of Software Flaws Used by U.S. Spies at Risk
By Michael Riley April 14, 2014
http://www.businessweek.com/news/2014-04-14/president-s-security-flaw-guidance-seen-as-hard-to-implement#p2

“Two people familiar with the matter said that [NSA] was aware of the flaw and had used it as part of the intelligence gathering toolkit, as reported by Bloomberg News last week…

The NSA has more than one way to circumvent the security of SSL and OpenSSL, a free version of the protocol, according to new information [!!!] provided by the two people, who asked not to be identified because they were not authorized to speak about it.

One work-around involves not defeating the SSL software itself but breaking into a different system on the targeted computer on which [SSL] software depends, according to one of the people. While disclosing that method might increase computer security generally, the NSA might consider that a hacking technique instead of an SSL vulnerability.

NSA spokeswoman Vines declined to comment [further] on NSA’s intelligence-gathering methods [after being exposed like Caitlin Hayden for last week’s lies — they are not briefed on these matters to begin with, only handed a statement to read].

The matter is further complicated because a bug like Heartbleed has to be turned into a specific exploit, a process that can branch out quickly, creating a class of vulnerabilities rather than just a single one. Small differences in the way a platform like OpenSSL is exploited could lead to differing conclusions about whether the exploits are the same.

“Maybe it’s not Heartbleed, maybe it’s what they call alpha green [more likely ALPHAGREEN], and alpha green is something that sends a packet to OpenSSL and creates an information leak,” said Syversen [Jason Syversen; twitter JSyversen]. “It’s going to be challenging to conclude whether it’s the exact same technique or not.”

Implementing the new guidelines — described by the White House as reinvigorating an existing process for determining when zero days should be disclosed — will require institutional barriers to be swept away, said Jason Healey, director of the cyber statecraft initiative at the Atlantic Council in Washington.

TAO, for example, is not required to share all the exploits it uses, even with other units in the NSA, according to two people familiar with the procedures…. That includes the NSA Threat Operations Center, which is responsible for protecting government and military computers….

The White House discussion about the government’s policy for its arsenal of zero days represents a major step forward despite shortcomings in the policy itself, said Christopher Soghoian, principal technologist with the American Civil Liberties Union… “The policy has a loophole so big that you could drive a truck through it,” Soghoian said.

Additionally, it’s unclear whether the agency will apply the new guidance only to newly discovered vulnerabilities or whether it will also include the existing stockpile, which represents millions [billions] of dollars of research and development, the Atlantic Council’s Healey said. “I could see them grandfathering all of that in,” he said.

If those vulnerabilities are disclosed, it will be discreetly, through direct contacts with software and hardware vendors, Healey said.

The only way to detect that may be through a sudden uptick in software patches from major vendors who are suddenly fixing flaws only known previously by the NSA, he said.”

yesme April 14, 2014 2:54 PM

Poul-Henning Kamp [1] wrote an essay about OpenSSL [2].

Here is a quote:

And that is not the first nor will it be the last serious bug in OpenSSL, and, therefore, OpenSSL must die, for it will never get any better.

We need a well-designed API, as simple as possible to make it hard for people to use it incorrectly. And we need multiple independent quality implementations of that API, so that if one turns out to be crap, people can switch to a better one in a matter of hours.

It makes sense to me.

[1] http://video.fosdem.org/2014/Janson/Sunday/NSA_operation_ORCHESTRA_Annual_Status_Report.webm
[2] http://queue.acm.org/detail.cfm?id=2602816

tom April 14, 2014 3:05 PM

Snowden’s job at Dell’s site at the US Air Force command in Japan was to poke around characterizing systems facing the Asian internet, both theirs and ours.

The Asian vulnerability db entries went to TAO which subsequently went on to pwn some 131,000 Chinese computers (as of Apr 13 press reports), while US military and diplomatic (but not US public) were quietly queued for patching.

Snowden loved vulns and found lots on his own time starting with one in Acrobat at his CIA post in Geneva.

Despite its great prestige within NSA, Snowden was later to decline a position with TAO at Ft Meade. It’s been wrongly stated that he had already started his document stash and feared detection.

A half-truth: he had indeed copied stuff over at Dell but had just passed a full poly to get on with Booz, per Gen. Alexander public statements.

While Snowden would know which version numbers of SSL had vulns, he would not necessarily know what this meant specifically in terms of breach mechanism. He might however know the lesser-classified codewords for individual breaches as their applicability could require complicated combinatorics of routers, networks, operating system, anti-virus and security layers.

However a lot of stuff at TAO would be classified TS//SCI//ECI — we have seen only 2-3 ECI mentions in 1,500-odd pages of releases. So not much was accessible. It is possible that none of the docs copied over mention ALPHAGREEN or whatever they call their variants of this OpenSSL one.

Even if Snowden knows what it is called and how it works, if it can’t be run thru mainstream journalism that describes NSA internal documentation, he’s not going to leak it because of the increased legal exposure.

It's heartening that a couple of new sources have stepped forward to speak with Businessweek. I wouldn't necessarily assume rogue actors or the grumpily retired here — TOC is ticked off. They have the defense mandate but haven't been allowed to fulfill it.

In summary, these new sources are spokespeople for TOC, not wildcard leakers. They’re not risk-takers, TOC has their back from knowing where so many TAO bodies are buried. And the reporter has assurances too.

The Threat Operations Center honchos see from the latest round of lies that nothing is going to change on the consumer, govt contractor, general govt, or corporate defense side. Despite massive and ongoing theft of all and sundry. So bureaucratic turf war, they stick the knife in and give it a couple of preliminary twists.

KnottWhittingley April 14, 2014 4:07 PM

Tom, thanks for the heads-up on Riley’s elaborations.

I searched the ACLU’s database of the Snowden docs released so far, and didn’t find anything about alpha green or alphagreen.

KnottWhittingley April 14, 2014 4:24 PM

The Guardian and the NY Times just got the Pulitzer for Public Service, for their work on the NSA leaks. Individual reporters were not named.

They didn’t give an award for Feature Writing this year—the first time in 10 years—which usually means that the committee couldn’t agree on who to give it to.

I wonder if that's because half the committee insisted it had to go to the heroes Greenwald and Poitras, and the other half said no way can we reward those traitors/accomplices. Maybe they split the difference and honored the papers but not the reporters. (I don't know if that makes sense, though; it seems to non-expert me that they should have gotten the award for Investigative Journalism, not Feature Writing. I can see legit disagreement about either.)

I hope the Guardian sends Greenwald to accept the award for the paper at the awards dinner. (Not that Rusbridger doesn’t deserve it, too.)

KnottWhittingley April 14, 2014 5:03 PM

OOPS, I meant the Washington Post—not the New York Times—got the Pulitzer along with the Guardian. (The one where Barton Gellman leads the NSA reporting.)

Benni April 14, 2014 5:38 PM

@Yesme, having looked a bit at the OpenSSL code, I somewhat agree with the link you posted. http://queue.acm.org/detail.cfm?id=2602816

In particular, I like this snippet of OpenSSL code:

/*
* The aim of right-shifting md_size is so that the compiler
* doesn't figure out that it can remove div_spoiler as that
* would require it to prove that md_size is always even,
* which I hope is beyond it.
*/
div_spoiler = md_size >> 1;
div_spoiler <<= (sizeof(div_spoiler)-1)*8;
rotate_offset = (div_spoiler + mac_start - scan_start) % md_size;

One can only wonder how many backdoors the BND has created in this software.

The Dual EC generator was also implemented by OpenSSL. But somehow, a bug prevented it from being run in real situations. It could only be used in test situations to generate a single number or so.

German computer scientists have wondered since then whether this bug was deliberately introduced. Perhaps the BND was thinking: "Well, Dual EC is this NSA trick. We want to have an edge over our American colleagues, to be able to listen when they have it more difficult. So let's go and disable Dual EC and plant something else instead…"

Nick P April 14, 2014 9:19 PM

OpenSSL leaks, Target hacks, NSA surveillance… We need a way to evaluate system security that works. What would it look like?

I finally put my proprietary development framework on this blog for free a year ago in a reply to another commenter. There was hardly any demand for effective, ground-up security that I’ve specialized in so why not be altruistic. Link below:

http://www.schneier.com/blog/archives/2013/01/essay_on_fbi-ma.html#c1102869

Then the Snowden leaks happened, and I was glad to see my framework addressed about every NSA attack vector, including those in the TAO catalog. The exception was physical implants, although I'm always clear that devices enemies have had possession of can't be trusted. As for the source of my framework: what a TLA does becomes obvious if a person looks at previous failures, successful approaches, the nature of systems, each layer/component in a system, and the risky interactions between them. Thinking along all these lines finds many more flaws and makes beating the likes of NSA more realistic.

In any case, security certification isn't like most others. There are numerous standards, best practices, policies, guides, etc. There's some stuff common to each, but many differences as well. You could say it's fragmented, inconsistent, and redundant. There are also political and economic motives at play that can undermine the effectiveness of standards in both private and government evaluations. And there are numerous evaluations: Common Criteria, DOD C&A, CIA's type, FISMA, and so on. So, if one is relying on standards for security, they aren't in the best situation. Hell, most people making standards don't even know how to secure a machine properly, so it's a no go from the beginning. If you doubt it, look at the technical requirements of any security evaluation and compare them to my requirements. You'll see that they're missing anywhere from a few to many, yet every requirement I mention was a source of attack in real cases. You'll also see that even NSA's official solutions (eg NetTop) are weak in many areas despite being promoted as secure. (And the sad thing is NSA knows that, as I got a nice chunk of my list from their old Orange Book requirements…)

So, what can we do? Well, let's start with what the Common Criteria (the main standard) is doing. Current CC practice is to create Protection Profiles (PPs) for a given type of system (eg firewall), device (eg printer), or software (eg OS). A PP covers the security threats, protections needed, and minimal level of assurance. CC uses so-called Evaluation Assurance Levels (EALs) 1-7 to rate the strength of evaluated security on a scale from lowest to highest. EAL1-3 is garbage, EAL4-EAL4+ is commercial best practices, EAL5-EAL5+ is medium, and EAL6-7 is highly secure stuff.

An independent, private lab evaluates the product against the requirements, with lots of paperwork. These evaluations are ridiculously expensive. If it's EAL5-7, the local intelligence service gets involved, often gets the code itself, and pentests the product plenty before allowing a certification. The vast majority of products are under EAL4, much big name stuff maxes out at EAL4 (while getting hacked plenty), smartcards dominate EAL5-6 evaluations, and a few exceptional general-purpose systems are certified at EAL5-7. So, by the government's own standards, almost everything out there is insecure as can be, but at least it's certified to be. 😉

I’ll briefly mention the Central Intelligence Agency’s classification. They improve on the other scheme by simplifying it with an additional three-part rating: confidentiality, integrity, and availability (hey it’s C.I.A.!). Each one gets a rating from Low to High. So, a prospective buyer that knows CIA evaluated it can look at the C.I.A. ratings (pun intended) to have a quick idea of how good protections are. However, CC is nice in that it lists each threat and countermeasure in the system, so more technical people can get a thorough idea of its strengths and weaknesses. I like having both types of measurement, to be honest.

Where to go from here is pretty open to discussion. My proposal is to do evaluation and security description the same way I do my security framework: layer by layer, component by component, interaction by interaction, and with environmental assumptions stated. Mainly because that approach works & most others didn't. 😉 An EAL6+ OS like INTEGRITY-178B (an old favorite) running on a regular board won't be secure: firmware, peripherals, etc might be attacked. A more honest evaluation showing each aspect of the system would list (for example): processor (low), BIOS (low), peripheral code (low), kernel (high), key drivers (high), disk/networking middleware (unevaluated), virtualization system (unevaluated), and so on.

The features of each would also be specified, with it being possible that level of rigor is stated on a per feature basis. This allows the vendor to incrementally develop both the product and assurance of each component, while communicating clearly the strengths or risks to the user. The person interpreting it would be a technical professional, who would translate it to layperson speak for the decision-makers. The features, threats, and so on mentioned would also be adjusted in a reasonable period of time when new threats appear. Protection profiles would still be the norm and would be adjusted to the new evaluation process.

So, that’s my take on a thorough system evaluation. Standards in use right now aren’t good enough as they put a stamp on products that are easy to hack. The process requirements at high assurance standards, from rigorous design to qualified pen testing, do seem to accomplish what they intend to accomplish. However, they don’t cover all threats in a product’s Trusted Computing Base, the sum of what can affect its security. This must be changed as attackers always hit the weakest link. If your OS is HighSecurity & your network card is Low, your system is Low until proven otherwise. Period.

This means almost everything on the market, proprietary or open, is insecure against a motivated attacker. I’m almost for going further to create two categories at system level: Low and High. If it’s not proven High security, then it’s Low by default. The combination of that default and a weakest link evaluation approach should force vendors wanting to claim High security to invest heavily in bottom-up secure architectures for their product. Some have done this in practice so I know it can be done. As threats in my framework are added, the number of attack vectors and level of rigor of such bottom-up approaches would only increase.

Bottom line: Our evaluation process needs to be streamlined and the incentives changed to force anything declaring itself to have strong security to actually have strong security. It also needs an abstract rating for a quick-glance evaluation along with a thorough treatment of strengths and weaknesses for a technical evaluation. I'd also recommend eliminating unnecessary costs and any incentives to loophole through requirements. Having mutually suspicious parties evaluate the security claims, while signing a hash of the resulting system image, might also reduce subversion concerns.

Evaluators should also work hand in hand with developers so they continuously have a clear understanding of the system with feedback going both ways when it’s needed most. The mechanisms employed for High security should benefit commercial sector as easily as government to maximize potential ROI for those developing robust solutions. Finally, governments involved in CC should each make a rule that only High security products are purchased for anything security-critical. That rule might give us half a dozen in each category from as many vendors within a year or two*. 😉

  • Note: This is exactly what happened when they did this mandate in Orange Book days. Putting a profit motive on High security is a proven way to make it happen. The solutions developed were also diverse and extendable, too.

tom April 14, 2014 10:30 PM

Hmmm, two Google engineers posted a code fix for Heartbleed at Bugzilla on March 21. It had already been named Heartbleed, establishing Google's collaboration with Codenomicon by that date. See the code snippet at the bottom.

Apologies for the long internet scrapes below but one key related link has gone dead already without a cache or wayback hit so I’m archiving it here …

=/=/ the gospel according to St. Google =/=/

http://www.nationaljournal.com/tech/google-knew-about-heartbleed-and-didn-t-tell-the-government-20140414

Google knew about a critical flaw in Internet security, but it didn’t alert anyone in the government.

Neel Mehta, a Google engineer, first discovered “Heartbleed”—a bug that undermines the widely used encryption technology OpenSSL—some time in March. A team at the Finnish security firm Codenomicon discovered the flaw around the same time. Google was able to patch most of its services—such as email, search, and YouTube—before the companies publicized the bug on April 7.

The researchers also notified a handful of other companies about the bug before going public. The security firm CloudFlare, for example, said it fixed the flaw on March 31.

But the White House said Friday that no one in the federal government knew about the problem until April. The administration made the statement to deny an earlier Bloomberg report that the National Security Agency had been exploiting Heartbleed for years.

“Reports that NSA or any other part of the government were aware of the so-called Heartbleed vulnerability before April 2014 are wrong. The Federal government was not aware of the recently identified vulnerability in OpenSSL until it was made public in a private sector cybersecurity report,” Caitlin Hayden, a White House spokeswoman, said in a statement.

“If the federal government, including the intelligence community, had discovered this vulnerability prior to last week, it would have been disclosed to the community responsible for OpenSSL.”

Hayden emailed to clarify that the “private sector cybersecurity report” refers to the April 7 announcement.

Asked whether Google discussed Heartbleed with the government, a company spokeswoman said only that the “security of our users’ information is a top priority” and that Google users do not need to change their passwords.

Companies often wait to publicize a security flaw so they can have time to patch their own services. But keeping the bug secret from the U.S. government may have left federal systems vulnerable to hackers. The IRS said it’s not aware of any vulnerabilities in its system, but other agencies that use OpenSSL could have been leaking private information to hackers.

The government encourages companies to report cybersecurity issues to the U.S. Computer Emergency Readiness Team, which is housed in the Homeland Security Department. US-CERT has a 24-hour operations center that responds to security threats and vulnerabilities.

Christopher Soghoian, the principal technologist for the American Civil Liberties Union, said the U.S. government only has itself to blame if tech companies don’t trust it to handle sensitive security information.

He said that because government agencies often share information with each other, there’s no way for a company to be sure the NSA won’t get information shared with another agency and use it to hack into private communications.

“I suspect that over the past eight months, many companies have taken a real hard look at their existing policies about tipping off the U.S. government,” he said. “That’s the price you pay when you’re acting like an out-of-control offensive adversary.”

/=/=/=/ the gospel according to St.Codenomicon =/=/=/=/=

http://readwrite.com/2014/04/13/heartbleed-security-codenomicon-discovery#awesm=~oBsWzMJB7dRSKm

I spoke with Codenomicon CEO David Chartier, who led the Finnish team that named and outed Heartbleed, to find out more about how his team discovered it, and how deep those vulnerabilities could go.

(I’ve requested an interview with Mehta via Google. Update: The company declined my request.)

Codenomicon first discovered Heartbleed—originally known by the infinitely less catchy name “CVE-2014-0160”—during a routine test of its software. In effect, the researchers pretended to be outside hackers and attacked the firm itself to test it.

“We developed a product called Safeguard, which automatically tests things like encryption and authentication,” Chartier said. “We started testing the product on our own infrastructure, which uses OpenSSL. And that’s how we found the bug.”

The engineers found they could burrow in despite the cryptographic security layer, and were shocked at how much was up for grabs. They could access memory and encryption certificates, and pull user data and other records. “This is when we understood that this is a super significant bug,” Chartier said.

The revelation was startling, not only because of the access this hole could allow, but because of its insidious nature, Chartier said. “On top of that, we couldn’t find any forensic trail that we were taking this data.” The hack was completely untraceable.

But how did something this egregious and widespread go on undetected for two years? The error is buried in the code. The only reason Chartier’s team found the glitch is because Codenomicon uses a rigorous testing process using a very large number of test cases to find weaknesses, just like hardcore hackers do, Chartier explained.

“The vulnerabilities you find after many tests are often more interesting than the ones you find right away,” he said. “When you find one that’s difficult, it’s more interesting [to hackers] because they can write an exploit, and it will take more time to be found.”

The odds of finding the flaw were slight, yet Google’s Mehta discovered it practically simultaneously, during a routine security check in March. Chartier chalks it up to happenstance. “Google’s one of the leading companies in the world, and it’s constantly testing for vulnerabilities,” he said. The company has been known to take security testing very seriously, so much so that it even offers a bounty for exploits on projects like Chrome. This allows it to find flaws and fix them before hackers can take advantage of them.

But not every company takes security that seriously.

A Fail To Remember

Codenomicon, being a Finnish company, alerted the Finnish National Security Cyber Center of its findings. Commonly referred to as “CERT,” the group urged the OpenSSL Project to provide an update and release it to the public. This was just days after Mehta notified OpenSSL on April 1.

The news wasn’t broadcast after the first discovery, as OpenSSL wanted “to give time for proper processes” to let vendors patch the hole before making it public. The plan was to make an announcement on April 9. But when two independent research teams coincidentally found the error, it suggested a greater risk, which prompted OpenSSL to accelerate the announcement to April 7.

The report blazed across both tech and mainstream media headlines. Chartier has been impressed with how online communities have disseminated the Heartbleed information. “We’re better off today than we were a week ago, because of getting the word out there,” he said. “It’s making the Internet safer and more secure.”

Unfortunately, the Web is not where this problem ends. Other networks also need to apply the software update in both server and client devices. This includes gadgets like phones, computers and other communication devices. It also includes numerous other technologies in the broader world, particularly as it relates to the Internet of Things.

Because Heartbleed affects OpenSSL, which is widely adopted, it can affect an extensive range of categories, including connected homes, citywide transportation, emergency services, power grids and other utilities—pretty much any large scale, connected systems. But locking all of them down can be difficult. …

Chartier thinks it could take up to a year or two before all or most of the old versions of OpenSSL out there get updated.

=-=-=-= the gospel according to Mark J Cox of OpenSSL, Glasgow -=-=-=

https://plus.google.com/+MarkJCox/posts/TmCbp3BhJma

We’ve had more than a few press enquiries at OpenSSL about the timeline of the CVE-2014-0160 (heartbleed) issue. Here’s the OpenSSL view of the timeline:

April 01 – Google contact OpenSSL security team with details of the issue and a patch. We allocate CVE-2014-0160 to this issue. Original plan was to push that week, but it was postponed until April 09 to give time for proper processes. Google tell us they’ve notified some infrastructure providers under embargo, we don’t have the names or dates for these.

(Due to my unfortunately-timed holiday this week I used one of my Red Hat team to help co-ordinate this issue on behalf of OpenSSL.)

April 07 (0556 UTC) OpenSSL (via me) notify Red Hat. Red Hat internal bug created. This is the time Red Hat was officially notified under embargo, engineers and the Red Hat Security Response Team started working on the issue.

April 07 (0610 UTC) Red Hat contact the private distros list (http://oss-security.openwall.org/wiki/mailing-lists/distros) and let them know an OpenSSL issue is due on Wednesday (no details of the issue are given: just affected versions). Vendors are told to contact Red Hat for the full advisory under embargo.

April 07 – OpenSSL (via Red Hat) give details of the issue, advisory, and patch to the OS vendors that replied — under embargo, telling them the issue will be public on April 09. This was SuSE (0815 UTC), Debian (0816 UTC), FreeBSD (0849 UTC), AltLinux (1000 UTC). Some other OS vendors replied but we did not give details in time before the issue was public, these included Ubuntu (asked at 1130 UTC), Gentoo (1414 UTC), Chromium (1615 UTC).

April 07 (1519 UTC) – CERT-FI contact me and Ben Laurie by encrypted email with details of the same issue found by Codenomicon. This was forwarded to the OpenSSL core team members (1611 UTC)

April 07 – The coincidence of the two finds of the same issue at the same time increases the risk while this issue remained unpatched. OpenSSL therefore released updated packages that day.

April 07 (1725 UTC) OpenSSL updates, web pages including vulndb, and security advisory (1839 UTC) gets made public.

Mark J Cox
Apr 12, 2014

Akamai note on their blog that they were given advance notice of this issue by the OpenSSL team. This is incorrect. They were probably notified directly by one of the vulnerability finders.

Mark J Cox
Yesterday 1:08 PM

I’ve filled in some more blanks, made it clear who OpenSSL notified in advance. There are still a few more times to put in here which I’ll do this week.

Mark J Cox
Yesterday 11:01 PM
I’ve added the times that Red Hat on behalf of OpenSSL notified various distro vendors

Mani Gandham
4:49 PM

Hi Mark,

This bugzilla report shows a patch for RedHat on March 21, 2014 by Bodo Moeller and Adam Langley: https://bugzilla.redhat.com/attachment.cgi?id=883475

Does this mean RedHat already knew of this issue before April 7? It seems their patch was included directly into OpenSSL right?

So to be clear, OpenSSL notified only the following organisations prior to the public release of the issue: Red Hat, SuSE, Debian, FreeBSD, AltLinux.

[[no clarification from Mark as of midnight 14 Apr 14]]

/=/=/= the fix from Moeller and Langley dted Fri 21 Mar 14 /=/=/=

https://bugzilla.redhat.com/attachment.cgi?id=883475

commit 5e5f9bb1fe5587ccd26f108c3e98811546ccaef6
Author: Bodo Moeller and Adam Langley
Date: Fri Mar 21 10:23:12 2014 -0400

heartbeat_fix

Add missing bounds checks for Heartbeat messages.

diff --git a/ssl/d1_both.c b/ssl/d1_both.c
index 7a5596a..2e8cf68 100644
--- a/ssl/d1_both.c
+++ b/ssl/d1_both.c
@@ -1459,26 +1459,36 @@ dtls1_process_heartbeat(SSL *s)
 	unsigned int payload;
 	unsigned int padding = 16; /* Use minimum padding */
 
+	if (s->msg_callback)
+		s->msg_callback(0, s->version, TLS1_RT_HEARTBEAT,
+			&s->s3->rrec.data[0], s->s3->rrec.length,
+			s, s->msg_callback_arg);
+
 	/* Read type and payload length first */
+	if (1 + 2 + 16 > s->s3->rrec.length)
+		return 0; /* silently discard */
 	hbtype = *p++;
 	n2s(p, payload);
+	if (1 + 2 + payload + 16 > s->s3->rrec.length)
+		return 0; /* silently discard per RFC 6520 sec. 4 */
 	pl = p;
 
-	if (s->msg_callback)
-		s->msg_callback(0, s->version, TLS1_RT_HEARTBEAT,
-			&s->s3->rrec.data[0], s->s3->rrec.length,
-			s, s->msg_callback_arg);
-
 	if (hbtype == TLS1_HB_REQUEST)
 		{
 		unsigned char *buffer, *bp;
+		unsigned int write_length = 1 /* heartbeat
etc etc

Clive Robinson April 14, 2014 10:59 PM

@ Nick P,

One of the big problems in infosec is the path from not secure to secure, and what you mean by secure when you say it's secure.

Further, as a consumer I have to view it by its function and placement. It is possible to have a low security device in a high security environment and not affect the security rating of the environment.

Take for instance a simple network hub: you are primarily interested in its advertised usage of taking network packets from one port and delivering them to one or more other ports via some criteria that is fixed or semi-mutable (configurable). In this primary interest you want to be able to trust it to do what it's supposed to do, and further to trust the configuration not to change, or be changed, at any time other than one you as the owner/operator choose.

Thus it has two sets of functions: those of actually moving the network traffic and those of configuration management. If the configuration can be changed from any of its ports you will probably want a high security rating on the configuration management at all levels. However if configuration management can only be done from a dedicated port then you may not care how secure the actual configuration management is; not having any authentication or auditing may be quite OK for the use in question.

The problem is nearly all security ratings cannot deal with this simple situation, which usually means that the device has to be rated on every port at the highest security rating of the device. This causes all sorts of design issues for the device manufacturer, which almost always results in a more vulnerable design.

This is down to a couple of things. Firstly the manufacturer will almost certainly go for "maximum usability", thus the design will be such that configuration management can be done from any port. This results in a much, much larger code base with considerably more complexity, thus vulnerability. And as we know the cost of testing is related to complexity, which rises as a significant power law of the number of components present.

In effect what should have been a 40USD four or eight port hub is going to be 1000USD+, and thus needs twenty-four or more ports to justify the price… this cost has several knock-on effects…

The main one being it prices security out of the majority of networks and significantly compromises the design of the few that can justify the cost.

What we need is some way to significantly reduce the cost of security such that it's only a small percentage increase in cost, which makes buying it a "no brainer". And I cannot see any of our current evaluation / certification schemes doing anything other than drive the cost up at the speed and trajectory you would expect of an anti-missile missile…

Figureitout April 14, 2014 11:35 PM

Nick P RE: linked post
However, if you get any of the rest wrong the code quality will simply be something the attacker enjoys observing as he toys with his new machine. 😉
–This is the biggest reason why I leave pretty much all my machines open right now; I as an individual can’t close all the covert channels I don’t know about and I won’t give attackers info they want. That and some others regarding “physical intrusions”; which means that attackers may be observing or listening to your physical movements so even if you never plug in the infected computer they watch your recovery methods. It’s also why I made the observation that one of the hardest things about recovering from an attack (if you even know the extent) is if the infection persists then an attacker is observing your security methods and getting the info they want. How do you know the info being displayed is true? Do you download a LiveCD recovery disk on your infected router w/ ports open and what if a malware is so bad you just have to throw away the computer b/c it’s not worth the time to recover it…

So I’ve really been thinking hard when a good time to make a jump finally and “cut the lights out” and I invest in a few new computers and start making annoying opsec SOP again. Still have major issues getting a secure OS, don’t want from internet and I can’t write a custom functional one by myself. It’s hard, and I know there’s others out there thinking about this…

Few nitpicks (not to piss you off, just to maybe improve or clarify):
–You deride standards (they deserve it b/c they're neglected), but what if the EAL standards are likewise not up to par? Are they "the best"? Where are potential holes?
–I didn’t see any design really, just some nice brainstorming and extensive thought-mapping and some other projects. No real “fully-put-together” design.

OT
–Thought of posting this here, and since you like BASIC so much maybe you’d enjoy a little.

http://www.instructables.com/id/Arduino-BASIC-Shield/?ALLSTEPS

Not saying the Arduino platform is a secure one to develop a secure computer at all. Just can better visualize (and practice designing) perhaps using Clive’s idea of mitigating insecure chips/architectures by creative/customized designs on top. Also it could fit in a small compartment on you at all times and could be powered w/ 9V battery and use tiny screen.

OT
–Truecrypt audit is making progress, no major errors were found. Means the errors weren’t found, right? 🙁

https://isecpartners.github.io/news/2014/04/14/iSEC-Completes-Truecrypt-Audit.html

https://opencryptoaudit.org/reports/iSec_Final_Open_Crypto_Audit_Project_TrueCrypt_Security_Assessment.pdf

Staatssicherheit April 15, 2014 12:06 AM

The TrueCrypt audit hasn’t completed, they are paying for cryptanalysis after this. Meanwhile OpenBSD has decided to fix OpenSSL themselves by gutting and rewriting it. http://www.openbsd.org/cgi-bin/cvsweb/src/lib/libssl/src/ssl/

“So the OpenSSL codebase does “get the time, add it as a random seed”
in a bunch of places inside the TLS engine, to try to keep entropy high.
I wonder if their motto is “If you can’t solve a problem, at least try
to do it badly” http://www.openbsd.org/cgi-bin/cvsweb/src/lib/libssl/src/ssl/s2_srvr.c?rev=1.19

Clive Robinson April 15, 2014 1:06 AM

@ Figureitout,

A quick read through the BASIC shield page indicates it's actually got a "serial out" console which then gets converted into a video signal by the other board.

The thought occurs to me: why do the conversion to standard video signals at all if you want to go portable?

Why not use the serial out lines to drive a serial in LCD display similar to the fairly standard 4×40 AlphaNum ones?

As for BASIC itself, it's very easy to write the interpreter "except" in one area: "memory" and the attendant "garbage collection". Ignoring this difficulty, it's very easy to analyse and thus code reasonably securely (thus moving any security issues up to the BASIC scripts people write for it). Thus you might want to leave out PEEK and POKE and one or two others. However to reduce resource usage issues you do want to be able to bit-bash certain I/O addresses, which means explicit naming and the inclusion of 'bit' set/clear and AND, OR and XOR on 'char', and some way to pack/unpack them from 'ints'.

I must admit that TinyBasic would not be my first consideration for a minimalist solution. Have a look around for the Obfuscated C Contest entrant that did a whole BASIC interpreter in a few lines of C. If nothing else the process of unpacking it into a more understandable form will tell you a lot about C and your own abilities to "code review" what you might regard as "hostile coding" 😉

T April 15, 2014 1:46 AM

@Staatssicherheit
“So the OpenSSL codebase does “get the time, add it as a random seed”
in a bunch of places inside the TLS engine, to try to keep entropy high”
It might work out: if the randomness is high it will be hard to crack, and if it’s low that will just make it easier; they need to test it. If the entropy is 55%, adding something that is 0% order will make something like 25.5% order, which, if the order is a long run of 1s or 0s, could be worked out from the statistics; but if the entropy is high, say 90%, then adding 0% makes it 45% out of 50%, which should be close to impossible.
They need to work out what they are trying to add to. There is a poster mocking RSA, but at a quick glance that is the algorithm I would choose for randomness, as there is no way to know… time is against us, and the algorithm cuts that down to a timeframe. Without taking into account what they leave out, basing the theory on OpenSSL having high entropy, so that adding order makes it harder, could be correct or it could be a mistake. (What are they working from?)

yesme April 15, 2014 2:36 AM

@tom

Why post a 10 page story? Nobody will read that.

@Benni

“One can only wonder how many backdoors the BND has created in this software.”

OpenSSL didn’t do much to give me the impression they knew what they were doing.

Backdoors? Yes, it’s possible. It wouldn’t surprise me. The only thing is that when this news gets out, they have a serious problem. Because if data, including passwords and keys, could have been stolen over a long period of time, there could be serious financial issues. Then it’s suing time. Who is going to pay the bill for a deliberate backdoor that costs the world (you, me, anyone) billions of Euros?

AFAIK, Euros are still way more important than “National Security”, because when your bank account has been robbed you have an instant problem.

But… we don’t know.

That said, the OpenBSD team has started to refactor OpenSSL.

These guys do know what they are doing!

And they addressed my biggest pain points right on.

KnottWhittingley April 15, 2014 7:33 AM

EFF has obtained new FOIA documents showing that the FBI is building a face recognition database that will have over 50 million faceprints in it by next year, including millions of people with no connection to crime. I wonder how many it will have the year after that, and when they’ll start scouring Facebook, etc., for pictures of the rest of us.

http://arstechnica.com/tech-policy/2014/04/fbi-to-have-52-million-photos-in-its-ngi-face-recognition-database-by-next-year/

KnottWhittingley April 15, 2014 7:48 AM

That article mentions that the State Department’s face database has “over 244 million” records, and that DoD shares info with FBI:

The company responsible for building NGI’s facial recognition component—MorphoTrust (formerly L-1 Identity Solutions)—is also the company that has built the face recognition systems used by approximately 35 state DMVs and many commercial businesses. MorphoTrust built and maintains the face recognition systems for the Department of State, which has the “largest facial recognition system deployed in the world” with more than 244 million records, and for the Department of Defense, which shares its records with the FBI.

The FBI failed to release records discussing whether MorphoTrust uses a standard (likely proprietary) algorithm for its face templates. If it does, it is quite possible that the face templates at each of these disparate agencies could be shared across agencies—raising again the issue that the photograph you thought you were taking just to get a passport or driver’s license is then searched every time the government is investigating a crime. The FBI seems to be leaning in this direction: an FBI employee e-mail notes that the “best requirements for sending an image in the FR system” include “obtain[ing] DMV version of photo whenever possible.”

If State and FBI have these things, what does NSA have that secret FISC rulings say they don’t have to admit to when getting FOIA’d?

Figureitout April 15, 2014 9:27 AM

Clive Robinson
–Yeah, I didn’t have an interest in putting BASIC on my PC, nor do I consider it a “secure” design; I’ve already got that on a TI calc. There was this kid, very articulate, whose PC I liked though. He used BBC BASIC.

http://benryves.com/projects/z80computer

I’m planning on CamelForth w/ CP/M, so my initial design isn’t original, just piecing together parts. I may cheat and use a microcontroller for the screen too; I’d really rather not, though. And yes, I struggle w/ some terse C; linked lists, nested loops, malloc, etc. take a long time to trace out exactly (I think…).

tom April 15, 2014 9:46 AM

Nice effort by Ben to put together a timeline. He’s looking for additions and corrections if you have any. bgrubb@fairfaxmedia.com.au

https://www.schneier.com/blog/archives/2014/04/friday_squid_bl_419.html

Heartbleed disclosure timeline: who knew what and when
April 15, 2014 by Ben Grubb

Ever since the “Heartbleed” flaw in encryption protocol OpenSSL was made public on April 7 in the US there have been various questions about who knew what and when.

Fairfax Media has spoken to various people and groups involved and has compiled the below timeline.

If you have further information or corrections – especially information about what occurred prior to March 21 at Google – please email the author: bgrubb@fairfaxmedia.com.au. Click here for his PGP key.

All times are in US Pacific Daylight Time

Friday, March 21 or before: Neel Mehta of Google Security discovers Heartbleed vulnerability.

Friday, March 21 10.23: Bodo Moeller and Adam Langley of Google commit a patch for the flaw (This is according to the timestamp on the patch file Google created and later sent to OpenSSL, which OpenSSL forwarded to Red Hat and others). The patch is then progressively applied to Google services/servers across the globe.

Monday, March 31 or before: Someone tells content distribution network CloudFlare about Heartbleed and they patch against it. CloudFlare later boasts on its blog about how they were able to protect their clients before many others. CloudFlare chief executive officer Matthew Prince would not tell Fairfax how his company found out about the flaw early. “I think the most accurate reporting of events with regard to the disclosure process, to the extent I know them, was written by Danny over at the [Wall Street Journal],” he says. The article says CloudFlare was notified of the bug the week before last and made the recommended fix “after signing a non-disclosure agreement”.

Tuesday, April 1: Google Security notifies OpenSSL about the flaw it has found in OpenSSL, which later becomes known as “Heartbleed”. Mark Cox at OpenSSL says the following on social network Google Plus: “Original plan was to push [a fix] that week, but it was postponed until April 9 to give time for proper processes.” Google tells OpenSSL, according to Cox, that they had “notified some infrastructure providers under embargo”. Cox says OpenSSL does not have the names of providers Google told or the dates they were told. Google declined to tell Fairfax which partners it had told. “We aren’t commenting on when or who was given a heads up,” a Google spokesman said.

Wednesday, April 2 ~23:30 – Finnish IT security testing firm Codenomicon separately discovers the same bug that Neel Mehta of Google found in OpenSSL. A source inside the company gives Fairfax the time it was found as 09:30 EEST April 3, which converts to 23:30 PDT, April 2.

Thursday, April 3 04.30 – Codenomicon notifies the National Cyber Security Centre Finland about its discovery of the OpenSSL bug. Codenomicon tells Fairfax in a statement that they’re not willing to say whether they disclosed the bug to others. “We have strict [non-disclosure agreements] which do not allow us to discuss any customer engagements. Therefore, we do not want to weigh in on the disclosure debate,” a company spokeswoman says. A source inside the company later tells Fairfax: “Our customers were not notified. They first learned about it after OpenSSL went public with the information.”

Friday, April 4 – Content distribution network Akamai patches its servers. They initially say OpenSSL told them about the bug, but the OpenSSL core team denies this in an email interview with Fairfax. Akamai updates its blog after the denial – prompted by Fairfax – and Akamai’s blog now says an individual in the OpenSSL community told them. Akamai’s chief security officer, Andy Ellis, tells Fairfax: “We’ve amended the blog to specific [sic] a member of the community; but we aren’t going to disclose our source.” It’s well known that a number of OpenSSL community members work for companies in the tech sector that could be connected to Akamai.

Friday, April 4 – Rumours begin to swirl in the open source community about a bug in OpenSSL, according to a security person at a Linux distribution Fairfax spoke to. No details were apparent, so it was ignored by most.

Saturday, April 5 15:13 – Codenomicon purchases the Heartbleed.com domain name, where it later publishes information about the security flaw.

Saturday, Apr 5 16:51 – OpenSSL (not public at this point) publishes this (since taken offline) to its Git repository.

Sunday, Apr 6 ~22:56 – Mark Cox of OpenSSL (who also works for Red Hat and was on holiday) notifies Linux distribution Red Hat about the Heartbleed bug and authorises them to share details of the vulnerability on behalf of OpenSSL to other Linux operating system distributions.

Sunday, Apr 6 22.56 – Huzaifa Sidhpurwala (who works for Red Hat) adds a (then private) bug to Red Hat’s bugzilla.

Sunday, April 6 23.10 – Huzaifa Sidhpurwala sends an email about the bug to a private Linux distribution mailing list with no details about Heartbleed but an offer to request them privately under embargo. Sidhpurwala says in the email that the issue would be made public on April 9. Cox of OpenSSL says on Google Plus: “No details of the issue are given: just affected versions [of OpenSSL]. Vendors are told to contact Red Hat for the full advisory under embargo.”

Sunday, April 6 ~23:10 – A number of people on the private mailing list ask Sidhpurwala, who lives in India, for details about the bug. Sidhpurwala gives details of the issue, advisory, and patch to the operating system vendors that replied under embargo. Those who got a response included SuSE (Monday, April 7 at 01:15), Debian (01:16), FreeBSD (01:49) and AltLinux (03:00). “Some other [operating system] vendors replied but [Red Hat] did not give details in time before the issue was public,” Cox said. Sidhpurwala was asleep during the time the other operating system vendors requested details. “Some of them mailed during my night time. I saw these emails the next day, and it was pointless to answer them at that time, since the issue was already public,” Sidhpurwala says. Those who attempted to ask and were left without a response included Ubuntu (asked at 04:30), Gentoo (07:14) and Chromium (09:15), says Cox.

Prior to Monday, April 7 or early April 7 – Facebook gets a heads up, people familiar with matter tell the Wall Street Journal. Facebook say after the disclosure: “We added protections for Facebook’s implementation of OpenSSL before this issue was publicly disclosed, and we’re continuing to monitor the situation closely.”

Monday, April 7 08.19 – The National Cyber Security Centre Finland reports Codenomicon’s OpenSSL “Heartbleed” bug to OpenSSL core team members Ben Laurie (who works for Google) and Mark Cox (Red Hat) via encrypted email.

Monday, April 7 09.11 – The encrypted email is forwarded to the OpenSSL core team members, who then decide, according to Cox, that “the coincidence of the two finds of the same issue at the same time increases the risk while this issue remained unpatched. OpenSSL therefore released updated packages [later] that day.”

Monday, April 7 09:53 – A fix for the OpenSSL Heartbleed bug is committed to OpenSSL’s Git repository (at this point private). Confirmed by Red Hat employee: “At this point it was private.”

Monday, April 7 10:21:29 – A new OpenSSL version is uploaded to OpenSSL’s web server with the filename “openssl-1.0.1g.tgz”.

Monday, April 7 10:27 – OpenSSL publishes a Heartbleed security advisory on its website (website metadata shows the time as 10:27 PDT).

Monday, April 7 10:49 – OpenSSL issues a Heartbleed advisory via its mailing list. It takes time to get around.

Monday, April 7 11:00 – CloudFlare posts a blog entry about the bug.

Monday, April 7 12:23 – CloudFlare tweets about its blog post.

Monday, April 7 12:37 – Google’s Neel Mehta comes out of Twitter hiding to tweet about the OpenSSL flaw.

Monday, April 7 13:13 – Codenomicon tweets they found bug too and link to their Heartbleed.com website.

Monday, April 7 ~13:13 – Most of the world finds out about the issue through heartbleed.com.

Monday, April 7 15:01 – Ubuntu comes out with patch.

Monday, April 7 23.45 – The National Cyber Security Centre Finland issues a security advisory on its website in Finnish.

Monday, April 8 ~00:45 – The National Cyber Security Centre Finland issues a security advisory on its website in English.

Wednesday, April 9 – A Red Hat technical administrator for cloud security, Kurt Seifried, says in a public mailing list that Red Hat and OpenSSL tried to coordinate disclosure. But Seifried says things “blew up” when Codenomicon reported the bug too. “My understanding is that OpenSSL made this public due to additional reports. I suspect it boiled down to ‘Group A found this flaw, reported it, and has a reproducer, and now Group B found the same thing independently and also has a reproducer. Chances are the bad guys do as well so better to let everyone know the barn door is open now rather than wait 2 more days’. But there may be other factors I’m not aware [of],” Seifried says.

Wednesday, April 9 – A Debian developer, Yves-Alexis Perez, says on the same mailing list: “I think we would have managed to handle it properly if the embargo didn’t break.”

Wednesday, April 9 – Facebook and Microsoft donate $US15,000 to Neel Mehta via the Internet Bug Bounty program for finding the OpenSSL bug. Mehta gives the funds to the Freedom of the Press Foundation.

Who knew of Heartbleed prior to release? Google (March 21 or prior), CloudFlare (March 31 or prior), OpenSSL (April 1), Codenomicon (April 2), National Cyber Security Centre Finland (April 3), Akamai (April 4 or earlier) and Facebook (no date given).

Who knew hours before public release? SuSE, Debian, FreeBSD and AltLinux.

Who didn’t know until public release? Many, including Amazon Web Services, Twitter, Yahoo, Ubuntu, Cisco, Juniper, Pinterest, Tumblr, GoDaddy, Flickr, Minecraft, Netflix, Soundcloud, Commonwealth Bank of Australia (main website, not net banking website), CERT Australia website, Instagram, Box, Dropbox, GitHub, IFTTT, OKCupid, Wikipedia, WordPress and Wunderlist.

Many thanks to: Nik Cubrilovic, Yves-Alexis Perez, public mailing lists, emails with OpenSSL core team, emails with the National Cyber Security Centre Finland, Google Plus posts, and emails with people who volunteer at Linux distributions.

Correction: Some Codenomicon dates were wrong. They have been fixed.

Nick P April 15, 2014 11:36 AM

@ Clive Robinson

“The problem is nearly all security ratings cannot deal with this simple situation which usually means that the device has to be rated on every port at the highest security rating of the device. This causes all sorts of design issues for the device manufacturer which almost always results in a more vulnerable design.”

“firstly the manufacturer will almost certainly go for “maximum usability” thus the design will be such that configuration management can be done from any port.”

What are you talking about? The common solution in business products is a management port that talks to a privileged process (eg shell), optionally requiring authentication. Comms on other ports are forced to go through the security policy. It costs almost nothing extra to do this, allows integration with automated mgmt products, and saves money when assurance goes up. Maybe this was a problem when you were in the field, but today it’s mostly an issue with the manufacturers that put cost above… everything. One can always avoid those.

” In this primary interest you want to be able to trust it doing what it’s supposed to do and further trust the configuration not to change or being changed at any time other than that you as the owner/operator chose.”

That’s definitely an issue across the board.

“In effect what should have been a 40USD four or eight port hub is going to be +1000USD and thus needs twenty four or more ports to justify the price… this cost has several knock on effects…”

Now we’re getting to it. Even I’ve pointed out that the assured, certified guards cost around $100,000 per unit with much less functionality than a $1,000 UTM appliance. I said in the past the reason was mostly certification and niche market demand. OK, let’s knock those out and just focus on the most minimal security engineering. The LOCK project was an A1-class security kernel with hardware enforcement, hardware crypto, and a UNIX layer for legacy. My calculations based on their labor allocation show that the labor hours dedicated to validation, verification, and formal methods total 32.8%. That’s very different from $40 vs $1,000 or $1,000 vs $100,000. And reasonable.

So, it seems that at least the engineering part can come down in cost to be reasonable. We’ve actually seen this with some of the products I mentioned. So, are there any other issues inflating the cost? The issue I keep coming back to is that the architectures themselves are machines that just throw data around. It’s why I’m looking at processor modifications that make information flow control, or at least execution integrity, much easier. Solve that problem and you solve most security problems. Adding a fast compartmentalization ability (eg memory keys) lets something like MINIX 3 run very fast despite decomposing the system into thousands of de-privileged components. Certain projects that focused on transparent memory protection stopped code injection attempts without changing legacy code. And so on.

I think there are tiny changes that can be made to architectures that don’t slow them down or cost much, yet provide immense benefits to security. I’m aiming for a foundational layer that can be built on. It’s why I’m looking at all these capability, tagged, and HLL language processors. I think there’s a shortcut in there. Any slowdowns will be fixed by multiple processors, and there are existing ways to deal with eg cache coherency or message passing.

@ Figureitout

The BASIC chip is pretty neat. Shows that the little language is versatile doing microcontrollers, GUI’s, game engines & business apps. Btw, here’s one for Oberon on 32-bit microcontrollers. Still compact & fast, yet uses language aspects to catch more errors than C/asm.

“You deride standards (they deserve to b/c they’re neglected), but what if the EAL standards are likewise not up to par? Are they “the best”? Where are potential holes?”

Potential holes? Just compare standards and EALs to my framework and you’ll find plenty. 😉 The EALs just measure the rigor of the development process applied to a given design/feature. As you go up, you must put ever more effort into your design, implementation, configuration, testing, covert channel analysis, etc. I could see ways they could be improved for sure, yet they’re pretty good at measuring assurance. It’s actually common sense, as the amount of security is often tied to the amount of effort, and higher EALs just represent higher effort. Cygnacom’s description is actually very simple and will convey the concept well. The main failing, I think, is that the focus is too much on design and too little on code, where many errors happen.

One caveat: a product is evaluated against a Security Target or Protection Profile containing a set of specific attributes. The Target of Evaluation says what they’re actually evaluating and at what assurance. The EAL only applies to that. One common trick with companies in high assurance is to say “We got EAL(high number) so we’re secure” when parts of their TCB (eg firmware, drivers) might have been evaluated with less rigor as they were outside the TOE. Such bullshit is why I advocate a component by component approach that encompasses entire TCB, not an artificial TOE that attackers step around.

“I didn’t see any design really, just some nice brainstorming and extensive thought-mapping and some other projects. No real “fully-put-together” design.”

I’m not following you. My post was about strengthening an evaluation process and corresponding standard. The linked post mentioned my basic abstract process of development and a piece-by-piece way I look at system security issues. You’re mentioning some kind of “design” and “projects.” What are you referring to?

“Truecrypt audit is making progress, no major errors were found. Means the errors weren’t found, right? :(”

Yeah, probably haha.

Squish April 15, 2014 12:10 PM

A Dutch girl “Sarah” apparently got arrested for tweeting a bomb threat at an airport.

Some friends of mine are debating whether arresting teenagers for jokes about bombs is security theatre going overboard, or airports rightly taking all threats seriously. I didn’t see a dedicated post on this; it doesn’t seem like a big deal, though. Thoughts?

http://www.washingtonpost.com/blogs/style-blog/wp/2014/04/14/dozens-of-teenagers-are-now-tweeting-bomb-jokes-to-american-airlines/

tom April 15, 2014 12:26 PM

@Benni Thanks for the neat link; fun to watch the attacks roll in against Deutsche Telekom in real time. Looks like about one per second.

But why is Chile among the leaders? And shouldn’t the UK be a lot higher?

Nice of them to provide everything you need to add your own honeypot to their collection of 180.

http://www.sicherheitstacho.eu/

“Instructions for connecting a mod_security sensor to the Deutsche Telekom AG early warning system”
http://www.sicherheitstacho.eu/pdf/HowTo_mod_security_1.2_en.pdf

Winter April 15, 2014 2:29 PM

“A Dutch girl “Sarah” apparently got arrested for tweeting a bomb threat at an airport.”

She has already been released. I do not expect she will face severe consequences.

Benni April 15, 2014 2:39 PM

@Tom: You can also change the “Zielländer” i.e “target countries”.

Interesting is: Whilst most attacks on german honeypots come from the US, selecting united kingdom gets this:

China 21,743
Germany 1,996
United States 134

Well, you see it’s good to be in the Five Eyes group.

By the way, if the NSA has botnets, the BND has them too, of course, and the German secret service also likes to hire escort services to set up honey traps:

https://wikileaks.org/wiki/German_Secret_Intelligence_Service_(BND)_T-Systems_network_assignments,_13_Nov_2008

The BND has a single plane; using plane-spotter pages, one can follow where BND staff usually fly:

http://www.stern.de/politik/deutschland/flugbewegungen-was-macht-der-bnd-in-kasachstan-580458.html

http://www.flightradar24.com/data/airplanes/d-azem

Must be nice to see so much of the world; usually they stop in the Near or Far East.

tom April 15, 2014 2:52 PM

Sounds promising, but let’s see how far it gets. Look at the debris left in there despite a budget of one million USD per year for a fair number of years.

“OpenBSD has started a massive strip-down and cleanup of OpenSSL:

Changes so far to OpenSSL 1.0.1g since the 11th include:
• Splitting up libcrypto and libssl build directories
• Fixing a use-after-free bug
• Removal of ancient MacOS, Netware, OS/2, VMS and Windows build junk
• Removal of “bugs” directory, benchmarks, INSTALL files, and shared library goo for lame platforms
• Removal of most (all?) backend engines, some of which didn’t even have appropriate licensing
• Ripping out some windows-specific cruft
• Removal of various wrappers for things like sockets, snprintf, opendir, etc. to actually expose real return values
• KNF of most C files
• Removal of weak entropy additions
• Removal of all heartbeat functionality which resulted in Heartbleed

Commits are happening pretty fast, but the API is not being changed.”

http://undeadly.org/cgi?action=article&sid=20140415093252&mode=expanded&count=0

https://lobste.rs/s/3utipo/openbsd_has_started_a_massive_strip-down_and_cleanup_of_openssl/comments/fkwgqw

https://en.wikipedia.org/wiki/Kernel_Normal_Form

Nick P April 15, 2014 4:32 PM

@ tom

Damn, that is a massive overhaul. I’d expect no less from those doing it. They’re good developers. I thank you for sharing the link because it makes for a good list of everything that was wrong with OpenSSL.

Shawn Smith April 15, 2014 6:26 PM

Clive Robinson,

Ask (about the IOCCC BASIC interpreter) and ye shall receive:

Here’s a link to the source
And here’s a link to a description.
The linking page is here.

It’s actually 1536 chars (with all but one line no more than 80 chars). When it’s expanded out to a more standard format and then has the compressing/obfuscating #defines substituted, it’s around 200 lines long. There are no error checks, gets() is used, there are only global variables, and if() statements are replaced with logical && operators and ternary operators (?:) where possible.

Get past those small hurdles, and it’s actually quite traceable.

And I wouldn’t know for sure, but I’ll bet a similarly reasonable FORTH interpreter will be a similar size.

Benni April 15, 2014 10:00 PM

@Shawn, on Heise.de a commenter also asked whether the OpenSSL devs perhaps used an obfuscator sometimes. Perhaps they should consider submitting some of their lines to the IOCCC. They have turned OpenSSL into a practically closed-source package through their coding standards, and they make money as “consultants” just because nobody can read their code.

Figureitout April 15, 2014 11:49 PM

Nick P
–Pretty neat, I always like little computers. 🙂 Need to try out some of those languages sometime. Here’s a reddit thread where someone commented on how BASIC is still used in industrial control systems…? Maybe you’ll be a rich baby sucking those t*ts. :p

http://www.reddit.com/r/AskReddit/comments/2340xa/hackers_of_reddit_what_are_some_coolscary_things/cgtfshw

“Just compare standards and EAL’s to my framework you’ll find plenty. 😉”
–Bah. Yeah, that was a nice quick overview; I’d like to view the process myself, but I guess that’s a “security risk”. I want to see the tools they use…

“I’m not following you.”
–You kind of answered it. You’re describing design processes, whereas I want to see an actual design of a secure computer that isn’t some 8-bit (or less) serial bit-banged PC w/ 2k RAM. Having a secure computer w/ some extra “juice” is so handy; my top 2 reasons are file encryption and digital radio. I can’t do sufficient FFTs on a tiny PC (or my ’duino); won’t get all the little jiggles. I can’t get any of the software on tiny PCs.

Shawn Smith
–Besides it being the most butt-f*cking ugly thing I’ve ever seen, and horribly insecure coding; after separating it out into normal syntax and looking up a couple of things (question marks?! Dafuq!), it’s actually pretty cool that it worked. Scary, but cool. And yeah, there are Forths out there w/ an interpreter, kernel, and compiler in about the same number of bytes or fewer. Such a weird “language” though, lol; I’m thinking about how I could change all the syntax to something I like (just look into it a little if you don’t know what I’m talking about; the first few pages of “Starting Forth” will show what I mean).

yesme April 16, 2014 2:46 AM

… and while the OpenBSD team cleans up the OpenSSL code it also fixes a couple of bugs. See this reddit link.

Jakub Narebski April 16, 2014 4:18 AM

@tom: Why “Removal of all heartbeat functionality which resulted in Heartbleed” instead of fixing it? It looks quite important wrt. network performance.

Jonathan Wilson April 16, 2014 5:52 AM

My view is that the NSA should be prohibited from using any information it might have for anything other than threats to national security. That means it would NOT be allowed to use stuff it intercepts, security holes it knows about, information it has gathered, backdoors it has created or whatever else for law enforcement purposes (e.g. helping the DEA catch drug dealers, or even helping the feds catch someone like the Boston Bombers, who are not IMO a threat to national security and probably aren’t even deserving of the title “Terrorist”).

Nick P April 16, 2014 7:47 AM

@ Figureitout

“You’re describing design processes, where I want to see an actual design of a secure computer that isn’t some 8-bit (or less) serial bit-banged pc w/ 2k RAM.”

The best I can do for you in the few minutes I have are these links. The first two might have hardware, firmware, or code issues, as those were lesser known at the time. I’d look at the design, layering, covert channel analysis, etc. Problems can be knocked out during a new implementation.

A1-certified GEMSOS Security Kernel
http://www.aesec.com/eval/NCSC-FER-94-008.pdf

(Note: One of their versions is on Intel 286/386 with custom firmware. That’s 16-bit and 32-bit respectively.)

B3-class KeyKOS microkernel capability system
http://www.cis.upenn.edu/~KeyKOS/KeyKOS.html

(Ran on IBM’s mainframe architecture. A knockoff called EROS was created for Intel. It’s GPL and awesome. Google it.)

Capability machines redid hardware to support ground-up software safety, least privilege, and high level languages. Look at CAP, Hydra, and System/38 particularly. S/38 exists today as IBM i albeit not custom hardware anymore (same architecture though).
http://homes.cs.washington.edu/~levy/capabook/

B5000 architecture with hardware to software security/safety
http://homepages.ihug.com.au/~ijoyner/Ian_Joyner/Burroughs.html

FLEX Machine and Ten15 VM deserve some mention
http://en.wikipedia.org/wiki/Flex_machine

(Note: There’s little reproducible design data on this one. It’s just that the combo of described features & techniques might be worth emulating to some degree.)

The architectures I showed you before like CodeSEAL, SecureME, etc handle threats like untrusted devices on the board. Such designs could be combined with the others. I plan on doing that if resources come in.

name.withheld.for.obvious.reasons April 16, 2014 12:14 PM

@ Nick P, Figureitout

…serial bit-banged pc with 2k/RAM.

Reminds me, as a hobbyist in the ’70s, of purchasing 1K×8 static RAM DIPs for $10 apiece to build a Z80-based project during my wasted years of technological learning/training. Today a hobbyist could be labeled (and this has happened) a hacker and automatically selected for drone targeting by the U.S. government to assist in the application of lethal force…

Be very, very quiet… I’m hunting hackers. Something Elmer Fudd would say, and the DoJ/DoD too, but without the comical effect.

BJP April 16, 2014 12:49 PM

@name.withheld: Source on “(and this has happened)” hobbyist hardware hacker subjected to lethal force via US government drone strike? That’s a heavy claim.

Shawn Smith April 16, 2014 12:57 PM

Figureitout,

If you’re calling the BASIC interpreter “the most butt-fucking ugly code I’ve ever seen,” then I would think that you have not seen very much. Go back to the first year (1984) and behold the joy of seeing a C program where main is defined as an array of shorts with a mix of integers and characters–advertised as portable to both VAX and PDP. LOL. And I used to work at a small startup that was an offshoot of the company that employed the guy who wrote the winning entry in 1985 (shapiro.c). Although I never actually saw the stuff he did, I heard that his entry wasn’t that much different from the stuff he wrote for work.

And yeah, I’m aware of the basic structure of Forth (postfix, like Postscript) and lately I’ve decided I would like to learn it, and possibly use it to swap out the ROMs on my Apple IIe at home. It’s not the way I normally think, though, as I’m more of a prefix (LISP) kind of thinker. An anecdote along those lines–in the late ’80s – early ’90s Microsoft had a postfix-based screen editor, where you would select the text you want to work on first, and then issue a command on that text. Those who could think like that were able to get their editing done quite quickly, but I was never one of those people. My guess is that it was probably a result of my having learned Unix ed as my first programming editor. Oh, well.

Anura April 16, 2014 1:04 PM

https://blogs.oracle.com/security/entry/april_2014_critical_patch_update

Also included in this Critical Patch Update were fixes for 37 Java SE vulnerabilities. 4 of these Java SE vulnerabilities received a CVSS Base Score of 10.0. 29 of these 37 vulnerabilities affected client-only deployments, while 6 affected client and server deployments of Java SE.

Is it just me, or does it seem like Java needs a more frequent release schedule? Rule number one of keeping yourself secure on the internet: Disable Java in your browser. It’s right behind keeping your software up-to-date and running behind a firewall.

Clive Robinson April 16, 2014 2:02 PM

@ Figureitout,

A thought occurs,

Once upon a time, being a “Hacker” carried respect; then journalists corrupted it into a “mark of Cain” or equivalent, and now you would at best call yourself “an old school hacker” if using the word at all. And as we have seen, Federal prosecutors use such changes in meaning to help obtain prosecutions…

I thus wonder how long it will be before other old school terms become “crimes in the mind”, where the meaning is twisted and those who use them, or have used them, become “enemies of the state”. George Orwell kind of played with this idea in his later works…

Thus how long do you dare to use the term “Bit Banging” before it mutates… and being “a serial bit banger” becomes a serious crime in the minds of those who would put themselves in judgment over you?

It would be just a silly musing / joke if we could not already see it happening…

name.withheld.for.obvious.reasons April 16, 2014 3:04 PM

@ Clive Robinson
Thanks for stating the less apparent–many are unaware of the extent, scope, and breadth of fascism in the U.S.

@ BJP

If you want sources: Public Intelligence, MuckRock, the Federation of American Scientists and others have FOIA document releases confirming what was stated. There are so many documents to reference that they make up a reading list I posted a while ago.

There are two important components to my statement; the first is the qualification of “could” versus “has”, suggesting the action versus the act. I’m not trying to get all NSA up in here, just qualifying the statement without hyperbolic (unbelievably, given the subject) prose.

The second component comes from multiple sources: the codification in law of domestic targeting, the actions of the DoJ/DoD, the statements made by politicians and government officials, and programs/actions carried out in recent months. In essence, public and non-public law supports the targeting of individuals using unknown processes, standards, and unverifiable systems, and has been used to target Anonymous members, Julian Assange, the Guardian (a short list), and others on any one of the FBI’s various lists. I am not judging the actions of any of these persons/entities; the issue is that neither should the government, without a) public law, and b) due process.

So the decision matrix and action list looks like this:

  1. Lists of hackers that could be labeled hobbyists:
    Barnaby Jack, Aaron Swartz (FBI lists)
  2. FBI admits an active campaign against “hackers”
  3. FBI morphs its charter/mission/legal framework into an INTEL bureau
  4. Legal rationale for U.S. domestic drone strikes
    (Paul filibuster in protest of the CIA Brennan nomination)
  5. DoD cyber warfare capacity/powers:
    the agency head of an IC member can declare war… PPD-20
  6. What is the list of IC (Intelligence Community) members?
    (Important question: statutes and new authorities)
  7. Statements made in recent months by Pentagon officials (not off the record, calling for summary execution of Snowden/Assange/Anonymous)

In summary:
Add it up: there is an organized campaign to target hackers, irrespective of the label “hacker” and how it was acquired.
All these secret lists, laws, files, and targeting systems (irrespective of the action) add up to the potential for what one would believe unthinkable. I want to know who maintains these lists; is there a listserv for subscription? People don’t seem to understand the reach of the invisible hand of power… the trigger finger twitches, ready to fire without cause/reprisal/identification. Dead men don’t tell tales.

Nick P April 16, 2014 5:48 PM

@ Clive Robinson

“Once upon a time, being a “Hacker” carried respect; then journalists corrupted it into a “mark of Cain” or equivalent, and now you would at best call yourself “an old school hacker” if using the word at all. And as we have seen, Federal prosecutors use such changes in meaning to help obtain prosecutions…”

Blame it on the likes of Morris, Mitnick and Dr. Chaos. What made them famous made future people like them infamous as those in power realized information and a computer were force multipliers in the right hands. The media followed through with enough sensational use of the term to cement its meaning in the public’s mind.

I just call myself an inventor now. 😉

Benni April 16, 2014 6:11 PM

Why the hell does somebody write macros to wrap stdio routines for an undocumented OPENSSL_USE_APPLINK use case?

http://freshbsd.org/commit/openbsd/8a6680833c42bde7de74b9ddb70bbad193c5359b

Is this done to deliberately obfuscate the code, in order to make it easier to introduce backdoors? Or is it to make it easier to introduce deliberately faulty file reading and writing mechanisms? This cannot come from pure incompetence.

No one writes a wrapper for stdio that has no function at all without a reason.

It makes no sense whatsoever, unless some evil agency is involved in all this, which is likely, since some of the developers can make it in 20 minutes from their homes to the BND headquarters in Pullach.

Clive Robinson April 16, 2014 6:47 PM

@ Shawn Smith,

You might find this of interest,

http://www.txbobsc.com/scsc/scdocumentor/

It contains quite a bit of info on the Apple ROM.

For those that don’t know, the reason for calling it AppleSoft BASIC is that Microsoft wrote it, and later developed a CP/M card for the Apple ][. I hated that card, as it was the only one to cause crap to happen with my 02-2Meg modification card. Basically it allowed you to run a 2MHz 6502 processor in the Apple ][, with the CPU speed dropped back to 1MHz for I/O operations, thus giving a significant boost to performance.

@ Nick P,

If it was just “Hacker” I would not be too worried, but it’s not; the Federal authorities appear to be clamping down on people not because they have done anything specifically wrong, but because they have upset “commercial interests” that subsidise those political layabouts in Washington…

Calling yourself an ‘inventor’ is tantamount to ‘mad scientist’ or ‘doomsday weapon’ territory, especially when you mention ‘force multipliers’ in the same paragraph. Thus it will not protect you if some “influential” IP stealer thinks you are going to upset their financial model, either directly or even indirectly.

We are unfortunately, due to the faux financial “austerity”, reverting as a society to the mentality of the “Witch Finders”, where society is positively encouraged to burn people to be morally right at the direction of the controlling classes. And the problem is that such pyres have to be fed regularly for the control to be maintained; at the moment it’s faux terrorists, but as there are too few of them the focus will move to others.

Hundreds of years of human history show this; just look at the aftermath of the French Revolution, or in more recent times Communism and the likes of Pol Pot etc. How about Rwanda, just twenty years ago…

As the saying has it,

    First they came for the…

Benni April 16, 2014 6:52 PM

It gets even better at openbsd:

http://freshbsd.org/search?project=openbsd&q=file.name:libssl

Clean up dangerous strncpy use. This included a use where the resulting
string was potentially not nul terminated and a place where malloc return
was unchecked.
while we’re at it remove dummytest.c
ok miod@

quoth the readme:
NOTE: Don’t expect any of these programs to work with current
OpenSSL releases, or even with later SSLeay releases.
ok miod

As I walk through the valley of the shadow of death
I take a look at my life and realize there’s nothin’ left
Cause I’ve been blasting and laughing so long,
That even my mama thinks that my mind is gone
Remove even more unspeakable evil being perpetuated in the name of VMS.
(and lesser evils done in the name of others.)
ok miod

Nick P April 16, 2014 7:33 PM

@ Benni

I liked this one:

” – Why do we hide from the OpenSSL police, dad?
– Because they’re not like us, son. They use macros to wrap stdio routines, for an undocumented (OPENSSL_USE_APPLINK) use case, which only serves to obfuscate the code.”

Figureitout April 16, 2014 9:30 PM

Nick P
–Thanks, looks like I have “a little” reading to do…(awkward trailing off laugh). I’ll see if one of those fits on an old pc eventually when I feel like it (OpenBSD wouldn’t boot for my older pc’s, wrong chips; fingers crossed on 2 more and then my laptop pfft). Funnily enough, my mom’s pc which has some…issues…at least runs liveUSB’s. The last thing she remembers is “Every time I clicked, it goes “Meow”. ” LOL. Previously thought it was toast and never fired it up; it used to be the most powerful pc in the house.

name.withheld.for.obvious.reasons
–Yeah I wish I lived back then; people seemed happier and had a brighter outlook. Now in addition to building my own computer (as far as I reasonably can, component-wise) I have to be ready for interdiction of parts in the mail and then I can’t just leave my lab like it is now just out in the open and exposed. Need to lock it down and carry the critical parts w/ me at all times as some agents like to “make themselves at home”. The OPSEC detracts from me thinking about the actual damn computer design. Funnily enough, my dad’s making some homemade sauerkraut in the basement right now (another one of those, “Why dad, why?!” moments); it reeks. What started off as a nice “poopy diaper” smell has gotten much worse when he said it would get better, where I don’t even know how to describe it; it’s just bad. So at least the agents will be having good smells lol.

Shawn Smith
–You’re right. I really haven’t seen enough C-code at all; I need a lot of work on linked lists the most and there’s some other quirks that I’m not familiar w/. What I would really love is to just walk thru code w/ an expert.

RE: shapiro.c
–Lol, did that guy like hate life or what? If he coded normally like that…I wanted to see it so I grabbed the code and compiled it (not copy/paste, re-typed w/ some proper formatting, to really “savor the hack” lol). What a dirty hack lol. I knew there was going to be some “explosive issues” w/ the “1+2147483647” bit; and more damn ???question marks??? !!! Won’t completely ruin it if others want to see, but it actually employed a bit of an optical illusion on you (you may recognize it immediately) and I got the good ole “segmentation fault”. Don’t know why I haven’t, but I want to check out those other winners.

RE: ROM swapping Apple IIe
–Make a security application to it and post it here if you end up doing something w/ it. Would like to see. All my computers are originally windows computers…I walk past a nice display of lots of retro computing at my school everyday; none of my computers are that old, would be nice to have a functioning one.

Clive Robinson
–I’m just an explorer (a gentle one); I call myself “a bridge” b/c I like to bring people together and be everyone’s friend (unless you attack me in which case don’t talk to me). Leads to all kinds of awkward situations but they’re temporary. One thing I know for certain is to not mention “revolution” and “fundamentally changing the political system” if you don’t want Fed and State agents raining down on you. Unless you’re trying to study them…lol.

Benni April 16, 2014 9:35 PM

I have, until now, not found out where one can contribute to their OpenSSL reworking project.

Much of what is needed there doesn’t require much coding experience: stripping wrappers for ANSI C libraries, removing code for Windows or even, yes, MS-DOS, formatting everything into readable code…

If you have any money, donate it to OpenBSD. They are getting the BND and NSA out of the Unix world here. This is one of the most important undertakings, if not the most important, in the open source scene in years.

One can only hope that they do not use OpenBSD system functions too much, so that everything can be re-integrated into ordinary Linux.

Coyne Tibbets April 16, 2014 11:59 PM

There’s a discussion by Henry Baker of the broken OpenSSL allocation strategies in the Risks Forum: “OpenSSL Mallocware = Malware”.

Points made include: the strategy obscures even gross bugs; OpenSSL has clearly not been tested without the broken allocation strategy, even though it supposedly supports a direct-malloc model; and bugs related to the fault he primarily discusses have gone unfixed for at least 4 years.

The primary bug he discusses is one where OpenSSL frees a buffer to its own free pool and then, immediately following in the code, re-allocates the buffer, expecting it to contain the same content. The only reason this code works is that the OpenSSL free pool is a LIFO model, meaning that the re-allocated buffer has exactly the same content as the one just disposed of.

He then points out that this bug means that OpenSSL has not been tested with the compiler switch OPENSSL_NO_BUF_FREELIST. This switch is ostensibly used to turn off the internal memory allocation pool, causing the object program to use normal operating system malloc calls. But if the switch is used, the compiled OpenSSL is inoperable. It is therefore clear that OpenSSL has not been tested with this switch enabled for at least 4 years (below).

Finally, when he reviewed bug lists, he discovered that this same bug had been reported twice, by other authors, with the oldest report going back 4 years. Yet, despite its seriousness, the bug has not been fixed (even though he discusses an easy correction).

I guess, despite my comment above, it turns out to be incompetence after all.

I’m not a cryptographer, but software reliability is a concern of mine; and I wouldn’t trust software like this to bill a customer, much less to secure communications.

Clive Robinson April 17, 2014 6:20 AM

OFF Topic :

A couple of links to raise a smile,

Firstly why do people insist on “Doing a Clarkson” [1],

http://m.washingtonpost.com/blogs/the-switch/wp/2014/04/15/this-reader-mocked-heartbleed-by-posting-his-passwords-online-youll-never-guess-what-happened-next/

Secondly and reminding me of the Douglas Adams comment “They still thought digital wrist watches were pretty neat”,

http://leapsecond.com/pages/atomic-bill/

[1] So called after “Top Gear” presenter Jeremy Clarkson, who also has a column in the UK’s “Sun” redtop newspaper, did not believe in ID theft and put his bank details in his column. He then had to admit he was wrong after he was signed up for charity giving of around the equivalent –at the time– of nearly 1000USD/month: http://www.theguardian.com/money/2008/jan/07/personalfinancenews.scamsandfraud And you don’t need to get hold of the electoral roll to find his address. For a TV programme he was making he purchased an English Electric Lightning that, against his –now ex– wife’s wishes, he put in the front garden; it takes about five minutes in Google to find, along with other information that quickly gives his “holiday home” address…

Clive Robinson April 17, 2014 6:52 AM

OFF Topic :

Is this wise?

BAE Systems are moving a big chunk of their “cyber security” business very much into the “Chinese sphere of influence” by uprooting it and relocating it to Malaysia, where costs are expected to be 30% less,

http://www.defensenews.com/article/20140415/DEFREG03/304150023/BAE-Shifts-Cyber-Software-Development-Malaysia

And speaking of things that are potentially unwise,

http://gcn.com/articles/2014/04/07/hr-analytics.aspx?admgarea=TC_EmergingTech

Big data for US Gov HR; note the hidden “find the next Snowden” agenda within it.

yesme April 17, 2014 10:05 AM

@Benni,

The OpenBSD team are now digging up lots of dead bodies out of the OpenSSL library.

This one is very smelly. It turns out that OpenSSL uses RSA private keys as random seeds…

I don’t believe in conspiracy theories, but this stinks of more than just plain incompetence.

T April 17, 2014 10:21 AM

@yesme
I thought RC4 would be good for random seeds, so why is it plain incompetence? More to the point, why isn’t it good for random seeds?

yesme April 17, 2014 10:48 AM

@T

You are using an RSA private key as a random seeder. That random seeder can be used with anything and can have a custom engine. And if the randomiser isn’t turned on, you have the private key data.

Clive Robinson April 17, 2014 12:08 PM

@ yesme, T,

The method of generating an RSA private key and a BBS secure random generator key are basically the same (first find two primes p and q whose product is appropriate for use).

As I’ve indicated on this blog in the past, it’s not overly difficult to hide a shortcut to one of the primes in the public key; thus it’s possible to have a fully broken system this way…

If you want to know more on this, read the Adam Young and Moti Yung book (there are now PDF copies to be found on the web, but I don’t think it’s with the authors’ or publishers’ permission).

Clive Robinson April 17, 2014 1:02 PM

It appears that Microsoft have dropped the price of continuing support for XP,

http://www.zdnet.com/microsoft-chops-the-price-of-custom-windows-xp-patches-7000028512/

According to some, significantly less than 4% of the original quote given to some large organisations.

However, there is a significant sting in the tail: to get the reductions you have to have a Microsoft-approved migration policy in place. And from what I’ve been told, the terms are unreasonable and possibly illegal under EU legislation, and involve a very significant spend on MS products that are not just unwanted but won’t even run on systems bought as little as a year ago…

Nick P April 17, 2014 2:39 PM

@ Clive Robinson

Re BAE

Oh my that’s a ridiculously bad idea. The US and UK governments are very worried about foreign subversion. Yet, this and other defense contractors are moving security sensitive work to a country where people can be bribed for less than the license cost of their products. It shows how neither US govt nor defense contractors can be trusted to provide real security.

Note: It will be funny if some XTS-400’s get hacked because a Malaysian employee gave backdoor access to the $100,000+ device. Or if several of them disappear when shipped over there with similar devices appearing shortly after in foreign militaries. 😉

Re 6 pack

That was hilarious and clever.

Anura April 17, 2014 2:55 PM

@Clive Robinson

A serious attack vector to be aware of,

This is why you don’t open anything from an untrusted source without first verifying the signature.

Benni April 17, 2014 7:51 PM

@Yesme,

On this OpenSSL code where the key is fed into the PRNG seed, I basically think the same as you.

Having looked at this code myself, I am disgusted by their “exploit mitigation code”, where they mention that “on some systems malloc is slow, so we can not use free and malloc all the time but use freelists”. What the hell? Their code (I’ve looked at it) looks plainly insecure. They should just use free and malloc. And if that is too slow, then the user should upgrade.

Windows now has a crypto API. But the PRNG of the Windows version of OpenSSL still uses its own PRNG stuff, copying the screen content as a bitmap and feeding it as a seed into the PRNG. A snapshot of your screen is certainly not a high-entropy source.

I was amused by the hacks for the MS-DOS platform. Can you imagine, an MS-DOS user making an OpenSSL connection! The problem is that this operating system is unsupported, and therefore insecure. But the OpenSSL support is still there, along with support for various outdated hardware and software. For example, there are hacks for Visual Studio 5, the software I learned C, C++ and Basic with, some 15 years ago or so.

And then there are these wrappers. Why should anyone write a wrapper for the file write and file open operations of stdio.h, along with an unsupported option?

The wrapper does nothing. In this version of openssl at least. But it could be easily turned to subverted file writing and reading stuff.

Knowing that the private key is fed into an external random number generator, the BND just needs to deliberately modify these devices.

http://cryptome.org/jya/cryptoa2.htm

http://en.wikipedia.org/wiki/Bundesnachrichtendienst

A further laudable success involved the BND’s activity during the Czech crisis in 1968. With Pullach cryptography fully functioning, the BND predicted an invasion of Soviet and other Warsaw Pact troops into Czechoslovakia. CIA analysts on the other hand did not support the notion of “fraternal assistance” by the satellite states of Moscow; and US ambassador to the Soviet Union, Llewellyn Thompson, quite irritated, called the secret BND report he was given “a German fabrication”.[8] At 23:11 on 20 August 1968, BND radar operators first observed abnormal activity over Czech airspace. An agent on the ground in Prague called a BND out-station in Bavaria: “The Russians are coming.” Warsaw Pact forces had moved as forecast.[11]

Yes, BND cryptography can listen to things that the CIA and NSA only dream of. In the case of the Czech invasion, it was because the Russians bought crypto boxes deliberately weakened by the BND. So the experience is there.

Recently, the NSA wanted their copy of the BND programs Mira4 and Veras, because these programs are much more capable than the NSA’s XKeyscore.
http://www.spiegel.de/politik/deutschland/bnd-leitet-massenhaft-metadaten-an-die-nsa-weiter-a-914649.html

Since 1970, the BND has been tapping undersea fibers. When GCHQ’s tapping was at 10 GBit, the BND was already at 100 GBit, and the BND is currently installing its taps in crisis regions where the NSA cannot get a foothold, making sure that the BND has a larger archive of tapped communication than its stupid NSA colleagues, who do not understand anything about tech. (According to the Spiegel book “The NSA Complex”.)

Of course the BND only goes after foreign matters. This is perhaps why it also taps the internet connections of German providers that route traffic almost exclusively within Germany: http://www.spiegel.de/spiegel/vorab/bnd-laesst-sich-abhoeren-von-verbindungen-deutscher-provider-genehmigen-a-926221.html

With some OpenSSL developers living in Dachau, only 20 tube minutes away from Pullach, where the BND has its headquarters, it would be entirely naive to think that the BND is not involved in OpenSSL, given that they even rigged Russian crypto boxes before.

To deliberately weaken an open source project, you first have to write code in an undocumented manner, perhaps with many wrappers around standard functions, and without comments or documentation, ensuring that no one can understand the codebase. Then the developers must get their money from doing contracting work for anonymous sponsors. That’s all that’s needed, and that is OpenSSL.

@Clive: This press article on time is just nonsense.

An arrow of time emerges from statistical physics if you consider a system with special starting conditions. This has been known since Ehrenfest’s dog-flea problem. Consider a dog full of fleas next to a dog without fleas. If every flea can jump with probability 0.5 from one dog to the other, the second dog will soon be populated, and the system will eventually reach equilibrium, with each flea having a probability of 0.5 of being on dog 1 or dog 2. This creates the impression of an arrow of time, with entropy increasing. But it is not one: the basic equations that govern each individual flea are the same on every time step, and completely time-reversible.

It is similar with quantum mechanics: its basic building block, Schrödinger’s equation, is time-reversible. The statistical arrow of time, with entropy increasing, comes from the special starting conditions that we had at the big bang. In fact, time itself began there, even as a physical parameter on the manifold; the equation that governs the quantum universe at the time of the big bang describes a functional of a three-dimensional metric (the Wheeler-DeWitt equation). Hawking demonstrated in the 70’s within this setup that one can even plot how time develops from these conditions, but I am too lazy to dig out the paper now.

T April 18, 2014 8:26 PM

@Benni, Would you class the Ehrenfest’s dog flea problem as high or low entropy, if time was the stronger or variable measured with.
Any way to change differential equations, maybe tack another equation on to produce 50/50 fleas on each dog, or would it always be fluctuating around the 50/50 mark by small or large swings?

Thanks
