Schneier on Security
A blog covering security and security technology.
September 21, 2010
I stayed clear of Haystack -- the anonymity program that was going to protect the privacy of dissidents the world over -- because I didn't have enough details about the program to have an intelligent opinion. The project has since imploded, and here are two excellent essays about the program and the hype surrounding it.
EDITED TO ADD (10/13): Two more articles.
Posted on September 21, 2010 at 6:55 AM
Well, what can I say? Maybe the most important thing to teach the public is that public opinion or press reports are not an indicator for the security level of some security software.
The problem with that is, of course, that telling people they have a certain lack of competence will trigger a deny-and-ignore response in many of them.
I see a lot of criticism about Haystack, but what I don't see is how Haystack was supposed to work. Does anyone know? I imagine even a 3 or 4 line overview might explain why some hackers came down on it so hard.
I am very curious about the underlying algorithm that was theoretically supposed to make Haystack undetectable. Most of the covert-channel work I have read has a success probability (i.e., the odds of undetectable transmission) which goes down exponentially as the signal-to-noise ratio increases. So, if you really wanted a "safe" covert channel, you might need to actively browse websites for days to hide a single Twitter message.
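The commenter's intuition can be made concrete with the steganographic "square-root law": the payload that can be embedded without statistical detection grows only as the square root of the cover-traffic size. A back-of-the-envelope sketch, where the constant `c` and the browsing rates are purely hypothetical:

```python
import math

def cover_needed(payload_bits: int, c: float = 1.0) -> int:
    """Square-root law: safe payload ~ c * sqrt(n) cover samples,
    so the cover traffic needed is n ~ (payload / c)**2.
    The constant c depends on the cover distribution and is
    assumed here for illustration."""
    return math.ceil((payload_bits / c) ** 2)

# A 140-character tweet is 1120 bits of payload.
tweet_bits = 140 * 8
n = cover_needed(tweet_bits)

# Hypothetical rates: ~1000 usable cover samples per fetched page,
# ~10 pages browsed per minute.
pages = n / 1000
hours = pages / 10 / 60
print(f"{n} cover samples ~ {pages:.0f} pages ~ {hours:.1f} hours of browsing")
```

The quadratic blow-up is the point: doubling the message length quadruples the cover traffic required, which is why a rate high enough to be usable tends to be a rate high enough to be detectable.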
Can anyone point me to the papers which lay out the algorithm Haystack was going to use?
>will trigger a deny and ignore response in many of them
And how is this possibly worse than not informing them in the first place? Not to mention that not informing them also denies information to those who would benefit from it.
Obviously Haystack appears to be fatally flawed; let's get that out of the way.
I heard Evgeny on NPR last Sunday. He sounded like an idiot. He's been complaining about peer review, transparency, etc., since the project's founding, and frankly he sounds way too pleased that things turned out this way. (I know he's not literally pleased, but he has that air of smugness and I-told-you-so that I just hate.) That interview was the first I had heard of Haystack's problems, and frankly, I went away from it 95% sure that there was probably nothing at all wrong with Haystack except for a personality dispute. (Obviously, as I said, I was wrong.)
He sounds like he is not a data security expert. He sounds like he is 10% more technical than the journalists he's spending his time bitching about. No journalist is technical enough to understand this stuff; otherwise they'd be practicing in the area. That's the point - only practitioners in this space understand it, and not (nearly) all of them.
He spent a ton of time in the interview pointing out reasons that Haystack shouldn't have been trusted, none of which were remotely technical. He was upset with Heap because of Heap's propaganda machine and occasional boneheaded and often immature comments. But, ad hominem attacks don't carry any weight in civil discourse, and particularly so for security review. Ever hear of djb? Theo? Hell, there's a long list of unpopular but qualified security people.
No, there are VERY few general lessons to be learned from this escapade. The press loves to generalize on failures just as much as it loves to generalize on potential successes like Haystack. And above all, let's not forget that the press loves self-flagellation.
Let's take a deep breath, recognize this as a failure, re-learn the lesson that security is really hard, and move on without the ad hominems and grandstanding.
There, I feel better now.
@billswift: I did not say not to inform them. Of course we should try to inform them. I just said that for many, it will not help.
@Johnny Appleseed: The lesson here is simple and quite old: if somebody claims a revolutionary new security technology, but refuses to give details, and there is no independent review by real security experts, then they are a good candidate for Bruce's Doghouse (if they are at least mildly entertaining, of course).
@Gweihir: I realize you didn't say not to tell, but I have seen others claim that as an excuse not to inform people about things they should know, and wanted to point out that it is not an adequate reason.
So should we keep using Tor?
Any application has security bugs.
@Grande Mocha: That's why this is so frustrating. There is too much secrecy surrounding this project.
Their website is still misleading. The only indication there is something wrong is a message on the front page stating, "We have halted ongoing testing of Haystack in Iran pending a security review. If you have a copy of the test program, please refrain from using it."
Meanwhile, the FAQ says,
"5. Is Haystack secure?
"Yes. We go to great lengths to ensure that any traffic between our servers and our users looks like perfectly normal, innocuous, and unencrypted web traffic. It would be exceptionally difficult to detect and block automatically.
"However, even if our methods were compromised, our users' communications would be secure. We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read. This cryptography is strong enough that the NSA trusts it to secure top-secret data, and we consider our users' privacy to be just as important. Cryptographers refer to this property as perfect forward secrecy."
I am wondering what kind of steganography is used. The obvious part of this program is that it does (or ought to) work like Tor: bouncing traffic between peers, peeling off one layer of encryption at a time, with each node knowing only about the adjacent ones. The intriguing part is how it masks this as "normal" traffic. Anyone have details on this?
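For readers unfamiliar with the onion-routing idea the commenter describes, here is a toy sketch of the layering. Everything here is illustrative, not Haystack's (unpublished) design: the SHA-256 counter-mode keystream stands in for a real authenticated cipher, and the three-hop circuit with pre-shared keys stands in for Tor's per-hop key negotiation.

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    """Toy counter-mode keystream derived from SHA-256.
    Illustration only; a real system would use an AEAD cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_layer(data: bytes, key: bytes) -> bytes:
    """XOR with the keystream; adding and peeling a layer are the
    same operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# One symmetric key per relay node (hypothetical three-hop circuit).
keys = [os.urandom(32) for _ in range(3)]

msg = b"hello from a dissident"
onion = msg
for key in reversed(keys):      # sender wraps layers, innermost first
    onion = xor_layer(onion, key)

for key in keys:                # each relay peels exactly one layer
    onion = xor_layer(onion, key)
assert onion == msg
```

Each relay sees only the ciphertext for the next hop, never the plaintext, which is the property the commenter is alluding to; what the sketch deliberately leaves out is the hard part Haystack claimed to solve, namely making the resulting traffic look like ordinary web browsing.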
It looks like the project's goals are pretty much standard for anonymous P2P. You want to broadcast a location (IP) and request data to be sent to that location, but you want to keep both secret.
Does this sound like the "DRM problem" to anyone else? Is anybody surprised that it doesn't work? DRM works only as long as a hardware black box remains secret (assuming the rest of the security is sufficiently close to perfect to not be worth breaking), this project can't even provide that.
On Writing, Funding, and Distributing Software to Activists Against Authoritarian
Writing software to protect political activists against censorship and surveillance is a tricky business. If those activists are living under the kind of authoritarian regimes where a loss of privacy may lead to the loss of life or liberty, we need to tread especially cautiously.
A great deal of post-mortem analysis is occurring at the moment after the collapse of the Haystack project. Haystack was a censorship-circumvention project that began as a real-time response to Iranian election protests last year. The code received significant levels of media coverage, but never reached the levels of technical maturity and security that are necessary to protect the lives of activists in countries like Iran (or many other places, for that matter).
@billswift: I agree with you. While I would say going to extraordinary length convincing those that do not want to listen is wasted effort, those that do want to listen should have access to expert evaluations and insights.
It's interesting that Haystack collapsed so quickly, and yet I can't find a single article that discusses the technology behind it. It almost seems like the whole thing was a scam and no-one knows quite what to make of it.
Nice article by Wendy Grossman: "Lost in a Haystack" compares Haystack's launch and reception to PGP: http://bit.ly/cfZ3ge
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.