jeremiah December 6, 2007 2:35 PM

I love how programs that people believe will increase security, actually decrease it. Wait, no, that’s hate. I hate those programs.

@Andy: he didn’t just self-sign his certificate, he’s using it to impersonate folks whose WWW connections are coming from his exit node. He impersonates the client for the server, and impersonates the server for the client, and he holds all the SSL keys that are used in the encryption. He is then able to see all the https traffic coming in and out of his exit node in clear text. Man in the middle.

He has everything he needs to fetch out all kinds of personal information, and using that information fraudulently is a breeze.
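The mechanics jeremiah describes can be reduced to a toy sketch (all of the names and messages below are hypothetical): because the exit node terminates one session with the client and opens its own with the server, both directions cross it in plaintext.

```python
# Toy model of an SSL MITM at a Tor exit node (hypothetical names).
# The middleman holds the keys for both hops, so he reads everything
# the client and server exchange.
def mitm_relay(request, real_server, captured):
    captured.append(request)    # readable: he holds the client-side keys
    response = real_server(request)
    captured.append(response)   # readable: and the server-side keys too
    return response

captured = []
reply = mitm_relay("POST /login user=alice&pass=s3cret",
                   lambda req: "HTTP/1.1 200 OK", captured)
# 'captured' now holds the credentials and the reply in the clear.
```

The client and server each believe they have an end-to-end encrypted session; neither can tell that it actually terminates at the relay.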

Cam Soper December 6, 2007 2:43 PM

@jeremiah: Right, but this attack only works for a self-signed certificate. A third-party-signed certificate is still safe. He can’t impersonate the server if he doesn’t have the server’s private key. Well, he can try, but he’s not going to get anything usable out of it.
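Cam’s distinction is exactly what modern TLS libraries enforce by default; a minimal sketch with Python’s stdlib ssl module (nothing here is specific to Tor):

```python
import ssl

# The default client context refuses the self-signed-cert attack:
# the chain must lead to a trusted CA, and the certificate must match
# the hostname we dialed, so the handshake aborts with
# CERTIFICATE_VERIFY_FAILED before any application data is sent.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED   # chain must be CA-signed
assert ctx.check_hostname                     # and match the server name

# Without the real server's private key, an exit node cannot complete
# a handshake that passes both checks.
```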

kyle December 6, 2007 2:53 PM

Just set up Online Certificate Status Protocol (OCSP) verification in your browser; that could mitigate a lot of this problem. Also, is there anything being developed that is more secure than Tor? Online privacy and security have never been worse, what with the NSA and other organizations sniffing/tracking our internet lives…

darkuncle December 6, 2007 3:40 PM

Tor isn’t about security. Tor is about anonymity. There’s a significant difference between the two, and judging by the news lately, it’s a distinction the average Tor user is having trouble grasping.

dragonfrog December 6, 2007 4:04 PM


Unless I’m missing something, OCSPV has to do with checking the revocation status of a certificate in a timely fashion. In this case, as I understand it, the TOR exit node was just presenting a self-signed certificate.

As such, there’s nothing to check with OCSPV – the cert was never valid to begin with; users would have to click on ‘accept’ to accept the bogus certificate anyway.

TOR was just the means by which the attacker was able to place himself “in the middle”. The user had to take deliberate (if reflexive) action to actually accept this man in the middle.

nedu December 6, 2007 5:59 PM

Question: Is someone like “daTruthSquad”, who wishes to preserve his pseudonymity against a governmental actor, better off relying on…

  • Tor
  • Open WiFi access points
  • Lawyers and a judge
  • All of the above

(Background reading on “daTruthSquad”: )

While on the whole, I haven’t been altogether displeased by the recent judicial decisions regarding online anonymity, all the same, I don’t think that all the power should be reserved solely to the lawyers and judges.

Balancing that, though, I suspect that many bloggers who end up having a legitimate case for online anonymity, started out without having any definite realization that they were really going to piss someone else off. And if you set up your blog without using–for instance–an open wifi access point lent by a kindly soul, then the horse may already be out of the burnt barn.

Anonymous December 6, 2007 6:04 PM

In this case the “attacker” apparently wasn’t even an attacker, just some guy who set up an FTP server and tor exit node on his network and somehow managed to misconfigure them in a weird way. The owner of that exit node is replying and explaining what happened in this forum:

The IP addy resolves to, which happens to be a T-Online dialin user. T-Online is Germany’s biggest ISP.

RC December 6, 2007 7:14 PM

It doesn’t matter if the exit node did a MITM attack in this case or not. The point is that such attacks are possible on the TOR network and so it cannot be relied upon for anonymity.

hex December 6, 2007 8:06 PM

No, it can be used for anonymity; it just can’t be used for privacy. There’s a difference.
TOR is to be treated as an untrusted network; treat it the same way you treat public unencrypted wifi: don’t send unencrypted passwords over it that you don’t want everyone around you to have.
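hex’s rule of thumb, sketched with Python’s stdlib (the hostname and payload are placeholders): wrap the connection in verified TLS before anything sensitive leaves your machine, whether the untrusted hop is a Tor exit or an open AP.

```python
import socket
import ssl

# Treat the path as hostile: only ship secrets inside verified TLS.
ctx = ssl.create_default_context()   # CA validation + hostname check on

def send_over_untrusted_path(host: str, payload: bytes) -> None:
    # Anyone between us and `host` (exit node, wifi snoop) sees only
    # ciphertext; a MITM presenting a bogus cert aborts the handshake.
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(payload)

# Example (network call, commented out):
# send_over_untrusted_path("example.com", b"GET / HTTP/1.0\r\n\r\n")
```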

Jamie Flournoy December 6, 2007 8:23 PM


“It doesn’t matter if the exit node did a MITM attack in this case or not.”

I think it’s fairly interesting. In this configuration there’s too much anonymity for the secure conversation that someone might expect.

If you think you’re totally anonymous and at the same time talking to someone in particular, there’s a conceptual contradiction there. If you haven’t authenticated them, then accepting their assurance that your conversation is secure (via their self-signed SSL cert) is foolish.

Any fan of spy films knows that when two anonymous agents meet and want to have a private conversation, a pre-shared secret is used for authentication first. Otherwise you could be telling the bad guys all your secrets!

tathyract December 6, 2007 11:49 PM

I’ve been using Tor for most of the less-than-two years I’ve had a home computer, but I have a lot to learn. At least I’ve learned something about how important network configuration and browser preference settings are; e.g., a browser checking site proved to me that leaving Java enabled can leave one’s unique IP address exposed.

I’m not yet sure how to find out what traffic a given exit node accepts and rejects. The data at the linked site doesn’t look totally inscrutable, so maybe even Tor beginners should learn how to find and look over exit node configurations.

My thinking is that some exit nodes are much more likely than others to be run by honest operators. (I’ll grant that there are no absolute assurances.) An exit node at the computer science department of a prestigious university, for example, seems less likely to be compromised than some. I’m thinking that I might try switching exit nodes until I find one that I trust before logging in somewhere with a username and password that I would not want to share, when https log-in is not available. That could be a time-consuming process, though, so it will often be tempting to roll the dice.

I don’t even know whether cookies used for logging in can be dissected by an unscrupulous exit node operator in order to obtain passwords. I’ve seen my username in an ID cookie from one site.

Tor developers are always coming up with innovations in the arms race, and they never play down a vulnerability or inadequacy in Tor. Their efforts keep them very busy. I’d like to think that some day there will be a thorough primer for Tor users, suited especially for beginners. Easy for me to say; I haven’t learned enough to write one.

anonymous December 7, 2007 12:24 AM

H0w L0nG CaN YoU ImAgInE
A Pr0gRaM lIkE ToR ReMaInInG
FrOm WiThIn Or WiThOuT?

ThE sAmE FoR AnY OrGaNiZaTiOn,
GrOuP, MoVeMeNt, CoMpAnY,
EnTiTy SuPpOrTiNg PrIvAcY,
FrEeDoM, LiBeRtY?

Tarkeel December 7, 2007 3:31 AM

I’m more curious why we haven’t seen any reports of this being done with open wifi APs. I’m sure someone is doing it.

SteveJ December 7, 2007 4:11 AM

“If you think you’re totally anonymous and at the same time talking to someone in particular, there’s a conceptual contradiction there.”

No, there’s no contradiction. If you think you’re totally anonymous and I think I know who I’m talking to, then there’s a contradiction. But if I don’t mind you knowing who I am, and I don’t care who I’m talking to, then I can prove my identity to you without needing to know who you are. This is exactly what CA-based PKI does[*], which is why our “security instincts” about SSL need to be different according to whether the server certificate is CA-signed or self-signed.

[*] Assuming that you can trust the CA not to make a pig’s ear of doing their job, that is.
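SteveJ’s asymmetry is visible in the stdlib defaults: a minimal sketch with Python’s ssl module, where the client context authenticates the server while a plain server context asks nothing about the client.

```python
import ssl

# One-sided authentication, as in ordinary HTTPS.
client = ssl.create_default_context()  # Purpose.SERVER_AUTH by default
server = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)

assert client.verify_mode == ssl.CERT_REQUIRED  # client must verify the server
assert server.verify_mode == ssl.CERT_NONE      # server ignores who the client is
# The server proves its identity; the client can remain anonymous.
```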

Cereal Killer December 7, 2007 9:56 AM

@Nobody in particular…

“The point is that such attacks are possible on the TOR network and so it cannot be relied upon for anonymity.”

Ironically, even a full-blown compromise of the SSL portion of an SSL connection made through Tor wouldn’t necessarily break your anonymity. Or rather… if it did, your anonymity would already be broken, because you’d be transmitting identifying data over the Tor network. Even if an evil Tor node hadn’t learned who you are, the person holding the “real” SSL certificate would.

Privacy and anonymity are two different things. Privacy is your doctor not publishing your medical records on his MySpace page. He knows who you are, but keeps the information to himself. Anonymity is telling everyone you broke your leg from behind a curtain. Your medical condition is no longer private, however since it’s not specific enough to ID you you’re still anonymous.

The lines blur at the edges of course, but in general Tor is an anonymity tool and SSL is a privacy tool. Tor can help keep information private as a side effect of also encrypting data before it leaves your immediate control, but at its core Tor’s objective is to disassociate your identity from the data emerging at the other end of the Tor network. That data can be anything, including your bank account numbers.

This latest “evil Tor node” episode isn’t remarkable. MITM attacks against SSL are probably about 11 seconds younger than SSL itself, and it’s certainly not the first time a Tor node has been caught doing it (assuming evil intent here, I realize the operator is trying to explain). In fact previous incidents like this have produced at least one project I’m aware of which revolves around scanning the Tor network for just this type of attack…

Approximately 100% of all SSL MITM attacks are trivially detectable if the user’s software isn’t incompetent and the users themselves aren’t asleep at the keyboard. In the end, I think that what we’re really looking at here, when observing the effects of the somewhat easier perch Tor might give an attacker, is both a testimony to the robustness of SSL and a real-life warning to those users who would sacrifice security for usability by disabling or ignoring common-sense security settings and warnings. 😉

anonymous (sent by Jim Sarkowski) December 7, 2007 11:26 AM

Don’t tell anybody this, but I don’t understand anything you all are talking about. I want to keep it private.

Mark December 7, 2007 12:32 PM

TOR doesn’t generate self-signed SSL certs on the fly – whatever he says, this guy is actively attacking SSL sessions running through his node. He’s using something like ettercap. From the ettercap man page:

“SSL MITM ATTACK – While performing the SSL mitm attack, ettercap substitutes the real ssl certificate with its own. The fake certificate is created on the fly and all the fields are filled according to the real cert presented by the server. Only the issuer is modified and signed with the private key contained in the ‘etter.ssl.crt’ file.”

Perfectly detectable to anyone who actually looks at the server cert before accepting the connection.
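Since the forgery Mark quotes copies every field except the issuer, inspecting the issuer is one way to catch it. A sketch over the dict shape that Python’s ssl.SSLSocket.getpeercert() returns (the CA names below are made up):

```python
# getpeercert() represents the issuer as a tuple of RDNs, each a tuple
# of (field, value) pairs. An ettercap-style forgery clones the subject
# but necessarily carries its own issuer, so a pinned-issuer check
# flags the substitution.
def issuer_cn(cert: dict) -> str:
    for rdn in cert.get("issuer", ()):
        for field, value in rdn:
            if field == "commonName":
                return value
    return ""

genuine = {"issuer": ((("commonName", "Example Trusted CA"),),)}
forged = {"issuer": ((("commonName", "ettercap"),),)}

assert issuer_cn(genuine) == "Example Trusted CA"
assert issuer_cn(forged) == "ettercap"   # the tell-tale field
```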

UNTER December 7, 2007 3:48 PM

Isn’t this an example of why, in practice, the entire CA idea sucks the big one? In practice, I don’t really care about the “authority” – the “authority” tells me nothing more than that it is less likely an MITM attack is going on.

For that, you don’t need a CA – you just need to identify the certificate through a channel separate from the one over which you’re accepting it. A distributed system works just as well (if those connections are encrypted and you have the public keys for the distributed node).

But the CA method does create a social attack. Since CA certificates cost money and take (some) time to set up, a significant portion of SSL certificates are self-signed. That results in training people to click “accept”, particularly if they know that the certificate is self-signed, even though that is an MITM risk.

Certificate authorities are just money making schemes. SSL should be redesigned to be CA-free.

antibozo December 7, 2007 5:04 PM

UNTER> Certificate authorities are just money making schemes. SSL should be redesigned to be CA-free.

Hear, hear!

DNSSEC-based PKI solves this problem and permits unlimited generation of certificates with a properly constrained namespace (unlike X.509), and enables global opportunistic encryption using IPsec. Of course, certain companies who happen to control major DNS TLDs whilst making boatloads off signing certs wouldn’t have any financial motivation to delay DNSSEC and IPsec implementation as long as possible, would they?

nwf December 7, 2007 7:10 PM


DNSSEC has its own set of problems, notably including re-enabling AXFR transfers (via TSIG chaining) and reliance on . and tld. keys to remain uncompromised… which is, TA DA!, equivalent to the CA situation. So no, a “DNSSEC-based PKI” will not solve this problem. Please see

If you really object to CAs, you need a method for collecting and verifying fingerprints. Something like the PGP PKI may be of use to you. First start encouraging businesses to put their TLS fingerprints on business cards next to their websites. 🙂
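Comparing a fingerprint collected out of band (nwf’s business card) against what the wire presents can be sketched as follows; the pinned value and hostname are hypothetical.

```python
import hashlib
import ssl

def sha256_fingerprint(pem_cert: str) -> str:
    """Colon-separated SHA-256 fingerprint of a PEM certificate."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)   # strip armor, base64-decode
    digest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

# In use (network call, commented out): fetch the live cert and compare
# it with the fingerprint you got out of band.
# live_pem = ssl.get_server_certificate(("example.com", 443))
# assert sha256_fingerprint(live_pem) == FINGERPRINT_FROM_BUSINESS_CARD
```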

antibozo December 7, 2007 11:03 PM

nwf, if you’re referring to NSEC/NXT-based zone enumeration (TSIG is a shared-secret-based MAC for DNS transactions), that problem (if it really is one) is solved in NSEC3. The DJB article you quoted, furthermore, is about five years out of date.

Yes, the trust anchors have to remain secure, which is the case with any PKI implementation. DNSSEC is vastly superior to X.509 in that regard, since there are far fewer important trust anchors to protect (. and TLD anchors vs the private keys for all of the ~100 CA certs in the typical default trust database).

And protecting trust anchors is only one of the many problems with CAs. The biggest one in my view, as I noted before, is the unbounded authority of any trust anchor in X.509: any of the ~100 typical X.509 trust anchors (or the unknown number of chained CA certs) can be used to sign a certificate for any domain name, whereas with DNSSEC, the zone signing key is only good for subordinates of that zone.

This problem is exacerbated in X.509 when people stand up enterprise PKIs (again because it’s too expensive to buy certs for your enterprise infrastructure) and import the company’s CA cert into the users’ trust databases. Once that’s done, the company PKI operators can sign certificates for any domain just as easily as Verisign. This is bad design.

We use TLS to make sure the domain name we reached is the one we typed in the location bar. DNS is the namespace that is being verified–why do we use a PKI that requires signers to validate identities through external mechanisms–email, letterhead, etc.–when instead we could validate it using the namespace that is actually being secured? (That goes for X.509, PGP, etc.)

David Martin December 8, 2007 8:41 AM

The TOR node operator LateNightZ (on the web site linked above) explains that he (she?) was also running an FTP server on the same LAN as his TOR node at home. He complains that his web browser generates various certificate warnings, and appears to attribute that to his not wanting to pay money for new certs – of course that doesn’t make sense; instead it indicates that his browser is suffering from the same MITM attack. One day he noticed his virus detector freaking out about password-protected RAR files – this could be an intruder moving stuff around on his network, and may be related to the MITM stuff appearing on his network.

His explanation leaves the impression that he’s not sure exactly what happened and specifically “I didn’t do it”. Who knows whether that’s the truth or whether the story was invented to leave that impression…

Curt Sampson December 14, 2007 7:37 PM

While I find this interesting news, it doesn’t appear to me to change the security situation much.

In this case, if he were modeling the situation where he had a certificate from a party that he trusted, he would have installed that into the trusted list in his browser, and when the MITM attack occurred, he would have received an alert that it was occurring.

If he were modelling the situation where the partner was unknown, and he was seeing a certificate for the first time and it was not signed by anybody in his trusted list, well, you know that some part of your connection is encrypted; that’s all. In neither the Tor nor the non-Tor case does it mean that someone in the middle is not sniffing your packets. Generally, it’s not that big a problem, because you can make a decision at that point about what sort of data you want to hand over to any random party, based on what the effect of it being compromised would be.
