Schneier on Security
A blog covering security and security technology.
March 25, 2011
Identifying Tor Users Through Insecure Applications
Interesting research: "One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users":
Abstract: Tor is a popular low-latency anonymity network. However, Tor does not protect against the exploitation of an insecure application to reveal the IP address of, or trace, a TCP stream. In addition, because of the linkability of Tor streams sent together over a single circuit, tracing one stream sent over a circuit traces them all. Surprisingly, it is unknown whether this linkability allows in practice to trace a significant number of streams originating from secure (i.e., proxied) applications. In this paper, we show that linkability allows us to trace 193% of additional streams, including 27% of HTTP streams possibly originating from ``secure'' browsers. In particular, we traced 9% of Tor streams carried by our instrumented exit nodes. Using BitTorrent as the insecure application, we design two attacks tracing BitTorrent users on Tor. We run these attacks in the wild for 23 days and reveal 10,000 IP addresses of Tor users. Using these IP addresses, we then profile not only the BitTorrent downloads but also the websites visited per country of origin of Tor users. We show that BitTorrent users on Tor are over-represented in some countries as compared to BitTorrent users outside of Tor. By analyzing the type of content downloaded, we then explain the observed behaviors by the higher concentration of pornographic content downloaded at the scale of a country. Finally, we present results suggesting the existence of an underground BitTorrent ecosystem on Tor.
Posted on March 25, 2011 at 6:38 AM
Studies have shown that 193% of all percentages are inaccurate.
To reliably prevent this, you need two virtual machines. The first VM will be for applications; it has an internal-only network interface to the second virtual machine, and no other network interfaces. The second virtual machine is a Tor relay, and has connections to the outside world and the application VM. The relay VM is configured to insert some extra hops which the Application VM can't get rid of. The application VM's only network interface is to the relay VM, and it can't use that interface to do anything that would break your anonymity, so it remains secure even if the application VM gets rooted. (Provided of course that the virtual machine is itself secure, i.e. no security holes in the virtual device drivers, that the relay VM is secure against attacks on the application VM's network interface, and no identifying information is ever put into the application VM).
Unfortunately, creating this setup is rather onerous. The Tor project should create and publish a pair of VirtualBox configuration files and hard disk images to make it easy.
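The two-VM layout described above can be wired up mechanically. Below is a minimal sketch (in Python only to stay self-contained) that emits the VBoxManage calls for such a pair; the VM names ("tor-gateway", "app-vm") and the internal network label are invented for illustration:

```python
# Sketch of the two-VM "Tor gateway" layout described above, expressed as
# the VBoxManage command lines that would wire it up. VM names and the
# internal network name are hypothetical; the --nic/--intnet options are
# standard VBoxManage modifyvm flags.

def gateway_setup_commands(gateway="tor-gateway", app="app-vm",
                           intnet="torlan"):
    """Return VBoxManage command lines for the isolation layout:
    the gateway VM gets NAT (outside world) plus the internal net,
    the application VM gets ONLY the internal net."""
    return [
        # Gateway: NIC1 reaches the Internet, NIC2 is the internal-only link.
        ["VBoxManage", "modifyvm", gateway, "--nic1", "nat"],
        ["VBoxManage", "modifyvm", gateway, "--nic2", "intnet",
         "--intnet2", intnet],
        # Application VM: its single NIC is the internal network, so even a
        # rooted application cannot learn the real IP address.
        ["VBoxManage", "modifyvm", app, "--nic1", "intnet",
         "--intnet1", intnet],
        ["VBoxManage", "modifyvm", app, "--nic2", "none"],
    ]

if __name__ == "__main__":
    for cmd in gateway_setup_commands():
        print(" ".join(cmd))
```

Run the printed commands by hand (or via subprocess) against existing VMs; the gateway VM would then run Tor and route the internal network's traffic through it.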
If the abstract is true, then creating more hops doesn't matter. As long as the stream is traceable they will always be able to find your public IP address. No amount of NATing will save you there.
The idea sounds like it creates a reverse TCP stream from the Application client to a known host server and that host server sees where the TCP stream came from.
That last part is the interesting part of the "paper". They ran an exploit, in the wild, and profiled results to help the case of Tor? Pretty ballsy to publish that, especially with Anon in full swing.
From the paper: "We show that the bad apple attack allows us to trace 193% of additional streams as compared to BitTorrent streams, including 27% of HTTP streams. In total, we traced 9% of all Tor streams carried by our instrumented exit nodes."
Meaning there were a lot more streams traced (193% more) than BT streams alone.
@ D M & Paeniteo,
Perhaps, but if they had just said there were two times more streams than BT streams, or that BT streams made up 34% of the total number of streams, that would have been a lot clearer?
The first thing you see when you try to download Tor:
"Want Tor to really work?
...then please don't just install it and go on. You need to change some of your habits, and reconfigure your software! Tor by itself is NOT all you need to maintain your anonymity. Read the full list of warnings."
BitTorrent over TOR? Wow, some people have a *LOT* of patience.
@D M: Yes... However, that paragraph (and its 3-4 virtually copy-pasted cousins) did not really eliminate my confusion. At least one re-wording would have been nice (if not for clarity, then for reading pleasure).
Anyway... Last time I looked, the tor documentation was pretty clear that you shouldn't use filesharing over the tor network (because of both bandwidth and privacy issues).
According to the paper, one attack only works when a client uses Tor for the tracker communication, but not for the actual upload/download of content. They discovered that 72% of Tor BitTorrent users are configured this way.
The first stage of the attack is quite simple: they monitor the tracker connection at the Tor exit node. When they see a connection to a tracker, they can tamper with the response and insert one of their own BitTorrent peers into it. Once the Tor user connects to this peer (outside of Tor) to download content, they know the IP address of the user. The user can be identified through the client listening port in BitTorrent.
The second attack is a statistical attack on the distributed tracker (DHT), and could be avoided by disabling DHT in the BitTorrent client.
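To make the first attack concrete: trackers can return peers in BitTorrent's "compact" format (BEP 23), where each peer is just 6 bytes, a 4-byte IPv4 address plus a 2-byte big-endian port. A malicious exit node only has to splice one more 6-byte record into that list. A rough sketch (all addresses are documentation-range placeholders):

```python
# Illustration of why tampering with a tracker response de-anonymizes the
# client: the "compact" peer list (BEP 23) is trivially editable bytes.
# The IP addresses below are invented placeholders.

import struct
import socket

def encode_peer(ip: str, port: int) -> bytes:
    """Pack one peer in BitTorrent 'compact' format: 4 IP bytes + 2 port bytes."""
    return socket.inet_aton(ip) + struct.pack(">H", port)

def decode_peers(blob: bytes):
    """Unpack a compact peer list back into (ip, port) tuples."""
    return [(socket.inet_ntoa(blob[i:i + 4]),
             struct.unpack(">H", blob[i + 4:i + 6])[0])
            for i in range(0, len(blob), 6)]

def inject_peer(peer_blob: bytes, ip: str, port: int) -> bytes:
    """What the malicious exit node does: append its own peer record."""
    return peer_blob + encode_peer(ip, port)

# Honest tracker response with one peer; the attacker splices in a second.
honest = encode_peer("198.51.100.7", 6881)
tampered = inject_peer(honest, "203.0.113.9", 51413)
print(decode_peers(tampered))
# The client then connects to the injected peer directly (outside Tor),
# handing the attacker its real IP address.
```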
In other words, it seems like Quirinus' setup would protect against this attack, since the actual upload/download of content would also be through the Tor network. (Assuming that DHT is disabled.) If a lot of users do this, it would severely degrade the performance of the Tor network, though. That is not a very nice thing to do.
Disclaimer: I haven't read the paper yet, but it sounds similar enough to things that are known to work in theory.
You shouldn't even need two virtual machines, just one. The non-virtual, outer, "real" machine can be the relay. Some people using Tor are already doing this.
"The first thing you see when you try to download Tor:"
Right; Tor just gives you the tools, but doesn't force you to use them or teach you good privacy habits.
"BitTorrent over TOR? Wow, some people have a *LOT* of patience."
Look up Onioncat: IPv6 over "double-Tor", and I've heard some people even use that for BitTorrent. :-)
("double-Tor", since it runs on hidden services, which are basically like two normal Tor tunnels stitched together)
Interesting findings. What if you launch a virtual desktop (like at http://DesktopRoar.com), connect to it using RDP, and use Tor from there on? If Tor is traceable, others will trace the IP of your virtual computer (which can change per session in hours/days), but not your true, real IP. The application layer makes things complicated.
Using Bittorrent over TOR is not uncommon for persons trying to circumvent a firewall (for instance one used at a University to block p2p filesharing applications). Sure it's slow but it's better than nothing in those cases.
It needs to be said that TOR should come with a big health warning.
The reason being that it is a 'low latency', 'demand oriented' network.
These two failings considerably reduce its security value for a number of reasons.
As such the best TOR offers in reality is a limited degree of privacy on 'data in transit' within the TOR network.
For instance it is quite susceptible to active traffic flow attacks, one of which is to add "jitter" to packets prior to them entering the TOR network.
That is, an attacker who can control or influence a node between the user and their point of entry to the TOR network can from that point on track the packet stream at other places they have access to, be it in the TOR internal path or after the packet has left the TOR network for the service the user is connecting to.
Sadly for many users the TOR network is actually not very well distributed; due to the nature of the Internet, the TOR network actually has just a few major nodes where trans-national or trans-organisational choke points exist.
To insert network jitter onto either user or service traffic, often all it requires is the ability to inject packets into the network node from a higher bandwidth connection (i.e. a limited DoS attack). This jitter remains on the traffic in many cases until it reaches its destination. It can be seen by using a simple autocorrelation function on the edge timing of packets, which "lifts from the noise" the jitter signal, thus fingering the packet stream.
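A toy version of that autocorrelation test can be sketched in a few lines. Here the "jitter signal" is an invented extra 2 ms delay on every 8th packet, and all the other numbers are made up purely for illustration:

```python
# Toy illustration of the timing attack described above: an attacker adds
# a periodic delay to every Nth packet before it enters the network, then
# recovers that period from inter-arrival times observed elsewhere using
# an autocorrelation. All parameters are invented for the demo.

import numpy as np

rng = np.random.default_rng(42)
PERIOD = 8  # the attacker delays every 8th packet

# Normal inter-packet gaps: ~10 ms with random network noise.
gaps = 10.0 + rng.normal(0.0, 0.3, size=4000)
# Injected signature: an extra 2 ms on every PERIOD-th packet.
gaps[::PERIOD] += 2.0

# Autocorrelation of the mean-removed gap series, normalized to lag 0.
x = gaps - gaps.mean()
ac = np.correlate(x, x, mode="full")[len(x) - 1:]
ac /= ac[0]

# The strongest nonzero-lag peak falls at (a multiple of) the period,
# "lifting the jitter signal from the noise" and fingering the stream.
lag = 1 + np.argmax(ac[1:50])
print("recovered period:", lag)
```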
However, TOR does not always provide even privacy. If I can see or identify a user's traffic at a service, often it is unencrypted and thus visible to simple monitoring. Even if it is encrypted into the service, if the service is public and the data is static then often it is possible to identify the data sent simply by file size and rate monitoring.
Whilst these faults can be corrected (rate limiting, adding random jitter between nodes, stream packet stuffing, out of order transmission and data stuffing) it appears that the TOR developers are not keen to do these things.
Until such time as they become available I could not say I would trust TOR to give anything other than the illusion of privacy for the average user.
Back in January 2011 there was a commentary posted on SecurityForumX.com about :
TOR anonymity network oWn3d by spammers, web site attackers, fake traffic riggers and click fraud artist
Wherein it stated :
Quote begins :
In the last month I noticed a strange upswing in attack and spam traffic that always seemed to trace back to IP's that were participating in TOR
So it turns out that some crafty devils have found a way to subvert the 'anonymity' of the TOR network in order to launch attacks against web sites....... or to post spam on web sites... or just about any other abusive behavior you can imagine.
Fortunately, TOR is not as anonymous as it thinks it is and it is possible to block traffic originating from an IP that is being used by TOR users.
End of quote :
I guess this study is further proof that the folks using and abusing Tor are not as anonymous as they imagine.
Maybe just a little OT, but on Tor or similar, if I am a spook and I volunteer my setup for a Tor node, how much am I going to learn?
"Whilst these faults can be corrected ... it appears that the TOR developers are not keen to do these things."
Tor (not TOR, *cough*) has been pretty well known in the past for being slow, anything that adds latency or uses more bandwidth is not going to be too popular with the main developers.
Their main goal seems to be to make things reasonably anonymous for ordinary humans and easy to use, not strong guarantees for the most security-aware .0001% of the population.
(Compare with the runaway success of the significantly more paranoid Freenet approach - I'd say that the fact that Tor is being used poorly for bittorrent is a sign of its success in reaching ordinary non-crypto-geek users.)
Of course this involves trade-offs. It seems to me that deciding exactly what trade-offs to make would be the hard problem here; not designing the perfect system on paper, but getting people to actually use it.
Thor (quoting someone else):
"Fortunately, TOR is not as anonymous as it thinks it is and it is possible to block traffic originating from an IP that is being used by TOR users."
Sure, you can block exit nodes (they're all public), but that's not the same thing as breaking anonymity for Tor's users.
I don't trust Freenet, because it looks like basically a one-man effort. It hasn't gotten nearly the exposure and study that Tor has. Plus, finding anything on it is extremely hard unless you already know exactly where it is.
H.D. Moore (author of the Metasploit Framework) was the first person that identified and presented this fact (originating IP addresses can be uncovered regardless of proxy settings). I find it interesting how the authors of this paper make no reference to H.D. Moore's work.
Maybe. Didn't Ian publish the Freenet protocol, and all the improvements, openly in academia long ago? The Freenet system seems really good. It has fewer problems than Tor at the design level, and the paranoid nature of its design makes it more trustworthy to me.
I keep thinking of applying formal verification techniques to the protocol and design to see if they missed something. Preferably, some academics would do that, then others and I would make a high assurance implementation. That it depends on the Java platform turns me off a bit. So long as it parses input correctly and handles the common attacks, this shouldn't be a problem. I also have deployment methods that address all shortcomings, so my preference for a high assurance design is for others' benefit primarily.
Now, a protocol that could use a bunch of extra scrutiny and implementation oversight is I2P. I2P usage has exploded over the past few years in spite of the developer's warnings that it hasn't been peer reviewed adequately. That they keep repeating that earned them my respect. I'd like to see some rigorous protocol analysis applied to Freenet and I2P because they could be the best option for things like anonymous file sharing, email, and not-quite-Instant Relay Chat.
On the new attack
The BitTorrent angle of the attack is nothing new or even newsworthy. I mean, aside from profiling Tor users. That applications which leak identifying data can give eavesdroppers identifying information is nothing new, and could even be called common sense for the technically apt. Additionally, I've often warned against using Tor only for the trackers, because the RIAA and others have shown that many peers are malevolent.
The best approach for anonymous, fast BitTorrent costs a little money. It's using a seedbox: a dedicated computer that does nothing but BitTorrent and has a point-to-point file transfer protocol (e.g. FTP) to load/unload torrents and files. Seedboxes were originally used to spread torrents faster, but many use them to hide their IP & prevent BitTorrent from saturating their own network.
You download the files with the seedbox, then you download from seedbox to your PC using FTP over a WiFi connection not registered to you. Common options for WiFi include open WiFi, hacked "secure" WiFi, and (my favorite) paying for someone's meal to get the store (e.g. Starbucks) password, then connecting with a cantenna. People trying to trace the user really scratch their heads wondering who the hell is in there surfing. ;)
I forgot to give INRIA props for constantly doing useful research work. With my high assurance focus, I continuously check on INRIA projects because their lab (esp Xavier Leroy's team) keep producing awesome, useful stuff. Here are a few highlights:
1. OCaml language (C#, F# & Scala copied it a lot)
2. Coq theorem prover
3. CompCert verified C compiler
4. Mini-ML verified compiler in progress
5. ASTREE static analyser
6. Why, Frama-C and Krakatoa Java
7. RSA-OAEP security proof; broke SFLASH
8. eSTREAM cipher finalist Sosemanuk
9. Polychrony toolset for dataflow design/verification
It's just been one good, practical result after another with the INRIA people. I mean, they have their theoretical stuff too, but I end up both wanting to and being able to use so many things they produce. I wish we had more research centers like them state-side working on verified software development with similar results & usability.
Gosh, how really insecure and vulnerable are we on the net?! I can envision the future (and actually the present) when hordes of marketing and research companies analyze BitTorrent in their business practice as one of many tools for finding trends and producing designs for new product development. Sadly, our IP is the least private thing we own.
"... pretty well known in the past for being slow, anything that adds latency or uses more bandwidth is not going to be too popular with the main developers"
And this is the crux of the issue.
As I have said many times before regarding "security-v-efficiency": as a general rule, every time you make something more efficient, unless you take special measures you make it less secure.
It is clear that the Tor developers are not taking the special measures required.
One of the things you hear about TEMPEST/EmSec is "clock the inputs and clock the outputs".
This helps remove a lot of the "timing side channels" that can be used to exploit Tor and strip off even its meager privacy.
Taking it further is "channel saturation": if a "point to point" communications channel is always at full capacity it provides little information to a potential eavesdropper. There are a number of ways this can be achieved; the first major way is by "rate limiting", the second is by "traffic stuffing", or a combination of both.
When you have a very variable rate of traffic flow this is a "bandwidth to signal" issue which can on some networks be costly. There are two basic solutions to this. The first is to use "store and forward", whereby traffic is stored in a node and then sent in one "burst" of fixed size/duration. The second solution is multiple sub-channels of fixed bandwidth along any given signal path.
In either case you have to be very careful how you arrange the expansion and contraction of used bandwidth, otherwise you open up secondary side channels that can be exploited, especially where you have no control over the amount of traffic entering a node (such as on a public access system like Tor).
Another solution to the rate limiting issue is "traffic diversion". If a node has several "point to point" links it can put excess traffic from one link out onto another link and allow it to be delivered via a "diversion". If the basic network is configured as a mesh to allow this, then potentially packets can not only arrive at their destination very out of order, they can be very late as well. Such systems also allow a high-end adversary to force traffic diversion through favoured nodes to their advantage, so it is a very, very complex issue to manage, over and above the difficulties of ordinary routing issues.
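The fixed-size store-and-forward bursts mentioned a couple of paragraphs back amount to very little code. A minimal sketch (the 512-byte cell size is an arbitrary choice for illustration):

```python
# Minimal sketch of "store and forward in fixed-size bursts": queue the
# traffic and flush it in constant-size cells, padding with filler when
# real data runs short, so an eavesdropper sees only uniform bursts.
# The 512-byte cell size is an arbitrary illustrative choice.

CELL = 512

def to_cells(payload: bytes, cell=CELL, filler=b"\x00"):
    """Split payload into fixed-size cells; pad the last cell (and emit an
    all-filler cell for an empty payload) so every burst is identical in
    size on the wire."""
    cells = []
    for i in range(0, max(len(payload), 1), cell):
        chunk = payload[i:i + cell]
        cells.append(chunk + filler * (cell - len(chunk)))
    return cells

bursts = to_cells(b"hello" * 200)    # 1000 bytes of real traffic
print([len(c) for c in bursts])      # every cell is exactly CELL bytes
```

A real design would also send all-filler cells when there is no traffic at all, which is the "traffic stuffing" half of the channel-saturation idea.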
Most of the issues with Tor, as you note, boil down to "latency" issues of "interactive traffic"; the desire to minimise this is the real point where the security of the network breaks down.
Put simply, the lower the latency, the higher the bandwidth of the side channels by which the traffic flow can be exploited. This also means the faster an attacker can "nail you cold".
It is because of this "design choice" that you will not find me using Tor, nor could I possibly recommend it, because it also "attracts attention" just by its use and the effective trade-off is not in a user's favour at all.
Which brings me around to your comment,
"Their main goal seems to be to make things reasonably anonymous for ordinary humans and easy to use, not strong guarantees for the most security-aware .0001% of the population"
Anonymity is, as the old joke has it, "like virginity": it is something you have or have lost forever, and all it takes is one p**k to lose it (as is the point made by this paper).
Anything less than 100% anonymity is just a variation of "security by obscurity" and as we are finding with modern methods it is not possible to make anything of usefulness anonymous against a well resourced attacker.
We have seen with US, UK, European, et al legislation that all traffic flow information is to be kept effectively indefinitely. Thus the statistical time channel methods can be applied at any time in the future against "single channel" "low latency" traffic. And as Bruce often points out, "Attacks only improve with time". Therefore the anonymity that Tor currently offers is dead and buried; get over it and move on to other more diverse security methods.
So a warning for those using Tor for activities that are either "illegal" or seen as "reprehensible" in your jurisdiction: you have already "committed the crime" and you have left your "DNA at the crime scene"; it is now only a question of time and priorities before you get a "midnight knock" on the door...
If you must carry out these activities go and use some network you are not associated with and then only once...
That is, the future for "anonymity" lies initially in "random unattributable access", and then not in "low latency" networks but in multiple-node "store and forward" networks with "decoupled entities" responsible for moving traffic from store to multiple stores. However, this raises a whole host of data management issues that will prove, as in the ancient Chinese curse, "interesting".
Which brings me around to your,
"Of course this involves trade-offs. It seems to me that deciding exactly what trade-offs to make would be the hard problem here; not designing the perfect system on paper, but getting people to actually use it."
The most important trade-off is "low latency": it is extraordinarily difficult to achieve whilst retaining even some semblance of "anonymity". Tor as it is currently configured really is not capable of delivering both low latency and anonymity.
The changes that need to be made are, firstly, to have multiple routes along which data can be fragmented and fired "shotgun" like to achieve a wide spread across the whole network. This needs to be coupled with store-and-forward techniques to break down the ability to cross-correlate packets to individual user or service data streams. Data also needs to be duplicated and sent to different nodes, where most of it will silently die to /dev/null. Node-to-node routes need to fully occupy rate-limited channels, using traffic diversion as required, or opening other channels to other nodes as required to offload traffic. A minimum of 20% of the traffic originating from a node in any one channel needs to be fake, hopping on to two or three other nodes before dropping to /dev/null. All channels that are opened need to be kept open for a traffic period exceeding twice the maximum traffic size (that is, if the largest message sent down the channel is 1MB, the channel should be kept open for at least another 1MB of null traffic before closing).
But the most important change users should make is to get used to using "store and forward" systems.
Back in the early days of the Internet it was quite common to use the likes of Email to get files from FTP sites via an intermediary agent. The process could usefully be recreated in a much more secure form for the likes of P2P networks and BitTorrent.
Put simply, a user would,
1, Use one of a number of directory services to find out which FTP sites held the file they wished to get.
2, The user would then send a request Email to one of several intermediary agent services, with the chosen FTP site and path to the file.
3, The agent service would then get the file from the user's chosen FTP site and wrap it up as an ASCII armored message.
4, The agent service would then send the message in smallish chunks to the user's Email account.
5, The user would log into their Email account some hours or days later to retrieve all the chunks.
6, The user would then use a script to rebuild the chunks back into the original file.
It can be fairly easily seen how a modified version of this, using multiple "blind servers" and "blind agents" through multiple nodes, could render any useful time-based traffic analysis moot.
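Step 6, the reassembly script, really is trivial. A sketch assuming the agent's chunks arrive as base64 text saved locally to numbered files (the part-*.txt naming and output filename are hypothetical):

```python
# Step 6 above: rebuild the chunks back into the original file. This
# assumes the ASCII armor is plain base64 and the chunks have been saved
# as part-000.txt, part-001.txt, ... (names invented for illustration).

import base64
import glob

def rebuild(pattern="part-*.txt", out="download.bin"):
    """Concatenate the base64 chunks in filename order and decode them
    back into the original binary file."""
    armored = b"".join(open(name, "rb").read()
                       for name in sorted(glob.glob(pattern)))
    with open(out, "wb") as f:
        f.write(base64.b64decode(armored))
```

A real agent would likely use a richer armor format (uuencode, MIME) and include chunk counts and checksums, but the store-and-forward principle is the same.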
Java is much easier to verify formally than C. (Pure functional languages are even easier, though.) I think someone used Isabelle to verify a Java implementation for smart cards or something, so that would make a good jumping-off point.
I don't see why Java would be a weak point. What makes you uneasy about it?
There's excellent work in Java verification. Kestrel's Java Card work, the Krakatoa tool, Jinja (a verified Java-like language, garbage collector, and runtime), NASA's Java tools, and Aonix's safety-critical Java platform show it can be done. The question is, "Would you want to?"
Well, we're not developing an arbitrary system. We are developing a piece of software designed to prevent various information leaks. This is a textbook example of what the Orange Book's A1 class was designed to do. Covert channels, like the ones Clive's been ranting about, are a big issue here. Predictability and TCB size/complexity are also important.
Java platform does so much behind the scenes: memory management & garbage collection; native code generation via JIT; interpretation; moving data between native & java code; interacting with the OS. Any of these things could produce exploitable covert channels in something like Tor. There's also the dependencies on the java runtime, libraries, etc. (These have had vulnerabilities before & I heard some bugs are still there.) Additionally, you must trust the JIT to generate code that maintains the security properties of your Java code. I don't trust this: it was caching behavior of the code that caused the side channels that broke AES and RSA at one time.
All of this crazy stuff is avoidable in an EAL6-type structured, layered, minimal design split into subsystems with carefully defined functionality and a formally modeled interface and set of states.
So, what to use to implement the system? Well, the best choice appears to be a mix of Ada and SPARK using the GNAT IDE. Praxis' EAL5 demonstrator (Tokeneer) already shows those tools can do the job pretty easily. Verisoft and some other projects showed how to do something similar with C/C++ subsets. If I allowed a significant run-time, then I'd rather use OCaml because it has features conducive to writing extremely reliable & predictable apps (it's already been used in a DO-178B project). The Coq theorem prover, used extensively by INRIA, can also produce OCaml code from specs. Automated design, verification and generation tools like SCADE Suite and Perfect Developer may warrant attention too.
So, I'd rather just toss out the Java platform. Dedicated Tor appliances have the highest potential for secure design. Ideally, it would run a microkernel-based security kernel platform (Turaya Security Kernel comes to mind) and the design would decompose the app into pieces in their own process spaces, interacting in a carefully controlled way. POLA can be easily enforced. The transport layer might, via OK Linux, reuse the Linux device drivers and networking stack. All Tor crypto and routing code would be outside of the VM with the transport stack. Covert channel suppression is also easier with decomposed, native apps on a microkernel.
I just don't see how we can trust the security of a monolithic application on an opaque, complex language platform on a complex OS with tons of shit in kernel space. That's just breaking way too many rules of high assurance design. The attackers in the threat model are sophisticated, well-funded and have lots of time. "High robustness" is the only rating that can beat these people and that means the entire system must be designed to EAL6-7 standards to ensure its anonymity won't be broken. Low assurance systems just don't cut it here.
Your "clock the inputs and clock the outputs" seems like a good start. I see covert timing channels as the most insidious in a system that tries to ensure anonymity, because timing is everything. Anyone wanting to know about covert channel identification should look up Kemmerer's work and any research papers from the Orange Book days, when they were building B3 and A1 class systems.
I found one recent work you might like, Clive. It's "Securing information flow at runtime" by Paritosh Shroff (2008). They design runtimes that can deal with direct, indirect, and timing-related flows. They also have a formal proof of noninterference. It's some nice work. Glad the young folks are still thinking about this little esoteric subfield.
@ Nick P,
I've had a quick look through the thesis and yes it looks interesting 8)
However he missed covering one bet: in the section "other channels" he missed the opportunity to mention such things as "soft fonts", which can have a major effect on "compromising emanations".
I shall take a little time after cooking Sunday lunch to print it out and read through it more carefully (whilst others scrub the pots, pans and dishes 8)
@ Clive Robinson
I don't think he's including EMSEC. He's just doing software-level covert channels. His work would have to be combined with other work to account for EMSEC issues. I just mentioned it because it's a very easy way to do info flow security for inexperienced persons compared to other approaches. I could see it being used at least during development to automate the process of finding covert channels in the software.
Interestingly enough, my search for more papers on formal verification made me stumble upon SPARKSkein: an implementation of Skein in SPARK Ada by Praxis and AdaCore. Praxis is known for building high assurance software with formal specifications and extensive static analysis on the code. They used similar techniques to produce a very readable, portable, efficient version of Skein and their prover found an error in the reference Skein implementation. It's a nice read and makes me wonder why crypto guys, who are already good at building proofs, aren't using this toolset for their algorithms (at least for readability/portability).
Security by Construction - Engineering software to exceed EAL5
Tor does not claim in any way to be a foolproof solution guaranteeing its users 100% privacy and security, even when configured and used correctly. It only improves these, and as such is just one layer in a comprehensive defense-in-depth approach. Connecting to an open network that can't be traced to your physical location, encrypting traffic, and using a dedicated VM or distribution on USB (such as JanusVM or T.A.I.L.S.) will add to your efforts of giving determined adversaries a serious run for their money. Alternatives such as Freenet and I2P also come to mind, especially for direct point-to-point communications.
Re: German post: At "Transmitting Data Through Steel", 24 March 2011, I noted that the sig links to a retailer, but the contents mix sales spiel with comments about the Yemen protest, with the possibility that they were using this blog for steganographic communications.
The post in German at this topic links to the same retail site, but here is a Yahoo Babelfish translation:
These tasks are usually under control are being approaching freshly, while Virginia as well as demands delimitation consider exactly, like course leaders as well as pupil podiums socialize on Social Networking like Myspace, Myspace and Facebook as well as Twittollower., mbt sandalen that an official would like to form to protect, the educational prospect available by Ludwig as well as other advisers, but also sexual criminals out of the discovery the actual leave like web pages connected Rapport to also develop together with concerning possible to reduce. Its Virginia table been correct for education and learning these to really promote institution zones country-wide to take procedures of the administration of the social media to use instructors sandalen Kisumu ladies, in particular shift did not become by naturally., mbt during the lively, a proposal forwards nevertheless still still position this condition, if one of many all first in the situation its to master this kind from challenges to.
1) It's a way of obscuring steg communications by mixing in nonsense communications.
2) It's super-steg, encoded in the nonsense, or an actual code (not cipher).
3) It's all just spam.
Anyone have any ideas on which?
I'm tossing in another possibility: the author has multiple personality disorder. Personality 1 is a schizo, narcissist with a dream of being a journalist and wrote the article. Personality 2's native language is English and his OCD made him do a hasty translation from a language *he* barely knows. The perversion they share led to their alias. Didn't know the mental wards gave their patients Internet access.
@tommy: some geolocation stuff, and tracking people by social network.
Strip read mix and post
Regarding the German post: This only contains German words in some random (Markov) order. The text does not make any sense at all.
I suspect it is simply search engine bait. The proliferation of social media site names supports that.
The fact that this topic is getting more attention is good. And actually it's good to have more people using Tor, even if they're less tech-savvy and "doing it wrong". All the more Tor traffic for the "real" users to "hide" amongst.
@ Nick P., w, and Winter:
All of the sigs link to the German retail site, whose URL began with "m b t". (spaced so as not to assist them, heh.) "M b t" appears several times in the above, and also "sandalen", which was not translated by YB, even though a direct DE-EN dictionary confirms the obvious "sandals" (-en being a German plural ending).
Which tends to support the search-engine-bait theory, but for the shoe-sales site, not for the social media site names. The latter are so popular that an incremental reference or two isn't going to change their rankings. The Yemen stuff in the first example I noticed gave some hope of steg for the Yemeni, but now I realize that it was just because Yemen is a hot topic, so as that topic is searched and spider-crawled, the shoe store gets more attention too.
I am quite puzzled about this topic. Let me start by saying I have no technical knowledge about it. But people who do, like Clive Robinson, say that Tor has dozens of vulnerabilities.
Now from what I understand, Tor has something like 200,000 users, and I am supposing that a substantial proportion of them are using it for something that is illegal in their home country. And there are many organizations, such as the CIA and the Chinese government, that have the motivation and resources to go after these users.
If Tor is so vulnerable, then we ought to be seeing hundreds of arrests every year, indeed hundreds in the USA alone. But as far as I know (from Google searches, the lack of notices at torproject.org, etc.), there are no arrests at all.
Can anyone explain this? Either with hard facts or speculation?
To follow up on the last post, it seems the most obvious thing for the Chinese, the Americans, the Russians, and so on to do is to have their intelligence services JOIN Tor, both in the sense of having their agents join the organization and in the sense of providing Tor nodes to facilitate tracing of users. The same seems true for every other target organization, for example Wikileaks. Didn't the Chinese penetrate the Tibetans in a similar way? How do Tor or Wikileaks or NewFlavor Whistleblower Site handle this sort of possibility in their security models?
Indeed, the most obvious thing for the spooks to do is to set up their own secure whistleblower site 'just like Wikileaks', with guaranteed anonymity.
With respect to the lack of prosecutions--the agency may want and need info more than convictions, especially convictions based on use of Tor, Wikileaks etc. The convictions might come in a secondary way--e.g. from my penetration of Tor I find out where you're going to heist the bank and just happen to have officers on the premises when you do the heist. You get sent to prison for the heist, not for Tor use in a conspiracy.
Yes, I can see that intelligence agencies might want to keep secret the fact that they are learning about some activity by cracking Tor. But what about cases where there would be a court prosecution, such as child pornography or drug gangs? If Tor has been cracked, I would think we would find out about it one way or another.
It's fairly obvious if you click on the "world map" and view your Tor connections that a big chunk of the exit nodes are run by a small number of operators. They're not even trying to conceal it: [nameXYZ]1 through [nameXYZ]8, all in the same location, each with a fat pipe of 40 megabit per second throughput.
So, interested parties are paying good money to find out what people who are obsessive about anonymity are up to on the Internet. Are all of them state actors? Not necessarily. Porn purveyors doing market research comes to mind, also the music industry wanting to know which albums get shared illegally most often.
"It's fairly obvious if you click on the "world map" and view your Tor connections that a big chunk of the exit nodes are run by a small number of operators. They're not even trying to conceal it: [nameXYZ]1 through [nameXYZ]8, all in the same location, each with a fat pipe of 40 megabit per second throughput."
Use Vidalia and activate the bridges option. You also have the option of excluding countries from use.
http > torstatus blutmagie de
Note the nodes you don't wish to use in a circuit and add them to the 'ExcludeNodes' option in your torrc file. Fingerprints of nodes may change daily or hourly, so updating your torrc file manually with exclusions would be tedious. How do you know which node(s) are fronts for intel orgs or rogues? Some exit nodes run and publish their logs of URLs visited on the open web.
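As a concrete illustration, the torrc exclusions described above might look like the sketch below. The fingerprint and country codes are placeholders for the example, not recommendations:

```
# torrc sketch: keep circuits away from relays you distrust.
# The $FINGERPRINT value below is a placeholder, not a real relay.
ExcludeNodes $ABCD1234ABCD1234ABCD1234ABCD1234ABCD1234
ExcludeExitNodes {us},{de}
# Without StrictNodes, Tor may still use excluded nodes when nothing else works.
StrictNodes 1
```

After editing torrc, reload Tor (or restart it via Vidalia) for the exclusions to take effect.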
"Now from what I understand, Tor has something like 200,000 users"
More like 350k+, according to graphs at:
http > metrics torproject org
Click on 'Graphs', then 'Users'.
If not using Vidalia, bridges may be added manually by discovering them at:
http > bridges torproject org
Using bridges, and running one yourself, is strongly encouraged. They are not listed in the Tor node directory status pages cropping up on the web, and they are dynamic, with bridges going up and down at different IPs daily or hourly.
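A minimal torrc sketch for the bridge setup described above. The address is a documentation placeholder, not a real bridge; obtain real bridge lines from the address given earlier:

```
# torrc sketch: route through bridges instead of public entry guards.
UseBridges 1
# Placeholder address for illustration only -- substitute real bridge lines.
Bridge 203.0.113.5:443
```

Each `Bridge` line replaces a publicly listed entry relay, so a censor scraping the public directory won't see your entry point.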
"it seems that the most obvious thing to do for the Chinese, the Americans, the Russians, and so on, is to have their intelligence services JOIN Tor both in the sense of having their agents join the organization and in the sense of providing Tor nodes to facilitate tracing of users."
which may be thwarted by always using SSL and SSH.
Scroogle's SSL page offers encrypted search, free.
Ixquick offers SSL search AND SSL web proxy surfing, free.
"I don't see why Java would be a weak point. What makes you uneasy about it?"
""soft fonts" which can have a major effect on "compromising emanations"."
www eskimo com ~joelm tempest html previously hosted 'soft tempest fonts' but pulled them. Try viewing the page on archive.org's Wayback Machine in the hope of tripping over the old .zip archive. They are not formatted for use, so they must be prepared by the user. Whether they still work and are not subverted by new tech is an open question.
Partial remedy for site-signature/timing attacks: download a portion of a large, random file in the background while surfing (never the same file twice); load several websites at once; at least one site should contain several images that take at least 60 seconds to load; abort the loading of some larger image sites, like Flickr, before loading completes, while loading other sites in other tabs.
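The background-download trick above can be sketched in a few lines. This is a minimal illustration, assuming a server that honors HTTP Range requests; the helper names and size limits are made up for the example:

```python
import random

def random_byte_range(total_size, min_len=64_000, max_len=512_000):
    """Pick a random window inside a file of total_size bytes,
    so no two cover-traffic fetches request the same slice."""
    length = random.randint(min_len, min(max_len, total_size))
    start = random.randint(0, total_size - length)
    return start, start + length - 1

def cover_request_header(total_size):
    """Build the Range header a background fetch would send
    (through the Tor SOCKS proxy) alongside real browsing."""
    start, end = random_byte_range(total_size)
    return {"Range": f"bytes={start}-{end}"}

# Example: a random slice of a 10 MB decoy file.
print(cover_request_header(10_000_000))
```

The idea is only to pad the traffic pattern with noise; it does not defeat a determined global observer, just makes per-site timing signatures blurrier.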
Are you plugged into the wall or running off a laptop's battery? Power-line attacks on keypresses exist.
Freenet? No, try gnunet org.
@ sp00ks r us,
"... previously hosted 'soft tempest fonts' but pulled them."
The reason is that the original designer, Dr Markus Kuhn at the Cambridge Computer Laboratory, effectively pulled them.
I'm not entirely sure why, but Markus has indicated that in certain hardware setups they can actually make things worse.
Read more in Markus's explanation.
I read all the way through this article and pretty much understood it all. New to Tor, and I have installed the bridges. If you're making simple HTTP requests to static webpages and not using BitTorrent, then surely no one cares; I understand using Tor is not illegal.
Care to comment?
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.