Safeplug Security Analysis
Good security analysis of Safeplug, which is basically Tor in a box. Short answer: not yet.
David • September 10, 2014 7:52 AM
I don't know, but the analysis seems to me to be quite biased. Obviously there are some problems that need to be fixed on the device, but otherwise I would say that for a more experienced user this could work quite nicely, especially on locked-down devices (Windows RT, iOS, etc.), if the user knows and remembers to clear his cookies during and between sessions. Am I wrong?
Peter Boughton • September 10, 2014 8:34 AM
Biased in what way?
The following things are objectively verifiable:
Not good “features” for any device, but for one which dares to claim “complete security”… :/
"if the user knows and remembers to clear his cookies during and between sessions."
Cookies are not the only way to track people.
Sara • September 10, 2014 9:19 AM
It's a cool idea, but the execution is definitely lacking. Hopefully they will address the security issues found. If they do, this would be a neat little device.
Joe • September 10, 2014 9:44 AM
Part of the problem with Tor is leaky applications. Part of the problem is unsafe browser use. But if Tor itself has been compromised, all bets are off. Recent cases of Tor users being unmasked point in this direction. If I were a terrorist, I'd be using carrier pigeons instead of Tor.
Nicholas Weaver • September 10, 2014 10:01 AM
This is just a bad idea, period: Tor for web browsing MUST only be done through the Tor Browser Bundle. Everything else, in NSA parlance, is EPICFAIL. Why?
To begin with, a significant fraction of Tor exit nodes are hostile: not just monitoring traffic but actively modifying content. Thus it's critical to use a browser configuration that includes HTTPS Everywhere, so that at least a lot of sites can't be tampered with.
Then there is tracking: it is trivial to extract a lot of information about "who is this person" if they fail to clear cookies, either just passively (by picking up advertising identifiers) or actively, by injecting content that causes the browser to spit out all its cookies to the exit node.
I don't know if EPICFAIL is an inside-the-NSA joke or a specific keyword/description.
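To make the passive-tracking point concrete, here is a toy Python sketch. Everything in it (the host, cookie names, and request bytes) is invented for illustration; it parses a canned request rather than sniffing anything, but it shows how much linkable state sits in one plaintext HTTP request crossing an exit node:

```python
# Toy sketch of passive identifier harvesting at a hostile exit node.
# The request bytes, host, and cookie names are invented for illustration.

def extract_identifiers(raw_request: bytes) -> dict:
    """Pull linkable identifiers out of one plaintext HTTP request."""
    headers = {}
    for line in raw_request.decode("latin-1").split("\r\n")[1:]:
        if not line:          # blank line ends the header block
            break
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()

    identifiers = {}
    for pair in headers.get("cookie", "").split(";"):
        key, sep, value = pair.strip().partition("=")
        if sep:
            identifiers[key] = value
    if "user-agent" in headers:   # narrows the user down further
        identifiers["_user_agent"] = headers["user-agent"]
    return identifiers

request = (b"GET /ads HTTP/1.1\r\n"
           b"Host: tracker.example\r\n"
           b"User-Agent: Mozilla/5.0 (Windows NT 6.1)\r\n"
           b"Cookie: ad_id=abc123; session=deadbeef\r\n"
           b"\r\n")
```

A stable advertising identifier like `ad_id` links every session that carries it, no matter which exit node the circuit uses.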
Nick P • September 10, 2014 10:09 AM
Peter makes great points. I'll add that Tor is one of those applications where the main group trying to break it is nation states. The rating for protecting against such attackers is called High Robustness state-side. Look up the EAL6-7 system requirements, look at the source code for everything in your machine, and notice how the code in these machines (and this device) looks nothing like what a high-assurance process would have produced. So they're almost guaranteed to be bypassed even if the Tor protocol becomes 100% provably anonymous and secure. That was my analysis of a similar product years ago, and this one only proves it out.
The best bet is to pay a firm with the right experience (e.g. Altran Praxis) to use their best tools and people to build one on something like OKL4 or Minix. They can use SPARK/Ada, one of the safe C subsets/modifications, etc. for the software. Regardless of the main language, the reference implementation would be in a low-level cross-platform language. Use contracts, static analysis, dynamic analysis, fuzz testing, etc.; the safety-critical industry already has tools to automate a lot of that. The end result is cross-platform code on a cross-platform microkernel OS with safe-coded drivers for a cheap reference board. (The board might be openly developed, too.) This kind of talent (and tool licensing) won't be cheap: expect it to cost hundreds of thousands of dollars. You get what you pay for, though.
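As a toy illustration of the contract style mentioned above: SPARK/Ada tooling proves such pre/post-conditions statically at build time, whereas this hypothetical Python sketch can only check them at runtime, but the idea is the same:

```python
import functools

# Toy design-by-contract decorator. SPARK/Ada discharges conditions like
# these with a prover; here they are merely runtime assertions.
def contract(pre=None, post=None):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition failed: {fn.__name__}"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result), f"postcondition failed: {fn.__name__}"
            return result
        return inner
    return wrap

@contract(pre=lambda buf, n: 0 <= n <= len(buf),
          post=lambda out: isinstance(out, bytes))
def take(buf: bytes, n: int) -> bytes:
    """Return the first n bytes; the contract rules out over-reads."""
    return buf[:n]
```

The point of doing this statically, as the high-assurance tools do, is that a caller who could violate the precondition is rejected before the code ever ships.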
What I guarantee won’t happen is some hobbyists making a Five Eyes resistant Tor appliance that also runs on a cheap COTS board. Almost nothing such people have ever made was secure. That’s why I recommend crowdfunding an experienced firm to do an open source development with FOSS crowds hunting code & docs for problems, plus replicating the compilations and testing.
EDIT TO ADD: Just saw Nicholas Weaver’s post before submitting. I’ll add that even the device I describe should be used with an endpoint configured for privacy, as he suggested. I differ on the specifics: best to have a standardized distro like Tails custom designed for it so we all look alike to those doing fingerprinting and traffic analysis.
NobodySpecial • September 10, 2014 10:30 AM
@Nick P – and ironically the main funders of its development are the same nation state!
Nick P • September 10, 2014 10:40 AM
An irony I’ve always found delightful. Same thing is currently happening with NSF and DARPA clean slate projects. Far as U.S. gov’s multiple personality disorder, I say let it continue!
@ Nicholas Weaver
“ I don’t know if EPICFAIL was an inside the NSA joke or a specific keyword/description.”
It’s actually the default codeword they use for any electronic device in the U.S. not built to their highest standards. High Robustness INFOSEC is their designation for secure devices built in U.S. under government influence. AWWWSHIT is the codeword for such devices developed outside U.S. government control. WEREFKD is the codeword for the dire event when an AWWWSHIT system goes open source with Chinese or Russian suppliers.
Mike the goat (horn equipped) • September 10, 2014 11:30 AM
Nick P: I’d say this project more than qualifies for that designator. This has been a disaster from conception – and all throughout the normal stages of R&D they’ve had the opportunity to address some of the (extremely overt) weaknesses in their design and they have repeatedly failed to do so. I wouldn’t trust this device as far as I could throw it.
Mike the goat (horn equipped) • September 10, 2014 11:56 AM
Nick: if I trusted Tor (and I don't; it was never designed to maintain anonymity in the face of an adversary with god-level network visibility, so correlation attacks, and even other less obscure attacks on guards etc., suddenly become potentially doable) and wanted such an appliance, I would address the following issues in building a custom device:
There is so much more to discuss but my point is that a lot of thought needs to go into a device that must by its nature process both a red and black feed.
Nick P • September 10, 2014 12:21 PM
Nice brainstorming. I usually use three devices + Red/Black for a setup like this (e.g. my secure VOIP/chat) so the protocol engine can be on a purpose-built machine. Ironically, I learned it from the NSA, as it's how they did many successful, fielded products. The middle box has safe interfaces, a minimal TCB, the protocol engine, and some kind of hardware-enforced isolation of the various components. There is a piece of hardware for communications in each direction, with simple (easily parsed) messages that are auto-forwarded to deprivileged guard processes. The Red and Black boxes handle the formatting, timing, transport, etc. They're also hardened to stop lesser attackers and reduce availability attacks. The secrets used for encryption and anonymity never leave the middle device. It also overwrites memory with encrypted data before and after each use; overwriting before ensures that reading empty memory looks like reading a key. Its execution is proven during development with synchronous scheduling, then it runs partly/fully asynchronous in production for timing-channel mitigation.
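The guard-process idea, accept only trivially parsed, whitelisted messages and drop everything else, can be sketched in a few lines. The wire format here (1-byte type, 2-byte big-endian length, payload) is invented for illustration, not taken from any real product:

```python
import struct

# Hypothetical wire format for the middle box's guard processes:
# 1-byte message type from a small whitelist, 2-byte big-endian length,
# then the payload. Anything that does not parse exactly is dropped.
ALLOWED_TYPES = {0x01: "DATA", 0x02: "KEEPALIVE"}
MAX_PAYLOAD = 512

def guard_check(frame: bytes):
    """Return (type_name, payload) if the frame is well-formed, else None."""
    if len(frame) < 3:
        return None
    mtype, length = struct.unpack("!BH", frame[:3])
    if mtype not in ALLOWED_TYPES:
        return None
    if length > MAX_PAYLOAD or len(frame) != 3 + length:
        return None          # truncated, oversized, or trailing bytes
    return ALLOWED_TYPES[mtype], frame[3:]
```

Rejecting rather than repairing malformed frames is what keeps the parser small enough to reason about, which is the whole point of the minimal-TCB middle box.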
Btw, here's an old VPN design I promoted that's conceptually similar to my approach. I used the Nizza Architecture (with different software) on personal projects for quite a while. The cool thing is that using standard interfaces and a standard messaging protocol means the middle box's assurance can be increased incrementally over time. It might start with a Loongson-based SoC with one of my obfuscation techniques and the Nizza architecture. Next might be a capability model on a CHERI SoC fabbed with money from sales of the earlier versions. Then add capability-secure high-speed I/O engines, side-channel-resistant onboard crypto, EMSEC at the board level, etc. over time.
The incremental assurance model was proposed in the 90s for MLS development. It was then applied by Karger at IBM to sustain an EAL7 smartcard OS development by making money off intermediate deliverables. So it's a proven approach with quite practical tradeoffs available. And even a medium-assurance device would be worth buying as a start, given that everything on the market is low assurance. You still need skilled systems, security, and software engineers to make it happen.
jasonic • September 10, 2014 12:35 PM
Where are all the people who decided they’re going to just continue using Truecrypt, in spite of the warnings?
Mike the goat (horn equipped) • September 10, 2014 2:42 PM
jasonic: probably in jail after the feds decrypted their drive :-), sorry bad joke.
Nick: yeah. The point I was trying to make is that it takes a lot more than a developer putting up a webpage and asking for donations to make a secure product. Every one of these projects appears with massive architectural flaws that could have been mitigated if only the devs had actually consulted someone in the sec industry.
Thoth • September 11, 2014 1:16 AM
I think the problem lies in education, greed, and ignorance. Most people simply think that with a crypto program and some firewalls they are pretty well protected, which is false. How many security programs are built with high assurance in mind? Most of them simply rant about their Serpent 10000-bit key size, claim to follow NIST standards, mention the magical FIPS word and some FIPS label, and declare themselves really good.
Simply grabbing a custom chipboard and customizing Tor onto it doesn't make it any more secure than running a laptop on Windows with wireless through a Tor access point, which isn't any better.
People who were once using Truecrypt will either attempt to make *crypt variants, use something else, or continue to use Truecrypt 7.1a.
M • September 11, 2014 11:53 AM
For a conceptually similar scheme that fixes some of the problems described with the Safeplug, I might suggest that you look at Whonix. It's essentially a pair of virtual Linux machines: one acts as a Tor router in the mode of the Safeplug, and the other is a client box that runs the Tor Browser Bundle. The interesting bit is that the virtual network is set up so that all network traffic from the client is forced to go via the Tor router.
I like the separation of having a virtual machine for Tor activities that is never connected to the unobfuscated net. It can thus provide some resistance to EGOTISTICALGIRAFFE-style attacks, but it adds dependence on the security of the host and the virtualization scheme, so I'm not sure of the net benefit.
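The fail-closed property being described can be modeled in a few lines. Whonix enforces it with the VM network topology itself rather than with code like this, so treat this Python sketch, and the gateway address in it, as purely illustrative:

```python
from ipaddress import ip_address

# Toy model of the fail-closed topology: the workstation's virtual NIC can
# reach exactly one address, the Tor gateway. (Whonix enforces this with the
# VM network layout itself; the address below is illustrative.)
GATEWAY = ip_address("10.152.152.10")

def packet_allowed(dst: str) -> bool:
    """A packet leaves the workstation only if its destination is the gateway."""
    return ip_address(dst) == GATEWAY
```

Because there is no route to anything but the gateway, even a compromised browser on the workstation cannot make a direct, deanonymizing connection.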
WD • September 11, 2014 8:19 PM
But I never purchased ours for browsing; we use a LiveCD for that. That purchase was solely to sow as much trouble for evil authorities as I possibly could, as a Tor middle relay.
So, academics: please provide some meat here. What must we do to ensure the safety of external users? I mean, buyers could always see the red flags associated with normal browsers, so I'd wager most of these boxes are basically middle fingers. Nothing more. Certainly not used as proxies.
Care to make those middle fingers stronger?
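For what it's worth, a box meant to be strictly a middle relay can be pinned down in torrc. The option names below are standard Tor configuration options; the nickname, contact address, and rates are illustrative:

```
# Run a middle/guard relay only; never an exit.
Nickname middleFinger1
ContactInfo you@example.org
ORPort 9001
# Refuse all exit traffic (newer Tor versions can also set ExitRelay 0).
ExitPolicy reject *:*
# Relay only: accept no local client connections.
SocksPort 0
RelayBandwidthRate 1 MBytes
RelayBandwidthBurst 2 MBytes
```

With `ExitPolicy reject *:*`, none of the "red flag" exit traffic ever originates from your address; the box only passes encrypted cells between other relays.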
AA • September 12, 2014 4:32 AM
The fact that the FBI didn't use a zero-day vulnerability last year doesn't mean they don't have access to one. Perhaps their target (child porn sites) wasn't valuable enough. If you possessed zero-day vulnerabilities, surely you would use them against targets such as Snowden, or for financial espionage?
For the best results you can use the Tor Browser (or a similarly configured Firefox) with a transparent Tor proxy. The best option is a dedicated computer, but a Whonix/Qubes OS virtual machine also works.
There is even support for this in Tor Browser to avoid “tor over tor” situations. From the Torbutton preferences, select “Transparent Torification (Requires custom transproxy or Tor router)”.