Danno Ferrin September 17, 2014 2:45 PM

Two questions: (a) Do you think what was presented in court could have actually led to the identification of DPR? And if so, (b) do you think that is actually what was used to ID him, or was this a case of parallel construction?

Anura September 17, 2014 3:14 PM

@Danno Ferrin

In hindsight, yes: they could have gotten a warrant to access logs from the captcha service, made a series of requests, and looked at what they received and which IP the captcha service used to send it.

Whether they actually did that, I don’t know. Parallel construction is just as plausible.

vas pup September 17, 2014 3:19 PM

Looks like the Internet is becoming a kind of ‘Wild West’, where neither ‘sheriff’ nor ‘gang’ is bound by law.

thomasblair September 17, 2014 3:29 PM

I apologize for the length up front.

This is very different from the method described in the criminal complaint filed on 9/27/13.

Summary of criminal complaint below:

As for Ulbricht, this is how the complaint reads that they found him:

Jan 2011 – When Ulbricht made SR he went on a site called and posted some concern marketing as a user named “altoid”. He did the same on a site called

Oct 2011 – Another post by “altoid” on asking for an IT pro in the bitcoin community. The posting directed users to a gmail account –

The gmail acct has a google+ acct with pic posted. It matches a linkedin acct with Ulbricht’s name.

The Google+ page has links to favorite YouTube videos – many from . There is an acct with the name Ross Ulbricht and his pic that matches Google+ and LinkedIn.

The DPR account on SR has a link to in the signature (one of only two in the signature). In forum postings, DPR cites Austrian economic theory and von Mises and Rothbard as providing the philosophical underpinnings of his creating SR.

The records they got from Google showing IP addresses that accessed the gmail account put him at an address on Hickory St in San Francisco. The server code for SR was written such that only someone logging in from a certain IP could get access. That IP was traced to a VPN service. Records subpoenaed from the VPN indicate the IP used to access was traced to an internet cafe 500 ft from the Hickory St address where Ulbricht was known to be living and where he accessed the gmail acct. These logins all occurred on the same day, within 500′ of each other – June 3, 2013.

In July Ulbricht moves to an address on 15th St in SF. Customs stumblefvcks (as part of a “routine search”) into inspecting a package that contains 9 forged identity documents, each for a different name but all with Ulbricht’s photo. The package was addressed to the 15th St address, where Homeland Security finds Ulbricht (who obviously matches the photos on the forged ID’s). Ulbricht refuses to answer questions but offers that “hypothetically” someone could go to the “silk road on Tor” and purchase any drugs or fake ID’s they wanted. Ulbricht provided the DHS agents with his true license (TX) and told them he was renting a room for cash. The roommates knew him as “Josh” and said he spent all his time on the computer in his room.

In June and July 2013 the DPR acct expressed interest in a batch of fake ID’s on SR. The FBI knows what is contained in all private messages from SR because they had the server hosting company make an image and send it to them in late July.

March 2012 – On stackoverflow Ulbricht had an acct with his name and gmail address and posted a question about how to use a certain kind of code to connect to a Tor hidden service. One minute after posting, the acct name is changed from “Ross Ulbricht” to “frosty”. A few months later he changes the email associated with the acct to . The sections of code that Ulbricht posted on stackoverflow asking for help are substantially identical (less the small changes for the bugs he was trying to work out) to the source code for SR as obtained in the image of the server the FBI obtained.

An encryption public key ends with the text string “user@computer” wherein the username and computer name of the person creating the public key is recorded. The encryption key for authenticating the administrator trying to sign into the SR back end has the text string frosty@frosty meaning whoever made the public key for signing into the SR back end as an admin did so from a computer named “frosty” using a user account named “frosty”.
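The “user@computer” string described above matches the trailing comment field that key generators such as ssh-keygen append to a public key. A minimal sketch of pulling that field out of a key line (the key body here is a made-up placeholder, not real key material):

```python
# Sketch: the trailing comment field of an OpenSSH public key commonly
# records user@host of the machine that generated it.

def key_comment(pubkey_line: str) -> str:
    """Return the comment field (third token) of a public key line, if any."""
    parts = pubkey_line.strip().split(None, 2)
    return parts[2] if len(parts) == 3 else ""

# Fake placeholder key body for illustration only.
example = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB-placeholder frosty@frosty"
print(key_comment(example))  # frosty@frosty
```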

So which method is it?

Douglas Knight September 17, 2014 5:13 PM

thomasblair, they could both be true.

Contrary to Bruce’s headline, this was a method of finding the server, not a method of identifying DPR. The documents you cite claim that they found the server long before they identified DPR. They claim to have obtained physical access to the server and still not found DPR. (but I think they also mentioned ways that they could have traced him from the server)

anon September 17, 2014 6:05 PM

I’ll admit that I used silk road to buy cannabis a few times. But I’m not a trusting person so I did my own analysis before trusting the site with an actual purchase.

A lot of that analysis was on the login/registration pages (including the captcha). My deep packet inspection included checking the IP headers and attempting to verify whether they were assigned to current Tor relays (using publicly available relay lists). Whether or not they were listed, none of them hosted services accessible on the clearnet.
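A sketch of that kind of check, with a hard-coded stand-in for the published relay list (a real analysis would load a current Tor consensus; all addresses here are documentation-range placeholders):

```python
# Stand-in relay list; a real check would load a published Tor consensus.
KNOWN_RELAYS = {"192.0.2.10", "192.0.2.11", "192.0.2.12"}

def classify(observed_ips):
    """Split observed source IPs into known-relay and unknown sets."""
    seen = set(observed_ips)
    return seen & KNOWN_RELAYS, seen - KNOWN_RELAYS

relays, unknown = classify(["192.0.2.10", "203.0.113.7"])
```

Any non-empty `unknown` set would be exactly the kind of anomaly worth digging into.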

anon September 17, 2014 6:15 PM

follow up: of course, I was just verifying whether Tor was really doing its job to protect me. If I were trying to locate the hidden service’s actual IP address … I would have tried other methods. It’s safe to say that investigators tried such methods.

Nick P September 17, 2014 6:46 PM

@ thomasblair

They probably did all that after the fact for a court-worthy case. A jury would simultaneously be overwhelmed with the information and see lots of dots connecting. It’s a good case. The problem is they had to start somewhere and they’re known to hide that part.

@ st37

Those documents give the missing piece of the puzzle. Specifically, the vulnerability assessment and thomasblair’s stackoverflow references show the guy had no skill in secure web development. This is a common problem. Web apps keep getting hit by the same problems, even with free frameworks & libraries to mitigate them. He was probably just throwing stuff together, relying on Tor for security instead of just anonymity. They did some fuzz testing, owned the box, and then the box outed itself.

From there, it was just connecting dots. The FBI has way more experience with that part and appears to have done a good, thorough job.

Nick P September 17, 2014 6:59 PM

Draft proposal for preventing such deanonymizing attacks

Aside from web security, how can we prevent something like this? The simplest method is to have a dedicated, hardened device for Tor traffic. This device is the only one with an outside IP. It communicates with the other devices over internal IP addresses on a direct link. The device also does basic inspection, looking for certain easily identified leaks. Both the use and administration of the device occur over Tor, so as to look similar.
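As a toy illustration of the “basic inspection” step, assuming the gateway knows its configured guard address (both addresses below are invented):

```python
# Toy "basic inspection": the gateway forwards only traffic addressed to
# its configured Tor guard; anything else is flagged as a potential leak.
GUARD = ("198.51.100.10", 9001)  # hypothetical guard IP/port

def allow(dst_ip: str, dst_port: int) -> bool:
    return (dst_ip, dst_port) == GUARD

candidates = [("198.51.100.10", 9001), ("8.8.8.8", 53)]
leaks = [d for d in candidates if not allow(*d)]
```

Here a stray DNS query to 8.8.8.8 – a classic deanonymizing leak – would be caught and dropped rather than routed out.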

Note: This isn’t totally original, as there are quite a few Tor proxy device projects. I’m talking about one with very strong and professional effort in its security engineering. Also, defeating the FBI might be easier than the NSA, but the FBI can always ask the NSA for some collected info. And they might be able to get it.

Tail Between His Legs September 17, 2014 7:01 PM

@ Tails Gurus:

Does this mean that Tails anonymity is compromised if you fill out a captcha whilst using Tails?

The Robert Hanssen Integrity Award September 17, 2014 7:41 PM

The big story is the quick response of system users who exposed FBI cyberattacks. Without that, FBI would have run the network and peddled the contraband to trap informants, provocateurs, and fake terrorists. Silk Road would have ended up like giganews

They don’t mind drugs, they don’t mind child sexual abuse. They just want to control the trade. They use it for kompromat like the NKVD did.

Thoth September 17, 2014 7:57 PM

@Nick P
I think you should try to expand your proposal into more detail. Here’s my try at making something practical out of a draft proposal. Do comment on the details. Just to give it a little more boost in security, we should consider adding a home-made HSM to the brew as well. You could add some advice on hosting a home-made HSM that would operate in FIPS 140-2 Level 3 mode to protect the HSM keys.

1.) Get open source hardware like the Raspberry Pi (if you don’t want to build your own). Encase the Raspberry Pi in a clear plastic cover and glue all the ends tight. You need two Raspberry Pis: one acts as the Tor server and the other as the HSM for your crypto keys.

2.) Find an unwanted safe vault and obtain some bricks, pottery and thermite. Lay 2 layers of bricks on the inside of the vault, use the rest of the bricks as a pool for the vault to rest on, and place a thermite setup above the vault. Nick P, do elaborate more on the thermite setup since I am inexperienced in this part. You would need a few drilled holes in your vault to allow cable connection.

3.) Load a Raspbian OS onto your Pis, put them into the vault, and connect their cables. Once it’s all set up, shut the vault and arm the thermite trap.

4.) The Tor Pi will have an internal connection to the HSM Pi, and all crypto keys (PGP/SSH/SSL keys) and all crypto operations are handled by the HSM Pi. The HSM Pi should be put into FIPS 140-2 Level 3 mode, where the keys do not leave the HSM and all crypto ops are handed to the HSM Pi. The HSM Pi only opens a single port to handle requests, and will only recognize requests from the Tor Pi and the Tor Pi’s MAC address.

5.) The Tor Pi will also have a connection outside the physical vault, which would pass through a router (a multiple-network-port Pi?). The router would have one cable going to the external network. The connection from the router into the vault can be protected by a firewall device (another Pi if you have one to spare). The rest of the internal connections are hooked to the router and maybe protected by a separate firewall device (yet another Pi).

6.) The Tor Pi will accept SSH connections only from registered devices, with SSH authentication.

7.) In an emergency, activate the thermite, which burns up both Pis in the vault at the same time.
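As a toy illustration of step 4’s “keys never leave the HSM” rule, here is a minimal sketch where the signing object exposes sign/verify but no call that exports the raw key; HMAC stands in for whatever PGP/SSH/SSL operation a real HSM Pi would perform:

```python
# Toy HSM: the key is generated inside and there is deliberately no
# method that returns it. HMAC-SHA256 stands in for real crypto ops.
import hmac, hashlib, os

class ToyHSM:
    def __init__(self):
        self.__key = os.urandom(32)   # generated inside, never exported

    def sign(self, message: bytes) -> bytes:
        return hmac.new(self.__key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(self.sign(message), tag)

hsm = ToyHSM()
tag = hsm.sign(b"request from Tor Pi")
```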

chief michael airic white sr September 17, 2014 8:36 PM

The question of whether the data came from the court case or whether it was indeed parallel construction is a good one. My reply is that there are always two sides, and depending on the testimony allowed it would have a grave influence.

Amateur September 17, 2014 9:24 PM

The Pi is known to have only one wired network interface. A model with two Ethernet ports would remove this handicap.

chief michael airic white sr September 17, 2014 10:36 PM

By using a single ethernet port the Pi network fails as intended by the host. A remote connection uploads a virus and no more computer, I have seen. In essence a high-dollar piece of hardware with pre-installed software directly intended for one use, and a re-install on what appears to be a fresh connection. Man, I have seen my software on the install cdrom disappear, only for multiple instances to reappear a month later. The registry is the key – without the registry there is nada, zip, zilch.

chief michael airic white sr September 17, 2014 10:41 PM

I don’t even access the net using my laptop, but somehow I’m still wireless optimal. I miss the days before computers, both PC and not. Those were the days of the tail.

Thoth September 17, 2014 11:00 PM

@chief michael airic white sr
Optimally you do not want network connections, but that is not going to happen, because you are trying to run a Tor service and that means you need a network connection. Unless you meant using Nick P’s method of the good old ways of Guards and serial cables for internal data routing, with Ethernet for external routing over a low-security network – that would be of higher assurance.

If you want to air gap over a trusted and highly assured environment, don’t publish a Tor service and don’t do anything obvious 🙂 .

Figureitout September 18, 2014 1:17 AM

–You’ve got your priorities seriously screwed up if you want to protect a slightly hardened RasPi-based anonymous networking scheme w/ thermite… For starters, there are better OS’s than Raspbian for security too; I’d go w/ one of my faves, Kali, over that. Also may try Plan 9 for that bare-bones feel. Think about what you’re getting yourself into if you go that route. A mistake and you’ve got a fried PC for no reason, a hole in your home, or worse which I won’t say. Are you physically protecting the system 24/7? If not, you assume entry into your residence while you’re at work has happened and they observe the protocol for destruction. What’s the easy way around the thermite? No EMC shielding? Clear plastic case? Ok, hole. Carry the device[s] w/ you 24/7 if it must be protected like that.

You can integrate better solutions than a RasPi, I’ll make one eventually to illustrate. How about OpenBSD on a Beaglebone instead? Has ethernet too, faster.

Build it to prove me wrong or if you’re just doing it for fun/practice if you want, but there are vastly superior solutions like for starters, controlling the flow of data (diodes) and monitoring critical points (net-taps & network analyzers).

Nick P won’t know much about thermite solutions, it’s Clive who’s said he actually made one, and RobertT who claims to have used nitric and hydrofluoric acid to destroy chips. HF acid will f*ck you up, and likely create poisonous fumes. Just don’t dive into this willy-nilly.

Andrew_K September 18, 2014 1:27 AM

Regarding the thermite setup.

I wonder if there is an easier solution. Thermite may be safe — if and only if it ignites. If it does not, you pretty much look like a fool. Plus, you cannot (or do not want to) test that the setup works. At least I would not ignite thermite to test the setup in my flat, since that stuff is hardly controllable once it has been ignited. When SWAT knocks for entry, I would change that reluctance.

If it’s about destroying harddisks, yes, there may not be many other options to physically destroy the medium. But what about starting with a USB stick? It may provide some GBs of storage (which should suffice for critical data unless it’s video). Why can’t you destroy it with current? Blowing up transistors is fun and works by applying excessive current, causing the die to overheat. Is there a possibility to do something similar to flash memory chips — electronically rendering them unrecoverable? Or is it easier to mechanize dropping them into a blender?

Nobody September 18, 2014 2:10 AM

Not only is thermite remarkably hard to ignite–hardly something that you want to rely upon for emergency data destruction–but rigging it up like that is illegal on so many levels it’s not even funny anymore. We’re talking charges like booby trapping as well as arson. Committing multiple felonies in front of a police officer isn’t exactly the best way to defend yourself from criminal charges.

Besides, why go through all that effort when you could just keep a disk encryption key in RAM and zero the key the moment tamper sensors are tripped? You don’t have to destroy the disk, just the keys.
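A minimal sketch of that idea, with the usual caveats that CPython may leave copies of the buffer around and that a real design zeroes the key in hardware or page-locked memory:

```python
# Keep the disk-encryption key in a mutable buffer; on a tamper signal,
# overwrite it in place. Illustrative only -- not a hardened design.
import os

key = bytearray(os.urandom(32))   # in-RAM key material

def on_tamper(buf: bytearray) -> None:
    """Zero the key material in place when a tamper sensor trips."""
    for i in range(len(buf)):
        buf[i] = 0

on_tamper(key)   # e.g. called from the tamper-sensor interrupt handler
```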

Thoth September 18, 2014 3:23 AM

If you have open custom hardware that is ready for use and proven with better assurance than the Pi, by all means use it. The clear plastic is simply to provide protection from wear and tear, not for EMSEC reasons. It can be costly to buy a Faraday cage setup unless you can make one. As for getting Kali onto the Pi — I would like to see such a custom high-assurance OS if there is one.

With all these suggestions that are not readily available and not commercially viable, how can the non-high-assurance crowd (normal devs and non-techies) get their feet off the ground first?

All those nice TEMPEST and EAL 10+ features may not come so readily despite being good features, and do not forget the monetary cost. Some substances like thermite and highly concentrated acid may not be readily available (acid is considered controlled in my country).

Of course what I suggested is a base skeleton to build on from something abstract.

Don’t forget: if it costs more than its perceived benefits, you will only make it more insecure, because few would want to use it. A balanced and gradual approach that is materially pragmatic and not very costly will allow more people to be willing to build these things, and then attract them to go deeper into higher-assurance experiments and deployments.

Amateur September 18, 2014 6:26 AM

Kali may be excellent as a pentest tool, but I don’t think it’s supposed to act as a Tor gateway.

Thoth September 18, 2014 7:19 AM

Coincidentally, if the vault used is a conductive metal, it would act as a Faraday cage, which makes it a bonus EMSEC tool 😀 .

Another thing to consider is that my setup presumes you are hosting the devices at home (instead of in a server farm). You have electronics at home, concrete walls, and a basement. How are agents going to know where your device is kept in the house, so that they can aim their beaming devices at your vault server, unless they break into your house — or simply get a warrant, or execute a warrantless entry?

To zeroize keys effectively, you need a spare power pack inside the vault server just in case the main power supply goes down.

Clive Robinson September 18, 2014 9:10 AM

W/R to thermite etc,

Step one is to identify what you are protecting, how and most importantly why. If you don’t do this then you will set out on the wrong path and either end up “you know not where” and or “in a world of hurt”.

Step two is to critically examine the What, How and Why. Whilst it’s not possible to give any specific advice, you need to check more general advice holds in your specific case.

Step three is the good old KISS principle: complicated solutions fail in all sorts of ways you probably cannot guess, and you may unfortunately only find out –if at all– long past the point you are dead in the water.

Step four is to go around the loop again and again until you are sure you cannot get the use of thermite etc out of the system.

Importantly thermite is a solution to primarily destroy hardware that may contain some secret information. It’s not to destroy the secret information, which is usually better protected in a number of other ways involving other information.

Thus you need to consider why you might need to destroy hardware and how you can take steps to avoid the need. I have used thermite to destroy custom hardware, and still use it where “memory burn in” may be a consideration. Virtually all other uses in ICT can be done other ways, or with a lot less thermal energy, and other problems legal and safety wise.

For instance you could encrypt the stored data, which if done correctly shifts the destruction to the encryption key, which is much easier. The master key for an encrypted hard drive can be written with highly soluble, non-toxic vegetable inks on large stamp-size pieces of rice paper, which you can eat, burn or drop in a cup of tea etc.

But with a little further thought you can remove this problem by having a PIN/Password/Passphrase written down instead, used in a way from which the master key cannot be found; thus destroying the paper is not really required.
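One way to read this: derive the drive’s master key from the written-down passphrase through a KDF whose salt and parameters live only on the device, so the paper alone does not yield the key. A sketch (salt, passphrase, and iteration count are purely illustrative):

```python
# The paper holds only the passphrase; the salt and iteration count stay
# on the device, so neither artifact alone reveals the master key.
import hashlib

DEVICE_SALT = b"held-only-on-the-device"   # illustrative
ITERATIONS = 200_000                        # illustrative

def master_key(passphrase: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", passphrase, DEVICE_SALT, ITERATIONS)

key = master_key(b"correct horse battery staple")
```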

Thus moving the encryption function from the computer and the hard drive and putting it in a small –single chip– Inline Media Encryptor (IME), significantly reduces the attack surface and what might need to be destroyed. Further how about using a SIM or Smart Card that’s got the appropriate crypto on it and is rated to a suitably high EAL rating…

If the IME is correctly designed and manufactured, simple anti-tamper and dead mens switches will ensure it does what you want without causing you any hard problems with destruction.

With a little thought you can come up with a solution whereby you never get to see the encryption key, and can provably show you never did nor needed to. We have discussed such solutions before on this blog when talking about the “Border Crossing” problem.

However there is still the issue of “residual plaintext” on the PC. To which the first thought is “Why Windows?” etc on the PC. Put simply, all windowing systems and their underlying OSs are way too complex to ever be remotely considered secure, no ifs no buts no maybes. So ask “Why use them”; the simple answer in by far the majority of cases is you don’t. A simple single-user 8bit computer with 32K of RAM and a serial interface or two was more than sufficient for a decade or so. You can get all of this and a lot lot more on a single chip costing around a dollar these days. Heck, there are MicroChip PIC24s which beat the hardware of early minicomputers hands down and –apart from a soft interrupt issue– can run early versions of *nix and support four VT100 terminals, using flash cards instead of hard drives… So ask again “Why” complex OSs, windowing systems and apps?

I urge everyone to rethink the Why, What, How and look seriously at the “How of times past” where needless complexity was not desirable.

Then consider the “How” of KeyMat, IMEs and simple methods of protecting the KeyMat in ways where rubber hose / thermorectal and other human inducements cannot work.

Only when you have done that and you are sure you have no other way should you even remotely consider thermite or other hardware-destroying systems. And by then you should be looking at destroying only a single chip or two at the most, on a fairly small PCB…

It’s also worth noting that most modern military or diplomatic communications signals units don’t have to lug “destruction kits” around with them any more; one reason for this is that crypto, when used properly, can obviate their use… those that do are generally not for “comms” but specialised “intel”.

Oh one last thing, whilst thermite does not explode, the things you are trying to destroy might. Some batteries have the ability to liberate their own chemical energy in the form of rapidly expanding and oxidizing gases which if constrained in any way will make you think a hand grenade has gone off…

Anoni September 18, 2014 11:50 AM

You people are out of your freaking gourd.

There was this case I read about a while back where someone was suspected of possession of child porn. Police raided his house, found him destroying (physically) a USB stick. There was no evidence of CP in his home other than that destroyed USB drive, which may have had CP or pretty much anything else the fellow might have found embarrassing. Court still found him guilty of CP possession and sent him off to jail.

Your talk of thermite is ridiculous. Encrypt with a decent key and overwrite the key (dd /dev/random…) when necessary.
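That overwrite-the-key step, sketched in Python instead of dd; note that on flash media with wear-leveling an in-place overwrite may not reach the physical cells, which is why encrypting first matters:

```python
# Overwrite a key file with random bytes, sync, then unlink. Defense in
# depth, not a guarantee: wear-leveled flash may keep old copies.
import os, tempfile

def shred_keyfile(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(os.urandom(size))   # overwrite in place
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

# Demo on a throwaway temp file standing in for the key file.
fd, path = tempfile.mkstemp()
os.write(fd, b"32-byte disk encryption key.....")
os.close(fd)
shred_keyfile(path)
```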

This whole thing is really off-topic. Getting back to the point at hand: DPR had to have a server on the net. Somewhere, somebody [DPR] is paying for that. When the NSA can watch all the pipes, and governments control so many of the Tor entry/exit nodes, figuring out where Tor traffic is going hardly seems impossible. Hit silk-road often enough and try to figure out where that traffic is winding up. It’s just a question of how badly they want you, and how much effort they’re willing to expend to catch you.

anon September 18, 2014 2:36 PM

Using a statically typed language for your backend would alleviate all these SQL injection issues.

… but strict data types make programming less exploratory or fun, so that hasn’t gotten traction on the web. 🙁

DoD-developed programs use strong typing. The rebels should learn from their opponents.

Anura September 18, 2014 6:35 PM


SQL injections are a problem that’s been solved on most platforms since PDO was introduced in 2004 with PHP 5.0; for ADO on Windows and JDBC for Java, I’m not sure they have ever been a problem since their inception in the mid-1990s.

Programmers have the tools, they were just taught to dynamically build their own queries.
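For illustration, the parameterized form, using Python’s sqlite3 as a stand-in for PDO/ADO/JDBC:

```python
# Parameterized query: the value is bound by the driver, never spliced
# into the SQL text, so quoting tricks can't change the statement.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", "admin"))

evil = "alice' OR '1'='1"
rows = conn.execute("SELECT role FROM users WHERE name = ?", (evil,)).fetchall()
# the injection attempt matches no row: rows == []
```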

Nick P September 18, 2014 7:00 PM

re thermite & data destruction

It is big and offtopic so I suggest we move it to the Squid thread so readers here can focus on Tor & DPR. I’ve replied here.

Nick P September 18, 2014 8:53 PM

@ Thoth

How to secure services like Tor against FBI/NSA

The topic of Tor security is special: the opponents are typically TLA’s. Silk Road was taken down by a very capable TLA with hackers, 0-days, and legally-empowered investigators. They are partnered with the most powerful TLA (NSA) in the world of SIGINT collection, with ability to ask for their help & parallel construct that away. They also have legal partnerships with others around the world. So, this is a situation where even remote attacks demand a high assurance solution. If it’s medium, it needs to focus assurance activities on everything we know they hit. And you better be covered legally, but I’ll focus on technical side for now.

The best way to start on requirements is look at it logically. Without knowing much about Tor’s insides, I’m guessing the logical system involves these components: hardware, main firmware, peripheral firmware, OS kernel, networking stack, storage, network time, Tor protocol activities, networked apps people connect to/with, configuration management tools, and administrative tools. A number of these are security-critical, being in the TCB and designated “trusted.” The TCB of the Tor engine must be initialized into a trusted state at boot, protected from attacks during running, and be sanitized upon shutdown.

So, what are the threat vectors? Most of the stuff I just mentioned… The FBI definitely can find 0-days in OS kernel, drivers, main firmware, or apps. The skill is the same for each of these, so if we haven’t seen it they just haven’t displayed it yet. The NSA adds attacks on peripheral firmware, protocols, covert channels, side channels, and hardware subversion (esp by interdiction). To stop FBI, one must ensure they can’t hit any layer from I/O up to protocol engine. To stop NSA, one must protect even more. This is… challenging*. 😉

  • “Challenging” is synonymous with “You’re f***ed” in this usage.

The simplest strategy for both is physical decomposition. Like my PCI backplane and Artigo designs years ago, the logical “system” is actually a bunch of different computers communicating over various links. You can break the system into untrusted and trusted systems communicating over non-DMA links. Can’t trust U.S. companies’ IOMMU’s because we can’t trust U.S. companies. (Simple, eh?) The hardware is ideally pre-2004 that wasn’t shipped, or was couriered by someone who is incredibly trustworthy (yourself even). It must never get into enemy hands. Ever. For those inclined, one can make a custom SOC or board that basically acts as both a switch with IOMMU and a guard for traffic mgmt rules. It will be faster, almost plug n play, simpler, require less hardware, and cost a lot of money to develop. Tradeoffs, tradeoffs… 😉

The simplest decomposition has three nodes: internal transport, Tor node, external transport. The transport gateways apply basic security filtering while also converting the packets to an easily parsed format sent over non-DMA lines. As such communications are handled by the CPU directly, it’s advisable to make a scheduling policy whereby I/O is only done so often. That keeps the number of interrupts and cache misses to a minimum. A deprivileged subject does the I/O where possible, moving it to/from a specific and protected storage area. One or more separate subjects with their own protected, internal memory do the security-critical operations.

At this point, we have to secure the main points of attack. Let’s start simple. The middle system is only running a pre-configured Tor server modified to use our safer I/O interfaces rather than networking. It also needs a process or processes to handle that I/O. The configuration, executables, etc are all created on another machine and simply moved to this one. There’s no remote administration in this model: it’s done locally via the hard disk itself or a dedicated port with authentication. This system just runs Tor and only needs the kernel functions that it depends on.

The easiest setup starts with a monolithic OS. You can use OpenBSD or a Linux with extra protections. Configure the Tor service with maximum isolation using any technique you have. Make sure the firewall filters out anything that isn’t Tor (optionally doing other checks). Like in the Poly2 project, remove every piece of code in the system that’s not critical to its functioning. Don’t just disable it: go into it, delete the code in the functions, and tell it to return success (or failure). This is easier than outright deleting modules because it requires less work and understanding of the system. For stuff the system does use, ensure security checks are in there & optionally use the special compilers that automatically add safety. Use things like SVA-OS or system call mediation to give the kernel a bit of extra protection.

Your attack surface is already very low. The I/O is simple enough to write nearly bug-free implementations. They will be forced to focus on the application and few kernel calls it uses. The application logic should be made in a functional way with an internal state of the entire thing, incoming data causing a change in state, and optionally causing a response. Each state should be analyzed for the effect of common errors and attacks. Error states should recover or fail safely with logging. The system should be written in a safe language or safe subset of a language with input validation, static analysis, dynamic analysis, extensive testing of every state/feature, and fuzz testing running overnight regularly on instrumented code. Techniques such as control flow integrity or safety-critical memory management can be used to reduce likelihood of attacks. The compiler and linker must be verified to not screw anything up, like optimizing away security checks.
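A toy sketch of that functional, state-machine style (states and events invented for illustration): a pure transition function maps (state, event) to a new state, and anything unexpected lands in a logged, safe error state instead of undefined behavior.

```python
# Pure transition table; unexpected input fails safely with logging.
TRANSITIONS = {
    ("idle", "connect"): "handshaking",
    ("handshaking", "ok"): "established",
    ("established", "close"): "idle",
}

def step(state: str, event: str, log: list) -> str:
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        log.append(f"unexpected {event!r} in state {state!r}")
        return "error"      # fail safely; recovery is an explicit reset
    return nxt

log = []
s = step("idle", "connect", log)   # -> "handshaking"
s2 = step(s, "garbage", log)       # -> "error", with a log entry
```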

That’s one application on a hardened platform. The next step is breaking it into pieces that each do a job and communicate. A microkernel platform open to inspection should be used, such as OC.L4, OKL4, Genode, seL4, Minix3, etc. Each logical component, inside or outside of Tor, is put into its own address space. The system is modified using distributed programming techniques to safely coordinate the overall system activity over message passing. Each TCB component is then designed, coded, and tested just as described above. The interaction of components can be modelled in a specification or programming language with concurrency checkers to find such errors, although there are distributed transactional approaches if I recall.

At this point, each part of the TCB is quite hardened. If cash flow continued, higher assurance techniques can be applied. Among the most important would be a covert channel analysis of the system. There’s tools to model and track information flows to make it easier. The next step would be an inherently safer architecture*. Typed (Sandia SSP), tagged, or capability (Cambridge CHERI) processors can be employed to contain or prevent code injection attacks. Dedicated I/O processing chips with DMA to Tor memory might be added for performance, extra security, or to avoid using the same TCB setup for app logic & I/O. Formal verification might be used on any aspect of the system. Tools might be expanded for automatically adding security protections to safer source code, then certifying compilation & linking at object code level.

  • The architecture might actually be one of the first things you do. When I did stuff, there were none around. Now, there’s over a dozen in various stages of maturity, licensing, and ASIC cost. It could be the very first thing the project does.

People might want remote administration. One of my old approaches applies here: use a production system and a management system, with a safe link between the two. The production system can be ROM’d to do self-checks, then automatically boot from the management system’s storage. Combined with an object store or RAM disk, this design choice means the Tor node doesn’t even need a filesystem layer. :O All the tools needed for diagnosing, generating, or otherwise managing the Tor node live on the management computer. As with Tor itself, a dedicated application (e.g. SSH) runs on the Tor node that basically acts as a middleman between the external network & the management node, shuffling the data. The service is off & I/O blocked by default. A guard application listening on a certain port checks incoming data for an authenticated command to activate administration. It then leverages trusted code to initiate a session… from the Tor node to a designated external node. One can use SSH or just PGP-style encrypted commands sent back and forth asynchronously. I did the latter with a pre-shared master secret for simplicity, speed, and immunity to quantum advances.
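
The guard’s activation check might look like the following sketch. The key, packet format, and freshness window are assumptions for illustration, and it shows only the pre-shared-secret authentication step, not the PGP-style encryption of the commands themselves:

```python
# Guard sketch: administration stays off unless a fresh, authentic
# activation packet arrives, MAC'd with a pre-shared master secret.
import hmac, hashlib

PRESHARED_KEY = b"replace-with-real-pre-shared-master-secret"

def make_activation(command, timestamp, key=PRESHARED_KEY):
    msg = f"{timestamp}:{command}".encode()
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + tag

def guard_check(packet, now, key=PRESHARED_KEY, window=30):
    """Return the command only for a fresh, authentic packet; else None."""
    try:
        msg, tag = packet.rsplit(b"|", 1)
        ts, command = msg.decode().split(":", 1)
        ts = int(ts)
    except ValueError:
        return None  # malformed: stay silent, stay off
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(tag, expected):
        return None  # wrong key: no response at all
    if abs(now - ts) > window:
        return None  # stale timestamp: resists naive replay
    return command
```

Note the guard answers nothing on failure, so a scanner can’t even tell the port is interesting, and `hmac.compare_digest` avoids leaking the tag through comparison timing.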

Finally, for your personal satisfaction, you put all this stuff in an EMSEC safe with obfuscated tamper circuits that can activate thermite. The circuits should detect RF attacks, extreme heat/cold, radiation spikes, and strong vibration. A HEPA filter on incoming air can prevent troubles nobody has published yet. (Don’t ask.) The circuits should send data to radiation-hardened microcontrollers with battery backup. You need at least three in lockstep with voting protocols. Redundancy and diversity in the detection circuits help. All this stuff costs so much already (esp. EMSEC) that you might as well spend a bit more to keep your tamper circuitry from nuking the box accidentally. Let the circuits run in the production environment under your supervision in a learning mode to adjust to the normal operating ranges of that environment. Then they are put into production mode and can’t be disabled without a secure, secret method. You need to personally courier the box to its destination and set it up. See how costs add up for tamper-resistant remote solutions?
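
The triple-redundant voting can be sketched as follows. The sensor thresholds are invented for the example; `True` means "tampered":

```python
# 2-of-3 majority vote over lockstep tamper verdicts, so one faulty or
# glitching detector can neither trigger nor suppress the response.
from collections import Counter

def vote(readings):
    """Majority vote over three tamper verdicts (True = tampered)."""
    assert len(readings) == 3
    return Counter(readings).most_common(1)[0][0]

def check_sensor(value, low, high):
    # A reading outside the learned normal operating range counts as tampering.
    return not (low <= value <= high)
```

For example, with a learned range of 10–40 °C, readings of 21.5, 22.0, and 95.0 vote `False`: the single outlier is outvoted instead of firing the thermite, which is exactly why the learning mode and redundancy matter.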

Of course, they are a coalition with legal authority and covert ops. You cannot control the box if you’re in a Five Eyes country, and there’s risk in a foreign one. Whoever manages it can’t be a citizen of one of those countries, as they could be extradited. Their citizenship, location, and travels must keep them away from countries complicit in Five Eyes’ covert activity. You also need crime to be low and the standard of living decent, to make bribes cost more. Switzerland has the best track record on most of these traits, with Iceland having some strong attributes. In short, if Five Eyes courts or spooks can target you easily, then you are screwed no matter what tech you use.

So, there you people have it: a combo of NSA-resistant INFOSEC 101 & a Tor Secure Development Guide. This should be a start. (Hears someone in the audience say, “A… start…?”) Yeah, a start. I don’t list every potential attack. The opponents are highly likely to fail, though, with most methods they will try. They might even give themselves away, and tell you exactly where the flaws are, if you’re logging crash data. Now start building! 🙂

Note: My earliest post on this stuff was on Freenet here. It begins with an argument for high-assurance techniques with covert channel mitigation, along with the base design.

anon September 19, 2014 1:01 AM

@ Anura

Yes, programmers have the tools, but most don’t use the available options to harden their systems.

Using a safe, statically typed language for web development would force programmers to build applications that are immune to these attacks, without relying on individual developers to take the appropriate precautions.

I’m suggesting a secure-by-default approach. PHP doesn’t give us that.

Nick P September 19, 2014 11:31 AM

@ anon

I addressed web application security innovation here. Use it, build on it, and tell your programmer friends. It’s not really the concepts or technology that are lacking: people just keep building on insecure foundations instead of improving on what’s already proven to work. It’s the same problem throughout the IT industry.

