New NSA Leak Shows MITM Attacks Against Major Internet Services

The Brazilian television show “Fantastico” exposed an NSA training presentation that discusses how the agency runs man-in-the-middle attacks on the Internet. The point of the story was that the NSA engages in economic espionage against Petrobras, the giant Brazilian oil company, but I’m more interested in the tactical details.

The video on the webpage is long, and includes what I assume is a dramatization of an NSA classroom, but a few screenshots are important. The pages from the training presentation describe how the NSA’s MITM attack works:

However, in some cases GCHQ and the NSA appear to have taken a more aggressive and controversial route—on at least one occasion bypassing the need to approach Google directly by performing a man-in-the-middle attack to impersonate Google security certificates. One document published by Fantastico, apparently taken from an NSA presentation that also contains some GCHQ slides, describes “how the attack was done” to apparently snoop on SSL traffic. The document illustrates with a diagram how one of the agencies appears to have hacked into a target’s Internet router and covertly redirected targeted Google traffic using a fake security certificate so it could intercept the information in unencrypted format.

Documents from GCHQ’s “network exploitation” unit show that it operates a program called “FLYING PIG” that was started up in response to an increasing use of SSL encryption by email providers like Yahoo, Google, and Hotmail. The FLYING PIG system appears to allow it to identify information related to use of the anonymity browser Tor (it has the option to query “Tor events”) and also allows spies to collect information about specific SSL encryption certificates.

It’s that first link—also here—that shows the MITM attack against Google and its users.

Another screenshot implies that the 2011 DigiNotar hack was either the work of the NSA, or exploited by the NSA.

Here’s another story on this.

Posted on September 13, 2013 at 6:23 AM • 141 Comments


Tyrranum i Liberati September 13, 2013 6:40 AM

Exactly how does one trust someone we cannot see or meet with a service we rely on for security?

These MITM attacks seem to be the final solution for the NSA and I can’t see any way around them with their current grip on the interlinks.

z September 13, 2013 6:52 AM

Install the Perspectives browser add-on and then take a look at how many certs have been presented by Google over the last 30 days. There are a LOT of them.

Mike the goat September 13, 2013 6:53 AM


I composed a similar response earlier but for some reason it didn’t post, so I will retype it.

What are your thoughts on hardware as a potential vector to facilitate eavesdropping? Case in point: the SSL accelerator cards that many servers use, and the TPM modules that are making their way onto consumer motherboards.

I was made to defend my position on incinerating HDDs as a policy against data remanence the other day, and I feel somewhat vindicated as we slowly find out what the NSA has been up to. The customer argued it was wasteful and bad for the environment, but I insisted.

Modern HDDs have a microcontroller inside that runs proprietary firmware that we – the admins – don’t get a chance to audit. We also know that there is often additional capacity that is not exposed at the block layer, ostensibly for remapping failing sectors, but as we have seen the capacity actually made available sometimes differs greatly (e.g. some WD 1TB HDDs a few years back had the same physical parts as the 2TB models and only had the firmware changed, to simplify manufacturing). How do we know that ‘interesting’ content is not retained, even when erased? I can think of several OS and filesystem ways to do this, the cleverest being a file-carving algorithm that searches for, say, DOC or JPEG headers and stashes them away. Or, if you have lots of spare space to play with, as in the case above, you could merely make a delete operation remap. An algorithm could detect repeat overwrites (say, by shredding tools) and preserve the mirrored copy. Obviously full-disk encryption would defeat this, so long as the key is secured off the media.
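To make the idea concrete, here is a minimal sketch (ordinary Python, not firmware) of the kind of magic-byte scan described above: looking through a raw block for well-known file headers. The signatures are standard published magic numbers; everything else is hypothetical illustration.

```python
# Illustrative sketch only: a magic-byte scan over raw block data,
# the sort of "file carving" heuristic hypothetical firmware could run.
SIGNATURES = {
    b"\xff\xd8\xff": "jpeg",          # JPEG Start-Of-Image marker
    b"\xd0\xcf\x11\xe0": "ole/doc",   # legacy MS Office (OLE) container
    b"PK\x03\x04": "zip/docx",        # ZIP local file header (also .docx)
}

def carve_headers(block: bytes):
    """Return (offset, filetype) for every known header found in a raw block."""
    hits = []
    for magic, name in SIGNATURES.items():
        start = 0
        while (pos := block.find(magic, start)) != -1:
            hits.append((pos, name))
            start = pos + 1
    return sorted(hits)

# Example: a buffer with a JPEG header buried at offset 10
buf = b"\x00" * 10 + b"\xff\xd8\xff\xe0" + b"\x00" * 50
print(carve_headers(buf))  # [(10, 'jpeg')]
```

A real implementation would of course run inside the drive controller and be invisible to the host; the point is only how little logic the heuristic needs.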

A more obvious threat is SATA secure erase. We’re told to trust it, but how do we know for sure?! This is why we need open hardware and open firmware.

I suspect that the NSA has not broken RSA, at least at sensibly large key sizes, but that they are ‘cheating’ through bad implementations. That said, for those websites not using PFS, what would stand in the way of picking a select few targets and putting all their resources into cracking (or stealing through human assets or remote exploits on the target host) just those keys?

The recent revelations about MITM attacks make sense but aren’t as elegant as I would expect. I figured they were getting their traffic through a fiber tap and thus were limited to passive attacks but clearly I was wrong.

I think there needs to be a discussion on hardware as we now /know/ some of it has been subverted. Hardware crypto and RNGs are a good place to start. Remember the kerfuffle over RdRAND in Linux kernel?

Keep digging Bruce. Full disclosure is the only cure to this crisis.

Gervase Markham September 13, 2013 7:03 AM

Sorry if I’m being dumb (although I don’t understand Portuguese), but the slide demonstrating an MITM doesn’t mention SSL at all. And the Flying Pig software screenshot seems to show it providing information about SSL connections, which is not the same thing as MITMing them. It’s called an “SSL Knowledge Base”.

Can someone connect the dots for me and show me where the released information shows that the NSA were MITMing SSL connections to Google?

If that slide does show them doing that, and if it’s a network-level rerouting, then if they want to remain undetected then they must be able to MITM all SSL traffic, regardless of client. That’s a risky strategy now that Chrome has cert pinning with reporting – if the target uses Chrome, they will get busted. (Unless they’ve managed to get a fake cert from the same CA Google uses. I don’t know whether Google pins to roots, or its own intermediates. Or unless Google has been enjoined to not reveal certain reported cert pinning violations.)

In other browsers it is, of course, technically possible to see the changed cert chain if you look. But perhaps they rely on no-one doing that.

If they were MITMing SSL, then either the target clicked through warnings (unlikely) or they have subverted a widely trusted CA. (Again, if you want to be able to MITM all SSL traffic, you need a widely trusted CA to avoid giving errors on some clients.) The $64,000 question is: which one?

z September 13, 2013 7:14 AM

@Gervase Markham

Law enforcement has had the ability to use their own valid certs signed by real CAs for some time now. I have no doubt that the NSA can as well.

Aaron W September 13, 2013 7:23 AM

Google rolled out forward secrecy to its SSL connections in late 2011. In theory, that made MITM attacks impossible, even if the NSA had a forged certificate, yes? Or just much, much harder?

So did Google roll out forward secrecy when it realized the NSA/GCHQ was using MITM attacks against its users? Or is this a MITM attack that’s effective even against forward secrecy? Or is it a MITM attack against plain old HTTP, which would be much less of a concern?

Marian September 13, 2013 7:34 AM

@Aaron W

Google rolled out forward secrecy to its SSL connections in late 2011. In theory, that made MITM attacks impossible, even if the NSA had a forged certificate, yes? Or just much, much harder?

Forward secrecy (achieved in SSL by using DHE and ECDHE suites) makes it impossible to recover past (or future) sessions when long-term keys are compromised. It does not protect against MITM, which remains possible when authentication is not done correctly.
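Whether a given connection has forward secrecy is visible in the negotiated cipher suite. A tiny sketch that classifies OpenSSL-style suite names by their key exchange (the suite names here are real OpenSSL names; the check itself is a simplification of the era's TLS 1.2 naming):

```python
def provides_forward_secrecy(cipher_name: str) -> bool:
    """True if the suite uses an ephemeral (EC)DH key exchange.
    Works on OpenSSL-style TLS 1.2 names, e.g. 'ECDHE-RSA-AES128-GCM-SHA256'.
    Suites without the (EC)DHE prefix use static RSA key transport: a
    compromised server key retroactively decrypts recorded traffic."""
    return cipher_name.startswith(("ECDHE-", "DHE-"))

print(provides_forward_secrecy("ECDHE-RSA-AES128-GCM-SHA256"))  # True
print(provides_forward_secrecy("DHE-RSA-AES256-SHA"))           # True
print(provides_forward_secrecy("AES256-SHA"))                   # False
```

This also answers Aaron W's question above: PFS protects recorded sessions against later key compromise, but an active MITM with a forged certificate still works.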

DNS666 September 13, 2013 7:36 AM

@Gervase Markham

What would be the point of executing an active attack (MITM) and risking exposure against non-SSL traffic?! One would just listen in on the unencrypted traffic passively (as NSA et al. are now known to do extensively), no MITM shenanigans necessary.

Nicholas Weaver September 13, 2013 7:37 AM

The journalists need to start searching for DigiNotar in their document set in more detail. It would explain a lot about the DigiNotar hack if either the NSA was involved or the NSA stole the fake certs from whoever did it.

Mike the goat September 13, 2013 7:48 AM

I would also imagine they would need a diverse allocation of IP space (and preferably use tunnels to different locations rather than have all the routes go to one AS and make it look dodgy if scrutinized) as having a few blocks repeatedly connect to a target’s web servers would look very strange indeed. Of course they could be using this attack in a very targeted way and not routing everyone through it which would be a lot more subtle.

The commenter who spoke of the Perspectives project makes an interesting point. The notary servers on Perspectives may contain the smoking gun(s).

tor September 13, 2013 7:49 AM

Quick Ant QFD Tor events. I assume QFD doesn’t mean what it does in the civilian world. Maybe that is their Tor exit-node tracker, or possibly servers they deploy with one click to run a Tor MITM attack: when victims try to download the Tor Browser Bundle, they get the NSA version instead.

Nicholas Weaver September 13, 2013 7:53 AM

Some background for those unfamiliar:

Non-encrypted traffic doesn’t need MITM to capture, so the MITM diagram only applies to SSL.

The DigiNotar hack was discovered when the Google certificate was used to MITM all traffic going to Google from Iran (it tripped the newly installed Chrome certificate pinning IIRC).

The fake certificates from DigiNotar match the NSA’s interest list, including Google, the Tor Project, WordPress, and others.

Jenny Juno September 13, 2013 8:09 AM

“take a look at how many certs have been presented by Google over the last 30 days. There are a LOT of them.”

I use the Cert Patrol add-on for Firefox, and over the last month it seems like every single time I go to Google I get a different certificate for them. Frequently they are from different authorities.

Has there been a discussion of this anywhere? I’d really like to see an informed analysis (versus my own paranoid speculation) of what the hell is going on.

Mike the goat September 13, 2013 8:13 AM

@tor: it certainly brings the revelations from the Tor Project of some users being decloaked by visiting sites hosted by Freedom Hosting (which were mostly illegal stuff but also some services like Tormail) into a new light. From what I remember, the guy went down on some charges, and the FBI or whatever alphabet-soup agency was involved took over his servers and added a nice bit of Java which made a CONNECT to an IP in Maryland using some shellcode. Obviously this would go through the standard Windows TCP stack and bypass Tor, unless they were running a VM or the routing onto Tor was done upstream (say, on an OpenWRT router).

Mike the goat September 13, 2013 8:22 AM

@Jenny – prior to their transition to 2048-bit certs starting Aug 1, all Google certs /should/ have been signed by the Equifax-issued Google root cert. This sounds very dodgy.

Wes the Mod September 13, 2013 8:25 AM

Funny thing about Bresnan up here in Montana: they were doing this for at least 4 years via DNS, for Google as well as other sites. Due to their inability to set up a fully functional network in Butte, none of this seemed to matter. There were certificate mismatches, the web server would run out of connections in its connection pool… This whole idea of hijacking traffic via routing seems a bit excessive when you can proxy the traffic and only a few people notice.

Needless to say the solution in this case was just to change your DNS servers to public ones. For a more global solution there are always the root hint servers at which point you only have to get the cooperation of one group of people.

Marcio Lima September 13, 2013 8:41 AM

I am Brazilian, and the NSA slide indeed mentions an SSL MITM attack, although it is not clear whether the NSA actually attacked Petrobras’s private network or it was just an exercise. The discussion here in Brazil is getting a little passionate. In a previous program, Fantastico showed some NSA documents implying that the US government had succeeded in tracing President Dilma Rousseff’s mobile calls to her secretaries, ministers, and even relatives. The President talked to Obama at the G20 meeting, and she may call off a visit to the US because yesterday’s talks between Susan Rice and Brazil’s Minister of Foreign Affairs at the White House were not considered productive. Many Brazilians are outraged about this matter, since this is not the behavior expected from a nation with which we have had a long history of friendship, including fighting together in the two World Wars.

Mike the goat September 13, 2013 8:43 AM

@Wes – agreed. I previously worked as chief NOC engineer for a medium-sized ISP. We received warrants for subscriber information so regularly that it made me despair! I remember we were required to support Cisco’s lawful-interception technology, at OUR cost, by a certain date. As all our network equipment was x86 running FreeBSD, this caused us some problems. Even our DSL tails were terminated by a few Linux boxes running l2tpns. As a consequence, any ‘targets’ had to be specifically routed through a 7xxx-series Cisco, which provided the proprietary solution they so desired (I am certain LI has been standardized now). In an act of disobedience I set up packetfilter to slightly delay traffic routed to the capture router, so that the net result was an additional 4 ms of latency and a change in TTL. I told a few of my more paranoid enterprise customers that we were legally obliged to say nothing, but told them to take notice of their ping times. Funnily enough, you’d also see in a traceroute that the reverse DNS name was revealing enough 😉

We also stopped transparently caching with squid, because while we were doing that we were required to maintain logs. Note that this took place outside of the US, but in one of the “five eyes” countries, in the late 90s/early 2000s.

I often wondered why they even bothered with LI, as it was common knowledge that our country’s surveillance organization occupied an entire floor of the data center in which we had colocation. I guess anyone who has been in a largeish data center will know the level of access control is far too excessive if they were merely protecting customers’ racks. We were escorted to our floor and couldn’t even exit the floor via the elevator without picking up the security phone and asking. I often worried about fire-code compliance. The only obvious safety equipment was an ‘escape closet’ they installed on each floor when they transitioned from halon fire suppression to CO2 (asphyxiating in a chilled data center isn’t my idea of a nice place to die).

Anyway my point is this stuff has been going on for years. It needs to stop now.

bob September 13, 2013 9:11 AM

It would be nice to be able to grab a server’s signature via existing secure connections. If I’ve already visited local government sites, Facebook, and the BBC, then when I visit a new site I could ask all of those sites to verify the connection. Any MITM attack would have to be prolonged, yet still specific, to avoid detection.
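bob's idea is essentially what Perspectives-style notaries do: compare the certificate fingerprint you were shown against what several other vantage points saw. A toy consensus check, with made-up fingerprints standing in for real notary responses:

```python
def notary_consensus(my_fp: str, notary_fps: list, quorum: float = 0.5) -> bool:
    """True if more than `quorum` of the notaries saw the same cert
    fingerprint we did. Fingerprints are hypothetical placeholders."""
    if not notary_fps:
        return False  # no independent observations, nothing to compare
    agree = sum(1 for fp in notary_fps if fp == my_fp)
    return agree / len(notary_fps) > quorum

honest = "aa:bb:cc"
# Most notaries agree with us: probably the genuine cert.
print(notary_consensus(honest, [honest, honest, "dd:ee:ff"]))  # True
# A local MITM shows us a cert no notary has ever seen.
print(notary_consensus("ev:il:00", [honest, honest, honest]))  # False
```

Jacob's objection later in the thread applies here too: this only helps if the attacker cannot also forge the notary responses, so those channels need their own authentication.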

Adam September 13, 2013 9:14 AM

It would go a long way toward defeating this kind of attack if certs could be signed by more than one signatory, and not just CAs but business partners, business bureaus, governments, etc., to build a web of trust.

And if browsers were far more proactive in validating discrepancies between certs, and actively cached them so they could verify that certs didn’t suddenly change for no apparent reason.

Gervase Markham September 13, 2013 9:17 AM

z: citation needed. (If Mozilla discovers any trusted CA has been providing certs for MITM purposes, we will un-trust them.)

Marcio: thanks for the confirmation. I wonder if the NSA still think this sort of attack is safe to run, now that pinning is a reality.

Re: DigiNotar, the NSA would need to have serious control over the Iranian internet infrastructure in order to mount an attack with the pattern that was seen. Watch this video, which geolocates the source IPs of OCSP pings, which are indicative of certificate use. Whoever conducted the Iran attack was able to control the routing of a great deal of Internet traffic from inside Iran, but not outside. (Or they chose not to.) I’m not saying it’s impossible, but the Iranian government seem like a much more likely culprit.

That’s not to say that the NSA couldn’t have re-stolen the certs from the hacker, or from the Iranians, or have also broken into DigiNotar themselves and used the certs they took in much more targeted attacks. (Only a tiny number of the stolen certs were ever detected in the wild.)

Winter September 13, 2013 9:23 AM

DigiNotar was the certificate notary of the Dutch government. The hack caused quite a lot of damage in the Netherlands.

If the hack of Diginotar really was perpetrated by our friends from the USA, we (the Dutch) sure do need no enemies anymore.

In other words, the NSA has waged cyber war against their allies.

Mike the goat September 13, 2013 9:51 AM

@Marcio – re cell phones. I imagine any kind of conference where large numbers of people are in a single area could be owned by just running your own BTS and doing a MITM attack that way; GSM was never a particularly security-oriented protocol. I remember a Black Hat presentation a few years back where a guy whose name I cannot remember showed how he was able to quite accurately geolocate someone by combining HLR data (which should be confined to trusted telcos but is exposed by several companies for online queries) with a vulnerability, I believe in SS7, which gave relative signal strength on more than one tower, allowing rudimentary geolocation. Hell, even HLR data alone is bad enough. I think the US gov’t would be crazy not to use that data. It would be easy to target, too: sniff traffic while people are in the airport and you have a list of IMSIs to query in 24 hours or so, when you know they’ll be at their destination. Keep watching and you will easily detect if someone has been in any geographical area they deem interesting enough to screen them when they return.

@Gervase – I think the fundamental problem is the centralized trust we put in the CAs. Clearly, all of the stuff-ups (even ignoring the NSA scandal) indicate this trust is misplaced. A PGP-like trust model would be more cumbersome but more resilient. When DNSSEC becomes a widespread reality (and if we can trust the roots, which is another post in itself), perhaps the fingerprint could be put in a TXT record. That at least increases the number of organizations they need to subvert.

@Bob – even if browsers were to implement an OpenSSH-style known_hosts fingerprint cache, at least we would get a warning when a cert changes.
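The known_hosts-style cache suggested above is simple to sketch as trust-on-first-use: remember the first fingerprint seen for each host and flag any change. (In-memory only here; a real browser add-on would persist the cache and let the user inspect changes.)

```python
class TOFUCache:
    """Trust-on-first-use certificate fingerprint cache, in the style of
    OpenSSH's known_hosts. Purely illustrative: no persistence, no expiry."""

    def __init__(self):
        self._seen = {}  # host -> first fingerprint observed

    def check(self, host: str, fingerprint: str) -> str:
        known = self._seen.get(host)
        if known is None:
            self._seen[host] = fingerprint
            return "first-use"          # nothing to compare against yet
        return "ok" if known == fingerprint else "CHANGED"

cache = TOFUCache()
print(cache.check("example.com", "ab:cd"))  # first-use
print(cache.check("example.com", "ab:cd"))  # ok
print(cache.check("example.com", "ff:ff"))  # CHANGED -> warn the user
```

The known weakness, which the Cert Patrol reports in this thread illustrate, is that large sites legitimately rotate certs across many front-ends, so a naive cache warns constantly.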

MatthijsK September 13, 2013 9:54 AM

About Schneier’s suggestion regarding DigiNotar: I fail to see how the contents of that slide in any way warrant the claim that the NSA was involved in, or exploited the consequences of, the compromise of DigiNotar. The screenshots merely show DigiNotar mentioned in a slide. Am I missing something?

(Note: I am not claiming that NSA was NOT involved in, or exploited the consequence of, the compromise of DigiNotar.)

Winter September 13, 2013 10:09 AM

“I use the Cert Patrol add-on for firefox and over the last month it seems like every single time I go to google I get a different certificate for them. Frequently they are from different authorities”

I get the same from the Netherlands. It has been this way for years. But only for Google.

Sami Lehtinen September 13, 2013 10:19 AM

Ahem, neither Yahoo nor Hotmail/Outlook encrypts email in transit; they use plain SMTP without STARTTLS. To protect email properly you need to use STARTTLS, with an authenticated certificate fingerprint.
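Verifying an SMTP server's certificate fingerprint over a STARTTLS upgrade can be sketched with the standard library. The hostname and expected fingerprint below are placeholders; the fingerprint would have to be obtained out of band.

```python
import hashlib
import smtplib
import ssl

def sha256_fingerprint(cert_der: bytes) -> str:
    """Colon-separated SHA-256 fingerprint of a DER-encoded certificate."""
    digest = hashlib.sha256(cert_der).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def check_smtp_cert(host: str, expected_fp: str, port: int = 587) -> bool:
    """Connect, upgrade with STARTTLS, and compare the server certificate's
    fingerprint against one obtained out of band. Host is a placeholder."""
    ctx = ssl.create_default_context()
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.starttls(context=ctx)
        der = smtp.sock.getpeercert(binary_form=True)
        return sha256_fingerprint(der) == expected_fp

# Offline demonstration of the fingerprint helper:
print(sha256_fingerprint(b"test")[:11])  # 9f:86:d0:81
```

Pinning the fingerprint like this sidesteps the CA system entirely, which is exactly the point when the CA system itself is the suspected weak link.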

Jacob September 13, 2013 10:52 AM

Re DigiNotar
I seriously doubt that this was NSA work. At the time of the breach, there was an active “discussion” by the hacker who claimed to have done it (I can’t recall the site). He identified himself as a self-taught hacker from Iran, boasting that he could do wonders penetrating servers, and said that, as a nationalist Iranian, he had decided to help the Iranian government in its fight against internal political subversion by letting them use fake certs for MITM against the “enemies of the state”. He also vaguely explained how he hacked DigiNotar.
From the language, the content, the psyche, it was not NSA.

Re Perspectives add-on for Firefox:
How foolproof is this? This add-on lets you see whether your browser sees a different cert from what other clients (about 10), running on a known list of geo-distributed servers, see.
If Eve targets you and presents you with a fake cert, what will stop her from also feeding you fake acknowledgements from those geo-distributed clients?

JeffH September 13, 2013 11:10 AM

@Stanislav Datskovskiy “At this point, if you still … rely on security software you haven’t personally understood the source code of, you’re a chump.”

This sort of crazy talk is why Stallman is considered a fringe nut despite his message being a sensible one. If you can’t come up with sensible suggestions on how to fix problems, there’s little value associated with the underlying message.

merp September 13, 2013 11:16 AM

Understanding the source is pointless unless you build it yourself, since there’s no guarantee the source for the binary you downloaded wasn’t backdoored. Now we have to do deterministic builds to make sure even the compiler isn’t adding hidden backdoors or malware, since the NSA could be intercepting your repository traffic if you are a security software engineer and they want access to your encrypted VoIP app or Tor browser.

Stanislav Datskovskiy September 13, 2013 11:25 AM

JeffH: ‘don’t shoot the messenger.’ Stallman, old dervish though he may be, turned out to be right about virtually everything.

And it is perfectly feasible to rely on properly implemented security software that you understood (or wrote) yourself. A one-time pad engine can be a 20-line Perl script (assuming you have access to a good TRNG). Granted, you’re stuck using this strictly between your own systems, and it isn’t as convenient as public-key. Such is life. Life never was easy for people who want genuine security rather than a placebo like SSL.
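The one-time-pad engine Stanislav describes really is tiny. A minimal sketch (Python rather than Perl, with os.urandom standing in for the true hardware RNG a real OTP requires; the usual caveats apply, above all never reusing a pad):

```python
import os

def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    """XOR the message with a never-reused pad of at least equal length."""
    assert len(pad) >= len(message), "pad must cover the whole message"
    return bytes(m ^ k for m, k in zip(message, pad))

# XOR is its own inverse, so decryption is the same operation.
otp_decrypt = otp_encrypt

msg = b"attack at dawn"
pad = os.urandom(len(msg))          # stand-in for a true TRNG
ct = otp_encrypt(msg, pad)
print(otp_decrypt(ct, pad) == msg)  # True
```

The security rests entirely on the pad: truly random, as long as the traffic, kept secret, and used exactly once. Break any of those and the "perfect" secrecy evaporates, which is why the scheme stays confined to pre-shared-key scenarios like the BD-ROM exchange mentioned below.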

Don’t hate the ‘conspiracy nuts’ who tell you that mainstream crypto was doctored, just because there is no easy answer to the problem. We’ve been saying it for years, and nobody listened.

Nick Semenkovich September 13, 2013 11:27 AM

FWIW, some prominent members of Google’s security/Chrome teams just blacklisted a Verisign intermediate (“VeriSignClass3SSPIntermediateCA”) — perhaps suggesting it was enabling MITMs for Google properties.

See the HSTS commit:

Note that the associated bug is private:

There’s a good explanation of the “bad_static_spki_hashes” parameter here:

Chart September 13, 2013 12:28 PM

Equifax and Geotrust are the same CA (Equifax root was bought by Geotrust, which was then bought by Verisign, and is now part of Symantec).

Brian M September 13, 2013 12:42 PM

@Mike the goat:

When was the last time you audited any driver code? As a system administrator, not as a developer. I have been working with this hardware since the controllers were separate MFM and RLL controller cards. Give some thought to what you just wrote. The drive isn’t going to be “squirreling away” any data, because the drive can’t tell one kind of data from another. All sectors get overwritten, some more than others. Any drive with a swap partition would go absolutely nuts trying to hoard data. The reason that “shredding tools” do multiple overwrites is so that the residual magnetic signature from the previous data on the disk is erased. Detecting that residual signature is not something that normal drive electronics can do; it has to be done with specialized equipment. The normal drive electronics and controller code is not going to go back and say, “Oh, this must be valuable because it’s being scrubbed, so I’ll copy it off for the NSA to review.” One overwrite is enough to erase it so the OEM electronics won’t detect it.

The only way to tell if a drive’s built-in erasure is effective is to hook up rather specialized equipment to view the read head’s signals. Yes, you can buy that equipment from Tektronix or HP, and it costs more than a new car. I have worked with that stuff in an OEM hardware lab.

The real question is, who wants that customer’s data? Is it a national agency, or not? If it’s not the NSA, CIA, etc., then a normal erasure is fine. Somebody dumpster diving is not going to go through those lengths to pull data off the drive.

As for eavesdropping on computers: The data has to go somewhere. If there is no data traveling back out of your computer to someplace you don’t know, then there’s nothing to worry about. It’s easier to eavesdrop on a monitor than a computer. The SSL and TPM hardware isn’t a threat vector. A rootkit is a threat vector, and it’s easier to drop a rootkit on a box than bother with the hardware manufacturers.

Cryptoki September 13, 2013 12:51 PM

The Diginotar hint is significant and should not be bandied about without more substance. Diginotar provided outsourced PKI to the Dutch Government. So the US attacks allies now? This was an outright criminal attack – not just exploiting a vuln or enabling a backdoor. And it is alleged that the same Ichsunx attacker was involved in hack attempts against Comodo and StartSSL.

Mike the goat September 13, 2013 1:10 PM

@Brian – I don’t claim that this would even be viable; I was more just thinking aloud about what threat vectors could be lurking in our hardware. If the microcontroller were smart enough, it could conceivably use file-carving heuristics to target a particular file format deemed interesting. I don’t think the drive would actually do this, but conceivably it could. Why would they do this? I don’t know.

I was more or less brainstorming on the revelation that the NSA modified consumer hardware and what one could conceivably do. I think subverting a hardware RNG would be the cheekiest way of weakening any crypto.

Of course, we know that Linux, for example, only uses RDRAND as one source of entropy, and /dev/random isn’t directly sourced from RDRAND (although there was heated discussion about doing just that to get higher throughput).

As for me, I am just a sysadmin and occasional dev and I am not a crypto geek.

Mike the goat September 13, 2013 1:32 PM

Stanislav: I like that idea. Perhaps if it was NTFS-aware it could serve up a modified rpcnet.exe on first read (e.g. during boot) that had the rootkit inside and then loaded the real rpcnet, just like Dell’s laptop BIOS does with Computrace.

Your mention of a 20-line one-time pad in Perl reminded me of the people who got that RSA implementation in Perl tattooed on them so they could make a statement about the then-current US crypto law. I think about five years later they were doing it with DeCSS. Always going to be something, eh?

Stanislav Datskovskiy September 13, 2013 1:38 PM

Mike the goat: the nearly-universal disdain for the OTP (including our gracious host’s) puzzles me. There are situations where it fits, and it is bulletproof “when used as prescribed.” IMHO, if you can use it, you should. Let the snoops choke on gigs of entropic garbage.

Mike the goat September 13, 2013 1:51 PM

Stanislav: agreed. You hear many say “don’t use asymmetric key exchange”, but the fact is, if you are going to meet physically to share your key then you may as well share a BD-ROM full of pads and have 22 GB of perfectly secure communication. That’s a lot of text!

Simon September 13, 2013 1:56 PM

Encrypting random data does not make it more “random.” That’s crazy – why do people keep saying stuff like this?
In fact, if you used AES256 (or any other cipher) to encrypt a large file of noise, and even used noise to create a strong key and IV, the entropy of the ciphertext would fall, because structure was introduced into it. How in the world could it NOT?
You should use large amounts of data to measure it properly. There are a lot of different tests you can run, but the result is the same. Every comment about attacking block ciphers mentions only attacks against keys, not the ciphertext itself. That’s because it seems easier.
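The simplest of the measurements Simon mentions is a first-order (byte-frequency) Shannon entropy estimate. A minimal sketch; note this only captures single-byte statistics, which is why serious randomness testing uses whole batteries of tests:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """First-order Shannon entropy in bits per byte (maximum 8.0).
    Only meaningful on reasonably large samples."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(byte_entropy(bytes(range(256))))  # 8.0: every byte value equally likely
print(byte_entropy(b"\x00" * 1024))     # 0.0: no uncertainty at all
```

On large samples, good ciphertext and good noise both measure close to 8 bits per byte by this test, so distinguishing them requires far more sensitive statistics.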
Here is one way to create all the noise you will need

nameless September 13, 2013 1:57 PM

@Douglas Knight

It looks like they are not at Wikileaks as of yet (unless maybe I ended up on some NSA-approved version of Wikileaks).

BTW Wikileaks has an interesting article about Google connections in Washington:

Looking at how the NSA operates, they probably have people in Anonymous as well.


From the language, the content, the psyche, it was not NSA.

Sorry for laughing, but if I worked for the NSA and we got that sort of reaction from a reader because of something my semantic-analysis specialist colleagues had put together, we would have considered the mission a success. You cannot be certain that your conclusion is correct even if that “Iranian self-taught hacker” had been your personal friend. He may still have worked for the NSA.

simpson September 13, 2013 2:51 PM

By now you’ve heard about the ADE 651.


Also see

Study this carefully, because it says a LOT about security. How can so many people think that open source is categorically safer JUST BECAUSE it’s open? It’s because they think the other guy is looking at it; surely all those people can’t be wrong?

And why do you keep saying “don’t worry about breaks, 2048-bit SSL is fine, and you can always just increase the key length”? Aaronson said it’s so simple, just increase the key length by 10. So this is probably a good time to start coming back to reality. Besides the implementation and software-development nightmare, and all the nonsense people are spewing about OpenSSL…

DON’T FORGET that if you exchange private keys and your stuff gets stored away somewhere, a few years from now it won’t matter how long you made the keys. Let’s tell people then, to just go back and increase the key length used in all the stuff that has been archived.

Nick P September 13, 2013 2:58 PM

@ Brian and mike the goat

Re attack via hard drive firmware

You’re getting warm. Remember, “subversion is the attackers tool of choice.” (Schell) You are focusing on what they can embed in the hard disk itself. Remember that they have pervasively compromised protocols and systems as support for their activities. Let’s call it “offence in depth.” 😉

So, they like subversion and we want to know what a HD’s onboard software can do. Combine them, then the better attack methodology is programming the HD to use its DMA, drivers or other trusted access to subvert the system. This might be as simple as inserting a tiny trap door and modifying the networking stack to pass TCP requests through it. Upon a tiny unique trigger (maybe in TCP fields), it puts payload from that source into memory w/ kernel or admin privileges. Attacker goes from there.

The attack code needed to do this on the HD itself would be totally unnoticeable in normal operation. The key enablers, such as DMA, are also normal. It would be a tiny piece of code, optionally inserted on a tiny subset of PCs (precedent: Stuxnet), that could allow data from an untrusted source to enter the system and assume privileged status. It would be a blip on defenders’ radar that they’d miss entirely. Probably.

Improvement: use a chipset that can covertly receive wireless commands that are unique but blend into the background a bit. Then you can activate the subversion, have the operative do their work, and remove it. That reduces exposure at the system level just like that.

Next improvement: Over-the-air update or command erases the subverting tool from the firmware (and its flash) in certain emergency scenarios.

Mike the goat September 13, 2013 3:12 PM

Nick: given the market share of Microsoft Windows in the domestic market, you could safely make assumptions regarding OS choice. Take the Computrace ‘trojan’ in Dell laptop BIOSes – the example I gave before. There would be nothing stopping a rogue BIOS from doing whatever the hell it wanted. Conversely, I think a rogue HDD could indirectly do whatever it wanted – if not via direct physical memory access (think of a FireWire-connected disk having DMA), then it could certainly inject arbitrary code where it’s likely to get executed (the MBR is probably too overt; I would prefer to pick some obscure driver Windows always loads on startup – we know that they have keys to digitally sign them 😉).

What about microcode on x86? It is encrypted by Intel, completely proprietary and without scrutiny. AMT? etc…

Tim L September 13, 2013 3:26 PM


What would be the point of executing an *active* attack (MITM) and risking exposure against non-SSL traffic?! One would just listen in on the unencrypted traffic passively (as NSA et al. are now known to do extensively), no MITM shenanigans necessary.

Surveillance is only half of the control loop.

MITM is required to be able to shape Google search results for psychological subversion purposes.

z September 13, 2013 3:31 PM

@Gervase Markham

I should have worded my post more cautiously. I don’t have any hard evidence myself; I’m going off of a conversation I had with an LEO who said they had implemented MITM attacks using SSL intercepting hardware to display their own valid certs signed by legitimate CAs. I trust him and he has no reason to lie about it in this way, but I’m afraid that’s all I can offer. It does fit with the pattern of the government coercing companies to do their will, and I certainly would be shocked if the NSA did not try to get a CA to do this, but again, I have no evidence.


Yes, it is possible for an attacker to feed you fake results, but that would require the attacker to go after you specifically. Mass MITM attacks (like SSL decryption hardware parked in front of Google’s servers gobbling up everyone’s SSL traffic) would be detected though, because it would be infeasible to alter the results of every single Perspectives user. Even if they did something to the notaries themselves to make them feed the results they want, they have no control over who runs a notary. IIRC anyone can set one up whenever they want and you can delete any notary you decide you don’t trust from your list.

Moxie Marlinspike based his Convergence tool on Perspectives. It works similarly, but I couldn’t get it to work in my browser and have been too lazy to figure out a solution. The main difference is that Convergence caches the result of the first lookup and alerts you when the key your browser sees differs from the cached key for that site. This is faster than Perspectives, which checks every time. Perspectives gives you a nice graph of the key history over time, though.
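The multi-notary idea z describes can be sketched in a few lines of Python. This is only a toy model, not the actual Perspectives protocol or its API: the “notaries” here are plain dictionaries mapping hostnames to the certificate fingerprint each vantage point last observed, and the quorum threshold is an invented parameter.

```python
def check_cert(hostname, observed_fp, notaries, quorum=0.75):
    """Compare the fingerprint our browser saw against what independent
    notaries report for the same host. A mass MITM parked in front of
    the server would have to fool every notary; a targeted MITM near
    the client fools none of them."""
    reports = [n[hostname] for n in notaries if hostname in n]
    if not reports:
        return "no data"
    agree = sum(1 for fp in reports if fp == observed_fp)
    return "trusted" if agree / len(reports) >= quorum else "MITM suspected"

# Three hypothetical notaries at different network vantage points.
notaries = [
    {"example.com": "aa:bb:cc"},
    {"example.com": "aa:bb:cc"},
    {"example.com": "aa:bb:cc"},
]

print(check_cert("example.com", "aa:bb:cc", notaries))  # trusted
print(check_cert("example.com", "de:ad:be", notaries))  # MITM suspected
```

Anyone can run a notary, which is the point: the attacker cannot know in advance which vantage points a given user will consult.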

Timothy September 13, 2013 3:36 PM

Bruce, are you not a little scared of getting arrested over all this disclosure? It’s fascinating to find out about the intrusive goings-on of the secret services, and I love your analysis of it all, but I’m rather concerned for your safety.

nameless September 13, 2013 3:42 PM

@Tim L

MITM is required to be able to shape Google search results for psychological subversion purposes.

Subversion, yes, and not necessarily only in the USA. The previously linked article Op-ed: Google and the NSA: Who’s holding the ‘shit-bag’ now? says the following:

Documents published last year by WikiLeaks obtained from the US intelligence contractor Stratfor, show that in 2011 Jared Cohen, then (as he is now) Director of Google Ideas, was off running secret missions to the edge of Iran in Azerbaijan. In these internal emails, Fred Burton, Stratfor’s Vice President for Intelligence and a former State Department official, describes Google as follows:

Google is getting WH [White House] and State Dept support and air cover. In reality they are doing things the CIA cannot do…[Cohen] is going to get himself kidnapped or killed. Might be the best thing to happen to expose Google’s covert role in foaming up-risings, to be blunt. The US Gov’t can then disavow knowledge and Google is left holding the shit-bag.

Malcolm Digest September 13, 2013 3:45 PM

Re: hostile hard drives:

By using host-based full-disk encryption you will prevent the HDD from tampering with your system. Outside of the MBR, nothing the drive sees will be in the clear. Anything it wants to inject will fail to decrypt, or will be mangled by decryption to the point of unusability.

There’s still a chicken and egg problem of getting enough OS installed to enable FDE, but that seems like a solvable problem.
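Malcolm’s point about injected data being mangled by decryption can be demonstrated with a toy cipher. Real FDE uses something like AES-XTS; the tiny hashlib-based Feistel network below is only a stand-in to show the avalanche effect: flip one ciphertext bit and the entire decrypted block becomes garbage, not executable code.

```python
import hashlib

def _round(key, i, half):
    # Round function: a keyed hash truncated to the half-block size.
    return hashlib.sha256(key + bytes([i]) + half).digest()[:8]

def encrypt_block(key, block):
    # 4-round Feistel over one 16-byte block (a toy, NOT a real FDE cipher).
    l, r = block[:8], block[8:]
    for i in range(4):
        l, r = r, bytes(a ^ b for a, b in zip(l, _round(key, i, r)))
    return l + r

def decrypt_block(key, block):
    l, r = block[:8], block[8:]
    for i in reversed(range(4)):
        l, r = bytes(a ^ b for a, b in zip(r, _round(key, i, l))), l
    return l + r

key = b"disk-encryption-key"
plain = b"MBR code here..."               # one 16-byte "sector"
ct = encrypt_block(key, plain)
assert decrypt_block(key, ct) == plain

# A hostile drive flips a single ciphertext bit before returning it:
tampered = bytes([ct[0] ^ 0x01]) + ct[1:]
print(decrypt_block(key, tampered))       # 16 bytes of unrelated garbage
```

Because decryption is a bijection, any tampered ciphertext decrypts to something other than the original plaintext; with a proper block cipher, it decrypts to something the attacker cannot control.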

Andrew September 13, 2013 3:45 PM

A decade ago I warned a company I worked for that enabling SSL for email connections (via Outlook or other dedicated email reader) did not prevent MITM unless the 100+ CAs were culled from our employees’ browsers. No one listened. At the time, they were worried about foreign espionage by a friendly ally and assumed this took care of it. Ten years later, we have a potential example of our own government doing it.

Craig, I told you so.

Ahhh, so, satisfying.

Sorry, folks, I had to do that.

Leon S September 13, 2013 3:48 PM

@Brian M

I agree that some aspects of Mike the Goat’s hypothetical scenario seem less than plausible, but it would certainly be possible to design firmware that could squirrel away certain kinds of information without causing a huge amount of thrashing. Instead of moving information around, you just maintain a mapping of logical sectors to physical sectors and don’t overwrite data you want to keep. How do you think ZFS supports snapshots efficiently?
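Leon’s logical-to-physical remapping can be sketched as a minimal copy-on-write translation layer (illustrative only; real drive firmware and ZFS are far more involved). Each write goes to a fresh physical sector and only the map changes, so “overwritten” data quietly survives:

```python
class CowDisk:
    """Toy copy-on-write sector layer: overwritten data is never
    destroyed, only unmapped, so hostile firmware could retain it."""
    def __init__(self, physical_sectors):
        self.store = {}          # physical sector -> data
        self.lmap = {}           # logical sector  -> physical sector
        self.free = list(range(physical_sectors))

    def write(self, logical, data):
        phys = self.free.pop(0)  # always allocate a fresh physical sector
        self.store[phys] = data
        self.lmap[logical] = phys

    def read(self, logical):
        return self.store[self.lmap[logical]]

disk = CowDisk(physical_sectors=8)
disk.write(0, b"secret v1")
disk.write(0, b"secret v2")      # logical overwrite...
print(disk.read(0))              # b'secret v2'
print(disk.store[0])             # ...but v1 still sits in sector 0
```

The host sees a perfectly normal drive; only the firmware knows that the old physical sector was never reclaimed.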

Peter Bright September 13, 2013 3:49 PM

Another screenshot implies is that the 2011 DigiNotar hack was either the work of the NSA, or exploited by the NSA.

It really doesn’t imply that. I think the much more likely explanation (especially given that we know how DigiNotar-originated fraudulent certificates were used) is that it’s an NSA presentation explaining that attack, or using that attack as a reference.

DNS666 September 13, 2013 3:52 PM

@Tim L

Well, yes. However, as far as I am aware (a) the actual case we’re discussing here is about surveillance not subversion and (b) so far we have not seen anything pointing to such PSYOPS in the Snowden leaks.

Malcolm Digest September 13, 2013 3:57 PM

Stanislav Datskovskiy: True. Though to the point of the MBR being the crack that lets a hostile HDD in, you could put your boot blocks on a USB thumb drive or MicroSD card. That way the hard drive never gets a chance to substitute data. You could also source a dozen different USB thumb drives and pick one at random, as well as rewrite your boot loader to be equivalent but different enough that it’s improbable any microcontroller on the USB stick could recognize the code. Alternatively, you could modify your BIOS to include the decrypting boot block in the BIOS itself.

At some point, though, you have to trust your hardware or else not use it.

Leon S September 13, 2013 3:58 PM

Alternatively, if you have twice the number of platters you need, it would be possible to copy one platter to another very efficiently.

There are probably a hundred and one ways to surreptitiously store data without affecting the performance characteristics of the drive too badly.

Nick P September 13, 2013 4:09 PM

@ Mike the goat

“Nick: given the market share of Microsoft Windows in the domestic market you could safely make assumptions regarding OS choice. Take the Computrace ‘trojan’ on Dell laptop BIOS’s – the example I gave before. There would be nothing stopping a rogue BIOS from doing whatever the hell it wanted. Conversely I think a rogue HDD could indirectly do whatever it wanted – if not via physical direct access (think a FireWire connected disk having direct memory access) then it could certainly inject arbitrary code where it’s likely to get executed (and the MBR is probably too overt; I would prefer to pick some obscure driver windows always loads on startup. We know that they have keys to digitally sign them ;-)”

Yep, you’re getting it. 😉

@ Malcom Digest

FDE from the host is a good idea to protect data at rest and maybe reduce leaks. It doesn’t protect against subverted hard disks unless the subversion only reads incoming filesystem data. As Mike and I discussed, there are many ways for a HD to hit a system that require more thorough protections. In your scheme, our DMA- or driver-based attacks could do an end run around the whole thing. If I didn’t want to inject code, I’d just grab the key material from memory via privileged access (e.g. DMA) and store it with the encrypted files in the hidden disk location. Game over.

Dealing with untrustworthy hardware

Untrustworthy hardware is a big problem due to the high integration of CPU, memory, and IO devices in COTS systems. They’re designed to share for efficiency. Sharing often = compromise in security. So, isolations and info flow restrictions must be imposed on each device you don’t trust. The market isn’t exactly pouring out such options for us, either.

My solution has always been to fight the integration, going back to plug n play components. Divide security- or performance-critical functions into dedicated physical components. These have isolated local memories, optionally a shared global memory, and a high speed comms bus. The security protections would be spread out between the components, memories and bus, each doing the part they were good at.

We’ve seen the logical version of this for security. We’ve also seen the physical version in safety-critical systems, rarely in security critical systems, and in NUMA-style machines like SGI’s or mainframes for both performance and reliability. I encourage hardware designers to explore opportunities there. It seems like it would be the low hanging fruit for providing basic hardware protection against malicious devices. It seems, anyway.

Mike the goat September 13, 2013 4:13 PM

@Leon – yes, that’s sort of where I was going with the 1TB logical drive with 2GB physical space. If you were doing as you suggested and merely remapping, you would have heaps of unallocated space to play with. An ATA_SECURE_ERASE command could go ahead and thrash the disk for a while, then add all of the blocks it would have actually erased to a table, so that when the disk is inspected they appear zeroed. You could even detect common write patterns indicative of software-level erasure tools (e.g. GNU wipe).
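Combining that with the remapping idea, hostile firmware would only need to remember which sectors should *read* as zeroed. A hypothetical sketch (none of these names correspond to any real firmware interface):

```python
class SneakyDrive:
    """Toy model of firmware that fakes a secure erase."""
    def __init__(self):
        self.store = {}          # sector -> data actually retained
        self.erased = set()      # sectors that must *report* as zeroed

    def write(self, sector, data):
        self.store[sector] = data
        self.erased.discard(sector)

    def secure_erase(self):
        # Pretend to wipe: keep the data, just remember to lie about it.
        self.erased.update(self.store)

    def read(self, sector):
        if sector in self.erased:
            return b"\x00" * 512          # looks wiped to forensic tools
        return self.store.get(sector, b"\x00" * 512)

d = SneakyDrive()
d.write(7, b"key material" + b"\x00" * 500)
d.secure_erase()
print(d.read(7)[:4])     # b'\x00\x00\x00\x00' -- appears erased
print(d.store[7][:3])    # b'key' -- still there for whoever asks nicely
```

From the host’s side this is indistinguishable from a genuine erase, which is exactly why on-site degaussing and physical destruction exist.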

@Malcolm – this wasn’t put out as a legitimate attack, merely a hypothetical concept to point out that without secure hardware we cannot have secure software.

This needs to change. Has anyone noticed that so many vendors, particularly of WiFi chips, only provide binary blobs for Linux? I know they probably have commercial reasons, NDAs (or maybe the FCC, in the case of software-defined radios possibly being able to be reprogrammed or retuned), but this isn’t acceptable.

NSA Shill September 13, 2013 4:24 PM

Bruce Schneier,

Though I strongly disagree with your opinion and your methodology, I do have a lot of respect for the fact you haven’t deleted comments that disagree with you or question your reasoning.

-NSA Shill

Gregory Schlomoff September 13, 2013 4:40 PM

One document [1] published by Fantastico, apparently taken from an NSA presentation […]

Another screenshot [2] implies is that the 2011 DigiNotar hack was either the work of the NSA, or exploited by the NSA.

I doubt that those two documents are original slides or screenshots from NSA material. They are both written in the familiar rounded font that Globo uses for all its text [3].




Mike the goat September 13, 2013 4:47 PM

This thread is fantastic! Anyway, moving on…

@Nick – I agree with everything you’ve mentioned. I try to maintain a secure OS. I compile my FreeBSD builds from source after I have verified the hashes published by the project. I don’t use any precompiled binaries. Assuming I trust the project, and I trust the compiler not to do anything underhanded, I can have a medium level of assurance that my software hasn’t been modified by a third party (obviously all bets are off if there is a rogue in the core FreeBSD team a la the OpenBSD IPSEC saga, or their CVS/SVN/git repo is compromised and nobody notices, or some such other event).

Even if I somehow audited every single line of code (this would be humanly impossible with FreeBSD but doable with MINIX or an embedded RTOS), I still cannot be certain my BIOS, IPMI interface, etc. are not compromised.

This is the problem with closed hardware, and I don’t think it is going to get any better with TPM, UEFI, Secure Boot, etc. adding further complexity to the already unnecessarily complicated x86 architecture.

Stanislav Datskovskiy September 13, 2013 5:05 PM

Stallman’s dream might become a reality: there is, at last, at least the theoretical possibility of a market for clean hardware and software.

But, to be useful, it must be clean not only of ‘binary blobs’, but of all blobs which leave open the possibility of a Ken Thompson attack. That means: free of legacy operating systems, compilers, etc. – and hardware.

And even this will be a waste of time if the resulting machine has the brain-destroying complexity of ‘wintel’ – where one could easily hide an elephant. I expect that several such ‘elephants’ will be found in due course.

Malcolm Digest September 13, 2013 5:20 PM

Nick P: DMA shouldn’t be a worry as modern machines have IOMMU which will prevent a random device from performing arbitrary DMA transfers. If you’ve got a rogue driver running on the host, then it doesn’t matter whether your HDD is malicious, you’re already screwed.

Mike: I’m sure you’ve seen this, but it’s not exactly theoretical.

Mike the goat September 13, 2013 5:25 PM

@Malcolm – wow… I knew it was theoretically possible but didn’t think someone had successfully exploited it. Fantastic work.

RobertT September 13, 2013 7:38 PM

Somewhat off-topic:

The most fun I ever had with system design was when I was tasked with creating a module that would monitor and interfere with the operation of a section of hardware that we had reason to believe contained an intentional backdoor. Needless to say, the hardware that I added needed to be invisible to any adversary, including one with full access to the design database. The purpose of my modification was to make the output of this block appear to fail intermittently, requiring a reset. Through this process we wanted to see the manner in which this block got set and reset, and thereby reveal the depth of our adversary’s control. We also added some notification scripts that were triggered whenever our design databases (for this block AND adjacent hardware functional blocks) were accessed; think of this as our very own “honey pot” for anyone interested in discovering what was wrong with this block. The whole exercise was very revealing!

urban myth September 13, 2013 8:29 PM


Google is getting WH [White House] and State Dept support and air cover. In reality they are doing things the CIA cannot do…[Cohen] is going to get himself kidnapped or killed. Might be the best thing to happen to expose Google’s covert role in foaming up-risings, to be blunt.

…that could perhaps explain why Eric Schmidt, the Google ex-chairman, traveled to North Korea…

Anonymous Coward September 13, 2013 8:55 PM

@Nick P

You can weaken/compromise the security of FDE with appropriate exploits, even without DMA. Clive enlightened us about this a week ago:

It looks like there are very good reasons the US Gov’t (and other security-conscious entities) will degauss hard disks on site and then ship them off for complete destruction (with a trusted chain of custody). The NSA anticipated (and could very well have exploited) the methodology employed in Malcolm’s link, and the malicious possibilities.

Clive Robinson September 13, 2013 9:11 PM

@ Stanislav Datskovskiy, Mike the goat,

    … the nearly-universal disdain for the OTP puzzles me. There are situations where it fits, and it is bulletproof “when used as prescribed.” IMHO, if you can use it, you should.

Almost every time I see someone recommend OTPs, it turns out they have not used them in anger, and when they do, they discover just how much of a problem they are even when used in a calm, unpressurised environment.

As the Russians found, OTP use goes horribly wrong due to assumptions about Keying Material (KeyMat) handling / management (KeyMan).

Firstly, putting big gobs of OTP on a CD/DVD sounds great until you start thinking about “end run” attacks on the “end points”. If you don’t have an “air gap”, simple malware using any one of thousands of “zero days” will slurp your OTP down the nearest comms channel while you are dunking a doughnut in your coffee and reading / writing the latest message. And please don’t think an “air gap” is going to protect you: to read a message, the ciphertext has to cross the air gap somehow, and ages ago I worked out how to make this bi-directional trip with malware on the favoured file-shifting device, the USB thumb drive.

The cardinal rule with OTPs is not, as most people think, “only use the keymat once”; it’s “destroy the keymat immediately after it’s been used”. You cannot do this with a CD/DVD, so you cannot use one. Likewise you cannot do it with a flash drive, and ordinary HDs are not sufficiently reliable either, nor for that matter is any magnetic media.

If you don’t destroy the keymat immediately after using it, then malware will, as I’ve said, rob you of it.

The simplest form of OTP is a stream cipher, and thus the OTP suffers from nearly all the failings of using a stream cipher, which are many (bit flipping, no authentication, etc.).

One serious issue with stream ciphers, and thus OTPs, is “run length”: you cannot have a run of more than around 30 bits of all zeros, all ones, or any other repeating pattern, or the plaintext statistics will show through. This is especially worrisome where you have known-format plaintext at the beginning or end of a message.
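The mechanism behind the run-length concern is easy to see in code: an OTP is just XOR, so wherever the pad contains a run of zero bits, the ciphertext bytes are the plaintext bytes verbatim. A minimal sketch (the deliberately bad pad is contrived for illustration):

```python
import os

def otp(data, pad):
    # One-time pad is just XOR: a stream cipher with a random keystream.
    return bytes(d ^ p for d, p in zip(data, pad))

plaintext = b"FROM: STATION X"
# A pad with a 48-bit run of zeros in the middle (a TRNG failure mode):
bad_pad = os.urandom(5) + bytes(6) + os.urandom(4)

ct = otp(plaintext, bad_pad)
print(ct[5:11])   # b' STATI' -- plaintext shows through the zero run
assert otp(ct, bad_pad) == plaintext      # decryption still works fine
```

The message decrypts perfectly, but an eavesdropper who sees the ciphertext gets six bytes of the plaintext for free wherever the pad went flat.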

Then there is the question of just how long it’s going to take to generate all that Truly Random KeyMat.

Whilst you can buy or build a TRNG, you really, really have to know what you are doing.

Further up this blog is a link to a design using a GM tube and radiation source.

I’ve designed and built TRNGs using GM tubes, and they and their associated electronics are very susceptible to interference from the surrounding circuitry or any stray EM fields that happen to cross the TRNG analog electronics.

I don’t have the link handy, but two members of the UK’s Cambridge Labs took a quite expensive 32-bit TRNG, designed and built by an internationally reputable firm, and subjected it to an unmodulated microwave EM field. The output range of the generator dropped from 32 bits to a little over seven bits. If they had used a modulated carrier, the chances are they could have synced the TRNG to the modulation applied to the EM carrier. Back in the 1980s I was doing similar things, not to TRNGs but to digital wallets and handheld gambling devices…

If you feel the need to use an OTP, I would suggest you write it out on paper and use each line to do a key-transfer protocol of some form, to be used with a high-grade block cipher.

Wesley Parish September 13, 2013 9:54 PM

@Stanislav Datskovskiy

Something RISCky? There’s a ton of Verilog and VHDL code out there. But as I’ve said previously, you’d need to check the postprocessing – the postprocessor is usually closed-source.

It wouldn’t be impossible to design a chip fabrication machine, just difficult; the actual circuit layout is essentially toolpath work, and that is hardly a secret, being the basis for CAM software. Not expressible in G-code, naturally, because G-code’s a machining language, but the essence is the same.

Whether any such open source chip fabrication machine would ever get built is another matter, and depends very much on great ape politics.

Mike the goat September 14, 2013 12:20 AM

@Clive – I agree with you partially. Some of the disadvantages you discuss would be just as applicable to a symmetric block cipher. E.g., if they have physical access (black-bag cryptanalysts), then they could conceivably use forensic techniques to obtain the cleartext from your HDD (many people write the cleartext file to the HDD, gpg it, then software-erase the original – if we are talking nation-state capabilities, recovery may be conceivable), or they could go after your secret key directly. Of course PGP tries to prevent this with a keychain passphrase, but there is no reason why you can’t implement the same with your OTP media (e.g. symmetrically encrypt your padbooks, preferably each ‘chunk’ individually, and keep the key to each chunk on flash paper). Sure, /if/ they get physical access you are no more secure than you would be using a block cipher directly, but if they have physical access to your residence then chances are it is game over anyway and they will coerce the data out of you directly.

Re a DVD being unerasable – perhaps you could use a DVD-RW or BD-RE and somehow disable wear levelling when packet writing so you can selectively overwrite those sections. Even easier would be to use ‘tracks’ and simply use the burner’s erase function on that track, then write random data over it (just in case we don’t trust the burner), then erase again. You could probably visually verify erasure if the tracks are big enough, at least for, say, CD-RW media.

On another note, does anyone here use OPIE for authentication on their Unix machines? I used it on FreeBSD years back and it worked flawlessly. I transitioned to ssh certificates only for convenience but the implementation never failed to work.

Nick P September 14, 2013 1:48 AM

@ Anonymous Coward

“You can weaken/compromise the security of FDE with appropriate exploits, even without DMA. Clive enlightened us about this a week ago:”

“It looks like there’s very good reasons the US Gov’t (and other security-conscious entities) will de-gauss hard disks on site, and then ship them off for complete destruction (with a trusted chain of custody). The NSA anticipated (and could very well have exploited) the methodology employed in Malcom’s link, and the malicious possibilities.”

That was about SSDs. Without them, and with the right crypto setup, you’re in pretty good shape. Clive and I worked out the requirements here a few years back for a very secure inline-media encryptor [using COTS hardware] for hard drives. The whole point of such a design is that you don’t have to worry about the HD going into enemy hands. And NSA has one of these…

I certainly referenced it in my discussions with Clive. Data goes in at up to TS/SCI and comes out as ciphertext that’s considered “unclassified” and “secure” when “at rest.” I’d say they’re not worried about recovery. The trick is that they engineered a high-assurance crypto system to do it. And all the specs and requirements are laid out for you right on that page. People wanting to beat NSA snoops out of their encrypted volumes: just consider that a little present for you. 😉

Re hard disk destruction

They mainly do that, IMHO, because most devices and software DOD uses aren’t high assurance. It’s COTS crap that might leak plenty of stuff. It’s anything but trustworthy. That includes many encryption products for Windows and such. It’s probably better to destroy any storage for such stuff after use, just in case. However, the NSA IME should show that if you use the high-security approach, it’s not strictly necessary.

Exception: unless you use an SSD, as some people here claimed. I’ll take their word for it. I got along for years without one, so I won’t miss them.

Revision: “COTS crap that might leak plenty of stuff” + “COTS crap they subverted to leak plenty of stuff.” 😉

@ Malcom

“DMA shouldn’t be a worry as modern machines have IOMMU which will prevent a random device from performing arbitrary DMA transfers. If you’ve got a rogue driver running on the host, then it doesn’t matter whether your HDD is malicious, you’re already screwed.”

IOMMU is one of the answers to dealing with rogue DMA. However, IOMMUs are mainly on the complicated x86 chipsets with plenty of subversion potential (SMM, AMT, NSA…). The chips with low subversion risk often don’t have IOMMUs. It’s two options I’ve been going back and forth between for a while. I think another commenter had the right idea with OpenCores. I’ve pushed that too. An open IOMMU and processor spec designed for simplicity/security would be ideal. Open peripherals too.

And yes, with rogue drivers, you are certainly screwed. It’s why I mentioned them. I can’t even recall when malware started posing as drivers or admin programs, it’s been so long now. And yet only in recent years are we seeing usable solutions to the [security] problems drivers pose.

Jarda September 14, 2013 2:11 AM

Now, if a random cybercriminal did all that, and did some wee industrial espionage, and got caught for it, he’d get at least 20 years of jail in the USA. If he wasn’t from the USA, the US administration would be screaming “extradition”. But if the NSA does it, however illegal it is, everything is in perfect order, and the worst that will happen is they’ll make the guy whose fault caused the leak pay. Laws are just for the commoners; some people are above the law.

Mike the goat September 14, 2013 6:25 AM

@Nick – I assume the SSD is deemed an unacceptable risk because it cannot be reliably and provably erased (well, at least not as easily as by degaussing), and not just because of the wear leveling implemented by the flash controller.

The forensics magazine I occasionally get had an advertisement for an interesting device – basically a briefcase-like UPS. If the target computer was on a power strip, you would plug a cable into the power strip, and upon removal from the wall socket its logic would kick in and supply power quickly enough to prevent the machine from powering off. They also had Scotch-like connectors so that, if the PC was plugged directly into the wall socket, you could carefully unscrew the wall socket and, I guess, take their wall socket with you.

It was reportedly for allowing you to take a FDE enabled PC back to your office where you can do a cold boot attack at your leisure.

Crazy times we are living in. But we knew this was coming for a long time.

jon September 14, 2013 7:10 AM

I am horrified and disgusted by CP, but if the Republic extradites him to the US, I hope the Irish people understand and accept that it will be a death penalty, which I am pretty sure most people in the Republic are against. As am I. I’ve been a victim of child abuse and have no sympathy for anyone, but I don’t believe that hosting servers containing CP content warrants the kind of prison sentence that will be handed out in the USSA. He will either spend his life in solitary or be murdered by a guard or fellow inmate in one of our PRIVATE prisons.
To my Irish friends: punish who you will, the way you will, but never let your Govt let anyone ever be extradited to the US to enter into our only real growing business, the PRIVATE PRISON system.

Marcio Lima September 14, 2013 8:32 AM

Gregory Schlomoff • September 13, 2013 4:40 PM

The Globo icon is indeed in the two slides you mentioned, but this does not imply that the documents are not real. I suppose Globo did not want the slides used by a third party without giving credit to the Fantastico program.

Marcio Lima September 14, 2013 8:46 AM

The question in Brazil now is whether the NSA stole sensitive information about the pre-salt oil reserves. The Congress is even discussing a law to forbid American companies from participating in the bid for the exploitation of the pre-salt oil reserves, and the acquisition of 35 F-16s for the modernization of the Air Force is certainly lost. The Chinese (oil) and the French (fighters) are very grateful to Keith Alexander.

Mike Acker September 14, 2013 8:51 AM

this excellent discussion of SSL/x.509 touches on the key issue in two spots.

the key issue is that the existing CA structure is extensive, presenting a large attack surface, and is not under user supervision, as described for public key encryption in Phil Zimmermann’s original essay.

users should be offered an option to generate a key-pair at any point when they are editing their account credentials. once this is done they should then have the option to mark CA signatures as marginal trust. this would then require an additional signature to promote a certificate to full trust, and the user would be able to do this using the keypair he generated as part of his logon credentials.

i note we are using https here on Bruce’s site. marginal trust will be enough for this. but for anything dealing with money – Amazon, TurboTax, a credit union, etc. – full trust would be needed.

this means i need to get a copy of the certificate i want to trust fully from a separate source – one that i consider reliable, such as my credit union. i would then verify the fingerprint and sign their certificate.

a bit more than we do now? yep, but for a society that loves computers i don’t see it as a game stopper, particularly if it is introduced as an optional process.
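The two-tier scheme above could be sketched as a small trust store: a CA signature alone grants marginal trust, a user countersignature promotes it to full, and money-handling sites require full trust. All names and the numeric levels are illustrative, not any existing API:

```python
MARGINAL, FULL = 1, 2

class TrustStore:
    def __init__(self):
        self.level = {}            # cert fingerprint -> trust level

    def import_ca_signed(self, fp):
        self.level[fp] = MARGINAL  # a CA signature alone: marginal only

    def user_countersign(self, fp):
        # The user verified the fingerprint out-of-band and signed it.
        if self.level.get(fp) == MARGINAL:
            self.level[fp] = FULL

    def usable_for(self, fp, purpose):
        need = FULL if purpose == "financial" else MARGINAL
        return self.level.get(fp, 0) >= need

store = TrustStore()
store.import_ca_signed("ab:cd")
print(store.usable_for("ab:cd", "blog"))       # True -- marginal suffices
print(store.usable_for("ab:cd", "financial"))  # False -- needs full trust
store.user_countersign("ab:cd")
print(store.usable_for("ab:cd", "financial"))  # True
```

A compromised or coerced CA can then, at worst, put a certificate into the marginal tier; it cannot reach full trust without the user’s own key.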

i think it is important to note that much of the process we try to use to authenticate transactions is simply our old pen-and-ink process, partly jammed into a digital environment. notice how much hacking is accomplished by faking or bypassing proper authentication.

Stanislav Datskovskiy September 14, 2013 9:51 AM

Clive Robinson: note that I have carefully specified that an OTP is “safe when used as prescribed.”

That means: no obvious idiocies like keymat reuse (the Venona break, etc.), and no runs of 30 zeroes – what is the chance of that in a reasonable entropy generator? EM isolation isn’t hard either. Enclose the device in a grounded steel box, with no electrical connections whatsoever to the outside. Data and power must both travel over optical fibers; the only openings in the box are the minimal size needed to admit the fibers.

In practice, malware (in the usual sense) is only an issue for MS-Win users. Avoid the crapware peddled by the Redmond Criminal and you’re a considerably harder target – to the point that you will be a good candidate for Rectothermal Cryptanalysis.

As for airgap, anyone who can build a fully-automatic ‘USB Stick of Death’ compatible with all variants of Linux deserves your data.

Also consider one simple defense against anyone at all “slurping up your OTP”: a very large OTP, and a very slow Internet connection.

Stanislav Datskovskiy September 14, 2013 10:03 AM

Mike the goat:

> The forensics magazine I occasionally get had an advertisement for an interesting device – basically a briefcase like UPS… allowing you to take a FDE enabled PC back to your office where you can do a cold boot attack at your leisure.

There are some fairly simple defenses against this kind of trick. One could, for instance, embed a GPS receiver into the machine. Better still, a 3-axis accelerometer. Any appreciable motion and the keys get zeroed and repeatedly bit-walked over.

Clive Robinson September 14, 2013 1:15 PM

@ Stanislav Datskovskiy,

    And no runs of 30 zeroes – what is the chance of this in a reasonable entropy generator?

It’s not just runs of zeros or ones, it’s any repeating pattern, as I originally said.

So the probability of bad sequences is based on the bit pattern length, the number of repeats, and their separation.

For instance, the bit pattern “10” would be expected one time in four, thus on average once every eight bits. What you would not want is for it to appear in a regular or fixed pattern, such as at the beginning of every eight-bit frame. Which can happen with TRNGs when power supply regulation breaks down with age, or some twit has put an electrolytic capacitor in the wrong way around.

The problem is that such a sequence would pass some statistical tests without issue, and it’s not possible to do more than basic statistical tests on a live TRNG.
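The frame-alignment failure described above is easy to demonstrate: a stream can have a globally plausible frequency of a pattern like “10” and still be broken if the pattern clusters at a fixed offset within each frame. A small illustrative check (the test stream is fabricated):

```python
# Count where within each 8-bit frame the pattern "10" starts. A healthy
# source spreads occurrences across all offsets; a faulty one clusters them.
from collections import Counter

def framed_pattern_positions(bits: str, pattern: str = "10", frame: int = 8) -> Counter:
    """Tally the within-frame offset of every occurrence of `pattern`."""
    counts = Counter()
    for i in range(len(bits) - len(pattern) + 1):
        if bits[i:i + len(pattern)] == pattern:
            counts[i % frame] += 1
    return counts

# Pathological stream: "10" glued to the start of every frame, varied tail bits.
bad = "10" + "011011" + "10" + "110100" + "10" + "001110" + "10" + "101001"
print(framed_pattern_positions(bad))
```

Offset 0 dominates the tally, even though a simple frequency count of “10” over the whole stream would look unremarkable, which is exactly why basic live statistical tests miss this failure mode.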

Which means you have a lot of post-generation testing to do.

Which is why people prefer to use the likes of crypto algorithms to actually generate the random output, and use the TRNG output to augment the CS-PRNG.

A simple example is a counter that is double-incremented on the changing edge of the TRNG; the counter value is then put through a crypto function. Think of it as AES-CTR where the counter value jumps unpredictably. It has all the security guarantees of CTR mode, but with the added advantage of not being as predictable over time.
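The jump-counter construction can be sketched directly. To keep the sketch dependency-free, a keyed SHA-256 stands in for the AES block cipher, and `trng_edges` is an invented stand-in for the hardware edge signal; neither substitution is from the comment itself.

```python
# Sketch of a jump-counter CSPRNG: the counter is bumped an extra time
# whenever a TRNG edge has been seen, so its trajectory is unpredictable,
# and every counter value is run through a keyed one-way function.
import hashlib
import secrets

KEY = secrets.token_bytes(32)      # per-boot secret key

def jump_counter_stream(trng_edges, blocks: int):
    """Yield `blocks` pseudorandom 32-byte blocks; counter jumps on TRNG edges."""
    counter = 0
    edges = iter(trng_edges)
    for _ in range(blocks):
        counter += 1               # normal CTR-style increment
        if next(edges, 0):         # TRNG edge since last block?
            counter += 2           # double increment: unpredictable jump
        yield hashlib.sha256(KEY + counter.to_bytes(16, "big")).digest()

out = list(jump_counter_stream([1, 0, 1, 1], 4))
```

Because the counter never repeats, the CTR-mode guarantee (distinct inputs, hence independent-looking outputs) survives; the TRNG only decides how far ahead the counter lands.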

As for “power over optical cable”, it does not actually solve the EM problem, it merely shifts it up to the optical source, where the EM field affects the optical source driver electronics. Powering by light source also adds a problem of its own, which is power fluctuation from mechanical vibration. I won’t bother going into the ins and outs of it, but it’s always there.

Oh, and don’t forget that any wire that moves in a field is a transducer and will convert part of the energy of that movement into electrical energy. It’s known to analog and RF engineers as “microphonics”, and some components are really very susceptible to it.

Oh, and grounded or not, even perfectly welded steel boxes are not as EM-proof as you might believe, partly because steel is not a pure metal or crystal; it’s effectively a granular combination of both, and magnetic fields will penetrate quite a distance into it at various frequencies. Likewise, copper suffers from the “skin effect”, which as a rough rule of thumb tells you how far an electrical field will penetrate at any given frequency.
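The copper skin-depth rule of thumb can be computed directly. The formula and constants below are textbook values, not taken from the comment, and copper is treated as non-magnetic:

```python
# Skin depth: delta = sqrt(resistivity / (pi * f * mu)). Roughly, fields
# penetrate a conductor to a depth of delta before being attenuated by 1/e.
import math

RHO_CU = 1.68e-8            # copper resistivity, ohm-metres (textbook value)
MU0 = 4 * math.pi * 1e-7    # permeability of free space

def skin_depth_m(freq_hz: float) -> float:
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU0))

for f in (50, 1e6, 1e9):
    print(f"{f:>12g} Hz: {skin_depth_m(f) * 1e6:10.1f} um")
```

The point of the rule of thumb is visible in the numbers: at mains frequency the penetration is on the order of millimetres, so a thin shield that looks solid at RF does little against low-frequency magnetic fields.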

Oh, and don’t forget ionising radiation: the likes of X-rays can quite easily get through a few millimeters of steel or copper, and electronics, especially small-transistor microelectronics, are quite susceptible to them.

As I and others have repeatedly said, designing TRNGs is not easy. Basically they can all be got at one way or another when engineered with parts that are cost-effective to manufacture. It’s always a game of “trade-offs”.

As for “malware”, you are assuming a level 1, not a level 3, adversary. Believe me, most *nixes have zero-days and other flaws just as much as any commercial software. If a level 3 adversary targets you, you will without a doubt have malware on any system that has been connected to any kind of communications network, even if, as with Bluetooth or WiFi, you thought it was disabled. Put simply, at power-up the hardware jumps to a known state and tests itself, then the BIOS comes along and does further setup and tests before loading in drivers etc. to boot up the OS. Neither you nor I know what goes on before the OS recognises the “soft switch” status, or whether it actually disables anything. If you want to be sure, you need to pull the device data sheets, trace the PCB tracks, and then, using needle-point probes wired to a 12-volt car/motorbike battery, treat the appropriate PCB traces to a quick “fuse blow” to permanently open-circuit them.

As for a “USB stick of death”, it does not have to recognise your OS, just the COTS BIOS, of which there are not very many – for when you power up with the USB stick plugged in because you forgot to unplug it after you last used it. And the reality with Linux is that the kernel and drivers are fairly standard across most distros. Further, you don’t know what goes on at the firmware level of the USB hardware…

As for OTP size versus network speed: malware only has to push out the bits of the OTP you’ve used, so in reality if the network speed is fast enough to send messages, it’s fast enough to send out the equivalent bits of the OTP…

Security is hard at the basic level and gets exponentially more difficult as you move up levels, but what we do know (it’s been proved) is that there is no such thing as 100% secure, not even close.

As I said, if you are going to use OTP, stick to paper for the pads and a box of matches and an ash tray; its security is way, way easier to control. And use it for sending keys for an appropriate cipher. As for generating the OTP, use four or five dice in a beer glass; it might be slow, but from a practical point of view it’s way, way more secure than any TRNG you are likely to come across.
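The dice-in-a-beer-glass method maps neatly to code. One common way to get unbiased bits from a d6 (an assumption here, not something the comment specifies) is rejection sampling: rolls of 1–4 give two bits each, and 5s and 6s are discarded so no bias creeps in. `random.randint` stands in for the physical dice.

```python
# Convert d6 rolls to unbiased pad bits: 1..4 map to the four 2-bit values,
# 5 and 6 are rejected outright so every emitted bit pair is uniform.
import random

def dice_rolls_to_bits(rolls):
    bits = []
    for r in rolls:
        if 1 <= r <= 4:
            v = r - 1                          # 0..3, uniform over accepted rolls
            bits.extend(((v >> 1) & 1, v & 1)) # high bit, low bit
    return bits

rolls = [random.randint(1, 6) for _ in range(40)]   # stand-in for real dice
pad_bits = dice_rolls_to_bits(rolls)
```

On average two thirds of rolls are accepted, so five dice per shake yield about six or seven pad bits: slow, as Clive says, but with no electronics to subvert.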

Stanislav Datskovskiy September 14, 2013 1:47 PM

Clive Robinson: this is all true.

Dr. Evil can drive up to your house and transmit a pulse of modulated X-rays that will trigger the backdoor in your CPU, Northbridge, etc. squirreled away inside all such products on the Führer’s personal orders.

However, rectothermal cryptanalysis is still cheaper and more effective. Dr. Evil’s minions won’t sit patiently in the van; instead they drag you out, stuff you in, and drive off to the Grand Inquisitor’s house for some tea, cookies, and frank conversation.

If the hypothetical X-ray vans (or magic packets for doctored Ethernet cards triggering a diddled BIOS, etc.) were to barrage everybody en masse, some alert fellow might discover the trick. They are being saved for some very special occasion, and I doubt that any of us here are worthy of such treatment. So they are not particularly relevant to the question of mass dragnet surveillance.

Aside from living on an island fortress guarded by nuke subs, there is no real defense against the black van. But there are plenty of very cheap and easy things one can do to become truly worthy of the van (the proverbial gasenwagen.) For instance, the use of Coreboot (formerly known as LinuxBIOS) with crapola like USB boot snipped out, inline HDD encryptors of one’s own design, a reasonably-custom Linux (e.g. Gentoo) with unnecessary drivers snipped out, etc.

Perhaps we’ll all meet in a slave labor camp one day and compare notes: what worked, what didn’t.

Bert Kerstens September 14, 2013 1:53 PM

Amazing, how everyone runs around in panic, all because of some ‘leaked’ PowerPoint slides… Knowing a lot of details about the DigiNotar case (I’m Dutch and live 2 blocks from their former office), I’m calling it total B-S.

Mike the goat September 14, 2013 1:57 PM

@Clive – I remember years ago, when an undergrad, I designed an RNG by making a matrix of contacts along the bottom of a perspex enclosure, which fed a controller from an old keyboard. I then used small copper foil squares as my ‘bits’. I made several ducts to introduce compressed air, and it operated by bursting air into the enclosure; the copper foils would land where they wanted to and my code would output random bits. Of course it was just a college project, but I have since seen designs based on lava lamps, sources like americium from a smoke detector, and even a CCD focused on a waterfall with the difference between frames being used (I also saw one supposedly using the background noise of a camera in the dark as a source of entropy). But it seems there is nothing practical and miniaturized that also fits the bill of being auditable (I am looking at you, Intel!).

I can also think of numerous ways bias could be introduced into any one of these aforementioned designs. Is there such a thing as a perfect TRNG?
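There is no perfect TRNG, but a biased source whose bits are independent can be cleaned up after the fact. The classic tool (not mentioned in the thread, offered here as an aside) is the von Neumann extractor: read raw bits in pairs, emit 0 for “01”, 1 for “10”, and discard “00”/“11”. The output is unbiased for any fixed bias, at the cost of throughput.

```python
# Von Neumann extractor: assumes raw bits are independent with a constant
# (possibly unknown) bias. P(01) == P(10), so the surviving bits are fair.
def von_neumann(raw_bits):
    out = []
    for a, b in zip(raw_bits[::2], raw_bits[1::2]):
        if a != b:
            out.append(a)      # "10" -> 1, "01" -> 0
    return out

biased = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]   # a source heavily biased toward 1
print(von_neumann(biased))
```

Note the caveat: it only removes bias, not correlation, so it does nothing against the frame-locked failure modes Clive describes.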

Stanislav Datskovskiy September 14, 2013 2:05 PM

Mike the goat: so long as you need physical proximity to induce the bias, it is still fair to call it a good TRNG. Consider the degenerate case: anybody can significantly reduce the entropic quality of any TRNG ever built using: a hammer.

Anonymous Coward September 14, 2013 3:31 PM

@ Nick P

I recognize that Clive’s comments were with regard to SSDs, but the trouble with those, as I understand it, is ensuring that you’ve securely erased every chip. Those devices internally maintain extra capacity that is not directly exposed through the (e.g. SATA) interface, but is hidden behind the controller.

‘Mike the goat’ mentioned that it’s not unheard of for platter HDDs to be overbuilt in terms of capacity (in his anecdotal case, 2TB physical-capacity drives were marketed as, and firmware-limited to, 1TB).

By subverting the firmware in the HDD controller, one could, in principle, maintain extra copies of data outside the user’s control. This is obviously very bad for plaintext. What I was getting at when linking Clive’s post was that maintaining extra copies of ciphertext also presents a security risk. Of course, this is very much implementation-dependent. I understand Clive’s post referred specifically to this being a weakness for stream ciphers, but surely there are implications for various implementations of block ciphers as well?
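The stream-cipher concern can be made concrete. The scenario and all values below are invented for illustration, and the assumption is the worst case: a raw keystream XORed over a sector, with the keystream fixed per sector across rewrites. If the firmware retains an old ciphertext copy, XORing old and new copies cancels the keystream entirely.

```python
# Why retained ciphertext copies endanger stream-cipher style disk encryption:
# with a fixed per-sector keystream, old_ct XOR new_ct == old_plain XOR new_plain,
# i.e. the keystream drops out and plaintext structure leaks.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = secrets.token_bytes(16)        # fixed per-sector keystream (the flaw)
old_plain = b"balance: 0000100"            # invented sector contents
new_plain = b"balance: 9999999"

old_ct = xor(old_plain, keystream)         # copy the firmware secretly retains
new_ct = xor(new_plain, keystream)         # current on-disk copy

leak = xor(old_ct, new_ct)                 # keystream cancels: pure plaintext diff
```

The leak is zero exactly where the two plaintexts agree, so an attacker learns which bytes changed and their XOR. Block ciphers in sensible modes don't collapse this badly, but retained copies still reveal whether and where a sector changed, which is the broader implication the question asks about.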

I’m no cryptologist or security engineer, so I’m more than happy to be corrected on any misconceptions I may have.

Rolf Weber September 14, 2013 3:55 PM

Stupid question: why does the NSA bother with man-in-the-middle attacks when they already have “direct access” to the servers? This is, at least, what the press wants us to believe. Or has the “direct access” theory finally been disproved?

Stanislav Datskovskiy September 14, 2013 4:12 PM

Rolf Weber: that one’s easy: direct access for NSA toadies like Google, Microsoft, et al., MITM for some minor pesthole (small company mail server, etc.) that hasn’t been properly pwned yet.

“Gold is for the mistress — silver for the maid /
Copper for the craftsman cunning at his trade.” /
“Good!” said the Baron, sitting in his hall, /
“But Iron — Cold Iron — is master of them all.”

Cheap malware gimmicks for the Winblows luser; MITM for the Hushmail hipsters; bare metal attacks (see Clive Robinson’s comments) for the hard targets.

z September 14, 2013 4:15 PM

@Rolf Weber

I suspect even Google would be skittish to hand over the amount and type of data the NSA wants. I bet they use MITM attacks for the data they can’t get Google to give them voluntarily and can’t get an NSL for.

RobertT September 14, 2013 6:43 PM

I wonder how much of Google’s cooperation with the NSA is driven by frustration with trying to keep them out of the loop. Imagine you find that most of your software quality problems are caused when someone (on your team) has intentionally inserted vulnerabilities into the code. They are doing this so that their real boss (the NSA) has systemic access: intentional zero-days, which are somewhat easier to find than the unintentional type AND are much easier to protect with unlikely trigger sequences.

Eventually you’d get sick of trying to keep them out, so in the interests of quality you’d invite them into the loop; at least then their code would be subject to the same quality controls and regression tests that apply to the rest of your product.

The more you (as a company) resist the NSA, the more vulnerable (and valuable) your employees become: you spend $100M keeping them out, they spend $1M getting in. The old counter-human-intel maxim of MICE (Money, Ideology, Coercion, Ego) applies to your employees today just as it applied to foreign operatives in years gone by. Would Google really want their employees to be coerced or bribed into cooperating (think of the legal problems this could create)? How could they possibly weed out those driven by Ego or Ideology?

Stanislav Datskovskiy September 14, 2013 6:54 PM

RobertT: this is a bit like how nations allow some measure of foreign spying under diplomatic cover: ‘legal residents’ (in Soviet parlance.) Doesn’t stop anyone from sending in ‘illegals’ (spies in the ordinary sense) as they are considerably more useful.

Nick P September 14, 2013 7:35 PM

@ Clive Robinson

“But there are plenty of very cheap and easy things one can do to become truly worthy of the van (the proverbial gasenwagen.) For instance, the use of Coreboot (formerly known as LinuxBIOS) with crapola like USB boot snipped out, inline HDD encryptors of one’s own design, a reasonably-custom Linux (e.g. Gentoo) with unnecessary drivers snipped out, etc.

Perhaps we’ll all meet in a slave labor camp one day and compare notes: what worked, what didn’t.” (Stanislav Datskovskiy)

I think I like this guy haha. He would probably have had interesting things to say during our big discussions a few years back where we worked out quite a few secure solutions’ details. Particularly on hardware.

@ Stanislav Datskovskiy

The comments you’ve made here and on your blog that grab my attention the most involve high[er] level processors. Recently, I’ve been referencing the same thing on this blog. Many software errors are removed by giving programmers higher abstractions. However, the safer software platforms keep having trouble at the lower level stuff the language assumptions ignore or which is right at an abstraction boundary. The obvious solution is to raise the hardware’s level of abstraction/operation up to reflect what we’re trying to do with it. Not to mention, it’s better to work on screws with a screwdriver rather than an old hammer. 😉

So, what to do? The first idea I had was to look into high level systems of the past. The LISP machines came to mind. You referenced a Scheme system I didn’t know about. Then, there were tagged/capability architectures that allowed fine-grained control over how the processor worked with specific pieces of data. Most recently, as there exist Java OS’s and security-oriented tech, I’ve been thinking of enhancing Java processors for security appliances or addons. Example: Java Processor + Crypto Coprocessor + JX Operating System might give attackers quite a bit of headaches.

(Note: You might find the HISC instruction set processor interesting. It’s mostly made for Java and OOP. However, it’s a pretty straightforward design where small features give plenty of bang for the buck. That’s the kind of approach I prefer.)

To me, the trick is to really build into hardware the following:

  1. Trustworthy boot process.
  2. Device restriction tech.
  3. Safe control flow: ability to tell difference between instructions and data IN PRACTICE. And instructions follow valid paths.
  4. Most essential security primitives in hardware (incl. TRNG).
  5. Strong support for fine-grained componentization and isolation of software.

These alone can be used to build highly secure, usable systems. I know it’s the case because a subset of these accomplished those goals in the past (KeyKOS comes to mind). So, the trick is, what modern processors can do this already? If none, what routes or past examples will lead to one? The old capability/tagged architectures had good wisdom. Should we just update and clone one of those solutions? Or how could it be done better given what progress in chip design that academics and companies have made?
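Item 3 in the list above (hardware that can tell instructions from data in practice) is the easiest to picture with a toy model. The model below is purely illustrative, with invented names; real tagged and capability machines enforced this in silicon, not software.

```python
# Toy tagged-memory model: every word carries a tag, and the fetch path
# refuses to execute anything not tagged CODE, killing the classic
# "execute attacker-supplied data" bug class by construction.
from enum import Enum

class Tag(Enum):
    CODE = 1
    DATA = 2

class TaggedMemory:
    def __init__(self):
        self.words = {}                       # addr -> (tag, value)

    def store(self, addr: int, tag: Tag, value):
        self.words[addr] = (tag, value)

    def fetch_instruction(self, addr: int):
        tag, value = self.words[addr]
        if tag is not Tag.CODE:
            raise PermissionError(f"attempt to execute DATA at {addr:#x}")
        return value

mem = TaggedMemory()
mem.store(0x100, Tag.CODE, "ADD r1, r2")      # legitimate instruction
mem.store(0x200, Tag.DATA, 0xDEADBEEF)        # e.g. attacker-controlled input
```

Fetching from 0x100 succeeds; fetching from 0x200 faults, no matter how the data got there. That is the "in practice" part: the check is on every fetch, not a property the software must remember to maintain.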

I mainly think of Java processors as interim solutions to eliminate low-hanging fruit (always a priority of mine). And I’m aware that the DARPA Clean Slate programs, especially, are working on a modern functional and tagged architecture. I’m talking about something less radical that lets us reuse existing code in languages like Java (or Python/LISP) and simply secures what’s underneath it. That will make it more marketable, as it’s easier to port apps to it.

Nick P September 14, 2013 7:42 PM

@ Anonymous Coward

Thanks for clarifying.

” I understand Clive’s post referred specifically to this being a weakness for stream ciphers, but surely there are implications for various implementations of block ciphers as well?”

Honestly, I’m not sure. It’s worth looking into.

Mike the goat September 14, 2013 8:31 PM

@Nick – so, if I understand what you and Stanislav are saying correctly – our current architectures build in a level of complexity that effectively makes complete transparency practically infeasible, and if we could have a CPU that understood a higher level language we could engineer some of this out of the system? Is that the supposition, in essence?

While I may not see the utility in, say, going back to the LISP machine style architecture, I can certainly see the wisdom in sitting down and deciding what is important and what is not – in the case of x86 there is just too much legacy and redundant fluff.

On the topic of systemic complexity: one only has to look at, say, the source of OpenSSL and CyaSSL and compare the two implementations, not just in code readability but in lines of code, to see that simplifying an implementation can indirectly improve its security.

Stanislav Datskovskiy September 14, 2013 10:01 PM

Nick P and Mike the goat:

There’s a right way and a (very much) wrong way to go about the “secure hardware” game.

The wrong way involves layer upon layer of “trustworthy” restriction crapolade – compartmentalization, cryptoprocessors, etc. This is great for making a system “safe” – from you, the supposed owner. And is certainly no defense at all against the adversary under consideration, considering that his minions will be the inevitable architects of all the “secure” junk.

Now, the right way – which has a Snowden’s chance in hell of actually happening in our lifetimes – is a computer whose design actually fits in one head, like a Kalashnikov’s. Don’t add to the complexity, eschew laboriously-standardized protocols of every kind, burn the legacy garbage: a computer whose complete description, from the ground up – including basic system software – would fit in one mind.

z September 14, 2013 10:28 PM

Well, this is interesting. One of the Perspectives notaries is seeing a different cert for Bruce’s blog than the one I am.

Key I’m using: a2:47:7c:cc:07:c7:f5:e7:3b:a4:3f:09:0e:9d:ed:e7

Key that notary “” sees:

The rest of the notaries concur with the one I have. The strange one has been in use for the last 30 days at least according to that notary.

RobertT September 14, 2013 10:33 PM

@Stanislav Datskovskiy
Agreed. Even when you invite your enemy to be part of the creation process, he MUST always maintain a separate capability to subvert your product, if for no other reason than simply to maintain an element of deniability: “Look, this is what we agreed to! The stuff you’re pointing at is the work of some lunatic that Google or whoever hired.”

As for CPU and system simplicity being the correct route to security, I think I’ll have to take issue with that, because the simpler you make the system, the simpler its operational signatures become. Simplicity aids analysis like DPA, which as we have seen in the past is a very effective way to attack AES implementations.

Invariably you also end up wanting to hide a secret on the chip/system (for mutual authentication, for instance). Simplicity helps the attacker extract this critical information… Ahhh, I knew there was a reason I gave up working in this area.

Stanislav Datskovskiy September 14, 2013 10:45 PM

RobertT: simplicity of the “physically no place whatsoever to hide a nasty” variety is the only real answer to the nation-state adversary.

Aside from the benefits of doing some good slash-and-burn on the jungle of legacy crapware, consider other possibilities.

Dare to think some very weird thoughts: Asynchronous logic (Muller gate) built on ECL transistors. Power analysis, gone. No cache hierarchy (no caches) – timing analysis, mostly out the window.

A genuinely-simple and exhaustively-documented architecture also means you can have your silicon fabbed locally by persons you trust, whether you are Putin or Castro. 1970s MOS will do fine, because computers only come in two speeds.

RobertT September 14, 2013 11:20 PM

@ Stanislav Datskovskiy

Asynchronous logic only seems like a good idea until you implement it, because it makes DPA trivial. It only changes the skill set needed to undertake the DPA task, because the signature is no longer locked to the instruction clock frequency. Think of the asynchronous DPA signature as being like a Doppler-shifted DSSS transmission: exactly the same correlation Rx techniques you would use for that problem work for asynchronous-logic DPA.

Getting the silicon fabbed by people you trust, now that’s a good laugh: you’ve created a system that is completely subvertible by owning just a single step of the process. Modern chip lithography is capable of fabricating logic gates with an area of less than 1,000 sq nm (under 1/10th the wavelength of light in every dimension). How would you ever know that some of these ultra-small gates were not added to your simple system? BTW, typical SiN/SiO2 sandwich chip passivation layers are completely opaque to deep UV.

Nick P September 14, 2013 11:35 PM

@ Mike

” so, if I understand what you and Stanislav are saying correctly – our current architectures build in a level of complexity that effectively makes complete transparency practically infeasible, and if we could have a CPU that understood a higher level language we could engineer some of this out of the system? Is that the supposition, in essence? ”

Well, I don’t speak for him, just me… 😉 What I’m saying is that current CPUs were designed for another era. That era had people whose programming routine was about moving memory and data around in primitive ways to accomplish work. This was incrementally improved on, but kept legacy & backward compatibility. Processor companies rarely innovated because the market wanted backward compatibility, more speed, and optionally performance-enhancing features (even more speed).

Today, we have managed runtimes, concurrency issues, security issues, portability issues, etc. Some of these could be solved, or a solution aided, by a better hardware base. A departure from the norm, shall we say. A few companies tried that back then with object architectures: the Intel 432 and IBM AS/400 were examples. Intel’s was much safer than average and supported more modern paradigms, but ran at 25% speed. The AS/400 wasn’t a pure object processor, but was good at what it was designed to do. It made it. The 432 died. And that’s it as far as the non-embedded market went for a while.

I mean, there are enterprise Java processors for application servers. Itanium tried to be a bit different with a simpler architecture and built-in safety/security features. However, hardly any company is designing the chips to provably eliminate problems that have been plaguing us for over a decade. Current chips have to be forced to operate correctly with great effort. Certain chips in the past did well naturally by design.

As far as complexity goes, you are right that you don’t want more than you need. Proper use of abstractions is the main way of managing complexity, along with translation/generation tools. Examples of trying to increase hardware assurance are the VAMP processor for correctness, AAMP7G for isolation, and the Caisson language for specifying it. Compare that to, say, Verilog plus typical specs, and you can see a world of difference in the likelihood that the end result will work properly.

Stanislav Datskovskiy September 14, 2013 11:39 PM

RobertT: I’m distinctly uninterested in DPA, because “tamper proof” hardware is an idiot’s game. If you can get a hold of a few thousand samples, it doesn’t matter what kind of unobtainium a “tamper proof” widget is made of.

The “machine owner as the adversary” segment of the “security community” should really be run out of town on a rail. If that’s what you (not you, RobertT, specifically, anybody reading this) are working on, I hope you fail miserably and go broke. I want no part of that kind of “security.”

Unless you’re thinking of power line analysis by an adversary at the mains plug end, something which AFAIK has never been demonstrated (though I do believe it was once carried out, in anger, on… electric typewriters.)

The solution to paragraph #2 is simple: stick to 1980s VLSI. A desktop user doesn’t really need modern CPU power for much of anything. Especially once you get off the Microshaft treadmill.

The 1980s state-of-the-art workstation was better-designed, more responsive, and in various other ways more useful than what you and I have now. Most of the increase in density has gone to waste. It is being used to churn garbage (idiot bloat like XML; multiple overlapping memory allocators with mismatched impedance, etc.) while the hapless user twiddles his thumbs and curses.

Stanislav Datskovskiy September 14, 2013 11:44 PM

Nick P: “you can’t transition from the informal to the formal by formal means.” Where is the mathematical proof, the formal specification, which shows that your doorknob turns? And yet it turns. A Kalashnikov fires, though no mathematical proof of this fact exists or is likely to exist. Simplicity is power.

We can do the same re: computing. It really isn’t such a complicated business – once you build the Bonfire of the Protocols and dispense with all the design-by-committee garbage.

Nick P September 14, 2013 11:50 PM

@ Stanislav Datskovskiy

“This is great for making a system “safe” – from you, the supposed owner. ”

And tons of other attackers, including FBI and NSA to a degree. We’re screwed if they subvert the fab itself. Otherwise, a decent design provides more protection than crappier designs.

“Now, the right way… is a computer whose design actually fits in one head – like a Kalashnikov’s. ”

It should fit in the mind of a designer. If people can’t thoroughly understand every bit of it, then it will conceal problems. Both LISP Machines and Wirth’s Oberon/Modula workstations remind me of this.

“Don’t add to the complexity, eschew laboriously-standardized protocols of every kind, burn the legacy garbage – a computer whose complete description, from the ground up – including basic system software – would in one mind.”

Problem is we don’t just need a computer. We need a computer that can securely interact with untrustworthy input and hardware/software faults. Security requires a few things: control of information flow; isolation of certain information; recovery from problems; assurance of design & implementation of former.

So, we have a data-driven action and then we have a version that prevents problems from affecting us. The latter will always be more complex and less pure than the former. As will the features I mentioned. The result is that the complexity must be managed by the designer using modular design with careful interfaces, rigorous specification of behavior, and strong correspondence between implementation/design/requirements. The design might fit in one’s head all at once or might only do so one part at a time. Nature of the game.

Another reason for those things you don’t like is to contain issues when the system developer or user screws up. (See “The Inevitability of Failure”.) A system assuming perfection of developers, users or hardware will be compromised. Call it one of my maxims of security.

Stanislav Datskovskiy September 15, 2013 12:06 AM

Nick P:

> And tons of other attackers, including FBI and NSA to a degree.

Pray tell, how?

I think a new security maxim is called for: any system physically-capable of concealing a design detail from its owner will end up concealing an NSA master key.

> …contain issues when system developer or user screws up.

This is really a dream of “fried ice.” A harmful, physically-impossible dream, which will be happily catered to by snake oil peddlers of all stripes until the world awakens from it.

Clive Robinson September 15, 2013 3:01 AM

@Stanislav Datskovskiy,

I know that you think RobertT and I are imagining things way beyond what you think is possible or will be done, but history shows that what is deemed impossible today will be an operational weapon tomorrow.

Thus it’s sensible to keep a “weather eye” for approaching storm clouds and follow a few basic rules of thought.

One approaching storm cloud is the Ed Snowden revelations. The simple fact is that people in the security industry have been pointing out for twenty years or so that what the NSA is supposedly doing has been possible one way or another, and it would be daft not to assume they are doing it. The general industry consensus was “so what” and “carry on as usual”, ignoring the problem.

That was great for two sets of people: the NSA et al, and those who have taken the warnings seriously. Why the latter? Well, it’s in part the “low hanging fruit” principle, and in part the old joke about the two men and the tiger, where one realises he only has to run that little bit faster than his friend to survive, because the tiger will go for the “low hanging fruit” of the slowest man.

The Snowden revelations are going to act as a wake-up call for quite a few (but probably not many as a percentage), and they are going to take the time to tie their laces and run that bit harder than the others. That is going to vastly increase that second group, and that’s a real problem.

Why is it a problem, well few people are going to climb a tree for just one or two very good apples when there are hundreds of thousands of perfectly acceptable apples within reach of the ground with a bit of a stretch. But if there are now thousands of very good apples up there then it pays to go and get a ladder or cherry picker crane to get them.

That is, the second group that was outside the NSA’s general harvesting methods was too small in number to bother with; it will now grow a thousandfold, and as such is now sufficient in number to be worth bothering with.

This gives the NSA two choices: extend their existing hoovering methods to cover them, or target them directly on a “personal” basis. The most efficient, with anything other than a very small number of select targets, is to extend the existing hoovering methods.

I can say this with a degree of certainty because this is what the criminal element on the Internet has done already, and as history shows, where criminals lead, spies will follow just a footstep or two behind [1].

Right now there are very many executives of “Personally Identifying Information”-handling organisations having conversations with the legal fraternity about liability under various pieces of legislation, because yesterday’s “best practice” has in effect just become an admission of guilt for not exercising due caution and diligence. So they are going to talk to their technical staff, open their cheque books, and pay for a whole load of new security product, much of which will be based on what is current practice in that small second group (with luck it will spike BYOD and external Cloud, which could be good news for sysadmins). So it will cause the Feds etc. to “go dark” until they have extended the hoovering or changed the legislation (which appears unlikely in the immediate future).

This guaranteed extension of the hoovering methods means that the small second group have the choice of either notching up their security efforts or getting hoovered up with the rest. So whilst the Snowden revelations are a bright shining light to many, for the few they have cast dark storm clouds across the horizon, and tempestuous times are ahead.

Thus the use of minority operating systems and permeable air gaps is no longer sufficient; more technical means are required. Some of this will be achieved by better OpSec, such as replacing air-gap-crossing media like thumb drives with media that is either very cheap to buy and securely destroy for one-off unidirectional use, or can be easily and reliably sanitised, such as old-school magnetic media like floppies and mag tapes (but not hard drives), all with properly audited and fully traceable media use.

But some will, by necessity, be by hardening systems at a more fundamental level than has been done previously. As a first step, reduce systems to the point of single function, not just in the application but in the OS and hardware as well. How far down the hardware stack is currently open to debate, but certainly below CPU level.

As Nick P has noted, he and I along with several others have been discussing this for some time, both in specific areas (financial authentication methods, IMEs, data diodes etc.) and more generally, such as computer architecture (see Castles-v-Prisons). We have also, along with RobertT, looked at just what would be required as a process to “poison the supply chain” at SoC and lower levels, and even how to store data in ways that can only be meaningful to one small part of a chip’s functionality and thus defeat “flip top” and “embedded” attacks on the silicon.

On the above-CPU side of the computing stack we have critically looked at faux techniques like "code signing" long before Stuxnet reared its head to briefly wake the rest of the industry up. We also looked at ways of subverting PKI.

But in the more distant past we have talked about what you might call TEMPEST or EmSec, not just in the "passive" way but in the "active" way as well, and importantly the ways you manage it and how they relate to managing complexity.

You make the point about "in a single person's head"; this has been discussed before on this blog, and it's been repeatedly pointed out that our computing needs are beyond this "strip back to basics" idea, which from one perspective (supposed efficiency) is true. However, it's not the only game in town, and it is also an evolutionary cul-de-sac, as the likes of Intel have realised [2]. The solution is to manage complexity by divide and conquer. That is, you have one "function" per computer and strongly controlled choke points between them. In this way a single person can hold the design of each stripped-down computer, switching bus, control bus and hypervisor in their head.

I've discussed this in the past as part of Castles-v-Prisons, and it follows on from the logical design process of the highest-security systems.

Whilst you are considering this, think carefully about the purpose an MMU has and what advantages this can give you in a multi CPU+MMU system with common memory BUT where the MMU is not controlled by the attached CPU but by a security hypervisor.

[1] The reason for this, as history clearly shows, is that contrary to the impression some spy stories may give, in general Intelligence Service "officers" don't get involved with the "risk" of actual "spying". That's what the expendable, MICE-driven "agents" and "contractors" are for. The officers merely act as middle men, passing orders down to the operatives and the resulting intel "product" from these "methods and sources" operatives up to the "analysts". The analysts then do what is in effect "investigative journalism" and produce the refined and sanitised "intel" that gets used by those in government or other Intelligence Service departments/compartments.

[2] As a broad overview, the Intel x86 design started when Complex Instruction Set Computers (CISC) became viable due to reductions in transistor size on chips. At that time the opposite approach, Reduced Instruction Set Computing (RISC), was in vogue for high-performance computing, which became multi-CPU for Cray and IBM. Sun purchased the rights to Cray's switching techniques, and the Sun StarFire series of computers became the way to go until clustering took off. Intel stuck with CISC for quite some time, but as problems increased they first switched to an internal Harvard design with separate data and instruction caches, and eventually even they had to do a wolf-in-sheep's-clothing act, whereby they wrapped a RISC core in CISC clothing. But as we see now, even that idea is no longer cutting the mustard, so Intel has gone down the multi-core road quite late in the day, as the reduction in transistor size and other chip fab techniques have made it viable. More interesting perhaps is what has happened to Intel's main rival in core count, Advanced RISC Machines (originally Acorn Computers, but spun off). ARM's RISC CPU pops up all over the place, and some SoCs have not just multiple ARM cores but multiple computers, with each CPU core having its own separate RAM and IO, in what is effectively "clustering on a chip".


Rolf Weber September 15, 2013 4:03 AM

Rolf Weber: that one’s easy: direct access for NSA toadies like Google, Microsoft, et
al., MITM for some minor pesthole (small company mail server, etc.) that hasn’t been
properly pwned yet.

The report was about a MITM against Google.

I wonder how much Google's cooperation with the NSA is driven by frustration with trying to keep them out of the loop.

Consider the time flow: The reports said the “direct access” was established before the attacks.

No guys, there is only one logical explanation: The “direct access” story was a lie.

Clive Robinson September 15, 2013 4:16 AM

@ Nick P,

    Many software errors are removed by giving programmers higher abstractions. However, the safer software platforms keep having trouble at the lower level stuff the language assumptions ignore or which is right at an abstraction boundary. The obvious solution is to raise the hardware’s level of abstraction/operation up to reflect what we’re trying to do with it. Not to mention, it’s better to work on screws with a screwdriver rather than an old hammer

The problem with "obvious solutions" is that whilst they might be right in the short term they are often wrong in the long term (as I commented in my footnote above, Intel fell into this trap a number of times).

I've mentioned before the Unix idea of pipelining small special-function programs in interpretive shell scripts as a possibility for decreasing hardware complexity whilst also increasing both security and code-cutter productivity. All of these are desirable outcomes, the downside being that it's somewhat slower on any given platform (the reality being that it usually does not matter).

In effect you do what Intel did when switching from a CISC to RISC core, you wrap a layer around the core in essence dressing a wolf up to look like a sheep.

This wrap can be hardware, microcode, byte code or executable programs; it does not matter. They are all interpreters, and they are all less efficient than working at the layer below. However, that is irrelevant; all that matters is that the wrap is secure in its own right.

Importantly, we know that software bugs are proportionate to lines of code rather than to the complexity of each functional instruction. Thus there are significant "code cutter" advantages to having as high a level of code as possible, in terms of both bug minimisation and increased productivity. It also makes code easier to review.

So at the highest level you have a machine with what is in effect a stripped-down interpreter (like the Unix reduced shell) and a file of very high level interpreter code that pipelines data through executable code functions, but does not have a compiler etc to write the executable code.

The executable code is written not by jobbing code cutters but by experienced secure-code developers via formal methods, and goes through independent accreditation for both security and formal function. The addition, however, is another file for the hypervisor that monitors the data paths between functions, looking for limits and exceptions to the data formats as well as ensuring data is segregated correctly by function. Thus, for instance, data tagged as an encryption key could not go to any place that was not formally identified in that file as a port on a function both capable of and authorised to receive that data type.
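The tag-and-authorise idea can be sketched in a few lines. Everything here (the function names, the tag names, the `POLICY` table) is hypothetical illustration of the principle, not any real hypervisor interface:

```python
# Hypothetical sketch of the hypervisor's data-path check: each
# function's input port declares which tagged data types it is
# authorised to receive, and the monitor refuses everything else.
POLICY = {
    # destination function -> tags its input port may accept
    "cipher_core": {"encryption_key", "plaintext"},
    "logger":      {"status"},
}

def route(data_tag, dest_function):
    """Permit a transfer only if the destination port is formally
    authorised for this data tag; otherwise refuse it outright."""
    allowed = POLICY.get(dest_function, set())
    if data_tag not in allowed:
        raise PermissionError(
            f"data tagged {data_tag!r} refused at {dest_function!r}")
    return True
```

So a key can reach the cipher core, but any attempt to route it to the logger raises an exception the hypervisor can act on.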

Clive Robinson September 15, 2013 5:30 AM

@ Mike the goat,

    I can also think of numerous ways bias could be introduced into any one of these aforementioned designs. Is there such a thing as a perfect TRNG?

The simple answer is "not that I've seen".

And to be quite honest, part of the reason is the definition: we glibly say "True Random", but what do we mean? Usually we really mean "unpredictable", or more correctly "non-deterministic", but in turn what does that mean, when you consider our limited knowledge of deterministic processes and our inability to have full knowledge, not just as a human limitation but as a hard limit of computing and physics?

Apparently Einstein, like many scientists of his time, did not believe in what we now call quantum mechanics and the effect it has on the world. In effect he actually believed that everything was fully deterministic and thus "pre-ordained", which in turn means that we run like trains on a track and have no free will to choose our own direction. Thus there could be no random, only that which we could not predict due to our limitations of information and comprehension.

But Heisenberg, amongst others, gave rise to the notion that it is not possible to know everything, that there are fundamental limits: the more precisely we know one measure of a particle (i.e. its position), the less certain we become about another (i.e. its momentum). Thus our knowledge is both finite and probabilistic in nature.

Few scientists these days have any trouble with the notion of free will, and most physicists, whilst they may not be comfortable with it, accept the ideas of the quantum world and are fully paid-up, card-carrying members of the "Shut up, sit down and calculate" club. And that's part of the problem: our view of the quantum world is not based on what we can see and touch but on what our statistical models tell us.

We thus have no way of determining if an individual bit or number is "random" or "deterministic" by observation, only whether it falls near or on a statistical line when compared to the many individual values so far observed.

So we have a series of deterministic tests that get ever more complex. The simplest: we read out X bits from the generator and expect to get X/2 zero bits and X/2 one bits. We then go to two-bit values and expect each pattern to appear with frequency X/4, and so on. Obviously the more tests the generator passes, the more likely it is to be a "random" number generator of some form. But its quality can only be witnessed by passing tests, as there is no one definitive test by which it can be said to be "non-deterministic". And as we know, we can develop deterministic sequences that pass all tests that don't have specific knowledge of the deterministic process.
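Those first two tests are easy to demonstrate. A minimal sketch (the threshold and function names are my own, not any standard test suite such as NIST SP 800-22):

```python
import math
from collections import Counter

def monobit_ok(bits, z=3.0):
    """Frequency test: for an unbiased source the count of ones is
    Binomial(n, 0.5), so it should sit within z standard deviations
    (sqrt(n)/2 each) of n/2."""
    n = len(bits)
    return abs(sum(bits) - n / 2) <= z * math.sqrt(n) / 2

def pair_frequencies(bits):
    """Count non-overlapping two-bit patterns; an unbiased source
    gives roughly n/4 of each of the four possible patterns."""
    return Counter((bits[i], bits[i + 1]) for i in range(0, len(bits) - 1, 2))
```

A heavily biased stream fails the first test immediately, while a stream that just alternates 0101… passes it yet shows a wildly skewed pair count, which is exactly why the tests have to keep getting more complex.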

However, there is an issue that differentiates TRNGs from PRNGs, which is when a value appears next. The easiest way to visualise it is that the generator is an urn filled with balls (the number of which is defined by the generator's internal state size) into which you put your hand and draw a ball to get a value. The difference is what you do with the ball you've drawn: do you put it back (TRNG) or do you leave it out (PRNG)? From just examining the drawn balls it is not initially possible to tell the two generators apart. But after a sufficient number of balls have been drawn, the absence of any repeated balls enables you to say with some confidence that it's a PRNG, not a TRNG. However, this test can be mitigated simply by having non-unique values on the balls, or by putting such a number of balls into the urn that it is not possible in a realistic time period to draw out sufficient numbers to have confidence in a prediction.
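The urn picture is easy to simulate; the numbers below (urn size, draw counts, seed) are arbitrary choices for illustration:

```python
import random

def draw_balls(urn_size, draws, replace, seed=1):
    """Draw numbered balls from an urn; the TRNG analogue puts each
    ball back, the PRNG analogue leaves it out and so can never
    repeat a value until the urn is refilled."""
    rng = random.Random(seed)   # seeded so the demo is repeatable
    balls = list(range(urn_size))
    out = []
    for _ in range(draws):
        ball = rng.choice(balls)
        out.append(ball)
        if not replace:
            balls.remove(ball)
    return out

def has_repeat(values):
    return len(set(values)) < len(values)
```

Drawing more balls than the urn holds forces a repeat in the with-replacement case (pigeonhole), while the without-replacement urn shows none: that is precisely the distinguisher described above.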

So the real answer is that we cannot test for a TRNG in any meaningful way in a realistic time period, and providing our models of quantum mechanics don't change, we never will.

Which leaves the question of bias. If a signal of sufficient level is injected into a TRNG source, we know that by a process called "injection locking" the source can be pulled into synchronisation with the signal; if this signal contains no bias, then the output of the TRNG will show no bias. If, however, the injection signal is insufficient to achieve locking, its effect may be seen on the TRNG source output as additive or multiplicative noise, and this may cause bias to appear on the TRNG output. This is why in a good TRNG the output of the source buffer, prior to any bias-removal circuitry, should be made available for testing on a spectrum analyser in waterfall mode or a more complex test unit. The reason for this might sound odd, but all usable sources will either be biased or show bias from time to time, simply due to the nature of noise. If that bias is not there, or is too large, or changes in some way, then it's an indication the source is failing in some way.
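The "bias should be present but bounded" health check can be sketched as below. The window size and thresholds are invented for illustration, and a real monitor works on the analogue raw output rather than clean bit lists:

```python
def window_bias(window):
    """Fractional bias of a bit window: 0.0 means perfectly balanced."""
    return abs(sum(window) / len(window) - 0.5)

def source_alarms(bits, window=1000, min_bias=1e-4, max_bias=0.05):
    """Flag raw-source windows whose bias is implausibly perfect
    (suggesting the raw source has been substituted or locked to a
    clean injected signal) or far too large (suggesting failure)."""
    alarms = []
    for start in range(0, len(bits) - window + 1, window):
        b = window_bias(bits[start:start + window])
        if b < min_bias or b > max_bias:
            alarms.append((start, b))
    return alarms
```

Note the twist versus an ordinary randomness test: a window with *zero* measured bias is itself suspicious here, because real noise sources always wander a little.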

BackdoorNation September 15, 2013 5:35 AM

I've been saying for years that NSA was doing this. I said that NSA most certainly owns various CAs (or clandestinely runs its own, which is much easier) and people called me paranoid. The CIA, for instance, is quite expert in running faux corporations or businesses and using them as fronts for intelligence activities (mostly in foreign countries). NSA could easily do the same with something like a CA.

Like Bruce always says, being a good security person requires thinking of how to break systems in unique ways. Well, CA subversion is not unique, but is flat-out common sense. We've known for years how big of a joke the CA chain of trust was, with some CAs even giving out root certs for free and others selling them to various corporations that had no business needing a root cert. Not to mention all of the hacks of the various CAs. Get one root or intermediate cert and you own the PKI, at least until someone bothers to check.

A lot of people sort of shunned this idea because they said MITM is hard to perform without getting caught. That might be true for Joe Sixpack script kiddie. But when you are NSA and literally own the backbone of the Internet with fiber taps, you can do as you damn well please and it will be hard for anyone to figure it out.

The CA and the MITM problems can be resolved. Of course, it will take industry 10 years to implement fixes we already have (for instance, why are we still using decrepit TLS versions from 5-6 years ago? It’s preposterous, but IT security has always moved at a snail’s pace).
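One of those existing fixes is certificate pinning, which sidesteps the CA chain entirely: the client compares a hash of the certificate actually presented against a value it already knows. A minimal sketch; the pinned value here is just the SHA-256 of a stand-in byte string, not any real certificate:

```python
import hashlib

# Stand-in pin: in practice this would be the SHA-256 of the server's
# real DER-encoded certificate, obtained out of band. Here it is the
# hash of the placeholder bytes b"server-cert-der" purely for illustration.
PINNED = hashlib.sha256(b"server-cert-der").hexdigest()

def pin_ok(cert_der, pinned=PINNED):
    """Accept the peer only if its certificate hashes to the pin,
    regardless of whether some CA chain says it is 'valid'."""
    return hashlib.sha256(cert_der).hexdigest() == pinned
```

In a real client you would feed in the bytes from `ssl.SSLSocket.getpeercert(binary_form=True)`; a CA-signed MITM certificate would pass chain validation but fail this check, because its DER bytes differ from the pinned original.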

But what is more concerning is the subversion of hardware. I will bet my bottom dollar that NSA has their own circuits built into Intel/AMD processors. With all of the widespread collusion we have seen from American tech companies with NSA (thanks to Snowden), it would be surprising if there weren’t such backdoors in commodity hardware. If I worked for NSA, this is exactly what I’d be spending money and time on doing. Screw breaking crypto. You don’t need to when you own the hardware.

This is one reason I am highly suspicious of Intel's latest RNG embedded in their CPUs (Bull Mountain, with the RDRAND instruction). Intel reps recently tried to convince Theodore Ts'o (the guy who created the /dev/random module on Linux) to make it the default entropy input into the kernel. Ts'o said no thanks, but Linus Torvalds is calling people "idiots" who think Intel is up to no good. I'm not sure who won that battle, but obviously Linus's decision is final.

You know what they say: those who accuse a significant other of cheating for no good reason are usually guilty of it themselves (those who bark the loudest, and all that). If you will recall, earlier this year it came out that NSA was accusing Huawei and Lenovo of putting backdoors in their hardware. Most of their reasoning for believing this was "classified," of course, even though it would benefit American companies if we all knew more details (but classification trumps all when talking about the NSA). I suspect NSA is so suspicious of China because they know that they do the same damned things to American products.

I read an article recently in which a professor of computer science who specializes in chip fab designs said she has had personal talks with various NSA people (off the record) who said that NSA has spent a lot of money researching hardware backdoors. She said they have put a lot of foreign-made COTS hardware under thorough analysis in their labs. This is probably how they discovered that Lenovo PCs were compromised, and why they have told all "5 eyes" allies not to use Lenovo products on classified networks.

So, I ask you, dear reader, do you not think an agency with NSA’s budget, expertise, and lab equipment could use this knowledge for their own nefarious purposes?

So what’s my point? I hate to say it, but it’s come to the point that even non-paranoid people are having to wake up to the fact that we can’t trust anything electronic (at least not if you care about government snooping). Even Adi Shamir said at RSA this year that cryptography is basically dead — not because it isn’t an interesting or important field, but because it simply doesn’t matter when you can’t trust anything else about your security.

I would like to see public researchers and academics take a stand and start their own standards groups, and most importantly, start their own open hardware initiatives (nothing else matters if the hardware isn’t clean). Will it happen? Probably not.

mile acker September 15, 2013 6:04 AM

if we concede that the bad guys — whoever they may be — will be able to access information from any on line computer then the obvious counter is to make sure whatever it is they take is of no value

setting up a crypto machine with an air gap to the online machine is not difficult, nor is setting up a Vernam Cipher program

when the beast can’t read your cipher he may get mad though. so when the black van shows up that is when you will need the Kalashnikov

Better to escape before that. I hear John McAfee is looking for people to play poker (bring money).

Mike the goat September 15, 2013 6:46 AM

@BackdoorNation – indeed, the battle between Torvalds and Ts'o got quite heated I believe; perhaps it wasn't quite as caustic on lkml, but Ts'o himself said that it was a significant bone of contention. If my memory serves me right, he eventually reverted the code as a compromise whereby RDRAND would be used to seed the entropy pool (along with other inputs) instead of being relied on directly. This seems like a reasonable enough compromise, given that even if the hardware RNG is backdoored it is being used only as one of a diverse range of inputs into the software PRNG. Interestingly enough, Ts'o wrote a Google+ post a week back saying that in light of the NSA scandal he feels vindicated.
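The "one of a diverse range of inputs" design can be illustrated with a simple hash mixer. This is a sketch of the principle only, not the actual kernel code, and the source names are invented:

```python
import hashlib

def mix_entropy(sources):
    """Hash several entropy inputs together with domain separation.
    The output stays unpredictable as long as at least one input is
    good, so a backdoored hardware RNG contributing known bytes cannot
    cancel out the honest sources it never sees."""
    h = hashlib.sha256()
    for name, data in sources:
        h.update(name.encode("ascii"))          # label each source
        h.update(len(data).to_bytes(4, "big"))  # unambiguous framing
        h.update(data)
    return h.digest()
```

Usage would look like `mix_entropy([("rdrand", hw_bytes), ("jitter", timing_bytes), ("irq", irq_bytes)])`: even if `hw_bytes` is attacker-known, the digest is still unpredictable to anyone who cannot see the other inputs.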

@Stanislav – I agree wholeheartedly re PC owner being viewed as an adversary. We /own/ the damn thing. This is why DRM is a huge mistake.

@jon – This kind of exploitative pornography is abhorrent, but I do think there shouldn't be any content that is illegal to merely view. Data is data. The idea that there are URLs you could conceivably enter in someone's browser that would get them investigated and potentially jailed is scary. Not that I am defending actual offenders: absolutely go after, prosecute, castrate etc those producing and uploading sickening material, but the laws on accessing material are poorly implemented and have been used to frame innocent people in the past.

Those who have open wireless networks, run proxy exit nodes or otherwise share internet access are at risk of their lives being systematically destroyed, even if there is ultimately never a conviction. Those browsing internet pornography sites can inadvertently stumble upon illegal content through following links that may not reveal their true nature, or through pop-ups. What better way for the gov't (or a business rival) to destroy someone than to download some illegal material (using either physical access or a remote exploit), dump it on their HDD and then start a p2p server to push it out to the world? It would only be a matter of time before they were imprisoned. Never mind the fact that some of those deliberately looking at sick material (and this includes 'shock' pictures of autopsy photos etc) are basically just kids themselves, fulfilling the basic human curiosity to see the worst the internet has to offer.

I have no doubt that, when properly targeted, this intelligence gets genuinely sick people off the streets who were a real threat to the young or otherwise defenseless, but at what cost to civil liberties, especially in those scenarios mentioned that produce a false positive? Nothing is more repugnant or polarizing than these kinds of accusations, which is why law enforcement needs to get it right and make sure they are absolutely right before destroying someone's career, marriage and liberty.

@Clive – Thank you for your detailed response.

nameless September 15, 2013 12:18 PM


I wonder how much Googles cooperation with the NSA is driven by the frustration with trying to keep them out of the loop.

I cannot claim to know but Julian Assange seems to be of the opinion that Google works with the government because they want to show “Washington that Google can be its partner, its geopolitical visionary, who will help Washington see further about America’s interests. And by tying itself to the US state, Google thereby cements its own security, at the expense of all competitors.”

According to some emails leaked from Stratfor, Google is involved in sponsoring uprisings in some countries abroad, and this is supposedly because they are able to get away with stuff that the US government cannot officially touch.

Google and the NSA: Who’s holding the ‘shit-bag’ now

nameless September 15, 2013 12:25 PM

and whilst on subject of Google:

Google knows nearly every Wi-Fi password in the world

If an Android device (phone or tablet) has ever logged on to a particular Wi-Fi network, then Google probably knows the Wi-Fi password. Considering how many Android devices there are, it is likely that Google can access most Wi-Fi passwords worldwide.
Many (probably most) of these Android phones and tablets are phoning home to Google, backing up Wi-Fi passwords along with other assorted settings. And, although they have never said so directly, it is obvious that Google can read the passwords.
Android devices have defaulted to coughing up Wi-Fi passwords since version 2.2. And, since the feature is presented as a good thing, most people wouldn’t change it. I suspect that many Android users have never even seen the configuration option controlling this. After all, there are dozens and dozens of system settings to configure.

In Android 2.3.4, go to Settings, then Privacy. On an HTC device, the option that gives Google your Wi-Fi password is “Back up my settings”. On a Samsung device, the option is called “Back up my data”. The only description is “Back up current settings and application data”. No mention is made of Wi-Fi passwords.
In Android 4.2, go to Settings, then “Backup and reset”. The option is called “Back up my data”. The description says “Back up application data, Wi-Fi passwords, and other settings to Google servers”.

Needless to say “settings” and “application data” are vague terms. A longer explanation of this backup feature in Android 2.3.4 can be found in the Users Guide on page 374:

Check to back up some of your personal data to Google servers, with your Google Account. If you replace your phone, you can restore the data you’ve backed up, the first time you sign in with your Google Account. If you check this option, a wide variety of your personal data is backed up, including your Wi-Fi passwords…

Dirk Praet September 15, 2013 12:58 PM

@ Clive, Nick P, RobertT, Mike the goat, Stanislav Datskovskiy

Thanks for your enlightening posts on secure systems/hardware. These are the kind of discussions that got me onto Bruce's blog and that keep me coming back.

Figureitout September 15, 2013 10:16 PM

Stanislav Datskovskiy
I doubt that any of us here are worthy of such treatment
–Uh, I wasn’t worthy of it but I still received it (and it still lingers…) I can confirm some of the so-called “way too expensive” attacks; bluetooth in particular. Physically accessing your devices is the end all though; do you like to sleep? I’ve seen the van too, unmarked; and other devices. They will move in next door to you if they haven’t already, so be sure to welcome the new neighbors. Well, they can’t figure out [one] of my means of countering, I even blurted it out in the clear to shove it in their face. I also stated that there are possibilities of the brain that enable some capabilities that will blow their mind (useful to determine which girl likes you as a kid) and it’s like magnetic field, that will remain a secret too. I didn’t deserve the treatment I got and they created a lifelong enemy, my own gov’t…

I would like to see public researchers and academics take a stand and start their own standards groups, and most importantly, start their own open hardware initiatives (nothing else matters if the hardware isn’t clean)
–Oh how many dreams I’ve had…I really think it’s time and that young engineers need to see this. Eventually the old farts die.

Figureitout September 15, 2013 10:23 PM

–If I ever manage to scrounge up enough to start up my own fab lab, there will be a hell of a fight to subvert my lab. How do they even verify all the transistors? They don't; always a failure. They're just as insecure as us, idiots.

Mike the goat September 15, 2013 11:07 PM

@nameless – This is precisely why I don't do anything trusted on my Android device. I have no doubt that in an implementation as clumsy as Android there are bound to be numerous remotely exploitable flaws, in addition to the possibly deliberately placed ones (CarrierIQ, anyone?). It would be really convenient for me to install GnuPG on my cellular phone, but I have resisted the temptation. The closest I may come is generating a subkey which I will mark in the comments as being of low trust, e.g. "Low trust key used on cell for signing. Don't encrypt to this ID", and use it strictly for signing purposes. Hopefully people will understand that it carries only mild trust, especially considering that cells can also go "walkies", uh, get stolen, and Android's "security" is a misnomer. Even FDE is silly, as phones are always left on and the key left in RAM.

@Stanislav – why doesn't that surprise me? I would guess that script kiddies love pages like these, where many security-oriented people – some very well known – hang out. What better way to get kudos from their pimply friends on 4chan than owning a security researcher's blog? Guess if you talk the talk, you're expected to walk the walk.

DaveyJones September 16, 2013 9:16 AM


“Even FDE is silly as phones are always left on and the key left in RAM.”

Valid, but there are times when there is time to shut it down. The most likely scenario is a traffic stop: FDE plus shutting down guards against a Cellebrite device.

Also, while at home – but this depends if you have other security to warn of “uninvited guests” – cameras, dog, etc…

FDE isn’t entirely useless on phones.

Mike the goat (now with added horns) September 16, 2013 9:41 AM

@DaveyJones – if I had a few minutes before the feds knocked down my door, my cell would be going in the wood stove 😉 if it had even a kilobyte of potentially incriminating data on it (encrypted or otherwise). I do not trust a kernel compiled by a proprietary cellular manufacturer, or even worse a carrier, to do "the right thing". For all we know their dm-crypt might be subtly leaking the key. But yes, I appreciate that FDE has some benefit. The problem is that the end user (who is often uneducated) isn't aware of the limitations and the potential for the code to be (deliberately or inadvertently) broken. Ethically, Android should produce an EULA-style dialog, requiring you to page to its end before it can be dismissed, before the user enables FDE, explaining that a) the code may contain vulnerabilities, b) no cipher is immune to brute-force attacks on the passphrase, c) in some jurisdictions you may be compelled to provide your key, and failure to do so can be an imprisonable offense [e.g. the UK], d) FDE implementations are vulnerable to cold-boot-style attacks, and e) all your bases belong to Google.

DaveyJones September 16, 2013 10:39 AM

Again, I agree with a lot of what you just posted. The bottom line is that very few will not carry a phone. For the few of us that know security, to me, buying a Nexus device, installing an Android Open Source Project ROM, and going pre-paid, is the best way to go.

Remove permissions from ADB, re-lock the bootloader, firewall it (especially any GPS services), secondary container encryption, Tor/VPN, Gibberbot, etc…

It's either not carry a phone, or do what you can with what you are given.

But yeah, non-nerds need not apply, and you are spot on for the masses.

Stanislav Datskovskiy September 16, 2013 10:59 AM

At present, the best defense is the one which seems to work for most of us: being uninteresting.

The fellow in the black van doesn’t work for free.

But this is rather like recommending anorexia as a defense against cannibals. It might work, for so long as you are skinnier than the cannibals are hungry.

Figureitout September 16, 2013 9:34 PM

Being uninteresting works wonders too; Idk though, there was a pretty hot chick the other day in the MAC (math assistance center) lol. Well they definitely won’t be in some other labs lol.

Clive Robinson September 17, 2013 2:40 AM

@ Figureitout,

    –The van can be gray too

Or white 🙂

In the UK we have the expression "White van man", due not just to the vast numbers of white vans, but also in part to some percentage of their drivers driving recklessly, illegally or outright dangerously. As well as some of the vans being near grey with dirt, sufficient for others to write comments in it critiquing the driver's abilities with a somewhat barbed witticism…

Figureitout September 18, 2013 12:22 AM

Clive Robinson
–Yeah, or have false logos. I remember their plate #’s and driver faces anyway to track down when I feel like it. Plus I love seeing spooks passing my house as I’m mowing the lawn, found another “stash house” near me; will have to check it out too I guess. One individual even parked backwards in a pitiful attempt to hide his plates, haha, got’em anyway and his face and his residence.

Mike the goat September 18, 2013 5:28 AM

@figureitout – I am not going to suggest you are not being surveilled, but it is quite common for people to be unsettled by the first discovery to the point that they become paranoid and find data to support their hypotheses. Unless you are a particularly high value target, government surveillance in 'meatspace' is typically of short duration and typically hard to detect. They certainly don't need the SIGINT vans they used to use anymore. An HSUPA DC-equipped bug aggregator can push a day of G.729-compressed audio surveillance in under a minute; in fact the longest delay is network association. It is common for them to burst data out like this to reduce their chance of detection. There are self-contained devices that replace a standard wall socket and include both a sensitive mic and a UMTS transmitter, and there are standard bugs designed to be littered throughout the place that phone home to the hidden aggregator. I have even seen low-energy listening devices powered by zinc-air cells that can last for weeks without an external power source.

My point is: if you are being surveilled by the pros, you won't see a telltale van sitting around, nor will they need to move next door (although I have heard of this, particularly in apartment buildings). Now, local law enforcement might be a different story; I know many still do it the old-fashioned (and legal) way, with stake-outs and tailing. The pros will likely use your vehicle's GSM radio (on an OnStar-equipped vehicle) and your personal cell to keep tabs on you, and won't need to physically follow you.

Figureitout September 18, 2013 7:27 PM

Mike the goat
–It’s ok, you don’t know my situation so you can think what you will. It was a really stupid investigation and I wanted to make that extremely clear. I made some slip ups but overall my methods remain secure. It was both, local law enforcement was truly a joke; extremely easy. The “pros”, lol; I taught myself my skills since I was a kid and got bullied so it was my revenge since it was generally a group of kids who would otherwise beat me up by myself. Obviously it’s not like they’re tailing me like that, no; it’s used to transport equipment.

I made sure to eat plenty of taco bell for the bugs too; I didn’t really care about them, I can do it too…

Clive Robinson September 19, 2013 8:41 AM

@ Mike the goat, Figureitout,

Being under the "eye", or suspecting so, is very morally corrosive; look up "long gun fear", which can cause an entire battalion to become ineffective. Or look back to the times of "The Washington Sniper".

The behaviour people exhibit at such times looks, tastes and feels like paranoia, but it’s actually not: it’s an inbuilt protection system in your ancient “monkey brain” to make you run up a tree when things just don’t quite add up. People who are more sensitive to “something odd” in the environment have the makings of “prey that survives to mate” or, in modern parlance, the makings of “good situational awareness”. It’s something several thousand years of “soft social living” has been breeding out of humans.

And it can all go wrong through no fault of your own and others.

A friend of mine from years back invested his Army leaving-pension money into a property or two, figuring, not incorrectly, that the financial industry was a bunch of crooks.

One property was in a part of West London close to Willesden, not a particularly “good area” but improving. It was in a block of flats, but unknown to him and his tenant, a number of the other properties had been taken over for a drug-growing farm. His “something hinky” alarm was going full tilt and he was certain he was being followed.

Now when it comes to field craft, following people is a real art that few are any good at, especially if they don’t think about a “check tail” who follows on behind a principal to see if people are acting in a way that indicates they may be following the principal. Check tails are almost always impossible to spot if they are even halfway good at their job, and those that do surveillance know it’s safer to get a “man inside” than it is to follow people around.

So my friend asked me to check his tail. I arranged for him to walk slowly past a well-known fast food outlet that had seating in the window, and sure enough he had three people following him, which was odd. So I decided to follow the last one and note where he ended up when my friend ditched the tails on a pre-arranged signal. Well, the last one, on being ditched, made a phone call and walked back to a point where he was picked up by a couple of similar types. The car registration matched one that was seen parked down the road from the flat quite often.

So a few days later I photographed all the tails to get nice clear face shots, printed them out A4 size and popped them in an envelope along with a few photos of the car, and put a pole job up to watch the car. Sure enough, about six different people used the car on “turn about”, and I identified the property they were using, which overlooked the block my friend’s flat was in. So the photos and a piece of paper with “who’s watching the watchers” on it were hand delivered to the property on the next turn about by my friend, who then went and sat on a wall on the other side of the road and just waited. It took about half an hour for the crap to hit the fan. Needless to say they were not happy bunnies.

Anyway, they had another problem: they’d been blown, but the other tails were unaccounted for… The upshot was a “drugs bust” two days later, and all the tails disappeared.

We checked for tails again a couple of times later and did full sweeps with thermal and non-linear equipment of his home and office, and did cell checks etc., and they all came up clean. But he has become more cautious and does a lot of basic OpSec field craft now, including ditching the car and mobile, paying for a call-taking service, and getting proper security in place. Oh, and he gets on well with the local police as well; they say hello and smile, he smiles back, and he is on first-name terms with some of the more experienced ones.

Is he now paranoid? Not really, just more cautious, and it’s paid off: the improved security has stopped a couple of break-ins before much damage was done, and nothing got stolen. His tenants like the extra security as it makes them feel safer, and his insurance costs are quite a bit lower as a result. Oh, and he’s a lot fitter as well; ditching the car in favour of a push bike gets him around faster, with no more parking tickets or “congestion charge”.

Mike the goat September 22, 2013 4:41 AM

Figureitout: I wasn’t attacking you personally. I am aware of how being surveilled makes one paranoid, and was merely pointing out that I have personally observed people who remain convinced they are being tailed years after the surveillance has ceased.

Clive: agreed, and that was the point I was driving at. Sometimes the unease and paranoia such intense surveillance and scrutiny generates can be worse than the initial “seed” that planted the suspicions in the first place. This is used to great effect by organized crime outfits who run a protection racket – “watch your back, you never know when we will strike if you refuse to pay” etc.

Kimani October 1, 2013 9:52 AM

This doesn’t surprise me at all. The world is turning into a virtual battlefield. No longer do nations need a large standing army to invade and topple a nation and/or government. In today’s world it is much easier and vastly more efficient to have a government destroy itself and replace its leadership. I’m not saying that’s what is going on here, but it always seems to start slow and non-threatening when agencies move into foreign countries to ply their arts to whatever benefit they deem worthy. It’s a scary world out there. You have to be careful in everything you do, because it is all vulnerable to intrusion. Great article. I love to see the news that the mainstream ALWAYS deems unworthy.
