The NSA and the Risk of Off-the-Shelf Devices

Interesting article on how the NSA is approaching risk in the era of cool consumer devices. There’s a discussion of the president’s network-disabled iPad, and the classified cell phone that flopped because it took so long to develop and was so clunky. Turns out that everyone wants to use iPhones.

Levine concluded, “Using commercial devices to process classified phone calls, using commercial tablets to talk over wifi—that’s a major game-changer for NSA to put classified information over wifi networks, but that’s what we’re going to do.” One way that would be done, he said, was by buying capability from cell carriers that have networks of cell towers, in much the same way small cell providers and companies like OnStar do.

Interestingly, Levine described an agency that is being forced to adopt a more realistic and practical attitude toward risk. “It used to be that the NSA squeezed all risk out of everything,” he said. Even lower levels of sensitivity were covered by Top Secret-level crypto. “We don’t do that now—it’s levels of risk. We say we can give you this, but can ensure only this level of risk.” Partly this came about, he suggested, because the military has an inherent understanding that nothing is without risk, and is used to seeing things in terms of tradeoffs: “With the military, everything is a risk decision. If this is the communications capability I need, I’ll have to take that risk.”

Posted on September 20, 2012 at 6:02 AM • 33 Comments


kashmarek September 20, 2012 6:23 AM

More than likely they failed their original objective and someone called them on it in light of the expected budget cuts that are going to come to pass beginning in 2013. Or, their own lust for function compels them to take the risky path. Soon, they will become victims like the rest of us, with their tweets, porn, and secrecy lapses exposed for all to see.

AC2 September 20, 2012 6:30 AM

Glad to hear that the NSA, just like any other IT Dept, has to bow to the wishes of the higher ups who want usability at the expense of security or standardisation…

‘Just do it, Levine!!’.

The upside, I guess, is that they can’t be fired when (errr.. if?) it all goes wrong?

Musashi September 20, 2012 7:12 AM

Shame these dudes don’t teach the TSA and government Fear Departments a thing or two about Risk Management…

Per-Erik Eriksson September 20, 2012 7:23 AM

Hmmm. Interesting. I guess my old colleague may really be on to something with his new product for GSM/3G/4G mobile encrypted communication via SIP and SRTP.

(Though he needs to get his act together and enable it for iPhone as well… 😉)

Per-Erik Eriksson September 20, 2012 7:33 AM

Two additional things.

I guess it should be “former” rather than “old” colleague.

Secondly, please be aware that I have promised to help him market Exsec, i.e. the product. In other words, I am not an uninterested/objective party, though I do believe in the product.

Autolykos September 20, 2012 7:50 AM

“we don’t know of any phones that have a hardware camera-off switch”
I know one, it’s called duct tape.

Steve September 20, 2012 8:52 AM

“I know one, it’s called duct tape.”

The bigger problem is something keeping the mic on, and you can’t duct tape over the mic on a phone. People don’t generally point their phone cameras randomly at interesting intelligence material. A picture of the inside of the President’s pocket isn’t particularly interesting. A slightly muffled recording of his conversations is something that foreign intelligence services would kill for.

Mark Gent September 20, 2012 9:12 AM

Security is all relative; surely if they use some form of enforced VPN on these devices to protect the transport layer, and protect it with a cipher with a hideously long key, that is just as good as a custom device? (And easier and more reliable to achieve.)

VRWC September 20, 2012 9:45 AM

“A picture of the inside of the President’s pocket isn’t particularly interesting.”

It would have been in 1998…

moo September 20, 2012 12:02 PM

@Mark Gent:
What if the bad guys manage to deliver targeted malware to the device? Now you have to worry about stealth malware secretly running on the device. You have to somehow guarantee that it can’t break out of its VPN sandbox, and also guarantee that it can’t transmit information across the boundary using a covert channel. The timing of packets sent through the VPN is an obvious one; I imagine there are tons of subtler ways to do this, some of them with more practical bitrates, and someone clever such as Clive or Bruce could probably rattle off half a dozen.

That’s a bad scenario. Your intelligence agent (or diplomat) is walking around carrying a malware-compromised cell phone that is recording his conversations, his GPS position, his text messages, etc. and is leaking that recorded info to foreign operatives in some way that you haven’t detected yet.

Hardening the software of a consumer-grade device against this, especially on something as complicated as a modern smartphone, seems like it would be very difficult. How can you ever know that you’ve plugged all of the possible holes? Though, I guess the NSA types face that type of challenge with any hardware they use, even if it was designed and manufactured entirely by their own trusted people. But it’s far worse when the components are fabbed in China and the software is written god-knows-where and can be loaded onto the device over the air…
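The packet-timing covert channel mentioned above is easy to sketch. The toy simulation below (pure arithmetic, no real networking; all constants are made up for illustration) encodes each leaked bit as a short or long gap between otherwise-innocent packets and recovers the bits with a simple threshold:

```python
# Toy simulation of a covert timing channel: a compromised device leaks
# bits by modulating the gaps between otherwise-innocent VPN packets.
# Delays are represented as numbers rather than real sleeps.

SHORT, LONG = 0.05, 0.20          # seconds between packets for a 0 and a 1
THRESHOLD = (SHORT + LONG) / 2

def encode(bits):
    """Return the inter-packet delays a sender would use for these bits."""
    return [LONG if b else SHORT for b in bits]

def decode(delays):
    """Recover bits from observed inter-packet gaps."""
    return [1 if d > THRESHOLD else 0 for d in delays]

def bytes_to_bits(data):
    """Serialize bytes to bits, least-significant bit first."""
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def bits_to_bytes(bits):
    """Inverse of bytes_to_bits."""
    return bytes(
        sum(bits[i + j] << j for j in range(8))
        for i in range(0, len(bits), 8)
    )

secret = b"key"
delays = encode(bytes_to_bits(secret))
assert bits_to_bytes(decode(delays)) == secret
```

At 0.2 seconds per bit this leaks only a few bytes per minute, which is why the bitrate point matters; a real channel would also have to contend with network jitter and add error correction.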

Nick P September 20, 2012 12:04 PM

The NSA rep is treating INFOSEC like it’s all or nothing. High assurance is typically all-or-nothing, but security isn’t. NSA already dropped the assurance of most GOTS software products they were creating. I think they max out at EAL4+ (low assurance) on CC. Additionally, the vendors that were EAL6/7 dropped to EAL5+. So, it’s all been going downhill for a while to give users the features & performance they want, but that’s not the same as the “anything goes” attitude in the article.

They could approach it a different way. They could partner with Apple to make an NSA- or govt-friendly version of the smartphone. NSA could modify the firmware, software or hardware to increase assurance. They could take out any components that were risky or modify them. Optionally, these enhancements could be shared with Apple, allowing them to choose whether to include them in the base product. The same goes for the results of NSA bughunting.

Another option is developing their own products with a new development process. It’s obvious that NSA and company’s robust processes are too slow. They should work with a design company to figure out how to tweak mainstream solutions for their use. They should identify core functionality first. Then, they release a bit at a time over several years (6 mon cycle). The releases will either add features or assure existing ones. Warfighters are used to their gear seeming four years behind. Six months will feel like Christmas. 😉

The separation kernel vendors and platforms like OKL4 seem to be the way to go for now. I promoted them heavily in the past. The main benefit is they can easily enforce separation at the software level without much performance loss & preserve legacy software. Simplest model is Multiple Single Level, with easy switching.

One thing I just remembered is that Mac is based on Darwin, which is mostly Mach & BSD. Much of the original research in the security kernel days went into Mach & Mach-based systems. On one hand, this means NSA could leverage all of that experience & software in the creation of more secure iDevices. On the other hand, NSA could leverage its resources to fund a port from the Darwin platform to one that can realize greater security. Apple wants to do away with Mach anyway, so this might be a good opportunity to do it with some additional benefits.

Figureitout September 20, 2012 12:57 PM

Even lower-levels of sensitivity were covered by Top Secret-level crypto.
–Thus reducing the likelihood of a successful S.E. attack. I’d be interested in what a normal convo is and the lingo involved…

…buying capability from cell carriers that have networks of cell towers…
–Is that like RF allocation?

Maybe just use your phone to agree on meeting place to “really talk”. Not too long ago, my sister discovered our landline phone had a “room monitor” function built in (it wasn’t very good, but could still work with normal convo volume); so add that to the list of surreptitious bugs and cell phones to be mindful of. Hand-delivered paper notes are still my favorite means of comm.

I’d much rather have the clunky NSA-engineered device; but on the flip side an iPhone doesn’t raise my suspicions as much.

I’m sure everyone’s seen this link:

Jonathan September 20, 2012 2:05 PM

In some Israeli military-related industries (like RAFAEL), employees are issued cell phones with the camera physically dismantled – I saw a Samsung Galaxy SII like that. Don’t know what they do with the mike though.

RobertT September 20, 2012 10:24 PM

@mark Gent
“….surely if they use some form of enforced VPN on these devices to protect the transport layer and protect it with some crypto with a hideous key length cipher, that is just as good as a custom device?”

Nice bit of Trolling, I even thought about penning a similar post…

Seems that we keep meeting on this same well-trodden road (securing mobile communications). I like what I’m hearing the NSA say, BUT what does it all really mean?

Your idea of a special NSA software load for commercial hardware (iPhone) is interesting but completely impractical. Believe me, I’ve sat on both sides of this fence and there is no way to make it happen. The first problem you encounter is completely undocumented hardware: the APIs as implemented are often very different from the only hardware specification you can find. So when you finally track down someone who wrote the API, you will find that the software written to be hardware-spec compliant didn’t work, but THIS did, so THIS is what we use.
You can probably hear me screaming HUH! WHAT! Forget it, this is critical… we need a meeting immediately…

Unfortunately this meeting is never going to happen, because all the key individuals are far too busy to go back and revisit a two-year-old project (1 year development + 1 year stabilization before the NSA picks up the platform). If you ever try to resolve these issues, you’ll get a junior engineer assigned, BUT only if the NSA promises to pay his/her salary. (Insert voice of engineering manager gleefully yelling “yippee, free resources.”)

In many ways achieving secure COTS is like tethering a race horse to a stately carriage. It might look good from the outside, but…

click to play September 21, 2012 12:47 AM

NSA, CIA, FBI, etc. should all use well-built, in-house, classified 3-D printers. They should form a private, interconnected database, removed from the internet of course, for sharing designs for agency phones and other devices between agencies. They could even mimic existing commercially developed designs while being an entirely different product altogether. No consultations between agencies and established tech companies at all.

When you have money for some of the brightest minds in the world, this should be a trivial decision.

click to play September 21, 2012 12:52 AM

“No consultations between agencies and established tech companies at all.”

That should have read:

No consultations between COMMERCIAL (non-governmental) agencies and established tech companies at all.

I prefer BYOB September 21, 2012 8:13 AM

Re: “buying capability from cell carriers that have networks of cell towers in much the way small cell providers and companies like Onstar do.”

I’d be curious to hear what folks on this blog think of the Wireless Private Network offerings from Verizon and others (glossy sales doc link below):

The gist is that the telco provides a “private” MPLS network on their existing tower infrastructure that terminates inside your enterprise network. As long as the device can see a cell tower, it basically has a connection back to the enterprise, with all the good and bad that entails. For instance, you can monitor and filter their web browsing (though Wifi is an obvious gap there).

The other piece of this offering is some type of “Mobile Device Management as a Service.”

Most sane people seem to think that the device would still connect to internal resources using some VPN technology, but this sort of undercuts the value of the solution: it implies doubt that MPLS labels provide sufficient security, which, to me, calls the whole solution into question.

vasiliy pupkin September 21, 2012 8:48 AM

This started many years ago in Nazi Germany and the Soviet Union. The Gestapo and NKVD could secretly and remotely activate the microphone on a landline phone while the handset was in the cradle, and monitor/listen/record what was going on within several meters around the phone. The solution was just a mechanical switch to disconnect the handset from the phone when it was on the cradle, or the same type of switch to disconnect the phone from the line, though the latter prevented receiving incoming calls. So, remote activation of a phone’s mic, secret from the user, is not the latest development; it was just recently adapted to the cell phone era using software.
There are two options to reliably disable the mic on both cell and landline phones: a mechanical(!) switch that lets the user disconnect the mic (like the old-era mute button on a landline phone), or placing a small white-noise generator near the phone.
For the camera, tape is the best option. It works for laptop and other device cameras as well.

A Faraday cage is the best option to disable the mic, camera, GPS, and remote changes to software when a battery-removal option was not provided in the cell phone’s design.

Wael September 21, 2012 11:33 AM

@ click to play

When you have money for some of the brightest minds in the world, this should be a trivial decision.

Only if the individuals who have the money are themselves the “brightest minds in the world.”

B. D. Johnson September 22, 2012 10:25 AM

There have to be easily thousands of people working for federal and state governments who need secure access on their smartphones. You’d think this would be a niche that some manufacturer would aim to take advantage of. And the customizability of Android would lend itself nicely as the basis for a secure phone OS.

I haven’t done any real tinkering with Android itself (just installing others’ custom ROMs), but when you have apps like Superuser that can force apps to explicitly request permission for certain functions, couldn’t that be expanded to force explicit permission to do things like access the camera or microphone? Or read specific data from the phone?
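The gating described here is essentially a reference monitor in front of each sensitive resource. A toy sketch (Python, purely illustrative; the app and resource names are made up, and real Android enforcement lives in the OS, not in app-level code):

```python
# Toy reference monitor: every access to a sensitive resource must pass
# an explicit, per-app grant check, superuser-prompt style.

class PermissionDenied(Exception):
    pass

class ReferenceMonitor:
    def __init__(self):
        self._grants = set()              # allowed (app, resource) pairs

    def grant(self, app, resource):
        self._grants.add((app, resource))

    def revoke(self, app, resource):
        self._grants.discard((app, resource))

    def access(self, app, resource):
        # Deny by default; only explicitly granted pairs get through.
        if (app, resource) not in self._grants:
            raise PermissionDenied(f"{app} may not use {resource}")
        return f"{app} used {resource}"

monitor = ReferenceMonitor()
try:
    monitor.access("weather-app", "microphone")
except PermissionDenied:
    pass                                  # denied by default
monitor.grant("weather-app", "microphone")
assert monitor.access("weather-app", "microphone") == "weather-app used microphone"
```

The catch is that a check like this only helps if nothing can bypass the monitor, which is exactly the rooting problem raised in the replies.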

Nick P September 22, 2012 11:04 AM

@ B. D. Johnson

The problem with Android is that it simply has too many security holes. These holes range from those used to “root” the phone (i.e. gain max privileges) to the covert channels present in most UNIX-like systems. We can certainly expand the app privileges, but there are too many bypasses. A secure Android phone would require eliminating all easy bypasses.

The alternative that has emerged is MILS & MILS-like strategies. Essentially, a small (verifiable) hypervisor or microkernel is on the bottom layer. Middleware might make communication easier or enforce fine-grained security. At the top are stand-alone apps and entire VMs. The stand-alone apps run directly on the microkernel, resulting in great performance & isolation. The VMs run legacy (e.g. Android) software. This model isn’t necessarily “secure”: it just makes it easier to reduce the attack space & manage how the information flows.

MILS Kernel Technical Primer

Leading Microkernel phone solution
(Note: They paravirtualize many phone OS’s, allowing them to run side-by-side.)

So, until we have better options, I advocate a low-TCB solution like microkernels or MILS. A hardware-up approach would be ideal. An OS designed from scratch for robust security & easy modification would be better. There’s no incentive to do either mass-market, esp. due to user’s preferences.

robertt September 22, 2012 4:49 PM

There are a couple of things worth remembering about the smartphone market:
1. Important people want the newest toys available. They definitely do not want to carry a 4-year-old phone.
2. Real-life communication is a mix of secure and non-secure content. For most people, 95% of the content is unsecure. This means that ease of use and functionality will be tailored toward unsecure content.

RobertT September 23, 2012 4:52 PM

It is interesting to ponder what the NSA wants to achieve with a secure smartphone.

It seems to me that there are 4 possible levels of secure functionality, presented in ascending order of difficulty…
1) Secure Email device (Cloud storage)
2) Secure Access device (classified databases)
3) Secure Voice communications (GSM, 3G, 4G, SIPVoip ??)
4) Secure Location

Where do blog readers believe the security fence needs to begin and end? And how much device functionality would we be willing to surrender (WiFi, BT, NFC, data storage (movies/music))?

I’m clearly skeptical, BUT I’m very interested in the discussion, because this wave of business and government usage is happening. So it is necessary, for all of us in the field of IT security, to have an understanding of the relative security of smartphones vs. laptops, and to be able to at least advise businessmen or gov’t officials about the possible attack space they are opening.

TRX September 23, 2012 5:07 PM

This isn’t a new problem. It’s the same as when politicians and bureaucrats got cellular phones. Repeated briefings couldn’t stop people from blathering classified information over the airwaves. Each new advance in connectivity brings its own new layer of apparent problems, but it’s all the same problem underneath – people who insist on talking about things they shouldn’t, and doing it over insecure channels.

Jay September 23, 2012 7:07 PM

Given ARM TrustZone extensions, what makes you think just reflashing the ROM will get rid of backdoors? If you can’t be sure you control all the code (viz. Secure Zone, or bootkits, or Blue Pill) you can’t be sure you control anything.

(You would have to worry about lower-privilege things, and even separate peripherals too, but that sort of attack wouldn’t be nearly as stealthy.)

cots_forget_it September 23, 2012 10:36 PM

You may have to worry about the hardware too. Where were those CPUs fabbed? With just a few hundred extra gates, you can add a remotely-triggerable privilege escalation vulnerability to the CPU. Then you just need to send packets to it, or attack a web page you know it visits (facebook wall etc) and you can pwn the device regardless of how “secure” its software is.

Clive Robinson September 24, 2012 6:36 AM

@ RobertT,

It is interesting to ponder what the NSA wants to achieve with a secure smartphone

It is 😉

As you are probably aware, there is in effect a hierarchy of security needs/requirements. Loosely, you have several dimensions to this problem, the simplest to understand being how long the traffic must remain secure from an adversary. At the bottom you have tactical “field ciphers” that only need to be secure for very short periods of time, ranging up through diplomatic and above, which should remain secure for upwards of a hundred years or so. Another dimension is the amount of traffic on a given network that is often under the same key, etc. etc.

Well, it was identified prior to WWI with “field telephones” that sometimes generals and diplomats will chat about very high-level secrecy items across whatever channels are available, because sometimes the information has to get from A to B in the minimum period of time.

This became worse with the advent of aeroplanes during WWII, when high-status individuals would be flown, for protection, on military aeroplanes across or close to enemy territory from meeting to meeting. Usually such aircraft were whatever was available as required, rather than being assigned to the individual, and thus usually came with tactical radio systems rather than diplomatic radio systems.

The solution the NSA eventually came up with was to make all comms systems equally secure in terms of time and the other dimensions.

This is obviously no longer practical due to cost and other factors.

One solution that has been proposed in other places is the use of smart cards as crypto units that you “plug into any phone with a slot” such that the voice etc gets routed through the smart card and back into the phone for onwards processing/communications.

Surprisingly this is generally quite workable with older-generation GSM phone technology, and such a phone can be certified at quite a high EAL level.

The fly in the ointment is modern smartphones, which make “common interfaces” to all the gizmos, such as mic, touch screen, camera, GPS, accelerometers, etc., wide open at the lowest levels (which is why the CarrierIQ “test” software could do what it wanted).

There are various ways to get around this issue, but currently nobody appears even remotely bothered.

This is partly due to “no need perceived” by the designers, and partly the one issue I bang on about from time to time: “efficiency-v-security”.

It is this latter item that is the stumbling block that is going to litter the road to security on mobile devices.

One solution that has been suggested is an insecure smart device attached to a secure phone by a mandated interface or choke point.

In some respects this will work relatively well, in that the phone knows who or what the user is trying to connect to, and can thus set the required security policy whilst the comms is in progress. The phone “CPU” has a well-known load, so it can avoid being “efficiency limited”, unlike the smart device.

Another way to look at it is that the smart device is effectively working “off line”, so its data can for many purposes be treated as “data at rest”; that is, it is “encrypted by default” before going to the choke point, which greatly simplifies some of the security issues.

Another is the “central switch” system, where the phone part provides an always-encrypted point-to-point link from the smart device back to the central switch. The switch then takes care of the security issues, where “efficiency” is not such an issue.

If I had to make the design choice, I would almost certainly take a serious look at using the P2P + central switch method, with the phone being basically “secure” and the “smart device” hung off of it, only ever seeing the central switch. Whilst it might not give the ultimate “bang for the buck”, in many ways it is how GSM phones are currently designed, as it makes compliance testing oh so much simpler.
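The “encrypted by default before the choke point” idea can be sketched as follows. This is a toy model: the keystream is SHA-256 in counter mode, which is for illustration only and not a vetted cipher, and all class and key names are hypothetical:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 in counter mode. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, plaintext):
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

unseal = seal  # XOR stream cipher: the same operation both ways

class SmartDevice:
    """Treats everything it emits as data at rest: ciphertext only."""
    def __init__(self, key):
        self.key = key
    def send(self, nonce, message):
        return seal(self.key, nonce, message)

class PhoneChokePoint:
    """Sees only opaque ciphertext; forwards it to the central switch."""
    def forward(self, blob):
        assert isinstance(blob, bytes)
        return blob

class CentralSwitch:
    """Holds the key and applies security policy off the handset."""
    def __init__(self, key):
        self.key = key
    def receive(self, nonce, blob):
        return unseal(self.key, nonce, blob)

key, nonce = b"shared-secret", b"call-0001"
device, phone, switch = SmartDevice(key), PhoneChokePoint(), CentralSwitch(key)
blob = device.send(nonce, b"classified remark")
assert blob != b"classified remark"       # the choke point never sees plaintext
assert switch.receive(nonce, phone.forward(blob)) == b"classified remark"
```

The design point is that the phone in the middle handles only opaque blobs, so its compliance story stays simple while the policy lives at the switch.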

RobertT September 24, 2012 4:44 PM

“With just a few hundred extra gates, you can add a remotely-triggerable privilege escalation vulnerability to the CPU”

I don’t think so!

It is certainly possible, on chip, to build hardware that will look for and match a long enable sequence in the ALU or some I/O peripheral; however, in most operating systems achieving this hardware match does absolutely nothing to escalate the privilege that a low-level software task is allocated.

Task privilege is a function of the operating system kernel, so with proper sandboxing and virtualization techniques the low-level task is still confined to operating within its allocated memory space. This means that even if this special hardware existed, the low-level task would not be able to utilize it. There is no difference between this and a low-level task trying to access the memory space dedicated to a different low-level task (hopefully in both cases the OS/memory control systems would prevent the access).

For a hardware exploit to be useful it needs to somehow achieve one of the following:
1) Enable an alternate unsecure boot ROM (such as enabling USB for OS loading); in essence we need to enable a rootkit load. This could be done by the exploit hardware changing the chip’s internal memory addressing: most microprocessors execute their very first instruction from memory location 0000h, so if an exploit existed that mapped 1000h to 0000h, then the cold boot would start at 1000h. The privilege escalation would be achieved by first loading a rootkit located at 1000h, before loading the OS.

2) Remap the memory locations of peripherals. For instance, the microphone I/O might also go to an unused I/O port or be memory-mapped. (Note: a properly sandboxed task would generally not have any access to any I/O, so adding this would be useless.) This hardware change only helps to copy the secret information to an unexpected location in the memory space; it does nothing to change a task’s privilege to access that new space.

3) Recreate a hardware function and map it to a peripheral that is generally accessible by low-level tasks (e.g. create a microphone, amplifier, and ADC using other unexpected hardware). I don’t wish to educate hackers on this, so I’ll say no more.

I can think of several other ways to utilize a hardware exploit, if it exists, but none of these methods involve task privilege escalation.
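Scenario 1 above, remapping the reset vector so a rootkit runs before the OS, can be illustrated with a toy fetch loop (everything here is hypothetical; the addresses and labels just mirror the 0000h/1000h example):

```python
# Toy model of scenario 1: malicious address-decode logic maps fetches
# from 0x0000 onto 0x1000, so the machine boots a planted rootkit
# before it ever reaches the legitimate boot code.

MEMORY = {0x0000: "legit_boot", 0x1000: "rootkit_loader"}

def fetch(addr, backdoor_armed):
    # The "few hundred extra gates": a trigger on the address-decode path.
    if backdoor_armed and addr == 0x0000:
        addr = 0x1000
    return MEMORY[addr]

# Normal cold boot starts at the reset vector...
assert fetch(0x0000, backdoor_armed=False) == "legit_boot"
# ...but with the backdoor armed, the same fetch returns the rootkit.
assert fetch(0x0000, backdoor_armed=True) == "rootkit_loader"
```

Note that, as argued above, this gives the attacker code running before the OS; it is a boot-path subversion rather than a runtime privilege escalation of a sandboxed task.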

RobertT September 24, 2012 5:07 PM

@Clive Robinson
I agree the problem here is the smart phone rather than the desired function.

With a simple GSM-phone-only design, the microphone data went straight to the ADC and LPC encoder, which attached directly to the TX modulation hardware. The microphone data was not needed by any other blocks, so it was not generally available outside of the voice processing chip (VBAP chip).

These days on smartphones there is a desire to explore functionality that MIGHT be useful, so let’s let any task access the microphone data stream; maybe they can achieve voice commands, or possibly better noise correction, whatever. Clearly this creates a HUGE attack space, which is completely unnecessary for the task the secure radio wants to achieve.

I’m of the opinion that secure voice functions need to be removed from the smart phone and put into dedicated peripherals such as encrypted BT headsets. Unfortunately this still does nothing to prevent the microphone of the smartphone being used in parallel, just to record the conversations….

This all takes us back to the beginning question of what the NSA hopes to achieve by utilizing smartphones for secure traffic. The cynic in me thinks it might be a new version of the British gov’t supplying captured Enigma machines to the colonies.

Nick P September 24, 2012 5:37 PM

@ RobertT

“For a hardware exploit to be useful it needs to somehow achieve one of the following:”

I can imagine a much simpler one: give the calling code Ring -1 or 0 privilege. It can then read/write arbitrary memory, manipulate privileged CPU registers and perform other sensitive actions. Any of these might be used toward an effort to rootkit a machine.


I also encourage you, if you haven’t already, to check out the IEEE Hardware-oriented Security and Trust conference papers for the past few years. They’ve been submitting many papers on malicious hardware attacks & counters with some interesting abstracts. If only I knew anything about that stuff… lol. I figured you might be interested in the developments.

Here’s one that I understood at a high level.

Silencing Hardware Backdoors

RobertT September 24, 2012 10:38 PM

@Nick P
“I can imagine a much simpler one: give the calling code Ring -1 or 0 privilege. It can then read/write arbitrary memory, manipulate privileged CPU registers and perform other sensitive actions”

I agree: if the tasker is interrupt-driven, then privilege can be escalated by hardware manipulating/triggering the interrupts. This can result in task/interrupt stack overflows, etc. Similarly, CPU registers that are dedicated to controlling things like the memory management unit can be manipulated, thereby enabling tasks to operate outside their sandboxes.

Regarding the paper: frankly, I find most of these academic papers uninspired. It is the old security problem that Bruce has often mentioned about “thinking hinky”: you can have the world’s best lock on your door, but the thief will just break the window. It is the same thing with these attempts to scramble data; this only has a hope of working if you assume the hardware attacker has no knowledge of the data reordering. I ask myself who would go to the trouble of embedding malicious hardware without first figuring out whether it would work. Clearly the individual has access to the design database and knows how to use hardware/software simulators, so he also likely has a simulation regression test suite that proves the exploit works.

About all that is potentially fixed is the problem of malicious vendor-supplied blocks, but even that is a stretch, given the vendor embedded-engineering approach that has been the norm for at least 10 years.

I do try to follow the IEEE hardware stuff, but to be honest I’d rely on other sources for cutting edge hardware security information.

