SOUFFLETROUGH: NSA Exploit of the Day

One of the top secret NSA documents published by Der Spiegel is a 50-page catalog of “implants” from the NSA’s Tailored Access Operations (TAO) group. Because the individual implants are so varied and we saw so many at once, most of them were never discussed in the security community. (Also, the pages were images, which makes them harder to index and search.) To rectify this, I am publishing an exploit a day on my blog.

Today’s implant:

SOUFFLETROUGH

(TS//SI//REL) SOUFFLETROUGH is a BIOS persistence implant for Juniper SSG 500 and SSG 300 firewalls. It persists DNT’s BANANAGLEE software implant. SOUFFLETROUGH also has an advanced persistent back-door capability.

(TS//SI//REL) SOUFFLETROUGH is a BIOS persistence implant for Juniper SSG 500 and SSG 300 series firewalls (320M, 350M, 520, 550, 520M, 550M). It persists DNT’s BANANAGLEE software implant and modifies the Juniper firewall’s operating system (ScreenOS) at boot time. If BANANAGLEE support is not available for the booting operating system, it can install a Persistent Backdoor (PBD) designed to work with BANANAGLEE’s communications structure, so that full access can be reacquired at a later time. It takes advantage of Intel’s System Management Mode for enhanced reliability and covertness. The PBD is also able to beacon home, and is fully configurable.

(TS//SI//REL) A typical SOUFFLETROUGH deployment on a target firewall with an exfiltration path to the Remote Operations Center (ROC) is shown above. SOUFFLETROUGH is remotely upgradeable and is also remotely installable provided BANANAGLEE is already on the firewall of interest.

Status: (C//REL) Released. Has been deployed. There are no availability restrictions preventing ongoing deployments.

Unit Cost: $0

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on January 13, 2014 at 2:45 PM • 43 Comments

Comments

Matt Leidholm January 13, 2014 2:51 PM

Be honest: you’re doing these according to codename in reverse awesomeness order, right?

Because these things just keep getting better.

(I wish I knew enough about Juniper SSG 500 and SSG 300 firewalls to leave a thoughtful comment; I do not.)

DB January 13, 2014 4:05 PM

We need open source firewalls, running open source operating systems, with fully 100% open source drivers/bios/firmware/etc to be more safe from such abuse. Not the kind where you have an open source wrapper around proprietary closed-source binary blobs, but actual proper through-and-through open source. Security by obscurity is not security at all. The solution to eroding trust in a secretive pwned infrastructure is more openness.

Nix January 13, 2014 4:08 PM

Does anyone still think System Management Mode is a good idea? Because code should be allowed to run on the processor without the knowledge or awareness of the operating system! What could possibly go wrong? (Answer: a lot. If I could throw all SMM off my systems, I would — but I can’t because some of the SMM traps are used to fix outright hardware bugs, and they’re all undetectable without a logic analyzer in any case. Very useful for malware…)

DB: you need more than open source firmware for SMM to not be a threat. You need an open source chipset, and proof that you’re actually running the thing you have the source to. SMM is deeply, deeply evil.
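
For the curious, here’s a crude check of whether SMRAM is at least locked on an Intel box (a minimal sketch, not a detection tool: the SMRAMC offset 0x9D and its bit layout are taken from 915/945-era chipset datasheets and vary by part, so treat them as placeholder assumptions and consult your own datasheet):

```c
/* Sketch: report the SMRAM lock state of an Intel host bridge.
 * ASSUMPTIONS: SMRAMC lives at offset 0x9D in the PCI config space of
 * device 0000:00:00.0, as on some 915/945-era chipsets; offset and bit
 * layout are chipset-specific. Needs root. Build: cc -o smramchk smramchk.c
 */
#include <stdio.h>

#define SMRAMC_OFFSET 0x9D   /* hypothetical: check your chipset datasheet */
#define D_LCK  (1 << 4)      /* SMRAM locked against further writes */
#define D_OPEN (1 << 6)      /* SMRAM visible outside SMM (bad!) */

int main(void)
{
    FILE *f = fopen("/sys/bus/pci/devices/0000:00:00.0/config", "rb");
    if (!f) { perror("open host bridge config"); return 1; }

    unsigned char smramc;
    if (fseek(f, SMRAMC_OFFSET, SEEK_SET) != 0 || fread(&smramc, 1, 1, f) != 1) {
        perror("read SMRAMC"); fclose(f); return 1;
    }
    fclose(f);

    printf("SMRAMC = 0x%02x: D_LCK=%d D_OPEN=%d\n",
           smramc, !!(smramc & D_LCK), !!(smramc & D_OPEN));
    if (!(smramc & D_LCK))
        puts("SMRAM not locked: the SMM handler could be replaced at runtime.");
    return 0;
}
```

Even with D_LCK set you only learn that SMRAM can’t be rewritten from the running OS, not that what was put in there at boot is clean.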

Parker January 13, 2014 4:39 PM

Screw open-source. I’m sick of hearing how open-source everything and command line in Linux crap is going to fix everything. Wake up!

Iain Moffat January 13, 2014 5:18 PM

@Nix and DB – I think it goes further than having a trusted chipset and trusted code: you have to ensure there is nowhere in the system that persistent malware can hide through a power cycle, and the boot process has to start from a provable and immutable root of trust, e.g. a really read-only ROM (I can see fuse-link PROM and masked ROM coming back into fashion!) with code that can checksum each more complex layer of software on the way to the final operational code before it is allowed to run. New designs for security devices probably ought to forego “smart” peripherals with any embedded storage, and accept the inconvenience of human involvement in upgrades to leave malware with no easy hiding place.
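
In outline, that layered boot chain looks something like the sketch below (illustrative only: the rotate-and-xor checksum is a stand-in for a real hash such as SHA-256, and EXPECTED_SUM stands for a value fixed at ROM mask time):

```c
/* Sketch of "checksum each more complex layer before it runs".
 * Assumes EXPECTED_SUM is burned into the same immutable ROM as this
 * code, and next_stage/len describe the flash image to be verified.
 */
#include <stdint.h>
#include <stddef.h>

#define EXPECTED_SUM 0xDEADBEEFu      /* hypothetical, fixed at mask time */

static uint32_t checksum(const uint8_t *p, size_t len)
{
    uint32_t sum = 0;
    while (len--)
        sum = (sum << 1 | sum >> 31) ^ *p++;   /* toy rotate-and-xor */
    return sum;
}

/* Called from the immutable boot ROM before transferring control. */
void boot_next_stage(const uint8_t *next_stage, size_t len)
{
    if (checksum(next_stage, len) != EXPECTED_SUM)
        for (;;) ;                             /* halt: refuse unknown code */
    ((void (*)(void))(uintptr_t)next_stage)(); /* verified: jump to next layer */
}
```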

Usually modern network kit has a permanent ROM boot loader which executes an operating system from flash memory which then loads a configuration file and/or rule set to tailor it to the requirements of the site where it is installed.

Looking at the recent posts in this series there seems to be a family of firewall implants comprising different loaders for different vendors and hardware (Souffletrough, Jetplow, Halluxwater and Feedtrough) with a common higher layer (Bananaglee). The higher layer seems to work across different unix-like operating systems (I believe PIX post version 6 and ASA are Linux based and Juniper generally use OpenBSD) so the common higher layer of the implant probably depends more on a common CPU type and the ability to operate concealed by SMM, while the loaders are tailored to the boot process and available NVRAM/EEPROM hiding places in each platform.

Looking at the PIX/ASA versions supported, compatibility starts more or less at the transition from 486DX/Pentium Pro to Celeron or Pentium III processors. It would be interesting to know if the Pix 501 (the sole AMD PIX) was supported by Jetplow/Bananaglee as an aside.

This view of platform specific loaders for a common payload tends to be supported by use of almost the same graphic on the page with the slide linked from each article. The process in each case described here seems to be that something hiding at or below BIOS level patches the operating system to add a hidden process (Bananaglee) which is the working part of the implant. As such it is a first cousin of PC resident advanced persistent threats.

Hope this helps

Iain

Coyne Tibbets January 13, 2014 5:50 PM

BANANAGLEE-related exploits (so far): SOUFFLETROUGH, FEEDTROUGH, JETPLOW, ZESTYLEAK, GOURMETTROUGH; more coming up I’m sure.

But BANANAGLEE seems to be a central/core exploit: several of the other offerings describe installing it, and it in turn provides an anchor for reinstalling those offerings.

According to several of the entries, BANANAGLEE itself is persisted into the BIOS.

Given that, and the structure of the modern boot processes used by many platforms, it occurs to me that BANANAGLEE must be a specific exploit of UEFI: once the child O/S is subverted with one of the other exploits, BANANAGLEE is pushed into the UEFI BIOS boot stack, where it provides a hook for remote upgrades and for re-installation of exploits removed by such activities as O/S upgrades.

To accomplish this would require that BANANAGLEE be signed with a signature known to UEFI. This would have to be done by a UEFI core manufacturer, such as Intel, or perhaps even leverage support hardcoded within UEFI itself.

This is stronger support for the idea that UEFI is compromised by the NSA; that every modern UEFI machine is NSA-ready out of the box.

I’ll be interested to see how many exploits from the catalog depend on BANANAGLEE.

galileo galilei January 13, 2014 6:22 PM

Note that the most recent architecture change in motherboards involved putting the BIOS on a dedicated chip.
Manufacturers will not spend a penny extra for no reason, and an extra chip makes no difference to the BIOS’s functionality. So what could the reason be?

This was a hint that BIOS is used by NSA for implants, and manufacturers must be complicit.

I say, keep an eye on BIOS and its manufacturers.

I predict China will offer NSA-free architecture, which will cost American businesses severe financial losses.

Dorn Hetzel January 13, 2014 6:55 PM

Maybe we need to go back to the days of rows of toggle switches for bootstrap code… CDC Cyber style, anyone? 🙂

Rob Schneider January 13, 2014 7:33 PM

$10 says all this stuff is detectable with this:
http://www.mitre.org/capabilities/cybersecurity/overview/cybersecurity-blog/copernicus-question-your-assumptions-about
You can’t detect the SMM subversion due to the way the architecture works, but I doubt they’re able to hide their flash chip modifications.
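
For anyone who wants to try: dump the flash with something like `flashrom -p internal -r dump.bin`, obtain a known-good image of the same BIOS version, and diff them. A minimal byte-compare sketch, assuming you already have both dumps (and noting that a subverted OS could in principle lie to flashrom too):

```c
/* Sketch: byte-compare a fresh BIOS dump against a known-good image.
 * Usage: ./flashdiff known_good.bin dump.bin
 */
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s good.bin dump.bin\n", argv[0]); return 2; }
    FILE *a = fopen(argv[1], "rb"), *b = fopen(argv[2], "rb");
    if (!a || !b) { perror("open"); return 2; }

    long off = 0;
    int ca, cb;
    while ((ca = getc(a)) != EOF && (cb = getc(b)) != EOF) {
        if (ca != cb) { printf("first mismatch at offset 0x%lx\n", off); return 1; }
        off++;
    }
    if (ca != EOF || getc(b) != EOF) { puts("images differ in length"); return 1; }
    printf("images identical (%ld bytes)\n", off);
    return 0;
}
```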

@Coyne “To accomplish this would require that BANANAGLEE be signed with a signature known to UEFI.” No it wouldn’t. Obviously once you’re in BIOS you can just turn off signature checks.

Tony H. January 13, 2014 8:48 PM

I’m not quite getting this BANANAGLEE. These Exploit-of-the-day things are from TAO, and they all ensure that BANANAGLEE stays in place under adverse circumstances, or at least that there’s a backdoor waiting for when BANANAGLEE gets updated for the OS in question. But BANANAGLEE is more routine, i.e. presumably doesn’t require any TAO magic to get it in place. So of course the question is how does BANANAGLEE normally get installed? Or is it purely an administrative thing; TAO does all the installers, and DNT does the overall architecture, exfiltration network, etc?

I’m still astonished all these haven’t been found in the wild. What about all those firewalls and routers on eBay? I bought one that had a bank’s config info on it a few years ago; maybe I should go back and look for BANANAGLEE on the drive I replaced.

annoyed January 13, 2014 10:27 PM

I would assume it is installed using opcodes and/or assembly either to the processor(s) or micro-controller(s).

If they have access to the metal (remotely or directly doesn’t matter) there’s nothing that can stop them.

It’s a big tiny world to hide in.

Computer security has become Schrödinger’s cat: if you didn’t push that opcode yourself and if you don’t have something that continually checks it and everything else both at rest and work, and then checks its results then you have no way of knowing whether you are secure or not.

I love open/free/libre source but the minimum requirement for the future is open/free/libre hardware. Everything else is like building on quicksand.

There is one possible “shortcut” and that is to utterly compromise every part of the hardware in every way, replacing everything that can be replaced with your own custom firmware. An astronomical task for even low-end computers, including whatever accessories one hooks up.

I hope Bunnie fully subverts microSD cards soon and posts a detailed howto; something like that is about the only thing left to trust (if you continually “re-breach” it on startup).

It will be slow and cumbersome but opens the door for the next step which is to likewise fully “corrupt” USB-sticks (same innards) and powered USB-hubs (then one can aim for the same with USB keyboards and simplistic USB screens for a general purpose rudimentary computer).

Close the door, do not have any other electronics in the room, break out your mylar survival blankets and Snowdenize yourself and just maybe they won’t see you playing solitaire on that piddly thing 😛

Huawei January 13, 2014 11:11 PM

So, what of talk to the effect that the House Intelligence Committee’s blast at Huawei a little over a year ago was as much about facilitating NSA spying as about deflecting Chinese spying?

Iain Moffat January 14, 2014 3:26 AM

@Dorn Hetzel: The next step from toggle switches in the 1960s/early 1970s was to store the boot code either in a diode matrix ROM or a core ROM, where a bit was signified by a diode or ferrite ring present or absent at a particular point in the grid. Either is non-rewritable and can be visually inspected, but avoids the error-prone, time-consuming exercise of loading via switches. Building a diode matrix ROM for a few tens of bytes is quite straightforward (diodes are less than 2p/3 cents each and address decoding needs a few TTL devices) and could provide at least a BIOS checksum routine and stored expected answer to create a root of trust for a computer otherwise built with commercial parts. I think the bigger challenge is to find a complete kit of parts or commercial single board computer made this century that is free of NVRAM or EEPROM where malicious code can persist across a power cycle. This means not just the CPU and memory but also all of the peripheral devices (NICs, disk controllers, disk drives, ….). I think the only other approach is for the trusted ROM code to explicitly zero all known NVRAM and EEPROM before proceeding to load the OS. This fails if some NVRAM or EEPROM embedded in the CPU or peripheral controllers is undocumented.

qwertyuiop January 14, 2014 5:30 AM

@Parker – I’m with you on that. Open source is touted as the solution to all of our problems “because you can verify the code yourself”. In theory that’s fine but in practice I’d say totally impossible because how many of us could actually verify the code? I know that I don’t have the skills myself and I know very few people who do.

@DB says “We need open source firewalls, running open source operating systems, with fully 100% open source drivers/bios/firmware/etc to be more safe from such abuse.” How many people have the level of understanding to actually verify the code in software as diverse as a firewall, an OS, drivers for dozens of different devices, a bios, etc? Every single one of those requires subject specific knowledge and I suspect the number of people whose knowledge embraces all of those individual expert areas could be counted on the fingers of one hand.

Let me quickly add that I’m not anti open source. I love open source and use it a lot myself, but please let’s stop kidding ourselves that any more than a very few people actually download the source, check the code, and then compile it (with a compiler that they’ve already checked). Most of us don’t even compile it ourselves, let alone check the code first; we go to a site which we believe to be trustworthy and download it from there.

And that’s only the software! If you want to be truly sure that you’re as safe as it is possible to be then you are going to have to design and fabricate the hardware itself, starting with the chips – how can you trust anybody else?

The reality is that at some point most of us are going to have to trust somebody else to be straight and honest, and we’re also going to have to accept that not everybody will be so we need to take extra measures to protect ourselves.

Bob S. January 14, 2014 6:42 AM

It seems to me BIOS and firewalls are a weak link.

Popular commercial software firewalls must be a laughably weak point then.

Bob S. January 14, 2014 6:49 AM

Re: ” trusted ROM code to explicitly zero all known NVRAM and EEPROM before…” ~Iain

So then, a ROM firewall, ROM Bios or ROM Vacuum Cleaner might be effective.

I don’t know, but I am wondering: is it really that easy to manipulate BIOS and firewall firmware?

RonK January 14, 2014 7:21 AM

People are posting here that we will have to start fabricating our own hardware down to the VLSI design level. That seems to me to be an exaggeration. I rather doubt that the NSA has somehow installed hardware backdoors on FPGA chips which could be effective against custom programming, making them a useful high-level building block for secure hardware. The main roadblocks I see are (and please correct me if I am wrong, I’m not a real expert):

  • There is a marked lack of open-source programming tools for higher-end FPGAs
  • At the high clock rates we would want, PCB design requires special skills
  • Small-batch manufacturing of the PCBs will be expensive, as will the high-end FPGAs powerful enough to be turned into reasonable CPUs

65535 January 14, 2014 7:30 AM

@Iain Moffat

“…there seems to be a family of firewall implants comprising different loaders for different vendors and hardware (Souffletrough, Jetplow, Halluxwater and Feedtrough) with a common higher layer (Bananaglee). The higher layer seems to work across different unix-like operating systems (I believe PIX post version 6 and ASA are Linux based and Juniper generally use OpenBSD) so the common higher layer of the implant probably depends more on a common CPU type and the ability to operate concealed by SMM, while the loaders are tailored to the boot process and available NVRAM/EEPROM hiding places in each platform.”

“Looking at the PIX/ASA versions supported compatibility starts more or less at the transition from 486DX/Pentium Pro to Celeron or Pentium III processors. It would be interesting to know if the Pix 501 (the sole AMD PIX) was supported by Jetplow/Bananaglee as an aside.”

“This view of platform specific loaders for a common payload tends to be supported by use of almost the same graphic on the page with the slide linked from each article. The process in each case described here seems to be that something hiding at or below BIOS level patches the operating system to add a hidden process (Bananaglee) which is the working part of the implant. As such it is a first cousin of PC resident advanced persistent threats… “

“…the only other approach is for the trusted ROM code to explicitly zero all known NVRAM and EEPROM before proceeding to load the OS. This fails if some NVRAM or EEPROM embedded in the CPU or peripheral controllers is undocumented.”

That is interesting. I think you are on to something. I had thought SMM started with the PIII series (1998), but it actually goes back further:

“System Management Mode (SMM) is an operating mode in which all normal execution (including the operating system) is suspended, and special separate software (usually firmware or a hardware-assisted debugger) is executed in high-privilege mode. It was first released with the Intel 386SL. While initially special SL versions were required for SMM, Intel incorporated SMM in its mainline 486 and Pentium processors in 1993. AMD copied Intel’s SMM with the Enhanced Am486 processors in 1994. It is available in all later microprocessors in the x86 architecture.”

https://en.wikipedia.org/wiki/System_Management_Mode

iAMT is usually used for consumer products and seems to start in the 945 chipset. That would be the high-end P4’s and Core 2 processors (socket 775).

https://en.wikipedia.org/wiki/Intel_AMT_versions

I see the earliest widespread BIOS virus was in 1998.

“CIH, also known as Chernobyl or Spacefiller [bios virus], is a Microsoft Windows 9x computer virus which first emerged in 1998. It is one of the most damaging viruses, overwriting critical information on infected system drives, and more importantly, in most cases overwriting the system BIOS. The virus was created by Chen Ing-hau who was a student at Tatung University in Taiwan. 60 million computers were believed to be infected by the virus internationally, resulting in an estimated $1 billion US dollars in commercial damages… Chen claimed to have written the virus as a challenge against bold claims of antiviral efficiency by antivirus software developers. Chen stated that after the virus was spread across Tatung University by classmates, he apologized to the school and made an antivirus program available for public download… In September 1998, Yamaha shipped a firmware update to their CD-R400 drives that was infected with the virus…CIH spreads under the Portable Executable file format under Windows 95, 98, and ME. CIH does not spread under Windows NT-based operating systems, such as Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8, and Windows 8.1.”

“…CIH infects Portable Executable files by splitting the bulk of its code into small slivers inserted into the inter-section gaps commonly seen in PE files, and writing a small re-assembly routine and table of its own code segments’ locations into unused space in the tail of the PE header. This earned CIH another name, “Spacefiller”. The size of the virus is around 1 kilobyte, but due to its novel multiple-cavity infection method, infected files do not grow at all. It uses methods of jumping from processor ring 3 to 0 to hook system calls…”

https://en.wikipedia.org/wiki/CIH_(computer_virus)

If Chen wrote an anti-virus program why can’t the major Anti-virus makers do the same?

@Coyne Tibbets

“This is stronger support for the idea that UEFI is compromised by the NSA; that every modern UEFI machine is NSA-ready out of the box.”

That’s a very troubling thought.

@Tony H

“the question is how does BANANAGLEE normally get installed? Or is it purely an administrative thing; TAO does all the installers, and DNT does the overall architecture, exfiltration network, etc? I’m still astonished all these haven’t been found in the wild.”

Good question. I would assume a BIOS flash would require a cold boot (possibly a warm boot). And a re-boot of a firewall should show up in the logs and start alarm bells ringing. I wonder if this is an inside job (or possibly an NSA/CIA “bag job”).

@galileo galilei

“Note that the most recent architecture change in motherboards involved putting the BIOS on a dedicated chip. Manufacturers will not spend a penny extra for no reason, and an extra chip makes no difference to the BIOS’s functionality. So what could the reason be? This was a hint that BIOS is used by NSA for implants, and manufacturers must be complicit.”

That is a possibility.

@Nix

“Does anyone still think System Management Mode is a good idea?”

No. Not any more. The system was based on trust. That trust is gone.

Clive Robinson January 14, 2014 8:20 AM

@ Dorn, Iain, Bob,

The problem with NVRAM, be it flash or other “electrically alterable” ROM, is that it’s in many places.

There are two reasons for this. Firstly, it allows fast turnaround of designs with field upgrades. This is historically not a hardware engineering solution but a software code cutter and marketing manager solution, originally pushed by the likes of Micro$haft.

The second reason is down to “dark silicon”: basically, transistor packing density on silicon is rising faster than individual transistor efficiency. It is no longer possible to pack continuously active transistors on silicon without causing it to glow quite nicely. However, RAM/ROM is static most of the time; only the small fraction you read or write to is active. Therefore packing an SoC with RAM/ROM is in effect a freebie, in that it can be considered “dark silicon”.

Thus you find SoCs on peripherals like HDs that have three ARM CPU cores (often of different generations) with a large, effectively common block of memory into which programs can be placed. The number of different types of these SoCs in use is considerably smaller than the number of Intel CPUs (some reckon it’s only a couple).

The code on these SoCs tends to be generic kludges of organically grown code that few code cutters would dare mess with, thus it makes a very easy target for those who have the skills to “Boldly Go…” etc.

Whilst this makes it easier to find and copy the kludgy code, it does not solve the “pre-installed malware” problem.

The solution to this is actually to take a leaf out of the NSA’s book, which is “inline media encryptors” or similar.

Basically what you need to do is put your own micro between the motherboard and the peripheral; this micro needs two appropriate interfaces. Whilst most micros can shift data from I to O and back again, it’s the processing that takes the hit and creates a bottleneck. However, I’ve built one or two that do 10Mbit Ethernet and 100M (but only 25M sustained) with limited processing/encryption.

Iain Moffat January 14, 2014 8:24 AM

Regarding countermeasures, the firewall vendors are in a slightly better place than the average PC user because they know exactly what their hardware build is so should be able “in theory” to go down the route of making a “good bios” that systematically clears all writeable storage in the machine as soon as it runs, for each model/board variation. This assumes that the BIOS flash process is not subverted by the implant to relocate new code and insert itself in the chain of execution.

I suspect that additional integrity checks and zeroing or checking of all EEPROM and NVRAM is all that can be retrofitted to most existing products. There will (or should) now be market pressure for at least security devices to boot into something like that from really read-only ROM before executing flash BIOS (the modern version of a 1980s power-on self-test (POST) ROM, if you like). The bigger problem is how to validate flash BIOS and legitimate NVRAM storage (like a router configuration) without adding a separate secure store for keys or checksums, accessible only to the POST, which needs to be designed into hardware. The POST almost needs to be a separate mask-programmed CPU which is able to become bus master at power up to achieve that.

Having writable persistent storage in a separate chip or card is not necessarily bad – especially if it is socketed – as the removal beyond reasonable doubt of a “bad BIOS” only requires a spare good chip and an extraction tool! The converse is obviously true unless the computer is physically secure, of course…

CallMeLateForSupper January 14, 2014 10:15 AM

@ Dorn Hetzel

Thanks for the memory (Basic Binary Loader). 🙂 I cut my assembler teeth on an HP 2116 and its bullet-proof(!) toggle switches. Small words on that machine, so we worked in octal, not hex. My thumb developed a callus from clearing all switches, a la pianist, countless times. Forget to set the upper memory write-protect and an errant program could trash the BBL => lots of practice toggling in the BBL again.

Load Address
Run
Ahhh… those were the days.

By the way, our mainframe at that time was a CDC Cyber 71, which we accessed with DECwriters. I never laid eyes on the Cyber though (“glass house” mentality in those days).

Nick P January 14, 2014 10:36 AM

@ RonK

Yes, those are problems that need to be dealt with. Especially open tools for generating FPGA logic from low- or high-level specs. There’s some work in this area.

@ 65535

““Does anyone still think System Management Mode is a good idea?”

No. Not any more. The system was based on trust. That trust is gone.”

Intel SMM was included in a 386 variant for power management. That was way before they were worried about trust or security. Heck, they had just gotten proper memory management. The evolution of it with more and more features is easily explained by business drivers. Yet it was identified as a potential attack channel back in the 90’s due to its privileged nature. Just another weakness Intel couldn’t get rid of; backward compatibility is the most likely explanation. Same reason why so many 32-bit GB+ boards booted in real mode with 20-bit addressing, limited to 1MB of memory. 😉

I’ll also add that Intel’s radical i432 architecture solves tons of security problems at OS or app layers. Yet the market turned it down in favor of insecure, fast, backward compatible chips. They tried another route of removing baggage with the introduction of the Itanium architecture. It has take-up in mission-critical computing but its future is uncertain. So, that’s one massive loss and one probable loss that Intel has had because they bet on quality/security. One could say they’ve been forced in a certain direction by the market, and the market as a whole is getting what it deserved.

DB January 14, 2014 10:55 AM

@qwertyuiop:

Open Source is not “the solution to all our problems” because “you can verify it yourself”…. indeed most people don’t have the expertise to do any verification. You are correct about that.

It is the solution because the principle of openness restores trust when trust has been violated and destroyed in secret. Continuing secrecy, and more and more remonstrations that “everything’s ok, go back to sleep,” will just perpetuate mistrust at this point. But being more open about things will restore trust.

@Nix, @Parker, @Iain Moffat:

This principle of openness could apply to both our software and hardware. Software is not the only thing that can be “open source” but hardware can also be “open source.” Currently the only “open source” hardware I’m aware of is small embedded hobbyist devices like http://arduino.cc/, but I think the principle could be applied to bigger things… like, yes, chipsets, CPUs, whole motherboards, etc…

Nick P January 14, 2014 1:19 PM

@ DB

See opencores.org for other applications. As for Arduino, as far as I can tell it’s just the board design that’s open: the CPU and other key chips are closed commercial designs. The sophisticated NSA attack vectors we’ve seen include vulnerable BIOS storage areas, CPU weaknesses (eg SMM), and firmware on peripheral device chips. So, an open board design can still be targeted quite well if there are weaknesses in its components.

The minimal requirements some of us worked out are:

  1. A trusted main chip
  2. A trusted, non-writable BIOS
  3. Trusted boot process in control of owner
  4. IOMMU or similar restrictions for untrusted devices
  5. Verifiable tools for compiling, installing, etc software onto the device (esp privileged software).

  6. (optional, but I prefer) Instruction set extensions for provably safer/secure software

There’s currently no open device that meets these requirements. Hence, all open devices are potentially vulnerable. The good news is that the research papers I posted a while back and open core designs have already produced components that can meet these minimal requirements. What’s left are knowledgeable engineers getting paid to put it together, get it peer reviewed, and get it built.

sobrado January 14, 2014 1:35 PM

@qwertyuiop, @DB: “Open Source is not “the solution to all our problems” because “you can verify it yourself”…. indeed most people don’t have the expertise to do any verification. You are correct about that.

I would say you do not understand the open source model. No one asks each individual user to read and verify the source code himself. It is enough if the code has been audited by a few clever members of independent development teams in the community. As the code is open to public review, anything wrong in it, either bugs or an intentional backdoor, will be spotted.

DB January 14, 2014 3:06 PM

Why the fixation on non-writable BIOSes? You want to order a new chip every time a bug is found in the code? The solution is not preventing change, the solution is being more open about what’s in there (so that peer review can happen, and things have the capability of getting fixed, instead of festering)…

DB January 14, 2014 3:17 PM

@ sobrado:

Just being “open source” does not guarantee that there is a single member of any individual project community that is “clever” enough to audit it properly… it only guarantees that it’s POSSIBLE for there to be. There are plenty of open source projects that are full of security holes, because their communities are not knowledgeable enough to fix them or have good architectures etc. So open source is not a panacea that is automatically trustworthy, it just makes trust possible because of its openness.

But closed source can’t be trusted, because it is NOT POSSIBLE for someone with knowledge to verify it. Whereas open source can.

Marcos El Malo January 14, 2014 4:31 PM

@DB

So, as I understand it, open source isn’t a panacea, but is better than nothing as long as it’s not instilling false confidence. It’s still placing trust in individuals, most likely strangers, but those strangers are at least publicly accountable (only important if their public reputation is important to them). A trusted developer could be suborned or a deep cover mole could work his or her way into a position of trust.

The major difference, all other things being equal, is that open source is slightly less likely to be compromised than proprietary software.

Regarding open source hardware, how do you even begin to check? Each individual batch of a CPU? Each individual CPU? Who does the testing?

As someone else suggested (if I am understanding correctly), it looks like we’re back to soldering diodes onto breadboards.

DB January 14, 2014 6:38 PM

@Marcos: open source hardware isn’t a panacea either. It’s just a glorious giant leap in the right direction, an application of the principle of openness rather than secrecy. The more openness there is in both software and hardware, the harder it becomes to nefariously hide stuff in there.

A true open source hardware system would be safer because there would be TONS of copycat places fabricating open source hardware designs, all of which can be verified both visually and behaviorally that they are as intended. Any minor deviation and that place is liable to go out of business, let alone legal problems. The building of it collaboratively would also eventually tend to emphasize robustness, modularity, optimization, simplicity, ease of building, ease of verification, and many other good attributes that are almost completely nonexistent in today’s large secretive market-driven monoculture, where the only thing that matters is making a buck off of it.

RobertT January 14, 2014 8:16 PM

@Iain Moffat

Thanks for sharing your thoughts; unfortunately I’d say it is near on impossible for any normal user to know what functionality is included in any chip / card within a typical PC/laptop or router system.

In part this happens because exact functionality is often hidden from even the device manufacturers. Chip making is a batch process, so we inherently want to create a single device that can be customized by code for a variety of applications. In today’s world that means that chips contain dedicated hardware for specific functions (ADCs, PLLs, DACs, RF mixers, clock drivers, etc.). Additional to this they contain RAM, ROM, flash and several CPU cores (8085s are very common, as are ARMs, XCores, MIPS, PICs and AVRs). Many times the systems house is clueless about the existence of these support cores; all they typically see is the interface to the core that the chip maker makes available for the manufacturer’s customization.

Flash is often included as a stacked die (the flash chip goes on top of the SoC chip AND is often internally bonded, as in you don’t even see the pins externally).

The chip guys do not tell the manufacturers what is inside the chip because they want to sell functionality rather than simply selling silicon by the yard (mm sq). Many times exactly the same chip is utilized for several products at the same manufacturer without the manufacturer even knowing this. To support this there are sticky bits and fuses that configure the chip (sometimes at the first boot); this means that as a chip guy I only really have to support one product yet I can charge different amounts for each platform. Helps enormously with profitability.

Now if you bother to X-ray the chip you’ll see the flash chip sitting on top; similarly, if you bother to decap it (remove the black epoxy gunk) you’ll be able to see that the dies are identical for several different products.

The most extreme case I ever saw was when an ADC (analog to digital converter) that existed on a GSM chipset was sold as a stand-alone ADC; the chip was configured into a test mode whereby the ADC parallel output was observable on the chip’s IO pins.

I guess if nobody ever knew this then it does not represent a security risk, BUT we know that the NSA is a long way ahead of the general public, so I’d guess that they are very well aware of these tricks AND are fully aware of the hidden functions on the various chips. In some ways it is easier to focus on the actual different die that a chip maker produces rather than get caught up in the whole obfuscation process. It seems to me that it makes sense for the NSA to just focus on the chip makers and focus on leveraging their chip firmware configuration processes.

Nick P January 14, 2014 10:39 PM

@ DB

“Why the fixation on non-writable BIOSes? You want to order a new chip every time a bug is found in the code?”

How often do you update your BIOS? A physical switch blocking the electrical signals that make a write happen is an extremely cheap and simple way to minimize risk of BIOS compromise. The system can be booted into a clean state (eg LiveCD) before new BIOS is validated and installed. This update mode might also have almost all other functionality disabled and run things like the filesystem with all time-consuming (but effective) security checks on.

The alternative, which I’ve also promoted, is a dual BIOS design. One BIOS is read-only. One BIOS is writable (eg flash). The CPU boots from the read-only BIOS. This has a bootloader that loads the other BIOS into memory, authenticates it, and transfers control if the check is successful. The idea is that the first BIOS is simple enough to be made with high assurance techniques, ensuring it rarely needs replacing. The writable BIOS is where most of the firmware (or even initial kernel) is. It can be updated in a controlled way. This method has been used in a commercial system before.
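
In skeleton form the read-only stage is tiny. A hedged sketch of the idea (not the commercial design mentioned above): OpenSSL’s SHA256 stands in for the ROM’s own hash routine, the address and size are placeholders, and a real scheme would verify a signature over the writable image rather than a fixed digest so that updates remain possible:

```c
/* Sketch of a dual-BIOS first stage: measure the writable BIOS and
 * jump to it only if the digest matches a value stored in the same
 * immutable ROM. All addresses/sizes below are hypothetical.
 */
#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

#define WRITABLE_BIOS_BASE ((const unsigned char *)0xFFF00000u)
#define WRITABLE_BIOS_SIZE (512 * 1024)

/* Known-good digest, burned into the read-only stage. */
static const unsigned char known_good[SHA256_DIGEST_LENGTH] = { 0 /* ... */ };

void first_stage(void)
{
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(WRITABLE_BIOS_BASE, WRITABLE_BIOS_SIZE, digest);

    if (memcmp(digest, known_good, sizeof digest) != 0)
        for (;;) ;  /* refuse to boot an unexpected flash image */

    ((void (*)(void))(uintptr_t)WRITABLE_BIOS_BASE)();  /* transfer control */
}
```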

The two ideas can even be combined. In any case, there are methods for making unauthorized writes to firmware hard or impossible. Some were used in older, inexpensive systems. There are some today. That it’s not incompatible with the commercial software business model makes it more likely to be accepted.

65535 January 15, 2014 2:26 AM

@ Nick P

“…it was identified as a potential attack channel back in the 90’s due to privileged nature. Just another weakness Intel couldn’t get rid of for backward compatibility is most likely explanation.”

Is “backward compatibility” like a banana up the rear? Either way we get screwed.

“Intel’s radical i432 architecture solves tons of security problems at OS or app layers. Yet the market turned it down in favor of insecure, fast, backward compatible chips… One could say they’ve been forced in a certain direction by the market, and the market as a whole is getting what it deserved.”

Nick, as of now I have not seen Intel “getting what they deserved” such as lost revenue. Intel’s customers are getting it up rear-end (or getting what they deserved). At the end of the day we are less secure.

“…many 32-bit GB+ boards booted in real mode…”

That’s true. And, worse there are many popular 32 bit programs still running on Intel’s x86-64 architecture (after the OS being pwned on boot in real mode). I wonder what type of security hole this presents.

DB January 15, 2014 2:49 AM

@Nick P: what you just described seems to me more of a “limited writable” BIOS, rather than an “impossible to ever fix no matter how broken” BIOS. I see no issue with properly-done hardware write-protection schemes, only an issue with burning half-baked code into a ROM chip that can then never be changed forever, and then integrated into an expensive motherboard.

I have flashed my BIOS or firmware in other devices on occasion. Not too often, but it does happen. It should be rare enough that allowing people the option to enforce physical access for an added security layer seems reasonable to me.

Clive Robinson January 15, 2014 2:51 AM

@ DB,

As RobertT, Nick P, myself and others have pointed out now and quite a few times in the past on this blog, the “computing stack” is difficult to secure at best.

This is because of the “bubbling up” issue, where the assumption is that each layer of the stack needs the layers below to be secure, all the way down to the electrons whizzing around under quantum effects.

However, the laws of physics tell us we can’t (currently 😉 control these quantum effects; all we can do is design around them to mitigate them to a point where we can make usable devices. The laws of probability are not confined to the electron level either; they also apply at gate level –look up metastability– where latches, amongst other basic logic devices, need mitigation, which adds all sorts of complications to designs.

The point is, device scientists and engineers know the only option is to mitigate. The problem is they have done a very good job, so much so that people working at higher levels of the computing stack believe in fully deterministic systems and further draw unwarranted conclusions about what can be done. This unfortunate assumption spills over into security, and this in turn gives rise to the “bubbling up” issue.

What is worse, at higher levels we also tend to forget that even before the first electronic computer had been designed, it had been proved in the 1930s, much to the dismay of mathematicians and logicians, that there are some things you just cannot do. One of these is that at any level in the computing stack it’s not possible, at that level, to determine if it is fully self-consistent. Which boils down to an interesting argument about the futility of AV software, simply because it’s not possible for a computer to tell you it’s free of malware, or for that matter free from defect either.

So contrary to the “bubbling up” belief, you cannot tell if the layers below are secure or not; you will always come to a point where the answer is expressed as a probability.

Contrary to the majority, I tend to view it not as a hair-pulling-out disaster requiring the donning of sackcloth and ashes, but as a simple engineering issue.

That is, like those working down close to the electron level, I accept the issues and work out ways to mitigate them.

In this view I’m far from alone when you look at the history of the telecoms and space industries, and likewise military equipment supply requirements and modern high-availability and high-assurance systems.

So if you look at Nick P’s list above the first item is,

    1. A trusted main chip

As RobertT has explained in the past, the reality is that with conventional design and production this is not possible, for various human and technical reasons. And I suspect Nick P has said it so as not to get bogged down in details.

What it should say is,

    1. A mitigated main computing engine

Which leaves the question of “how to mitigate” to get the required level of trust?

In mythology there is the “guards’ riddle”, where there are two guards and two seemingly identical doors. Unfortunately, behind one door lies certain death. If you do not know which, you are permitted to ask the guards one question. Unfortunately one guard always lies and the other always tells the truth, and they will also kill you if you attempt to run away.

The solution to this riddle is to mitigate the lying guard to arrive at the knowledge of which is the safe door. The way you do this is to ask a question that involves the answer of both guards; thus you know you will get a false answer whichever guard you ask. The mythological question was to ask either guard, “If I was to ask the other guard which door is certain death, which would he point to?” The guard will then point to the safe door to walk through (you can extend this to three or more guards, providing you know how many of them are liars).

This “serial mitigation” can, with more than two guards, be done in parallel, provided the number of lying guards is not the same as the number of truth-telling guards. That is, knowing whether there are more or fewer lying guards, you ask all guards to point to the certain-death door and count how many guards point to each door. If the lying guards are the majority, you walk through the door with the majority vote (if they are the minority, you walk through the minority-vote door).

This gives rise to the idea of voting protocols, which appears to have originated in the New York telephone company to improve system reliability and was then most often quoted as being used by NASA in spacecraft and launch systems. It’s also used in high-availability computer systems and high-reliability military and industrial systems.

Without going into all the ins and outs, the minimum you need to start with is one computing engine that has been found to be reliable (as liar or truth-teller) in operation by vigorous testing; you then work your way up to get three reliable systems that have as near as possible fully independent origins. This then provides the basis of your voting protocol system.

Provided they are of sufficiently independent origins, externally injected malware can only affect one system at a time, so a disagreement will register until all are infected. So if the system is immediately isolated on first error, the malware will be caught and found.

Unfortunately it does not stop “insider” injected malware if the voting system can be disabled and all three independent systems infected prior to the voting system being re-established.

One of the papers (“N-Variant Systems”, Cox et al., USENIX Security 2006) Nick P posted links to a few days ago talks about a limited version using just two semi-independent systems.
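
A toy illustration of the voting idea, with three placeholder functions standing in for the three independently built engines (the point is only the majority-vote-and-isolate logic):

```c
/* Sketch: triple modular redundancy with majority voting. Any
 * disagreement is flagged so the odd engine out can be isolated
 * and inspected; no majority means halt.
 */
#include <stdio.h>
#include <stdlib.h>

typedef int (*impl_fn)(int);

static int impl_a(int x) { return x * x; }  /* placeholders: in reality */
static int impl_b(int x) { return x * x; }  /* three independently      */
static int impl_c(int x) { return x * x; }  /* developed systems        */

static int vote(int input)
{
    impl_fn impls[3] = { impl_a, impl_b, impl_c };
    int r[3];
    for (int i = 0; i < 3; i++)
        r[i] = impls[i](input);

    if (r[0] == r[1] || r[0] == r[2]) {
        if (r[0] != r[1] || r[1] != r[2])
            fprintf(stderr, "one engine disagrees: isolate and inspect\n");
        return r[0];
    }
    if (r[1] == r[2]) {
        fprintf(stderr, "engine A disagrees: isolate and inspect\n");
        return r[1];
    }
    fprintf(stderr, "no majority: halting\n");
    exit(1);
}

int main(void) { printf("%d\n", vote(7)); return 0; }
```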

qwertyuiop January 15, 2014 3:38 AM

@sobrado – “I would say you do not understand the open source model. No one asks each individual user to read and verify the source code himself. It is enough if the code has been audited by a few clever members of independent development teams”

I do understand the open source model and in an ideal world it is enough that the code can be verified by suitably qualified members of the community.

The problem is that we don’t live in an ideal world, never have. Unfortunately we now realise that it’s even further from ideal than we used to think!

What the open source model means in practice is that there may exist a version of the code that has been independently verified by people I’m prepared to trust and there is also a version of the code that I can download. They may be the same, they may not. If I’m not able to check for myself how do I know? OK, I can rely on MD5 or something similar but if the code can be replaced so can the checksum.

Am I paranoid? Yes! But it was only a little over a year ago many of us were being called paranoid for suggesting that the alphabet agencies around the world, but the UK and USA in particular, were intercepting vast amounts of traffic and now we don’t look so stupid after all.

DB January 15, 2014 4:26 AM

@Clive Robinson

That’s interesting. I’m going to need to read it a few more times to digest that…

Along similar lines, I’ve also wondered if it might be possible to create a sort of “secure virtual machine” that can securely run on top of an insecure host environment, that would protect things inside it to some degree from insecurities around it… If it could be made light enough it could even wrap individual programs in a secure blanket instead of a whole operating system… panacea? of course not, but every bit helps…

Marcos El Malo January 15, 2014 11:17 AM

@Clive Robinson

Thanks for the entertaining and clear explanation, perfect for this layman. I read about Kurt Gödel in a philosophy class many years ago. Was that the reference to what was proven in the 30s?

Nick P January 15, 2014 12:45 PM

@ 65535

“Is “backward compatibility” like a banana up the rear? Either way we get screwed.”

Haha, I feel that way. To many customers, esp the huge enterprise market, it’s the biggest selling point for a product. “Will all our existing software and systems continue to work when we add yours to the mix?” People who can’t say yes are often told to get lost. The net result: one must use a similarly insecure stack to seamlessly interoperate with their existing components. There are exceptions but that seems to be the rule.

“Nick, as of now I have not seen Intel “getting what they deserved” such as lost revenue. Intel’s customers are getting it up rear-end (or getting what they deserved). At the end of the day we are less secure.”

I was talking about the customers, not Intel. It’s the customers’ fault whether they want to hear it or not. Our IT industry is a capitalist, demand-driven industry. Security isn’t in demand [something people pay for], so it isn’t produced. I made the point before in a blog post where people asked why USB sticks were so insecure by design and why manufacturers do nothing about it.

My assessment of the situation in 2011, which still applies:

“[It’s simple logic why systems are so insecure: ] Because manufacturers don’t focus on building secure systems. Why don’t they build secure systems? >>BECAUSE USERS DON’T BUY THEM!

Most users want the risk management paradigm where they buy insecure systems that are fast, pretty and cheap, then occasionally deal with a data loss or system fix. The segment of people willing to pay significantly more for quality is always very small and there are vendors that target that market (e.g. TIS, GD, Boeing and Integrity Global Security come to mind).

So, if users demand the opposite of security, aren’t capitalist system producers supposed to give them what they want? It’s basic economics Bruce. They do what’s good for the bottom line. The only time they started building secure PC’s en masse was when the government mandated them. Some corporations, part of the quality segment, even ordered them to protect I.P. at incubation firms and reduce insider risks at banks. When the government killed that [requirement] & demand went low again, they all started producing insecure systems again. So, if user demand is required and they don’t demand it, who is at fault again? The user. They always were and always will be.

On the bright side, those same users are the reason I can send photos to friends on a thin, beautiful smartphone. They also gave us short-lived 1TB hard disks whose low cost made the short-lived part tolerable. They are also probably why I have a full-featured, fast, cheap wireless router at home. So, at least some good comes from the users’ choices of demand. But they definitely don’t accept the tradeoffs of real security, they don’t demand it, it doesn’t pay to produce it, & that’s why it’s their fault.”

End of quote

Clive Robinson January 17, 2014 12:46 AM

@ Marcos El Malo,

    I read about Kurt Gödel in a philosophy class many years ago. Was that the reference to what was proven in the 30s?

Yes; he, Church and Turing showed there were limits to what can be known with deterministic systems. And they based their work on that of Georg Cantor (the diagonal argument for uncountable sets) from the 1870s.

Cantor upset a lot of people’s “apple carts” in the mathematical world, and for that matter those in Christian churches as well. Whilst not as well known as Darwin, he certainly had a similar effect on “man’s understanding of his environment”.

Peter Gamache January 20, 2014 11:05 AM

There’s lots of speculation above about countermeasures, DIY systems to replace packaged VLSI, etc, etc. Why is nobody talking about reliable detection methods? Without backdoor detection, all other effort is fruitless – your adversary may have already subverted the system you thought you’d made clean (again).

There needs to be a secure DETECTION platform before there’s anything else. Worrying about firewalls, checksums/signatures, etc is putting the cart before the horse. Start by looking at the wire. If it’s remotely controllable, it has to talk to something. Environmental variables are also a clue; sudden changes in CPU load can be hidden from the OS, but thermal output cannot.
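
As a starting point, even a crude allowlist watcher on a mirror port will do. A minimal libpcap sketch — the interface name and allowlist below are placeholder assumptions, and you would really capture from a separate trusted tap rather than from the suspect machine itself:

```c
/* Sketch: flag IPv4 destinations that are not on a small allowlist.
 * Build: cc -o wirewatch wirewatch.c -lpcap   (run as root)
 */
#include <stdio.h>
#include <string.h>
#include <pcap/pcap.h>
#include <arpa/inet.h>

static const char *allow[] = { "192.0.2.1", "192.0.2.53" };  /* placeholders */

static void handler(u_char *user, const struct pcap_pkthdr *h, const u_char *p)
{
    (void)user;
    if (h->caplen < 34 || p[12] != 0x08 || p[13] != 0x00)
        return;                          /* not IPv4 over Ethernet */
    struct in_addr dst;
    memcpy(&dst, p + 14 + 16, 4);        /* IPv4 destination address */
    const char *s = inet_ntoa(dst);
    for (size_t i = 0; i < sizeof allow / sizeof *allow; i++)
        if (strcmp(s, allow[i]) == 0)
            return;                      /* expected traffic */
    printf("unexpected destination: %s\n", s);
}

int main(void)
{
    char err[PCAP_ERRBUF_SIZE];
    pcap_t *pc = pcap_open_live("eth0", 64, 0, 1000, err);  /* iface assumed */
    if (!pc) { fprintf(stderr, "pcap: %s\n", err); return 1; }
    return pcap_loop(pc, -1, handler, NULL) < 0;
}
```

Thermal and RF side channels are harder; for the wire, at least, the tooling is cheap.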

Clive Robinson January 20, 2014 1:11 PM

@ Peter Gamache,

    If it’s remotely controllable, it has to talk to something

True but the problem for a detection platform is “how” and is the platform capable of recognising it?

And the answer to that is very probably no, if the malware writer is halfway competent.

There are a number of basic domains to signal in, where the information is impressed onto a carrier of some kind. The domains are:

1, Amplitude (AM etc)
2, Frequency / phase (FM etc)
3, Time (PWM, PPM, etc)
4, Sequency (CDMA, SS, Walsh)
5, Wavelet (UWB etc)

The carrier can be any energy signal, direct or alternating, which gets out of a unit by conduction, radiation and, in the case of thermal energy, convection as well. Broadly the carriers can be considered to be:

1, Electromagnetic (E&H fields)
2, Acoustic / mechanical
3, Thermal
4, Gravitational

We are aware of the NSA using the first two; we also know of low bandwidth attacks using the third; and although gravitational effects can be generated (by an appropriate quadrupole radiator), they are not currently exploitable for either transmission or reception of information as far as I’m aware.

Unfortunately the information that gets impressed does not require a unique carrier. For instance, information can be conveyed by fractionally modulating the transmission of another signal, such as legitimate network packets. If the information is pre-encoded with a long cryptographic sequence –i.e. a stream cipher– it will not be possible to distinguish it except by reference to an unmodulated signal source from within the unit. This may not be available if the signal bandwidth and modulation is below a certain threshold and injected into the unit from an external IO device such as the keyboard, because the unit is effectively transparent. This is a consequence of making a system as efficient as possible.
