IRATEMONK: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

IRATEMONK

(TS//SI//REL) IRATEMONK provides software application persistence on desktop and laptop computers by implanting in the hard drive firmware to gain execution through Master Boot Record (MBR) substitution.

(TS//SI//REL) This technique supports systems without RAID hardware that boot from a variety of Western Digital, Seagate, Maxtor, and Samsung hard drives. The supported file systems are: FAT, NTFS, EXT3 and UFS.

(TS//SI//REL) Through remote access or interdiction, UNITEDRAKE or STRAITBAZZARE are used with SLICKERVICAR to upload the hard drive firmware onto the target machine to implant IRATEMONK and its payload (the implant installer). Once implanted, IRATEMONK’s frequency of execution (dropping the payload) is configurable and will occur when the target machine powers on.

Status: Released / Deployed. Ready for Immediate Delivery

Unit Cost: $0

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on January 31, 2014 at 2:17 PM · 58 Comments

Comments

Nicholas Weaver January 31, 2014 2:53 PM

This is probably the most interesting of the BIOS-type implants. The idea:

Rather than recording your Master Boot Record (MBR) malcode on the disk, you record it in the firmware of the drive itself. That way, if someone examines the MBR after the system boots up, they will never find it: the firmware only presents the “bad” MBR during a cold boot or a similar condition, so any later attempt to read the MBR returns the clean copy and is useless for detection.

Thus it is useful in hiding the MBR malcode from a “boot from CD” detector, where the system is booted from a CD/USB and the MBR examined once the CD has booted up.
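The trick Nicholas describes can be sketched as a toy state machine. This is pure illustration, not real drive firmware; the byte patterns, class, and method names are all made up:

```python
# Toy model of the conditional-MBR trick: the firmware serves the
# malicious sector only on a cold boot, and the clean copy on every
# later read, so post-boot forensic inspection sees nothing wrong.

CLEAN_MBR = b"\x33\xc0" + b"\x00" * 508 + b"\x55\xaa"   # placeholder 512-byte sector
EVIL_MBR  = b"\xeb\xfe" + b"\x00" * 508 + b"\x55\xaa"   # placeholder implanted sector

class DriveFirmware:
    def __init__(self):
        self.cold_boot = True          # would be set by power-on reset in real hardware

    def read_sector(self, lba):
        if lba == 0:                    # LBA 0 is the MBR
            if self.cold_boot:
                self.cold_boot = False  # serve the payload exactly once per power cycle
                return EVIL_MBR
            return CLEAN_MBR            # every later inspection sees a clean MBR
        return b"\x00" * 512

fw = DriveFirmware()
first = fw.read_sector(0)    # what the BIOS sees at power-on: the implant
second = fw.read_sector(0)   # what a forensic tool sees afterwards: clean
```

This also shows why dumping the firmware itself (rather than reading sectors through it) is the only reliable way to spot the implant.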

At the same time, however, it does show the NSA’s strange emphasis on persistence.

The disk itself can’t tell the difference between a cold boot and the case where the disk is removed and it’s being examined elsewhere, yet the cost of evading the “boot from CD” detection is that you now have a guaranteed “NSA WAS HERE” writ in big glowing letters if it ever IS detected.

Instead, in the wild, it would probably make sense not even to sabotage the MBR, but just to implant some malcode into the Windows kernel and, hey, if it gets noticed, how do you tell the difference from all the other malcode out there?

Or, in many cases, just make the malcode memory-resident only: yeah it won’t survive across reboots, but it can be very very hard to detect, let alone capture, if it only exists in kernel memory. And when was the last time you reset your servers, laptop, or desktop?

Yet in all the slides from the ANT catalog and elsewhere, even the QUANTUMNATION mass-attack stuff (the SILKMOTH malcode/implant, which self destructs after 30 days), it all seems persistent, needlessly so. As a result, the NSA has to work a lot harder at evading detection.

Mark January 31, 2014 3:14 PM

Do they really need to worry about detection if the device is ultimately going to be hit with a drone strike?

Nick P January 31, 2014 3:58 PM

@ Mark

There’s a guarantee of no forensic recovery after a drone strike? That’s news to me.

Copper January 31, 2014 4:38 PM

And what happens, if a user doesn’t have a MBR on the disk, but instead boots from an external USB stick?

Valdis Kletnieks January 31, 2014 4:43 PM

@copper: It’s a safe bet to take. How many people boot from an external USB stick as their normal boot procedure?

f3nkjf3nkjf January 31, 2014 4:50 PM

No way they reverse-engineered all the controller firmware for all these vendors. I know they could, but think of the economics. It’s extremely likely they just paid the vendors for collaboration or ready-made solutions..

By the way this is probably the most profitable backdoor I’ve seen out of the NSA tools posted.. If malware authors got the source or binaries for this you’d see it going through a lot of exploit kits, email campaigns, and botnet PPI..

Still, this stuff seems like outdated tech. The NSA has probably saturated the field with the stuff posted thus far, which doesn’t eliminate the likelihood that this is some form of PSYOPS.. Declassifying old information for some CIA&NSA campaign..

Carpe January 31, 2014 4:53 PM

The last time I got hit with anything (about a year ago, on a Windows laptop of course) it was an MBR rootkit. Luckily I caught it right away (yay for checksums), but MBR rootkits are very interesting. This, though, seems to be an actual HDD-firmware-level rootkit that then payloads the MBR, which is one of those “why didn’t I think of that?” sort of things. I wonder if they have it for GPT now too?
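Carpe’s checksum approach can be sketched in a few lines: hash the first 512 bytes (the MBR) of the raw device and compare against a baseline recorded on a known-clean system. The demo below runs against a throwaway disk image rather than a real /dev node; the byte pattern is made up:

```python
# Minimal MBR integrity check: SHA-256 the first sector and compare
# to a previously recorded known-good digest.
import hashlib
import os
import tempfile

def mbr_digest(device_path):
    """SHA-256 hex digest of the first 512 bytes (the MBR) of a device or image."""
    with open(device_path, "rb") as dev:
        return hashlib.sha256(dev.read(512)).hexdigest()

# Demo against a throwaway 512-byte "disk image" instead of /dev/sda.
with tempfile.NamedTemporaryFile(delete=False) as img:
    img.write(b"\x33\xc0" + b"\x00" * 508 + b"\x55\xaa")
    path = img.name

baseline = mbr_digest(path)              # record this while the system is known clean
unchanged = mbr_digest(path) == baseline # later: re-hash and compare
os.unlink(path)
```

The caveat, which is the whole point of this thread: IRATEMONK-style firmware can serve a clean MBR to exactly this kind of software check, so the hash must be taken from a trusted environment (e.g. the drive attached to a clean machine) to mean anything.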

Bronx January 31, 2014 5:18 PM

If only anti-virus companies would start scanning firmware…

you would think they would include this technology especially in light of these disclosures…

but something smells… probably cooperation with TLAs which prevents this.

Benni January 31, 2014 6:16 PM

Maybe I’m not up to date, but what are UNITEDRAKE, STRAITBAZZARE, and SLICKERVICAR? What do they do exactly?

Anura January 31, 2014 7:10 PM

@Bronx

Antivirus only works for widespread, generic malware. If you are being specifically targeted, it’s about as useful as something that isn’t useful at all. To protect against firmware, bios, or kernel malware, there needs to be code specifically in place to prevent anything from modifying them without your knowledge in the first place. If the attacker has physical access to your machine, which the NSA does in many cases, then that’s especially difficult to protect against.

3jh34fhb3jh January 31, 2014 7:28 PM

@Anura: This is inaccurate and a common poor assumption in the software and security industry. It’s trivial to hook service handlers and mapping functions, even just using ring3, and create an impenetrable MAC HIPS, then supplement it with behavioral analysis that doesn’t use idiotic run-time trace analysis or static analysis of binaries. For secure install on an unsecured box, just do offline breach-loading.

That’s actually cheaper than current development practices by vendors, which basically patch all NDIS, watchdog kernel timers, and process creation to support firewall rules and signature engines..

There will always be subscription based products because it’s not profitable to charge a single license cost for something that needs long term support..

Bryan January 31, 2014 8:29 PM

Please do remember the USB flash memory stick exploits talked about a couple of months ago. All flash drives use a controller to handle internal functions. They also have extra blocks that they can erase in the background so they are ready for the next write. It would be no problem for the controller firmware to be replaced, store hidden files on some of these extra blocks, and then serve them up as exploit code, much like this does.

Nick P January 31, 2014 9:37 PM

@ 3jh34

“This is inaccurate and a common poor assumption in the software and security industry. It’s trivial to hook service handlers and mapping functions, even just using ring3, and create an impenetrable MAC HIPS, then supplement it with behavioral analysis that doesn’t use idiotic run-time trace analysis or static analysis of binaries.”

That’s inaccurate itself. There’s nothing trivial about protecting untrusted code from other untrusted code with code on untrusted hardware. Your system would be defeated by numerous items in the NSA catalog, kernel-mode vulnerabilities, device-related attacks, etc.

There are simple methods for stopping a huge variety of code injection attacks by design, such as control flow integrity. Several could be combined to greatly increase the security of a system without huge security engineering budget.

My old meme: “Tried and true is usually better than novel and new.”
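As a rough illustration of the control-flow-integrity idea Nick P mentions: before an indirect call, check the target against a precomputed set of legitimate call targets derived from the program’s call graph. This is a simplified sketch of the concept only, not any production CFI scheme, and all function names are hypothetical:

```python
# Sketch of control-flow integrity: indirect calls are only allowed to
# targets the compiler/analyzer determined are legitimate; anything an
# attacker injects or redirects to is rejected before it runs.

def open_file():
    return "open"

def save_file():
    return "save"

def injected():
    return "pwned"      # stands in for attacker-supplied code

ALLOWED_TARGETS = {open_file, save_file}   # built ahead of time from the call graph

def checked_call(target):
    if target not in ALLOWED_TARGETS:
        raise RuntimeError("CFI violation: unexpected call target")
    return target()

ok = checked_call(open_file)               # legitimate target: runs normally
try:
    checked_call(injected)                 # injected target: blocked
    blocked = False
except RuntimeError:
    blocked = True
```

Real CFI enforces this in the compiler and hardware rather than at the language level, but the invariant being checked is the same.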

Buck January 31, 2014 9:43 PM

Memory-resident-only malcode would certainly have its perks! 😉
Plenty of leaner, more specific, and persistent hooks for loading the real payloads…
Plus, covering your ass only requires a simple forced reboot/memory-wipe!
It would really be a shame if the whole network stack is vulnerable all the way up from the physical layer… :-\

47 January 31, 2014 9:53 PM

“All FLASH drives use a controller to handle internal functions.”

What is the mode(s) of detection for such malware?

3jh34fhb3jh January 31, 2014 10:46 PM

@Nick P: Not really.. How would any ROM write into encrypted page tables with write-back hashing and a CPU encryption oracle to protect keys that protect an encrypted chain of trust from untrusted execution? If you properly do a PKI or elliptic-curve based ROM loader to implement the chain of trust, it enforces the model, and any type of corruption through overflows and glitching naturally fails to yield leverage, even if it gets past something like DLPAR and directly manipulates low-level MMU stuff..

With software security there are already interfaces like kernel allocation handlers, ring 3 service and mapping handlers, and NDIS(NT) for networking. These are all that’s needed for my suggested solution.. It’s just not profitable to produce a single licensed solution and market and support it long term, so you have signature subscriptions. A lot of suite-type anti-virus solutions already use these interfaces; they just don’t implement MAC-based HIPS even on the highest settings.

Again, there is a lot of misinformation when it comes to this. My method naturally defeats any attempt to map unsigned code even through sophisticated DMA and cache payloads and direct-bus RAM glitching..

Roger January 31, 2014 11:15 PM

@Nicholas Weaver:
“At the same time, however, it does show the NSA’s strange emphasis on persistence.”

It’s not particularly strange. All of these TAO tools that Bruce has been showcasing lately, are clearly not for mass surveillance, but are designed for targeted penetration of well-protected, sophisticated opponents. People like the Syrian regime, say.

Getting the exploit in place in the first place probably risks somebody being tortured and killed. So it is definitely worth considerable effort to avoid having to re-install it.

Roger January 31, 2014 11:28 PM

@f3nkjf3nkjf:
“No way they reverse-engineered all the controller firmware for all these vendors. I know they could, but think of the economics. It’s extremely likely they just paid the vendors for collaboration or ready-made solutions..”

Wrong, for two reasons. Firstly, it is clear from the TAO catalogue that reverse-engineering firmware is pretty much the full time job for these guys. It’s what they do: they have a bunch of engineers sitting in a well-supplied lab*, full of hardware samples from all over the world, and they sit there hacking on one thing after another, day in day out for at least the last decade if not longer.

Secondly, they don’t need to reverse engineer controller firmware “from all these vendors”, because most HDD manufacturers don’t make or program the controllers; they buy them from a relatively small number of specialist HDD controller companies. The top 3 HDD controller companies account for well over 90% of HDDs.


  * Actually, from the several flavours of product lines in the catalogue, it’s more likely half a dozen labs, each with its own dedicated engineering team.

RobertT February 1, 2014 12:15 AM

@f3nkjf3nkjf:

As Roger has just said, the problem is easier than you think because the semiconductor market for controller parts only has 3 or 4 vendors that supply controllers for 99% of the HDD makers. Although the parts may be “customized” for each HDD vendor, they will be based around the same core functionality and use the same Linux-like OSes, typically running on either MIPS or ARM cores. The actual firmware will be VERY similar, especially in areas like firmware update protection methods.

Clearly the TAO is well aware of how pathetic on-chip firmware protection really is because they do seem to target it regularly.

It is rather sad because some of us have been banging on about On-chip firmware vulnerability for years, but by the look of it the only one listening was the NSA. I’ve heard some recent rumors that our friends on Datong Lu have also been taking a very close look at on-chip firmware persistence for their own malware.

Nick P February 1, 2014 1:51 AM

@ 3jh34

So this discussion went from your original Mandatory Access Control-based HIPS with behavioral analysis to one of the CPU designs I linked to before with memory encryption and authentication. That’s progress for your scheme at least. Combining those two might accomplish something. You still need to worry about the effect of malicious input on trusted software but a number of other worries are eliminated [1].

[1] Maybe: Despite my liking cleverly cryptoed CPU’s, we haven’t seen them pen tested by bright black hats. There might be attacks we don’t know about yet.

RonK February 1, 2014 3:34 AM

@ Roger

All of these TAO tools that Bruce has been showcasing lately,
are clearly not for mass surveillance, but are designed for
targeted penetration of well-protected, sophisticated opponents.

Well, I don’t know about that, the persistent router vulnerabilities seem to me to be more applicable to mass surveillance (as in, all the customers of a particular ISP).

Dave Walker February 1, 2014 5:11 AM

Noting that the TAO catalogue as leaked isn’t exactly recent, a further interesting question to ask is how likely it is that more current filesystems (such as ZFS) are similarly nobbled, today. My guess is that it wouldn’t be too hard to do, given what the exploit already does.

A very important thing to verify, is whether a TPM measured boot catches this.
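Whether measured boot catches this comes down to the TPM’s PCR extend chain: each boot component is hashed into a Platform Configuration Register as PCR = SHA-256(PCR || H(component)), so any change in any measured component changes the final PCR. A minimal sketch of that mechanism (component names are placeholders, not a real boot log):

```python
# Sketch of the TPM PCR "extend" chain behind measured boot. If
# IRATEMONK swaps the MBR the platform actually executes, the measured
# MBR digest - and hence the final PCR value - diverges from the
# known-good chain.
import hashlib

def extend(pcr, component):
    """One PCR extend: PCR_new = SHA256(PCR_old || SHA256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measured_boot(components):
    pcr = b"\x00" * 32                  # PCRs start zeroed at platform reset
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

clean = measured_boot([b"BIOS", b"clean-MBR", b"bootloader"])
evil  = measured_boot([b"BIOS", b"evil-MBR",  b"bootloader"])
```

The catch: this only helps if the platform measures the MBR it actually executes (not what the drive later claims it served) and the final PCR is checked against a known-good value, e.g. via sealed secrets or remote attestation.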

Benni February 1, 2014 6:32 AM

@Roger:
“Getting the exploit in place in the first place probably risks somebody being tortured and killed. So it is definitely worth considerable effort to avoid having to re-install it.”

Now thats a fun one.

Actually, the NSA has targeted engineers of telecommunication companies to get a foothold in the GSM network.

Those router implants are also not designed for some evil terrorists in somalia.

The NSA targets companies with these implants. With the data from these implants, they then hack into the internal networks and can spy on everyone. These implants here are just the keys for opening the door to their mass surveillance.

name.withheld.for.obvious.reasons February 1, 2014 9:47 AM

@ Nick P
Despite my liking cleverly cryptoed CPU’s, we haven’t seen them
pen tested by bright black hats. There might be attacks we don’t know about yet.

Mentioned before that I had an IBM ThinkPad with a cryptographic BSP management module. It may have been the complexity at the time relative to experience. The challenge you’ve thrown down is interesting, being a gray hat I might think about it. Let me define “gray hat” so there’s no confusion…

Gray_Hat = (White_Hat – Black_Hat) = (Black_Hat + White_Hat)
Being versed in the dark arts, not for nefarious or criminal purposes, it comes from the days as a hobbyist back in the early 70’s. Black hats don’t like me, I tend to out methods and processes used to gain advantage. White hats don’t like me, make them look foolish and stupid and in some way my methods put fear in their hearts. So, as a gray hat–stuff sucks.

As there is a plethora of ideas and suggestions, there doesn’t seem to be any effective method of providing the broad social trust/integrity that will be necessary in the near future as those less knowledgeable begin to understand the lay of the land–they just don’t know how screwed they are.

Dave M February 1, 2014 9:56 AM

I assume we can’t get the HDD manufacturers to publish their code binaries so concerned and paranoid people can verify them. Do you think it would be possible to at least get them to publish firmware signatures, and tell us how they calculated them? JTAG could maybe work… Hard disk hacking – Hooking up JTAG

Who is to say that some or all of the manufacturers are not now including code that allows remote activation of this kind of exploit for new drives via, say, an undocumented SATA command?

pointless_hack February 1, 2014 10:33 AM

NSA seems to have gotten a little giddy with the naming convention. It implies a lack of discipline and a kind of frat-house atmosphere of irresponsibility.

I’m worried our Open Source IRC will become a pedestal for technically arcane flame wars, in a mud slinging reaction to it.

“Truth is the first casualty of war.”

Frabj February 1, 2014 10:39 AM

It is pretty much certain that all manufacturers allow this kind of “activation” via SATA commands. The same commands that allow them to fix buggy firmware in the field can be used for this. There will be commands to access whatever flash is in the controller, plus reading/writing blocks in the secure manufacturer area.

It would be interesting if the SATA command spec had documented commands for getting access to a hash (SHA-1, say) of the firmware. At least that way users could defend themselves by publishing the hashes they see, and comparing to hashes of other identical drives.

Dave M February 1, 2014 11:08 AM

Yes, I now see that there are SATA commands to read/write drive firmware. But can you believe what the drive reports back? If only a small part of the exploit firmware image is different then it wouldn’t take much room to make it invisible from the SATA level. Would be even easier to have the exploit firmware report the “correct” firmware signature. The JTAG approach (almost certainly available on all newer drives) would allow real hardware access to the drive firmware for verification.
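The defense Dave M and Frabj converge on, dumping firmware externally (e.g. over JTAG, so the drive can’t lie about its own contents) and comparing hashes across identical drives, could look like this. A sketch only; the drive IDs and image contents are made up:

```python
# Compare externally dumped firmware images across drives of the same
# model/revision and flag any drive whose image differs from the
# consensus - the outlier is the candidate for an implant.
import hashlib
from collections import Counter

def fw_hash(image_bytes):
    return hashlib.sha256(image_bytes).hexdigest()

def find_outliers(images):
    """images: dict of drive-id -> firmware bytes dumped out-of-band (JTAG)."""
    hashes = {drive: fw_hash(blob) for drive, blob in images.items()}
    consensus, _count = Counter(hashes.values()).most_common(1)[0]
    return sorted(d for d, h in hashes.items() if h != consensus)

dumps = {
    "drive-a": b"stock firmware v1.0",
    "drive-b": b"stock firmware v1.0",
    "drive-c": b"stock firmware v1.0" + b"<implant>",   # hypothetical tampered dump
}
suspects = find_outliers(dumps)
```

The key design point, per Dave M’s follow-up: the dumps must come from outside the drive’s own command path, since implanted firmware can report the “correct” image or signature over SATA.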

3jh34fhb3jh February 1, 2014 11:28 AM

@Nick P: You said “There’s nothing trivial about protecting untrusted code from other untrusted code with code on untrusted hardware.”, so I responded..

I mentioned hardware reinforced environments that supplement things like DLPAR, but I also pointed out the pure software solution to a MAC HIPS has always been there. Most of it doesn’t even require a driver and is already being used by AV vendors for self-protect and firewalls. All researchers know about it too..

You seem set on playing down other people’s statements but incapable of any technical discussion.. Things like the NT service handler API, NDIS, UEFI signing, and allocation APIs that go up to NT mapping functions in ring 0 via IOCTL aren’t anywhere near the fiction your statements imply.. Even where there isn’t UEFI signing, like on legacy x86 and PPC hardware, there are efficient cryptographic methods for kernel-level allocation that defeat even DMA and disk-firmware-based MBR attacks..

Nick P February 1, 2014 11:42 AM

@ 3jh34fh

The recurring problem in your designs is that they’re based on Windows. The discussions are about the NSA attacks. The NSA has been compelling US companies to backdoor products or give source code for 0-day hunting. Windows has a history of both issues. All of your protections have merit in ensuring the code that runs is the shipped Windows system while making black hat’s job harder.

The problem is that the running system can still be subverted in this threat model as one must trust Microsoft, the board/BIOS vendor, and the chip vendor. These so far haven’t been trustworthy although its hard to say how much for each one. Windows is definitely not trustworthy against a TLA. So, a long technical discussion on all of your Windows anti-hacking techniques is irrelevant when the opponent can put bypasses directly into Windows or UEFI.

That said, you’ve mentioned many different tactics that a reader might find useful against regular black hats if they must use Windows for critical stuff.

Nick P February 1, 2014 11:54 AM

@ name.withheld

“Mentioned before that I had an IBM ThinkPad with a cryptographic BSP management module. It may have been the complexity at the time relative to experience. The challenge you’ve thrown down is interesting, being a gray hat I might think about it.”

I’d assume it’s a vulnerability until proven otherwise. Such is the record of onboard management devices. It was something I always applied high[er] assurance methods to due to all the risks it poses. Far as testing a given crypto technique, your talent would be better applied to something like SecureMe, CODESEAL or Air Force Lab’s HAVEN virtualization scheme. These are things we might be able to clone and use. Knowing their “actual” vs “projected” strength against attackers would require good hackers laying into them at every level.

If they pass muster, people can then put them to use.

“Being versed in the dark arts, not for nefarious or criminal purposes, it comes from the days as a hobbyist back in the early 70’s. Black hats don’t like me, I tend to out methods and processes used to gain advantage. White hats don’t like me, make them look foolish and stupid and in some way my methods put fear in their hearts. So, as a gray hat–stuff sucks.”

That’s funny as I get a similar reaction to them. If anything, black hats respect me a bit but don’t think my methods are “cool.” They’d rather use whatever crap they saw at DEFCON or what other black hats are using. The professionals are an exception: they’ll use the best tool for the job as their paycheck is on the line. Some don’t think I’m “dirty” enough as they can’t get me to tag along on their next criminal endeavor. Those won’t even talk to me but for a short time. (shrugs)

The white hats are a more interesting bunch. They have shown more interest in my security tech and designs, along with old papers/tech I bring up. They still suffer from being attracted to whatever fad is in fashion. However, they do try to use their skills to actively improve things. There’s hope for them.

My hands-on hacking skills are mostly gone. I’ve mostly worked on higher level designs and activities over the past years. I can still see plenty of problems at that level, simultaneously knowing strength of plenty solutions against various attackers. My main role is evangelist of high assurance tech who knows enough about the field to throw suggestions at hands-on guys until something sticks in their head. Then, they build the better tool and we all benefit.

Brandioch Conner February 1, 2014 1:49 PM

@Nick P

The problem is that the running system can still be subverted in this threat model as one must trust Microsoft, the board/BIOS vendor, and the chip vendor.

The next question is how will the non-USofA governments deal with this issue?

I’m hoping that they put pressure on the chip manufacturers to open their systems so that they can develop their own means of validating/replacing the firmware. Or start building their own fabrication plants.

Steve February 1, 2014 2:40 PM

I want NSA to name my software. Surely they would pick a cool name that would really help with sales.

In a PR move they should have public auctions for various services and send the money to a charity.

Students could have an NSA person test the code they wrote. I would even approve of that if they tested and only disclosed the right amount of information.

Clive Robinson February 1, 2014 3:30 PM

@ name.withheld…, Nick P,

    Gray_Hat = (White_Hat – Black_Hat) = (Black_Hat + White_Hat) Being versed in the dark arts, not for nefarious or criminal purposes, it comes from the days as a hobbyist back in the early 70’s.

Tsk Tsk you guys and “Fifty Shades of Grey”, you buoys need to follow the cardinal rule and get a bit more colour in your lives… Me I’ve got a nice Red&Green striped bobble hat so you can’t tell if I’m going Right or Left [1]

On a more serious note, many of the modern attack vectors are not really new; they are just variations on older ideas that go back to the late 1950s through early 70s. The “new kids on the block” just have not seen them before and nobody’s told them, so “the wheel gets re-invented with every turn”.

In some ways it highlights the poor training given to up-and-coming “security professionals”, and it reflects the “code cutter” rather than “engineering” approach.

I’m really not surprised that certain ex-NSA people shake their heads sadly when they look at the way the industry is going. We can’t fix certain fundamental problems, so rather than take a solid, well-tried and proven engineering approach, we go for layer upon layer of obfuscation in the faux hope that people won’t work their way through it.

But… because of the multiple layers, complexity goes up exponentially, and this has several consequences. One is that it gives focused attackers more vectors to play with. A second and much worse effect is that the code cutters are “all at sea” and put in place much “test code” that stays there. As I’ve said before, the difference between a useful test harness and a security-evading backdoor is only the use you put it to…

The simple fact is that too many people won’t face the truth that there is no guarantee of security at any level (in fact the exact opposite), so any layer you add is in effect resting not on dependable bedrock but on shifting sands. The more layers, the more “top heavy” and thus unstable things become.

Engineers worked out solutions to these problems back in the 1940s and 50s just to build the first (thermionic valve/tube) computer systems with unreliable components, and then we appear to have forgotten them. The simple point is that the methods for solving unreliable-hardware issues are transferable technology into the security field for untrusted hardware. Likewise, methods for improving quality are transferable into the security field for design, manufacture, and maintenance.

So yes, I can see why those who have been around the block a few times shake their heads sadly. It’s also the reason that old-style hardware engineers, along with those who have a good grounding in experimental research in the physical sciences, tend to produce better quality software than many CS graduates who get tied up with the latest tools and don’t progress beyond being code cutters (and sadly that’s also what most managers want: “artisanal code”, not “engineered code” with a “solid scientific and mathematical basis”).

[1] For those that don’t get the joke “sea”, http://www.trinityhouse.co.uk/lighthouses/buoys/cardinal.html

Steven February 1, 2014 5:55 PM

The supported file systems are: FAT, NTFS, EXT3 and UFS.

Probably supports ext4 by now. Sigh…

3jh34fhb3jh February 1, 2014 5:59 PM

@Nick P: Name an OS that hasn’t had page tables; my “recurring problem”, as you call it, is no more than a lazy, illogical argument meant to make me look like the person who doesn’t know the specifics..

Even CP/M and the first Linux kernel had page table allocation handling. You can use cryptographic methods on those as well to defeat nearly anything from DMA injection up to the most advanced ROP attacks.. I admit, there is no encryption oracle to protect keys there, but generating them at run-time with some entropy and RNG defeats any distributed attack using dumped keys. Use a stream cipher per private set within page tables to make RAM glitching to dump worthless..

Again, you’re unable to say how to defeat my proposed solution, only to repeatedly suggest I’m wrong based on absolutely nothing..

Nick P February 1, 2014 7:38 PM

@ 3jh34

I’m not sure if you read someone else’s post as I never said a thing about page tables. I said Windows-based systems were a no-go for anything supposed to resist a US domestic TLA. I also mentioned risk of them backdooring chip or firmware your scheme depends on [which is more likely on Wintel systems].

Nick P February 1, 2014 10:42 PM

@ Brandioch Conner

“I’m hoping that they put pressure on the chip manufacturers to open their systems so that they can develop their own means of validating/replacing the firmware. Or start building their own fabrication plants.”

It will be interesting. We’ve explored the heck out of those issues on this blog. Chip fabrication is a complicated, black box process that’s unlikely to be compelled into doing anything transparent. The complexity of implementing chips at the cutting edge means that creating a chip design at that level might reduce risk of chip level subversion. Irony, eh?

So, what chip(s) to design. The strategy I see working is starting with a well-understood solution like Linux (or Linux API on microkernel) for the software. The board will be designed and vetted by many parties. The main SOC will use open components (eg opencores.com) whose source can be examined. One of the open firmware projects can be made to use this. Security features like TRNG, secure boot assist, memory protection via crypto, control flow integrity, etc. might be integrated early on into the SOC. Many will be delayed until the next phase/chip.

The point is using what’s readily available with limited additional engineering to get stuff with less risk of built-in subversion. I’d combine this with some supply chain security for the SOC’s and maybe manufacturing. Some security might happen at the fab level if several countries or big firms get together with a struggling fab that needs the cash. In any case, having chips well-understood design and with lower risk of pre-installed backdoors means most worries shift up a level or two in the stack.

Next step will be to deal with those. The governments and big spenders wanting to develop such a solution will need to involve someone with these traits:

  1. Understanding of security risks at every layer and use case of computing stacks.
  2. Knowledge of high assurance security engineering methods rated to stop or hamper TLA level threats.
  3. Experience dealing with real world issues of system or project implementation, particularly tradeoffs involving legacy systems.
  4. Be highly unlikely to be working for US govt, either voluntarily or involuntarily.

In other words, me and few other guys on this blog. 😉 More realistically, some local experts or defense contractors will be given the job with lots of cash while promising to stop foreign TLA’s “this time.” Their next step will be to make CPU’s which make secure/safe software easier to write. I already did a massive paper dump here [plus a supplement] that cover plenty of CPU’s and hardware techniques for accomplishing this. I also established the bare minimum properties the system design will require to be securable. They’ll need to produce at least one SOC like this.

Simultaneously, they should be increasing the modularity and assurance of their software stack. They need OS’s that are inherently safer from coding bugs, useful, fast, easy to modify, and probably free of subversion. Previously, I proposed SPIN and Wirth’s Modula/Oberon operating systems as a start as they meet all of these goals. There’s little to no reason to think any malice went into these systems’ design. Worst case scenario for them will be accidental vulnerabilities in design/implementation which should be easier to correct given use of typesafe, modular, easily compiled implementation.

Eventually, the subversion-resistant development processes, OS, libraries, chips, etc. will come together in a strong coupling, with each layer or component complementing one another’s strengths. These systems will probably not be compatible with MS Office, etc. However, they will be ideally suited to be workhorse machines for corporations’ or governments’ security-critical stuff. As I see it, certain design choices could make the system work for embedded, desktop, thin client, and server offerings. The versatility gives it greater market penetration, which feeds back into it over time for both features and pen testing.

So, this is one way to do it. I know (and linked to) quite a few qualified people working on technology aspects. There are also solutions on the market with more robust design than typical COTS for people that are willing to buy them. If that’s not enough, any well-funded organizations panicking and wanting to go through extra trouble to protect critical functions from diverse TLA threats know where to find me. 🙂

65535 February 1, 2014 11:48 PM

@ Nicholas W

“Rather than recording your Master Boot Record (MBR) malcode on the disk, you record it in the firmware of the drive itself. That way, if someone tries to examine the MBR after the system boots up, they will never be able to find it because the firmware only presents the “bad” MBR when the system does a cold boot or other similar condition, any attempt to read the MBR is useless. …it is useful in hiding the MBR malcode from a “boot from CD” detector, where the system is booted from a CD/USB and the MBR examined once the CD has booted up… At the same time, however, it does show the NSA’s strange emphasis on persistence. “

[Yes, I agree. But “Persistence” of malware is NSA’s specialty.]

“…disk itself can’t tell the difference between a cold boot and the case where the disk is removed and it’s being remotely examined, yet the cost of evading the “boot from CD” detection is now you have guaranteed “NSA WAS HERE” writ in big glowing letters if it ever IS detected. “

[That’s a big if!]

“in the wild, it would probably make sense not even to sabotage the MBR, but just to implant some malcode into the Windows kernel and, hey, if it gets noticed, how do you tell the difference from all the other malcode out there?”

[Very carefully… if at all]

“Or, in many cases, just make the malcode memory-resident only: yeah it won’t survive across reboots, but it can be very very hard to detect, let alone capture, if it only exists in kernel memory. And when was the last time you reset your servers, laptop, or desktop?”

[It would be extremely hard to detect. The malware could be moved into the swap file, hibernation file, or temp files – even if the drive controller was changed]

“Through Remote Access or Interdiction, UNITEDRAKE, or STRAITBAZZARE are used with SLICKERVICAR to upload the hard drive firmware” -NSA

I would assume a remote implant would be via SMM or iAMT pwned systems. If that method of implantation failed, the NSA slips you a pwned HD during shipping (after your original drive mysteriously fails).

I see that the NSA (in 2008) had a lot of file systems covered. …”FAT, NTFS, EXT3 and UFS” That would include Windows servers and clients, memory sticks, backup boxes, and most Unix and *nix flavors. The only thing missing is EXT4 for *nix servers (which is what I see used for websites). But that was in 2008, so I am confident that EXT4 is now hackable.

UFS:

“Vendors of some proprietary Unix systems, such as SunOS / Solaris, System V Release 4, HP-UX, and Tru64 UNIX, have adopted UFS. Most of them adapted UFS to their own uses, adding proprietary extensions that may not be recognized by other vendors’ versions of Unix. Surprisingly, many have continued to use the original block size and data field widths as the original UFS, so some degree of (read) compatibility remains across platforms. Compatibility between implementations as a whole is spotty at best and should be researched before using it across multiple platforms where shared data is a primary intent. As of Solaris 7, Sun Microsystems included UFS Logging, which brought filesystem journaling to UFS, which is still available in current versions of Solaris. Solaris UFS also has extensions for large files and large disks and other features.”

https://en.wikipedia.org/wiki/Unix_File_System

[That covers a lot of servers]

@ Bronx
If only anti-virus companies would start scanning firmware… you would think they would include this technology especially in light of these disclosures… but something smells… probably cooperation with TLAs which prevents this.

[It not only smells fishy but the HD controller coders are a small group of people. If I had the power of the NSA I would simply force them to provide a backdoor and tell them to keep their mouths shut.]

@nick p, Anura, 3jh34, roger, and others:

[Good discussion]

@f3nkjf3nkjf:
“No way they reverse engineered all the controller firmware for all these vendors. I know they could, but think of the economics. It’s extremely likely they just paid vendors for collaboration or ready solutions..”

Roger:
“they don’t need to reverse engineer controller firmware “from all these vendors”, because most HDD manufacturers (mostly) don’t make or program the controllers, they buy them from a relatively small number of specialist HDD controller companies. The top 3 HD controller companies account for well over 90% of HDDs.


  • Actually, from the several flavours of product lines in the catalogue, it’s more likely half a dozen labs, each with its own dedicated engineering team.”

[Again, that is still a small group – small enough to be strong-armed by the NSA into providing critical design details, with very little reverse engineering needed]

3jh34:
“These are all that’s needed for my suggested solution.. It’s just not profitable to produce a single licensed solution and market and support it long term, so you have signature subscriptions. A lot of suite-type anti-virus solutions already use these interfaces; they just don’t implement MAC-based HIPS even on highest settings. Again, there is a lot of misinformation when it comes to this. My method naturally defeats any attempt to map unsigned code even through sophisticated DMA and cache payloads and direct-bus RAM glitching…”

[Good stuff. Do you care to start a firmware AV company? It could be profitable]

Dave M:
“Do you think it would be possible to at least get them to publish firmware signatures, and tell us how they calculated them? JTAG could maybe work… Hard disk hacking – Hooking up JTAG Who is to say that some or all of the manufacturers are not now including code that allows remote activation of this kind of exploit for new drives via, say, an undocumented SATA command?”

[We don’t know. But a firmware signature check on boot would be helpful]

And

“I now see that there are SATA commands to read/write drive firmware. But can you believe what the drive reports back? If only a small part of the exploit firmware image is different then it wouldn’t take much room to make it invisible from the SATA level. Would be even easier to have the exploit firmware report the “correct” firmware signature. The JTAG approach (almost certainly available on all newer drives) would allow real hardware access to the drive firmware for verification.”

[That type of access to the firmware is interesting. I wonder if some code could be written to check the firmware for alterations. The NSA’s pwning of firmware is very serious! Again, I wonder how the NSA keeps their own firmware from being pwned.]
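A hypothetical sketch of the verification step Dave M describes: once you have a raw firmware image you trust (say, one dumped over JTAG, as in the SpritesMods hack), comparing it against a published reference digest is a few lines of Python. The file name and the idea of a vendor-published digest are assumptions — no drive vendor publishes these today.

```python
import hashlib

def firmware_digest(path: str) -> str:
    """SHA-256 of a raw firmware image (e.g. dumped over JTAG)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large images don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage: compare a JTAG dump against a published digest.
# dump_digest = firmware_digest("hd_fw_dump.bin")
# tampered = (dump_digest != vendor_published_digest)
```

The catch, as the thread notes, is the trust root: asking the drive for its own firmware over SATA proves nothing, so the dump has to come from a channel the implant can’t intercept.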

Brandioch C:

@Nick P
The problem is that the running system can still be subverted in this threat model as one must trust Microsoft, the board/BIOS vendor, and the chip vendor.
The next question is how will the non-USofA governments deal with this issue?

[Good question. The Russians and Chinese are not stupid. I wonder how many government systems have been compromised. Once compromised, I would guess the safest solution is to junk the HD or the whole box. The hacking of firmware opens a Pandora’s box of horrible problems.]

@Adjuvant
[Good link. It looks like what the NSA is using in some form. I don’t think the NSA needs JTAG. I think they can just put the HD in a “special hard drive tester” and reprogram the chip.]

I will note that SpritesMods leaves out some critical code for normal readers:

“…I want to release code, but I do not want to be responsible for a lot of permanently hacked servers… I decided to compromise: you can download the code I used here, but I removed the shadow-replacement code. Make note: I’m not going to support the process to get all this running in any way; it’s a hack, you figure it out.”

http://spritesmods.com/?art=hddhack&page=8

It’s clear the NSA figured it out and is using it!

Clive Robinson February 2, 2014 1:51 AM

@ Nick P,

    They need OS’s that are inherently safer from coding bugs, useful, fast, easy to modify, and probably free of subversion…

Unfortunately I don’t think that is enough for a secure OS.

It’s the problem of “low hanging fruit” being exploited first, causing incremental layers of fixes for each successive attack, thus increasing complexity and available attack vectors whilst not actually solving the fundamental flaws.

The problem is nearly mutually exclusive design goals: for instance, subversion-free is unlikely to give a fast OS, and keeping subversion down is going to clash badly with “easy to modify” and “inherently safer from coding bugs”.

It’s down to one of my memes of “Efficiency -v- Security”: as a general case, the more efficient you make a system, the more open it is to side channel attacks of various forms.

The solution is in the main to decrease complexity, not increase efficiency, which is not what has happened in mainstream OS development as far back as I can remember. That is, the desired goal has been “more features” combined with “more efficiency”.

Thus the first goal of an OS has to be the absolute minimum –but not less– of functional features; monolithic solutions are out. Perhaps the easiest way to do this is to split the OS into many parts, where each part has a very clearly designed and limited function, with interfaces that can be clearly monitored for aberrant behaviour, be it by data, metadata, format or time.

Which gives rise to the issues of what privilege the code parts have and where you split low and high privilege requirements.

This brings us back to the interfaces, which need to be both simple and effective for the given function, but which unfortunately, like any communication channel, can be abused to leak information either by chance or design. This calls for strong monitoring of the interfaces in a way that limits or prevents abuse. The downside is that either end of the channel has to be programmed with the implicit assumption not just of error but of active abuse.

Further, the interface needs to be independently controlled to prevent covert channels being established. That is, the interfaces have a “Mediator in the Middle”, the job of which is to ensure communications are compliant with interface policy. Thus this mediator needs to be robust in design but also have its own protocol for signaling exceptions etc., and to do this it will need to be stateful to some extent.

This sort of design has been seen in hardware solutions to reduce EmSec, but not so far in mainstream OS design, partly because it’s seen as a “speed bump” but also because COTS hardware does not currently provide desirable features to support it. Which gives us the “Chicken and Egg” problem, but in reverse.
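A toy sketch of the “Mediator in the Middle” idea described above (my own construction, not anything from Clive’s designs): every message crossing an interface is checked against a simple policy, and the mediator keeps just enough state to record aberrant behaviour rather than silently passing bytes through. All names and the policy fields are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Mediator:
    """Sits between two OS components and enforces interface policy."""
    allowed_ops: frozenset        # operations the policy permits
    max_payload: int              # size limit, to squeeze covert channels
    violations: list = field(default_factory=list)  # stateful exception log

    def forward(self, msg: dict):
        op = msg.get("op")
        data = msg.get("data", b"")
        if op not in self.allowed_ops or len(data) > self.max_payload:
            self.violations.append(msg)   # signal the exception, drop the message
            return None
        return msg                        # compliant: pass it through

m = Mediator(allowed_ops=frozenset({"read", "write"}), max_payload=512)
assert m.forward({"op": "read", "data": b"x"}) is not None
assert m.forward({"op": "exec", "data": b"x"}) is None  # policy violation, logged
```

A real mediator would also police timing and format, since (as the comment notes) a channel can leak through metadata and time as well as data.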

James Sutherland February 2, 2014 8:54 AM

“The group to be strong armed by the NSA into providing critical design details – very little reverse engineering needed”

Ironically, not even strong-armed I suspect: NSA/DoD could easily ask the manufacturers for source access for a “security audit” – wouldn’t want to risk the firmware getting compromised, would you? (MS give them access to Windows on that basis, after all!) They don’t even need much of an excuse: using a cross-section of drive vendors and models is a sensible precaution against clustered failures, and they’ll have all sorts of legitimate reasons to buy different drives: staff laptops, servers, SANs, the lot.

Ditto most of the servers, routers: “we’re considering approving your new firewall for government use, but we’ll need to audit the firmware first …”

Bruce Schneier February 2, 2014 9:24 AM

UNITEDRAKE is a software implant. I don’t know what it does, but I remember seeing it in a FOXACID manual.

name.withheld.for.obvious.reasons February 2, 2014 3:10 PM

@ Clive Robinson
On a more serious note, most of the modern attack vectors are not really new; they are just variations on older ideas that go back to the late 1950’s through early 70’s. The “new kids on the block” just have not seen them before, and nobody’s told them, so “the wheel gets re-invented with every turn”.

Couldn’t agree more: everything old is new again. Part of the marketing psychosis: just put on a new label and a marked-up price. Part of how security theatre works: the old fear is the new fear. I chalk it up to the fascist overlords enforcing the validity of short-term memory for all U.S. citizenry.

But what I would like to know more about is your “hat”. Sounds quite fashionable.

tom February 2, 2014 3:15 PM

UNITEDRAKE? I suppose whatever these have in common:

GECKO II – System consisting of hardware implant MR RF or GSM, UNITEDRAKE software implant, IRONCHEF persistence back door

RETURNSPRING – High-side server shown in UNITEDRAKE internet cafe monitoring graphic

SEAGULLFARO – High-side server shown in UNITEDRAKE internet cafe monitoring graphic

SLICKERVICAR – Used with UNITEDRAKE or STRAITBIZARRE to upload hard drive firmware to implant IRATEMONK

UNITEDRAKE – Computer exploit delivered by the FERRETCANON system *

WISTFULTOLL – Plug-in for UNITEDRAKE and STRAITBIZARRE used to harvest target forensics

Bruce Schneier February 2, 2014 3:47 PM

“UNITEDRAKE – Computer exploit delivered by the FERRETCANON system”

FERRETCANNON is the subsystem of FOXACID that decides what exploit to send to a victim. FOXACID never attacks people blindly. When someone is tricked into visiting a FOXACID URL, that URL is unique, so FOXACID knows exactly who is on the other end of the connection. Based on that, FERRETCANNON makes a decision. Presumably the primary difference in base exploits depends on how low- or high-value the target is.

[name redacted for National Security reasons] February 2, 2014 3:56 PM

I concur: they surely cover ext4 by now.

But… there are plenty of other formats left. There is hassle with compatibility now and then, but if you are a Syrian war criminal, you don’t use off-the-shelf Bill Gates’ personal poop in the first place.

As to persistence, a collect-it-all policy would dictate implanting more rather than less, just in case. If I were an NSA spook, I would write an algorithm which would preselect targets based on certain criteria (what they already do by extracting behavior from social media), and then implant across the board, just in case. The algorithm would also exclude certain targets, such as Kardashian sisters with their 200 million SMSs a day.

In fact, it seems the NSA is already at it. Once in a while you type http://nytimes.com and https pops up asking you to accept a strange certificate, even though the NY Times does not use https for its front page. You manually remove the “s” from the URL and get the correct page. It clearly comes from those secret rooms at telcos, in real time. I am talking about certificates here, but why not other implants, such as IRATEMONK, as well?

Clive Robinson February 2, 2014 4:31 PM

@ Name.Withheld…,

    But what I would like to know more about is your “hat”. Sounds quite fashionable

Beauty as they say is in the eye of the beholder 😉

However, being slightly older than Bruce and having a beard that came runner-up in a competition, I think it safe to say I’ve reached an age where comfort is more desirable than fashion, especially now my “thin spot on top” gets sunburn or frostbite –depending on the season– rather too easily 🙁

Whilst a “Driza-Bone” hat is fine for a couple of seasons (and kind of makes me look like the “town drunk” in “Blazing Saddles”), when the wind gets up I need something a bit more “snug fitting”. So I have a Swedish knitted hat with sides that keep the ears warm and, if really required, can be tied under the chin. Rather than the traditional snowflake pattern I went for red and green vertical stripes and a white bobble on the top. Yes, it looks completely ridiculous on me, and as I’m over 6’6″ tall and a bit over two ft wide and look like a stretched “Brian Blessed”, I stand out above the “sea of heads” at London’s Waterloo station. So much so that a friend spotted me from a considerable distance away and said I looked like a “Lateral Mark bobbing in choppy waters”… The big advantage for me is not only does it keep my delicate ears warm, but when people see me steaming towards them in it they move out of the way PDQ, especially when I’ve a “face like thunder” on me… The scary thing, though, is the number of young ladies who come up to me and say they like my beard…

tom February 2, 2014 5:15 PM

The SLICKERVICAR and WISTFULTOLL slides make it sound like UNITEDRAKE and STRAITBIZARRE play more or less the same roles. If so, we can leverage off STRAITBIZARRE to better understand UNITEDRAKE (or vice versa).

Thus the STRAITBIZARRE positioning in the 11th slide in the QFIRE deck might also be more or less applicable to UNITEDRAKE.

My reading of QFIRE is that it is a later and more sophisticated integration and replacement for the various Quantum tools, FerretCannon, and FoxAcid pieces.

http://www.spiegel.de/fotostrecke/qfire-die-vorwaertsverteidigng-der-nsa-fotostrecke-105358-11.html

G. February 3, 2014 12:34 PM

The malware code has to be executed if it is loaded by the BIOS, and it has to stay hidden if the sector is read by a scanner. So if the payload is only served once after power-up, and only when sector zero is the first sector accessed, then booting from another medium first means the drive won’t show the malware sector. A flag in the firmware could be used to show the malware code only once per power-up, within a reasonable time, and only when sector 0 is read first. This could be a way to get access to the malware.
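G.’s observation suggests a crude check. The sketch below is only illustrative: the device path is an assumption, it needs root, and a real power cycle between the two runs is what makes a mismatch meaningful — hash sector 0 right after a cold boot, then again later, and compare.

```python
import hashlib

def mbr_digest(dev: str = "/dev/sda") -> str:
    """SHA-256 of the first 512-byte sector (the MBR) of a block device."""
    with open(dev, "rb") as f:
        return hashlib.sha256(f.read(512)).hexdigest()

# Run once immediately after a cold power-up and record the digest; run again
# later, or from a different OS on another medium. If the two digests differ,
# the drive is serving different content for the same sector -- a strong hint
# that the firmware, not the platter, decides what sector 0 looks like.
```

Of course, implant firmware could also lie consistently to reads coming over SATA, which is why the thread keeps coming back to JTAG as the only semi-trustworthy channel.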

Whiskers in Menlo February 18, 2015 7:31 PM

Of all the exploits this one has very few defenses.

Once the disk microcode has been altered, it has astoundingly large resources for overlays, given the spare blocks and spare tracks that the controller manages. One payload can update the BIOS, so even booting from a USB stick could be hacked.

Booting from a USB stick might be the easy way to compromise the box. Physically removing the drive and attaching it to another computer would also be an easy attack.

In some cases JTAG could be used to explore the controller, but that demands test vectors. Some devices keep code on the media. Exploring the media is darn hard, but possible if the positioning signals and the data can be attached to a test fixture or perhaps a known good controller.

Media encryption by the controller, intended to protect data, confounds efforts at forensic analysis.

Laptops that travel and spend any time unattended in a hotel or at a security checkpoint are at great risk. Other common attacks can produce the same result.

One very difficult problem is that of a VM attack hidden in the disk. As long as the disk is present, the BIOS can boot the VM from the disk, and from that point it is turtles all the way down, as the initial hosted code could look like the BIOS.

Trouble… at a lot of levels.

Cryptic October 10, 2017 10:28 AM

Wow. Here we are and 2017 is almost over. This is almost the same stuff we were doing in the early 90’s and no one listened.

Going way back to the early 1990’s, I commented to several teachers / professors about a few concepts that I had which could be very dangerous if too many people figured them out:

1) Use AntiVirus software to spread your infections.
– Rather than trying to infect a computer and attempt to keep hiding from AntiVirus software, why not seek out vulnerabilities in the AntiVirus software in order to use it to actually spread the infection.

2) Attack and or intentionally damage the BIOS.
– We didn’t really know much about how to code a BIOS, but we did manage to corrupt parts of the BIOS that affect either the keyboard or video. Placing an infection in the BIOS is merely an extension of what we could already do way back then.

We were kids / teens and didn’t even know how to code very well. Everything we knew came literally from exploration while reading some really boring / dry facts about microprocessors and assembler. Pascal was popular reading too, though I never really did anything useful with that.

Anyway, the point of this is that OUR mischievous pranks only worked on systems that didn’t have a pin jumper to flash the BIOS or systems that would boot while the flash enable was jumpered.

IF NEW SYSTEMS STILL REQUIRED A PHYSICAL LOCKOUT TO FLASH THE FIRMWARE NONE OF THIS WOULD BE POSSIBLE.

It’s just insane how much “firmware” can be reprogrammed directly through normal use. WTF does a USB mouse need new firmware for?!?! Or a basic keyboard?

At the core of every system, there is a fundamental process that should never change. That “Stage 1” has no reason to be reprogrammable in any convenient way. If “Stage 1” implements adequate checks into “Stage 2”, then a reprogrammable “Stage 2” would automatically be safer to reprogram.
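A minimal sketch of that “Stage 1 checks Stage 2” idea, assuming the first stage is immutable (mask ROM or jumper-locked flash) and holds a pinned digest of the second stage. The image contents and digest here are placeholders, not any real boot chain.

```python
import hashlib

# In a real design this digest would be burned into the immutable Stage 1,
# not computed at runtime; it is computed here only to keep the sketch
# self-contained. The image bytes are placeholders.
STAGE2_EXPECTED = hashlib.sha256(b"stage2-image-v1").hexdigest()

def boot_stage2(image: bytes) -> bool:
    """Stage 1's gate: hand over control only if Stage 2 matches the pinned digest."""
    return hashlib.sha256(image).hexdigest() == STAGE2_EXPECTED

assert boot_stage2(b"stage2-image-v1")            # genuine Stage 2 boots
assert not boot_stage2(b"stage2-image-TAMPERED")  # modified Stage 2 is refused
```

This is exactly why Stage 1 must not be conveniently reprogrammable: if an attacker can rewrite the stage that holds the digest, the whole chain collapses.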

Another thing I’ve been chasing for a few years, is a concept of “How do we establish trust in an untrusted environment?”
In my approach, I envision the scenario of a used computer, be it purchased used or taken from a contaminated enterprise network.

1) Can we ensure the computer firmware (BIOS / EFI) is infection free?
– Users can “update” their firmware, but this does not always ensure an existing infection is removed, especially if the firmware was already the newest at the time of injection. (Many systems will not allow you to re-flash the same firmware.)

2) If I can’t use generic “Update / Upgrade” methods to cleanse / ensure the firmware, will the manufacturer offer a way to ensure this or to re-flash the same version firmware?
– Apple could not provide a way nor could they even assure me that their factory refurbished units covered this internally.
– Dell presented the same scenario.
– I tried a few other manufacturers via chat and help desk calls to end up with the same answer.

I’ve pushed Apple the hardest on this, hoping they would be less lost in their sheer volume and more concerned with their product than others. Thus far, I’m baffled to find no process nor to even get solid confirmation that their in-house refurbishing process can assure this (just in case they simply don’t want to expose a security trade secret.)

The rest of the real trust comes only after the hardware.
From new, we can be compelled to begrudgingly assume trust, since we’d have to trust their provided firmware anyway.
