TRINITY: NSA Exploit of the Day

Today's item from the NSA's Tailored Access Operations (TAO) group implant catalog:

TRINITY

(TS//SI//REL) TRINITY is a miniaturized digital core packaged in a Multi-Chip Module (MCM) to be used in implants with size constraining concealments.

(TS//SI//REL) TRINITY uses the TAO standard implant architecture. The architecture provides a robust, reconfigurable, standard digital platform resulting in a dramatic performance improvement over the obsolete HC12 microcontroller based designs. A development Printed Circuit Board (PCB) using packaged parts has been developed and is available as the standard platform. The TRINITY Multi-Chip Module (MCM) contains an ARM9 microcontroller, FPGA, Flash and SDRAM memories.

Status: Special Order due vendor selected.

Unit Cost: 100 units: $625K

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on February 7, 2014 at 2:53 PM • 21 Comments

Comments

55j56j655February 7, 2014 3:38 PM

Another boring covert computer PCB.. Only thing I see it can do is be programmed to control something else over a bus socket or trace-tap..

Again, the code that has been developed for this is what has worth and valuable information because IT is what's based on insightful intelligence.. The PCB is worthless and reveals nothing but the fact the federal government manufactured a PCB with some standard chips.. Basically a prototyping board with slightly better chips than citizens can buy off DIY sites..

TFebruary 7, 2014 4:18 PM

There's a couple of side channels: the design of the PCB, what components they like to use, the traces (maybe the traces are RF antennas, with the chip being the switch), what country they used for parts, the type of contractor they got supplied by (maybe detectable), the value of the work in dollar terms, the fact that there is a wholesale supply network to contractors; they ask for components, they pay the price, and then they sell the stolen info back to HQ. Field craft, by the fact that they sub-let the equipment out. Small team, like eyespy, from some quietly removed ex-spies.. Edit later

classifiedFebruary 7, 2014 4:23 PM

Could that "TAO standard implant architecture" be used for generic detection of these devices?

James SutherlandFebruary 7, 2014 5:27 PM

@55j56j655: At $625,000 per unit in quantities of 100?! Even interpreting that loosely to get $6,250 each for a batch of 100, the price alone rules out anything "boring".

Probably a very big and/or fast FPGA to do all the heavy lifting - perhaps capturing and compressing screen output, or something similarly data-intensive? Or intercepting/modifying hard drive accesses.

name.withheld.for.obvious.reasonsFebruary 7, 2014 7:26 PM

Doh, what a moronic design--even if you need fab-level isolation for some asynchronous application/feature, there have been SoCs from Xilinx, Microsemi, and Alcatel for a couple of years that have more going for them in the same form factor.

Heck, an old Actel Fusion device I have is about as robust and is on a single BGA. The BGA shown looks pretty funky. Looks like all the pins go to ground; hey, maybe that's the only way they could get it to work?

MakerFebruary 7, 2014 7:55 PM

Is there an Edward Snowden working at the chip manufacturer(s) printing all these TAO chips? Or am I naive in not recognizing that the government/military/spooks have their own production facilities?

MakerFebruary 7, 2014 7:58 PM

Correction:
Is there an Edward Snowden working at the chip manufacturer(s) printing all these TAO chips?
should be:
I hope there is an Edward Snowden working at the chip manufacturer(s) printing all these TAO chips?

55j56j655February 7, 2014 8:10 PM

@name.withheld.for.obvious.reasons: Even then.. Boring FPGA+Arm AP PCB with no code.. Sorry, but the code is still what's valuable..

The price has nothing to do with its value to intelligence.. You and the others who argue just don't get it.. This is hardware with no code.. It doesn't learn what to do when you plug it in, and reveals NOTHING about the NSA or its targets..

65535February 7, 2014 8:32 PM

Trinity looks about the size of Maestro-II. The differences appear to be the ARM9 controller, bigger SDRAM, and an FPGA with 1M gates. Interfaces to the ARM9 include JTAG, SPI, USARTs, USB, RMS/M?, I/O 82 appears to configure both the ARM9 and FPGA (there is a JTAG for the FPGA).

It is unclear if “Trinity” works in conjunction with other components such as Howlermonkey or other air-gap jumping devices. As others have noted, the software is key in this implant (which I suppose is a persistent BIOS implant as with the other implants). Trinity looks about twice as expensive as Maestro-II but has an upgraded processor and RAM. Maestro = $3-$4K and Trinity $6250.

I assume it is powered by USB but no power requirements are noted. Since it is physical I would guess it is implanted during shipping (or other interdiction methods) or at the factory.

Without howlermonkey or the like, SMM or iAMT would be the communication methods. The only real way to discover it would be physical inspection of suspected devices (it probably tries to blend in with other communications protocols). Since it is dated 20070108 it probably has been superseded by another type of implant.

Nick PFebruary 7, 2014 8:38 PM

@ 55j56j655

I agree. One of the first observations commenters here made was that the software implants were typically (always?) free and the hardware implants often had a price. The price likely covers the hardware's limited production runs by defense contractors (with their nice profit margins).

If they were charging for I.P. or code, we'd see it in the software implants as we know the offensive security companies are paid well to develop exploit kits and such. I'm sure the software implants cost the NSA plenty of labor too. Yet, they're free, so they must just charge the distribution cost.

55j56j655February 7, 2014 10:19 PM

@Nick P: There is no software for any of these PCBs. The software mentioned in other articles don't work with these in any way since they aren't part of some network infrastructure; they are just for custom code and drop-placement. It's just ready-to-flash hardware made for concealment.. Why it costs a house is anyone's guess, likely the economics of low sales to production costs..

If you got a flash dump a buyer developed you could learn about environments that aren't accessible even to people with high security clearance, or audit gov. offices based on device fingerprints.. But why people continue to suggest these PCBs teach ANYTHING is beyond me..

JonathanFebruary 7, 2014 11:17 PM

55j, given that such a thing exists and is apparently worth $6250 apiece to someone (4x as much as the fabled DoD hammer, so the Federal premium is still about right), how exactly would you deploy one? That thought experiment could be educational, no?

Imagine implanting one of these by scraping solder mask off of and optionally interrupting traces on some low-medium speed bus (e.g. LPC, SPI/I2C/SMBus from BIOS, MII, maybe even PCI or USB), reflow-soldering the implant into place, then selectively enabling and routing FPGA I/Os to some bus-specific IP core, in much the same spirit as a bed-of-nails test jig for pc boards. Or, remove a surface-mounted flash chip or the like entirely and replace it with a TRINITY-powered emulator, appropriately packaged and labeled for plausible cover. Power could be supplied by those I/Os or actual power traces via the FPGA's input protection diodes. I seem to remember later members of the Virtex range were mixed-signal capable -- imagine dropping one of these across a mic input.
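To make the flash-emulator variant concrete, here's a behavioral sketch (purely illustrative: only the common SPI NOR READ framing, opcode 0x03 plus a 24-bit address, is standard; the function name and test image are made up, and an actual implant would do this in FPGA fabric at bus speed):

```python
def spi_read(command: bytes, image: bytes, nbytes: int) -> bytes:
    """Answer the standard SPI NOR READ command: one opcode byte (0x03)
    followed by a 24-bit big-endian address, after which the host clocks
    out nbytes of data from the (possibly doctored) firmware image."""
    if len(command) != 4 or command[0] != 0x03:
        raise ValueError("unsupported command")
    addr = int.from_bytes(command[1:], "big")
    return image[addr:addr + nbytes]

firmware = bytes(range(256)) * 16               # stand-in 4 KiB image
assert spi_read(b"\x03\x00\x00\x10", firmware, 4) == b"\x10\x11\x12\x13"
```

A real interposer would of course also have to pass through or spoof things like the JEDEC ID read and status registers to stay plausible.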

Against that threat model, it seems visual or X-ray inspection, perhaps including comparison to a known clean board, would be the most reliable way of finding one of these, if no "darn computers are so unreliable" behavior or timing anomalies otherwise suggest its presence.

Clive RobinsonFebruary 8, 2014 2:27 AM

@ 65535, Jonathan,

There is a way to detect these devices, but I'm not sure how reliable it would be...

It's already been used a while ago against a supply-line poisoning of Chip-and-PIN reader terminals destined for a UK supermarket.

The "poison" consisted of a stripped-down mobile phone module that stored card and PIN details, and these got sent via the GSM network.

The problem was two part: firstly, the number of units poisoned was --assumed to be-- low. Secondly, visually looking for the devices was an issue because the terminals had all sorts of anti-tamper features, which meant "cracking the case" for a visual inspection effectively ruined the terminal.

The solution the investigators came up with was: as added hardware adds mass to each unit, weigh them all and crack open those which were "over weight".

Thus unless these hardware implants have equal mass removed from the host board, it will be "over weight" by some fraction. The question is by "how much", and whether that is in or out of the manufacturing variability "noise floor".

Secondly, adding or removing mass, unless done in a very precise way, will change the center of gravity of the host board; again it boils down to a question of the "noise floor" of the measurement methods.

Thus I'm not saying it's a good method but it might be a quick and simple "pre-screening" method to direct host boards for further attention.
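As a back-of-envelope illustration (all masses and the threshold made up), the pre-screen is just an outlier test against the batch's own "noise floor":

```python
from statistics import mean, stdev

def flag_overweight(masses_g, threshold_sigma=3.0):
    """Flag units whose mass sits more than threshold_sigma standard
    deviations ABOVE the batch mean -- a crude pre-screen that assumes
    most units in the batch are clean."""
    mu = mean(masses_g)
    sigma = stdev(masses_g)
    if sigma == 0:
        return []
    return [i for i, m in enumerate(masses_g)
            if (m - mu) / sigma > threshold_sigma]

# 19 clean terminals near 480 g, one with a few grams of added hardware
batch = [480.1, 479.8, 480.3, 479.9, 480.0] * 3 + [480.2, 479.7, 480.1, 479.9, 484.2]
assert flag_overweight(batch) == [19]           # only the 484.2 g unit
```

Whether a few grams clears the real manufacturing-variability noise floor is exactly the open question; with tight tolerances the threshold could be lowered.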

name.withheld.for.obvious.reasonsFebruary 8, 2014 6:29 AM

@ 55J56J655
You and the others who argue just don't get it..


No, I am afraid you didn't read between the lines in my post. I didn't mention any general or specific application/code/IP; I mentioned a structural aspect of the hardware that could be important to an application. The point I was making was that the COTS-like nature of the HW is laughable. Having years with these platforms, there are any number of challenges with HW, and the picture for the SoC (either Solution or System on a Chip) is often what I call a "Swiss Army Knife" slice. Doesn't matter if it's 22nm or 200um dies--the issue is the fab's functionality.
FPGAs and what are called Structured ASICs often come with SRAM, CPLD, ADC/DAC (SAR), SMBus/SPI/UARTs, comparators, DIO, AIO, and multiple clock/trigger/multivibrator domains. There are a few slabs out today that have some complex mixed-application features. So, next time try reading my post and don't assume or conclude some abstract theory before fleshing out the facts.

Also, there was a bit of a tongue-in-cheek comment that no one seems to have picked up on... one hint is that I mentioned Alcatel, not Atmel, as a fab source. The joke is about one layer down from that reference. And like I said, a Fusion AFS600 I purchased in 2007 has a nearly identical feature set on a 5mm QFP die. Oh, one more hint--ground.

FigureitoutFebruary 8, 2014 7:17 AM

55j56j655
--I don't know how you interpreted NWFOR's comment to be towards you; you're not even addressing people replying to you. James Sutherland made a comment about the cost. The "boring" hardware implants are boring from your perspective; increasingly, hardware is becoming so small there's not really a way to even analyze it...

If you have some ideas (or maybe even some experiments) on firmware resisting analysis w/ physical memory access, besides encryption/obfuscation, do tell. Maybe previously unknown hidden memory, and how it hides.

OT question:
Does everyone, or anyone, have random bits of electrical tape around some power wires coming from the power supply? On one of my older PCs, it doesn't look like it came from the factory, and I don't know why it would look so jank/tampered. I haven't analyzed it b/c I'm busy w/ another computer right now, but it makes me wonder b/c an implant would have prime access to power.

55J56J655February 8, 2014 1:14 PM

@Figureitout: No it's me who gets it.. THESE PCBs HAVE NO CODE WHEN THE BUYER BUYS THEM AND THEY ARE CONSUMER GRADE CHIPS AND ANALOG COMPONENTS.. THUS THEY REVEAL NOTHING EXCEPT THAT THE NSA WAS SMART ENOUGH TO CLASSIFY PROJECTS FOR SALE AND TO NOT PACKAGE FIRMWARE BASED ON THEIR OWN INTELLIGENCE..

Firmware can't resist analysis without a hardware oracle or RAM partition, where RAM is in POP configuration and the MMU management code has no vulnerabilities that would allow dumping of page tables or fragments of the LPAR table contents..

If you're referring to my solutions to preventing untrusted code execution in other comments my solution still stands.. Write-back hashing and encrypted page-tables.. If you know a way to get past that then YOU explain..
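As a toy model (one reading of "write-back hashing", with made-up names, not any real kernel's mechanism): privileged pages get hashed when loaded, and a write-back whose contents no longer match is rejected:

```python
import hashlib

class WriteBackHasher:
    def __init__(self):
        self._approved = {}  # page number -> SHA-256 of approved contents

    def load_privileged_page(self, page_no, contents):
        """Record the hash of a privileged page at load time."""
        self._approved[page_no] = hashlib.sha256(contents).digest()

    def write_back(self, page_no, contents):
        """Allow the write-back only if privileged contents are unchanged."""
        expected = self._approved.get(page_no)
        if expected is None:
            return True                         # unprivileged page: allowed
        return hashlib.sha256(contents).digest() == expected

mmu = WriteBackHasher()
mmu.load_privileged_page(7, b"kernel code")
assert mmu.write_back(7, b"kernel code")        # unchanged: allowed
assert not mmu.write_back(7, b"injected code")  # tampered: blocked
```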

In the meantime I'll keep waiting on logical arguments. I don't really care about the economic responses; they just give description to the sentence I FIRST posted that basically said the same thing.. Except they completely ignore economic rules and go with laymen theories..

JonathanFebruary 9, 2014 4:30 AM

Clive,

Eureka! After a fashion. :)

55j,

If covert computers bore you, and you won't address the deployment model I posited above, can you at least tell us on whose order and payroll you are sabotaging the discussion?

FigureitoutFebruary 9, 2014 8:36 AM

55J56J655
--No need to SCREAM at me. How to defeat? The usual side channels that will drive you crazy, as problems start burgeoning into unmanageable insanity. Basically getting the secret before it's a secret; cheating. It may even depend on your HARDWARE that is running your code and leaking info like a duct-tape speed boat. On the Dell computers at my school, any movement of the mouse, any typing on the keyboard, and I can fricking hear the buzzing noises in my headphones of info being modulated; it's annoying. So info is being output to the audio ports, and that's just w/ my ears! I need to take some radio equipment and see how much leakage is happening...

As to technical defeats to your scheme, no I don't know yet. I'm still rookie there, MMU's are pretty interesting to me.

55J56J655February 9, 2014 10:31 AM

@Jonathan: Because you just gave scenarios based on the usage I originally suggested, and then gave the illogical insinuation that you were somehow wiser to the subject matter..

@figureitout: On current x86 arch I wouldn't store keys. I'd generate at runtime a table of OTPs for all the operations, so what side-channel would you suggest?

Memory corruptions wouldn't be of any use except for maybe bus or glitch dumping, which would yield OTP keys after RCE; but then you gotta inject your code into privileged memory, which is caught by write-back hashing. Then on the next kernel init there are new OTP tables and your dump is useless even for exploit development..

Again, I wait for a logical argument instead of just suggesting I'm wrong based on petty social factors..

Clive RobinsonFebruary 9, 2014 11:40 AM

@ Figureitout,

    As to technical defeats to your scheme, no I don't know yet. I'm still rookie there, MMU's are pretty interesting to me

And so they should be :)

If you take a look at an MMU from 20,000ft you see a box with three ports,

1, From CPU Address in.
2, To RAM Address out.
3, Control Port.

Now in a single-CPU system the CPU controls its own MMU, thus any malware that controls the CPU controls the MMU; at best the MMU obfuscates the address translation, it does not even hide it, and it certainly does not act as a reliable security mechanism.

Which, as the main uses for an MMU are Virtual Memory control and helping fast context switching, is not an issue.

However, ask yourself the question of what happens when the MMU is not controlled by the associated CPU but by another CPU acting as a security hypervisor?

Malware on the main CPU cannot control the MMU, and thus its access to memory is controlled beyond its reach. To change the MMU it has to get the hypervisor to do it; if --and only if-- the hypervisor is properly implemented, then the malware is blocked from vital parts of memory.
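That 20,000 ft view fits in a few lines. In the toy below (the names and the capability token are purely illustrative), only the holder of the hypervisor's token can change mappings; the work CPU can merely translate:

```python
class HypervisorMMU:
    def __init__(self, hypervisor_token, page_size=4096):
        self._token = hypervisor_token
        self._page_size = page_size
        self._map = {}                    # virtual page -> physical page

    def set_mapping(self, token, vpage, ppage):
        """Only the hypervisor CPU, which holds the token, may remap."""
        if token is not self._token:
            raise PermissionError("only the hypervisor may remap")
        self._map[vpage] = ppage

    def translate(self, vaddr):
        """Port 1 in, port 2 out: CPU address to RAM address."""
        vpage, offset = divmod(vaddr, self._page_size)
        if vpage not in self._map:
            raise LookupError("page fault: no mapping")
        return self._map[vpage] * self._page_size + offset

HV = object()                             # held only by the hypervisor
mmu = HypervisorMMU(HV)
mmu.set_mapping(HV, vpage=0, ppage=42)
assert mmu.translate(100) == 42 * 4096 + 100
blocked = False
try:                                      # malware on the work CPU has no token
    mmu.set_mapping(object(), vpage=0, ppage=0)
except PermissionError:
    blocked = True
assert blocked
```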

However there is another trick you can do with an appropriate MMU, which is "limit memory availability". Depending on the fine-grained control, you can get the entire memory range of the CPU limited to just one real RAM address. In practice most MMUs' fine-grained control is page-based with a page size of 4K, which is not fine-grained enough for optimal security.

However, if it were sufficiently fine-grained you could restrict the amount of real RAM the CPU had access to, and thus not have spare for malware to hide in. With the use of other techniques, such as memory usage tagging, it would be possible to lock down memory quite well (but not perfectly, for reasons I'll explain in a bit).

Having the ordinary work CPU and a hypervisor allows for an extra trick. As you are no doubt aware, most processes in a CPU's memory are doing nothing for most of the time; that is, they are either blocked or sitting in the scheduling queue awaiting their run time slice. One thing the hypervisor or another CPU could do is to check the program memory of the work CPU to see if the static code memory which holds the executable code has changed... if it has, then in most instances this would be bad news, as it would indicate malware or memory corruption. Either way you would want to treat it, at a minimum, as a seg fault, and clean up and chuck the core image off to another security function for checking.

The "fly in the ointment" for using an MMU for tight memory control is where it is placed, which is generally on the external memory bus after L1-L3 cache; on most modern CPUs, cache memory far exceeds the real RAM memory on early PCs (which had malware). Thus in theory malware could bypass the MMU memory restrictions and make itself a home in cache memory, but without going into the ins and outs of it, it would not be an easy task to accomplish, due to the way the associative memory used in cache generally works.

One way around this is to halt the work CPU and inspect the cache memory (not impossible but...). Another is not to have associative cache, or no cache at all; obviously for some types of executable this would involve a performance hit, however it's often not as bad as portrayed, due to the effect of a cache miss on context switching.

The reality is that having more CPUs without associative cache memory, and thus removing the need for context switching and attendant cache misses, can more than make up for the loss of associative cache, whilst still allowing fairly easy checking of memory for modifications and extra illicit code etc.

One of the things I've been thinking about off and on is the amount of silicon wasted for minimal performance gain. One such is "out of order execution"; whilst in theory it has much to offer, in practice... It often represents a chunk of real estate that is as large as the actual core ALU and instruction decode blocks, draws more power, and to be honest really does not pay its way when looked at in terms of CPU power and heat budgets.

Thus it could be stripped out and the real estate more usefully employed with a second CPU core. Or it could be, if programmers could write sensible threaded code, or code blocks small enough to stay within the limits of the core to get better utilisation.

However, what if instead of out-of-order execution some of that real estate was used to provide on-the-fly cache and RAM checksums, to check that code memory was not being actively corrupted and data memory was staying in range, and likewise CPU registers etc?

It's only by asking these questions in terms of power and heat budgets that acceptable trade-offs will be found in future silicon real estate. This is because we have already crossed the "terminator line" where all circuitry in a given area of silicon can actually be functioning at full power and speed and not cause "heat death" of the silicon. That is, reduction in transistor size is not actually gaining as much usability as it is reduction, and it's also the reason why things like cache memory are quite large; basically, the way memory works, it's mainly inactive, so its power requirement to real estate area is quite small when compared to ALU etc.

FigureitoutFebruary 10, 2014 7:09 PM

55J56J655
--None, b/c "you get it" and would probably scoff at the threat model I'm getting at. But you're really making me itch to try some...and they're very anti-social. Let me just put it this way, the light fixture above my toilet fell when I was showering one day, that's why it was rattling, b/c "something" loosened it. Oh, hey there's a nice power source and a nice peep hole too.

Clive Robinson
--Bah thanks as always. It would be nice to have an MMU chip and some CPU's that can be guaranteed clean. I can't see how it would work now, the design, and the code; and I don't have assurance and that really gets to me...Right now, I'm focused on recovery from a known infected system, very bad infection. Before you mentioned "blowing a chip back", I may resort to that. So we'll see what I can do w/ software, my dad just wants Ubuntu on it and connected to the internet again...I want to corner whatever the hell this is and turn it into a network analyzer instead of RasPi and beaglebone. Also looking to get going on a z80 computer, still getting accustomed to z80 asm. Then a Forth pc that Aspie sent me; all in the spare free time I get...Fun but I need more time! grr..
