BadUSB Code Has Been Published
In July, I wrote about an unpatchable USB vulnerability called BadUSB. Code for the vulnerability has been published.
Anura • October 8, 2014 4:08 PM
Attacks like these demonstrate a need to fundamentally rethink the hardware architecture of systems. All communication between components, whether hardware or software, needs to be strictly governed.
foobar • October 8, 2014 4:53 PM
No, we don’t need to rethink architecture. We do need to rethink the process of hardware design behind closed doors, with all kinds of bad actors exerting their power. You know who I mean.
@foobar No, we need to rethink both.
Who am I? • October 8, 2014 5:08 PM
We need to rethink the way this information is released into the wild too…
Anura • October 8, 2014 5:09 PM
This isn’t just about the fact that this device can be attacked with malware, this is about the fact that USB devices have access to more than they should. If you plug a USB thumb drive into a port on the same controller as your keyboard or wifi card, it can snoop all of that information or spoof the device (e.g. it can steal your password, and then run commands with root access on your machine by spoofing your keyboard).
Anura • October 8, 2014 5:14 PM
On top of that, although this particular exploit attacks existing hardware, there is nothing you can do from the hardware side to prevent a TLA or other resourceful party from straight up manufacturing their own implant to physically attach to USB drives and act as a man in the middle (doesn’t TAO already do that in the first place?).
Who am I? • October 8, 2014 5:24 PM
Can someone shed some light on how vulnerabilities like this one affect users of non-Microsoft operating systems? Specifically, can this vulnerability be ported to secure operating systems like OpenBSD without requiring a privileged account?
Will this code target a single operating system and/or architecture (e.g. OpenBSD/amd64), or will an infected drive execute its code in an OS-agnostic way?
Lastly, now that we know the exact details, is there a chance of writing a software tool that will check and clean infected drives?
Thanks in advance!
Grauhut • October 8, 2014 5:38 PM
@Who am I? “We need to rethink the way this information is released into the wild too…”
You are so right. We need harder and faster full disclosure. Otherwise they don’t learn!
And every updatable coprocessor needs a write-protect DIP switch. Every one: your BIOS, your smartphone’s baseband processor, your SCSI controller’s BIOS extensions, your SSD firmware, your offloading NIC, your USB flash stick, your DVB-T stick… Each and every coprocessor needs a working write-protect switch. And you must be willing to use it; it will save your day someday, sooner or later. Until that day comes, we are potentially pwned, penetrated, f***ed, you name it. It makes no difference whether we know it or not. A black hole exists by itself and works by its gravity whether you look at it or not; behind its event horizon you will be eaten, and Schroedinger’s cat will not save you, because you have her box in your hands and she will be eaten with you, dead or alive, whether you look at her or not.
Ladies, prepare your glue guns! 🙂
Anura • October 8, 2014 5:39 PM
@Who am I?
Detection, I don’t know, but I described a scenario above. The device can listen for your keyboard input and sniff out your password (if you are in sudoers, they don’t specifically need your root password), and from there it can spoof your keyboard to run arbitrary commands on your system, for example to install executable code with root privileges.
To protect yourself, I would make sure the port you plug your thumb drive into is on a separate controller from your keyboard/mouse. Even then, that’s far from a guarantee of security.
Grauhut • October 8, 2014 5:54 PM
Take a headless Raspberry Pi without USB HID and COM driver modules as an automated copy box, with ClamAV if paranoid.
It just needs a small auto-mounting service to check the availability of flash sticks on the in and out ports; when both are ready, mount, copy, and scan. When finished, flash a green LED until the drives are removed.
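The copy-box loop described above could be sketched roughly like this in Python. The mount points, the polling interval, and the use of `clamscan` on the PATH are all assumptions for illustration, not part of any real product:

```python
import os
import shutil
import subprocess
import time

def both_ready(in_mnt, out_mnt):
    """True once both the source and destination sticks are mounted."""
    return os.path.ismount(in_mnt) and os.path.ismount(out_mnt)

def copy_and_scan(in_mnt, out_mnt):
    """Copy everything from the 'in' stick to the 'out' stick, then
    scan the copy with clamscan (exit code 0 means no virus found)."""
    dest = os.path.join(out_mnt, "copy")
    shutil.copytree(in_mnt, dest, dirs_exist_ok=True)
    return subprocess.call(["clamscan", "-r", dest]) == 0

def copy_box_loop(in_mnt="/media/in", out_mnt="/media/out"):
    """Poll until both sticks appear, then copy and scan once.
    A True result would be the cue to flash the green LED."""
    while not both_ready(in_mnt, out_mnt):
        time.sleep(1)
    return copy_and_scan(in_mnt, out_mnt)
```

The point of running this on a box with no HID or COM drivers loaded is that a stick pretending to be a keyboard has nothing to talk to.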
Anura • October 8, 2014 6:04 PM
For USB, that doesn’t solve much, as the malware can be installed before you even get the device. In fact, I would bet that would be the main route of attack: someone giving you a thumb drive with an exploit (or, alternatively, modifying something like a Logitech USB receiver and switching it out when you aren’t looking).
Miles • October 8, 2014 6:53 PM
@Who am I
This attack hits the firmware on USB devices — the security of the operating system is somewhat irrelevant. Soon, we may not even be able to trust a keyboard that was attached to an infected computer.
If your secure operating system has a root console window running when you attach an affected USB stick, it could behave like a keyboard and try to run some commands designed to create havoc.
However, I suspect the frequency of BadUSB exploits targeting secure operating systems will be about the same as the frequency of viruses. 90% of the world’s computers run insecure operating systems, so people looking to steal money and secrets will focus their efforts there.
Stan • October 8, 2014 8:25 PM
Why does a USB stick have a rewriteable firmware? And even more so, why is it rewriteable at any time? That’s the exact same problem we had with printers not too long ago — printers getting malicious firmware installed.
Operating systems need to deploy a clippy shield: “it looks like you plugged in an XYZ. Would you like to use it?”
Nick P • October 8, 2014 9:37 PM
It’s probably what RobertT taught us years ago. The industry is constantly looking for ways to reuse existing parts or I.P.’s. The firmware component in the USB stick is probably the same as in other products, maybe a third party vendor. Updateable firmware also allows issues to be fixed via updates instead of expensive hardware redesigns or recalls. So, it makes sense economically. And the result is naturally detrimental to security. As always.
Clive Robinson • October 8, 2014 10:31 PM
Why does a USB stick have a rewriteable firmware? And even more so, why is it rewriteable at any time?
As Nick P pointed out the reason is based in the economics of the product. More specifically it’s to do with cost savings on the production line.
The profit in low value high volume (LVHV) fast moving consumer electronics (FMCE) is made on minimum production costs and lost on rework or returns.
Thus even the use of a jumper adds a significant cost: not just the part, but the increase in PCB and case size; and the cost of a user operating it for five or ten seconds per operation on a bare board (uncased PCB) could easily double the time per item. A rework, however, might be on a finished “cased” item, and the cost in time to open the case without damaging it can alone swallow up any profit.
Put simply, the “security” of a jumper would add a couple of dollars to the consumer price of the item, which for some thumb drives would effectively double their price…
So what does the general consumer want, cheap goods or invisible security? We already know the answer to this, and security loses every time.
But you also have to consider whether the link, even if added, would do anything… The simple answer is no, because it’s the chip manufacturer that makes the choice, and the cost of having an external line on the chip bonded out is significant.
Grauhut • October 9, 2014 2:15 AM
@Clive Robinson “Put simply the “security” of a jumper would add a couple of dollars”
Nope, the price of a jumper or DIP switch in an automated production environment in China is in the low single-digit cent range. There is no device that would become unaffordable this way.
FCC approval has a bigger share on end user prices. 🙂
Grauhut • October 9, 2014 2:18 AM
@ Anura “For USB, that doesn’t solve much as the malware can be installed before you even get it”
There is no magic malware, just magic mushrooms.
A HID malware without HID driver is a dead piece of binary junk.
Dev Null • October 9, 2014 3:03 AM
Maybe, we are all infected. We should make USB devices by ourselves.
T!M • October 9, 2014 3:08 AM
I think all data coming through an interface must pass a kind of whitelisting process. If the device says it’s HID, then only HID actions should be possible.
Maybe I need a dedicated system just for using USB devices, and my keyboard will be PS/2 again and my mouse serial… f**k, my systems only have USB for this, and a PS/2-to-USB converter wouldn’t help.
OK, then I use the computer with all the USB devices and connect to it by remote desktop with RDP/Citrix or VNC or TeamViewer or whatever… so that system could get infected (but without the ability to access the internet, because it is only allowed to handle my remote desktop connection), and I won’t have a problem with that.
Maybe I have to give this a few more rethink cycles, because I am sure you have many ideas why this solution isn’t that helpful 🙂
Andrew_K • October 9, 2014 3:19 AM
Most fascinating of all to me is the selection of Ruiu as the primary target.
It makes me believe that this is not a state development. They would not risk what is happening now: publication.
Many answers could be found if Ruiu were able to reconstruct how he got the initial bad USB device. Bought from stock in a shop? Given at a conference by a fellow hacker? A promotion? Found in the parking lot? Or did he trade USB sticks with someone who was targeted by bigger powers and is blowing a program now?
Anyhow, this is a good day. We learned a new vulnerability so we can target and fix it. Read-only BIOS would be a nice step. Read-only firmware on critical devices, too.
Andrew_K • October 9, 2014 3:38 AM
I like the idea of device whitelisting. Unfortunately, it won’t stop code running without the OS’s knowledge. In fact, the BIOS is probably infected before the OS even notices the new device. The USB protocol would need a pre-usage authentication step that is forwarded to the user for approval.
Your approach of using RDP/… is what I have been doing for some years now. It works quite well for me, but it’s more a convenience thing: I can use nearly any machine with Internet access to reach my desktop. Regarding security: Unfortunately, you still type locally. A Keylogger will still see target, username, and password. Thus, I log connections intensively and review them against what I have done and where the connections came from.
T!M • October 9, 2014 4:11 AM
I thought of device whitelisting on a layer outside the OS, to be independent of what the OS does or doesn’t do. The pre-usage auth could be a simple number telling the checking instance (BIOS or something else) what kind of device it wants to be … 1=HID keyboard, 2=HID mouse, 3=mass storage, … , 12=HID keyboard+mouse, …
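The numbering scheme above could back a simple capability whitelist in the checking instance. A hypothetical sketch — the action names are made up for illustration, and only the type codes listed above are modeled:

```python
# Capability whitelist keyed on the declared-type numbers above.
ALLOWED = {
    1:  {"send_keystrokes"},                          # HID keyboard
    2:  {"send_pointer_events"},                      # HID mouse
    3:  {"read_blocks", "write_blocks"},              # mass storage
    12: {"send_keystrokes", "send_pointer_events"},   # keyboard+mouse combo
}

def action_permitted(declared_type, action):
    """A device may only perform actions matching what it declared at
    plug-in time; anything else (including unknown types) is rejected."""
    return action in ALLOWED.get(declared_type, set())
```

Under such a policy, a stick that declared itself as mass storage (type 3) and then tried to send keystrokes would simply be refused.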
Regarding security: Unfortunately, you still type locally. A Keylogger will still see target, username, and password.
Yes, but if nothing but (e.g.) the ICA protocol goes through the firewall to your remote desktop system, the keylogger can’t pass your credentials to someone else, and because you work on the remote system, the only way to learn what you are doing remotely would be screen capturing — and even then that information would be useless, because it would stay on your thin client.
A few years ago I played with the good (very) old software I had and installed DOS 5.0 with Windows 3.11 (you know, the Windows for Workgroups thing ;) on a notebook I hadn’t used, and with Netscape and Java support I could establish a VNC connection to a WinXP system. That was just for fun, but it worked, and current malware would have trouble running on DOS 5 and Win 3.11.
OK, USB support would be a problem in this constellation, but what if you ran the system with e.g. Linux and the DOS/Win 3.11 system inside a virtual machine? To raise convenience, you could use a tool under your host system to create a network share for USB devices plugged in, and in the guest system (Win 3.11) it would get connected automatically over TCP/IP.
This would be a way to bring new life to the old floppy or CD collection 😉
ATN • October 9, 2014 4:28 AM
Am I the only one who read the link under “been”? Mostly this sentence:
Then, when Ruiu removed the internal speaker and microphone connected to
the airgapped machine, the packets suddenly stopped.
Bruce, did you cut off the microphone of your air-gapped PC – or do you think it is already too late for that?
That is the first time I have heard about networking over high-frequency sound; I bet the dogs won’t like it…
Thoth • October 9, 2014 7:47 AM
I wouldn’t be surprised if microphones could be used as covert exfiltration channels. That is another item to be discarded if you want to air-gap a computer, unless you have a good reason to keep it on.
Another way is to solder in a switch, so that the microphone doesn’t work unless you push the switch.
Clive Robinson • October 9, 2014 8:38 AM
You must have at best skim-read my comment.
If you go back and read it again you will see,
Thus even the use of a jumper adds a significant cost of not just the part but the increase in…
And I go on to list just a few of the effects the addition has that increase the cost of the item the consumer sees, not just the BOM.
As for FCC or CE or other type testing and approval, they add a fixed cost amortized across the entire production run, not an additional cost to each individual item as a jumper or DIP switch would.
I’ve not done FCC testing recently, but the last CE testing I did came in at well under 3,000 USD. I know this is actually high (I’ve done it for a lot less) but it was a small run and I was in a hurry. Far East costs of in-house testing in a more or less automated process are a fraction of this, and across a production run of a million or so parts that comes down to a fraction of a cent per item.
FMCE is a funny old game, because what a designer of low-quantity production would see as a significant cost is often irrelevant, whilst what they see as a non-issue, such as a single component placement or orientation, can have a very significant production cost. For instance, a “PCB solder bridge” that needs to be made during assembly is expensive, as it violates the “no hot work” rule and thus has to be done on a completely different assembly line with differently trained workers and safety systems, plus quite a few other problems involving, amongst others, inspection and test, as well as increased rework costs and return rates…
Dan Hough • October 9, 2014 8:52 AM
OK, doesn’t this really apply to anything connected to USB that has updatable firmware? Printers, DVD drives, keyboards, hubs? In the end, it seems that unless we fuse the firmware, we will have vulnerabilities. (Code signing, too, would seem to be risky: if you have an Android phone and spend time on XDA, you know that even signed bootloaders are cracked on a regular basis.)
The second thing I see is more malicious. I work for a defense company, and I plug peripherals into my PC that were made in countries that are, for want of a better term, not entirely friendly. How do I know the firmware, even if it was fused, was not contaminated from the get-go? Likewise, the NSA might drop some code into someone’s peripheral. So we also need behavioral scanners/detectors, and we need a much stronger permission structure on the USB ports of our PCs. For instance, if a list of capabilities per port were being requested and my USB stick was asking for the ability to send keystrokes… I’d be concerned. Those ports need to be firewalled just like Ethernet.
MrC • October 9, 2014 8:59 AM
Merely restricting it-says-it’s-a-HID devices to HID operations isn’t going to help. A HID keyboard can open a command prompt and reposition it off the visible screen. It can then run any executable the current user has permissions for and/or run an executable provided by the attacker by typing out binary code converted to base64 and converting it back to binary with powershell (or equivalent). Going the other direction, the num-lock, caps-lock, and scroll-lock LEDs provide a low-bandwidth data channel from the PC to the keyboard.
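The base64 trick described above works because base64 output is plain typeable ASCII. A minimal Python illustration of the round trip — on a real victim machine the decoding step would be done by PowerShell’s `[Convert]::FromBase64String` or `certutil -decode`, not Python:

```python
import base64

# Arbitrary binary payload, including unprintable and null bytes.
payload = bytes(range(256))

# What the "keyboard" would type: plain ASCII any shell prompt accepts.
typed_text = base64.b64encode(payload).decode("ascii")
assert typed_text.isascii() and "\x00" not in typed_text

# What the victim side reconstructs from the typed characters.
recovered = base64.b64decode(typed_text)
assert recovered == payload
```

So a device that can only "type" can still deliver an arbitrary executable; restricting HID devices to HID operations does not close that channel.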
All of the above applies to a device that presents solely as a keyboard. As I understand it, there’s some trick to making the same physical device present itself as both a keyboard and a storage device. In which case, the keyboard can just issue commands to move files back and forth to or from the storage device.
This situation with USB is really and truly a mess. I don’t see how it’s fixable. For those not hardcore enough to abandon USB outright, the following mental exercise may help as a stopgap: Treat USB devices like hypodermic needles — remove them from the manufacturer’s packaging, and then stick them into only one PC ever before trashing them; and never accept one from someone else that’s not in the manufacturer’s packaging.
T!M • October 9, 2014 9:38 AM
A HID keyboard can open a command prompt and reposition it off the visible screen.
Would you say, that this is something a keyboard should be allowed to do?
If your answer is “no”, then this would be a good example of an action blocked by the device whitelist. If there were a security check outside the OS layer that provided information to the OS, which could then tell the user “A USB keyboard has been attached to Port 01”, the user could see whether the USB Christmas tree from China uses the USB connection only for power or presents itself (to the OS) as something different.
All of the above applies to a device that presents solely as a keyboard. As I understand it, there’s some trick to making the same physical device present itself as both a keyboard and a storage device.
I have a programmable USB device and can turn it into a keyboard (to make input by itself, as you described), mass storage, or both. No trick is needed, and thanks to plug’n’pray the device doesn’t even need a driver, because the system provides one. The bad thing about HID devices compared to mass storage is that they don’t appear in the USB device list (the “remove hardware” icon) in the system tray.
The problem with mass storage devices is that their purpose is to receive and provide data. But they shouldn’t have the right to decide this by themselves, I mean without a request from the system/user.
It’s an interesting discussion, I think, and the mechanisms used are too complex to become secure in the long run.
Man with Clue • October 9, 2014 9:56 AM
“Bad USB” isn’t an attack on the USB device, it’s an attack on the host computer. The Git source shows you how to do it yourself. The USB Rubber Ducky physically looks like a USB drive and acts like a keyboard. Or a drive. Or both. Heck, the thing has a 32-bit 60MHz processor! How many of you remember when something like that was the latest and greatest, fit only for the mightiest of hardware? (Once upon a time, I yearned for a KIM or AIM 6502 board.)
Here’s the thing: Moore’s law is out to get you. Want a 32-bit processor? You can get one in an 8-pin DIP chip (LPC810). Want to go with a form factor a little larger than a USB stick? How about the Adapteva Parallella board, featuring a dual-core CPU, and a 16-core coprocessor? Anything that sports a programmable USB interface will allow the device to look like anything it chooses.
The first thing that the computer does when a USB device (or even a floppy) is inserted is ask, “how do I work with you?” It’s up to the device to answer honestly. But what if the device is dishonest? That’s the heart of security. In this case, the USB device can say that it’s a keyboard, and then just act like one, sending in whatever commands a script dictates. It could be sniffing the bus, waiting for the right moment to do its deeds. It could delay “inserting a CD” and then give keyboard approval to play that CD.
At some point we’ll have USB firewalls.
paul • October 9, 2014 10:02 AM
If you want programmability on the assembly line but not after, why not a fusible link? In the volume that usb widgets have, the per-unit NRE would be in the noise. (Yes, there are attacks that would work against this, but it would reduce the surface significantly, and make it much harder for infections to spread.)
T!M • October 9, 2014 10:45 AM
@ Man with Clue
At some point we’ll have USB firewalls.
And now the golden question… what would these USB firewalls have to do so that USB devices are no longer a security problem?
I don’t want more firewalls, security guards, scanners, detectors… just to let the complexity of my infrastructure grow and to pay much more money to hardware sellers and consultants, only to watch all this stuff get hacked through bash tunnels, trojans, …, because of insecurely implemented security features or well-implemented backdoors from whoever.
I want less complexity, to raise stability, maintainability, clear borders, obvious ways of access, …, to get the chance of more security and new trust in the technology we use to drive this techno-world.
Clive Robinson • October 9, 2014 10:53 AM
Part of the problem is that, at a fundamental level, the interface of a USB device, just like many others (I2C, early Ethernet like 10BASE-T), is a “shared bus” architecture without mandated hardware device selection.
Thus one device can see all the others and, if it chooses, imitate them. There is nothing the host-end controller can do to differentiate the devices as long as they share the same bus.
To stop this, the host USB controller needs to see each device on a separate hardware channel, not a common channel. Thus USB, like similar buses, is “insecure by design” and there is no way around it.
This issue was part of the reason that later Ethernet designs are “switched” rather than hubs on a common channel; switching enables an appropriately featured managed switch to prevent devices on different physical ports from impersonating each other or even seeing each other’s traffic.
For security you really need one hardware channel for each device, and the OS must be able to use those channels cleanly to prevent such issues (so no common stacks etc. as well).
MikeA • October 9, 2014 11:00 AM
The more things change… The IBM 1620 (late 50’s early 60s) had a disk controller that honored “write protect” per sector, controlled by bits in the sector header. One could only set/clear those with a “write track” command, as if formatting the disk. The write-track command, meanwhile, could only be done if a physical key-switch was set to allow it, enforced by the controller hardware. And the OS would nag you to turn it off after manipulating protections, and refuse to do much of anything else until you did. Of course, nowadays that switch would be a dialog box and everybody would just click “OK”.
On fusible protect-links, back in the 1990s, my employer had an issue with counterfeit parts. Since the firmware was in an EPROM microcontroller with a read-enable bit, we wondered how it had been copied. Turns out the adversary had figured out how to mask the EPROM area so as to erase only that bit. Possibly with some collateral damage but that just made it take several tries. A similar attack took the simpler method of buying genuine parts from the “fourth shift” at the contract manufacturer.
Nick P • October 9, 2014 11:08 AM
re price of a jumper
Clive’s argument still stands even at a few cents cost. Multiply that by every product sold. This is the total cost of the feature. Now, ask the following question from perspective of management, “Can we sell the same amount to the market without that feature?” If the answer is yes, then that dollar amount shifts from cost to profit. And we know where the priorities stand on profit.
“dedicated system for using USB devices”
My old physical security solutions did this. Another person mentioned a Raspberry PI. I used to do it for offloading peripherals, storage and networking onto dedicated, simple systems. That put their code outside the TCB of the main system. The physical interface to host could then be under my control, simpler, and safer. Various mediation could occur.
@ man with clue
“At some point we’ll have USB firewalls.”
They’ve been available for some time, both software and hardware. Not sure how effective they are as I just did offloading & avoided USB devices.
The best solution to all these problems is what I’ve been advocating since I found it: dedicated I/O processors with a baseline I/O protocol and mediation built in. That is how mainframes have worked for decades. The main processor instructs logical chunks of work to be performed, does other work, and eventually is notified when done. A dedicated processor handles all the interrupts, runs I/O programs that do low-level work, splits the work across dedicated channels, and can enforce policies on each channel. The I/O chip itself has DMA, of course. Throw in I/O MMU, device profiling, and protocol engine mods. Now, you have a more secure solution that also has massive utilization and throughput.
Cost varies. It might be an extra core or chip. Embedded chips can do a lot with little power or cost. One for every device would still be inexpensive.
John Hardin • October 9, 2014 11:19 AM
(1) Can the computer that a device is plugged into detect whether the device is compromised?
(2) Can the firmware on a compromised device be overwritten again with clean firmware such that the compromise is reliably removed? Or can the compromised firmware block that action?
If both are true, then it should be possible to write software (I’m thinking a bootable CD) that you can launch on some low-end potentially-sacrificial USB-enabled box (like one of those ten-year-old laptops collecting dust in the closet) that can be used to inspect and cure any USB device you get before you plug it into your real computer.
I suspect this would require cooperation from the manufacturers, in that they’d need to provide a way to identify the device, checksums for the current valid firmware, and the current valid firmware itself.
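The checksum step of such a tool might look like the sketch below — a hedged illustration, with the caveat raised elsewhere in the thread that a compromised controller can simply return a saved clean image when asked for a dump, so a matching hash is necessary but not sufficient:

```python
import hashlib

def firmware_ok(dump, published_sha256):
    """Compare a firmware dump read from the device against the
    manufacturer's published SHA-256 (given as a hex string).

    Caveat: this only proves the *reported* firmware matches; a
    dishonest controller can report whatever it likes.
    """
    return hashlib.sha256(dump).hexdigest() == published_sha256.lower()
```

The hard parts the manufacturers would have to supply are exactly the ones named above: device identification, the published hashes, and a trustworthy way to read the firmware out.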
name.withheld.for.obvious.reasons • October 9, 2014 12:07 PM
@ Nick P
As you probably suspected, I’m going to have to chime in here. I find it funny that people won’t take the time necessary to verify source (see Heartbleed) or binary distributions (via hashes, etc.). Where does it get better? When individuals are incentivized to ask, no, demand, that manufacturers lose their EULAs and become accountable for producing what amounts to garbage. If we transposed the problem into our transportation systems, where most people get where they are going but more than occasionally an airliner, train, or bus disappears from the airways, rails, or streets, we might be able to formulate a clearer, more cogent response to this stupidity. If, under this analogy, people were compromised during their transit from point A to B…Z on a continuous basis, where would the conversation be?
Man with Clue • October 9, 2014 12:22 PM
@T!M, Nick P:
The USB firewalls currently don’t protect against this kind of attack, where the device is an “evil keyboard.”
What is needed is for the host to evaluate the device’s BIOS, and then compare its checksum against the manufacturer’s published checksum. Then you have a higher assurance that things are OK. However, if the device has enough sophistication, it will fully mimic a good device.
At home I have a couple of old Lexar 128 MB USB sticks, and they have write-protect switches on them. However, I don’t know whether these also protect the controller’s BIOS.
Here’s the point: the device acts as if the malicious user were sitting at the computer, typing and loading things. Now, aside from a typing speed of 1,000,000 words per minute, how does the OS know that it isn’t you at the keyboard? When I sit down to help other people with their computers, what would raise a flag that an expert instead of an idiot is at the keyboard, or if the idiot is following instructions from a book?
What’s especially pernicious about something like a hostile, intelligent USB device is that it can emulate multiple devices. Keyboard, mouse, USB drive, networking device, anything the attacker likes! The only defense is to disable the USB ports.
Here’s the problem: all keyboards look alike. Yeah, sure, they’re supposed to have individual identities, but when something like BadUSB or the USB Rubber Ducky is plugged in, there’s no way for the host to know that the device is misidentifying itself. Even commanding it to read out its BIOS doesn’t help: the device may or may not answer honestly. Some keyboards have programmable macros; what’s the difference between BadUSB/Rubber Ducky and a keyboard with hardware macros? There is no out-of-channel verification of the device.
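The typing-speed observation above does suggest a crude host-side heuristic. A sketch — the 15 ms threshold is an arbitrary assumption, and a patient device that types at human speed with jittered timing defeats it entirely:

```python
def looks_scripted(timestamps_ms, min_interval_ms=15):
    """Flag a burst of keystrokes whose inter-key intervals are all
    implausibly short for a human typist.

    timestamps_ms: arrival times of consecutive keystrokes, in ms.
    Returns False for zero or one keystroke (no intervals to judge).
    """
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    return bool(gaps) and all(g < min_interval_ms for g in gaps)
```

At best this raises a flag on the naive million-words-per-minute case; it cannot distinguish an expert, an idiot with a book, or a well-tuned hostile device.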
John Hardin • October 9, 2014 12:35 PM
Yeah, that’s one thing I was wondering: would the malware save off a full copy of the valid firmware somewhere and return that when asked to dump it (assuming getting a firmware dump from a USB device is even possible)?
Johan Wigzell • October 9, 2014 2:02 PM
Looking at the GitHub page Bruce linked to, the USB firmware is (at least in some cases, if not always) on a NAND flash chip. Is it even possible to set this type of chip read-only?
Would an EEPROM be safer than a NAND flash chip?
Man with Clue • October 9, 2014 3:09 PM
It’s hard to say whether possible malware would save off a copy of the device’s firmware or not, because it depends on the controller. Some controllers only allow write updates to their internal firmware; some don’t. The only way to actually disable writes to a NAND is to lift and then tie off the R/W pin. If the NAND is internal to the device, there may be no way to make it read-only.
Adafruit has a nice article on what was done to start reverse engineering the driver for the Microsoft Kinect.
Mr. C • October 9, 2014 4:15 PM
Would you say, that this is something a keyboard should be allowed to do?
Afraid so. Unless you’re willing to make a mouse mandatory (which only shifts the problem to another HID device), the keyboard needs to be able to run executables (including the command prompt) and move windows around the screen. The real problem lies in the windowing system allowing windows to be moved all the way off the visible screen.
Nick P • October 9, 2014 4:19 PM
Liability and market expectations would help a lot. The market doesn’t demand it though. I’ve blamed users previously, and coincidentally a USB story. The closest thing we have are the niche companies that warranty the quality of their systems or software. Then, there’s service level agreements in that sector. Of course, Common Criteria Protection Profiles show that vendors need some formal standard with features and assurance activities to go by. So, for USB, a full threat analysis would be needed along with proposals on features and activities to catch problems. It would be certified by 3rd party pen testers. If warrantied, the quality would be high to reduce their liability. We see this in the DO-178B market.
@ Man with Clue
“The USB firewalls currently don’t protect against this kind of attack, where the device is an “evil keyboard.””
I was just saying USB firewalls exist. Like other firewalls, they stop some attacks and not others. A USB keyboard attack to bypass DLP was published online many years ago. So, the same vector working again is just evidence of the industry not learning from its mistakes.
“Now, aside from a typing speed of 1,000,000 words per minute, how does the OS know that it isn’t you at the keyboard?”
Require authentication. The USB device isn’t trusted until the password comes from it.
“The only defense is to disable the USB ports.”
Or use a USB chipset and stack that restricts what a USB port can do. Assign one a keyboard function, one a mouse function, etc. IOMMU’s and well-designed TCB’s restrict them from there. I also advocate things like USB be disabled by default at system boot, then enabled only if the system chooses. And then restricted.
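The per-port assignment idea could be sketched as a policy table. The port numbers, role names, and default-deny behavior here are illustrative assumptions; 0x03 and 0x08 are the real USB class codes for HID and mass storage:

```python
# Hypothetical per-port policy: each physical port is assigned one role,
# and a device whose declared class doesn't match that role is rejected.
PORT_ROLE = {1: "keyboard", 2: "mouse", 3: "storage"}

# USB interface class codes: HID = 0x03, mass storage = 0x08.
CLASS_FOR_ROLE = {"keyboard": 0x03, "mouse": 0x03, "storage": 0x08}

def admit(port, declared_class):
    """Ports are disabled by default: an unassigned port, or a device
    declaring the wrong class for its port's role, is rejected."""
    role = PORT_ROLE.get(port)
    return role is not None and declared_class == CLASS_FOR_ROLE[role]
```

A composite device that declared both keyboard and storage interfaces would then fail on every port, which is exactly the BadUSB presentation the thread is worried about.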
@ Mr. C
The keyboard and mouse only need to send input to a subsystem that collects it. The system can be designed to enforce POLA on that subsystem. This is how it was done in the old “trusted” window systems and more recent Nitpicker. Each component does one thing, is accessed through careful interface, only can access what it’s supposed to access, and so on. Combine this with my recommendations above and attacks like this are prevented by design with minimal inconvenience to user.
Grauhut • October 9, 2014 4:52 PM
@Nick P “Clive’s argument still stands even at a few cents cost. Multiply that by every product sold. This is the total cost of the feature.”
That’s simple. Have a law that makes sure that every hardware item containing firmware, and its package, has to carry a sticker, green or red:
“Contains hardware protection against firmware manipulation” or a sticker
“Insecure: Does not contain hardware protection against firmware manipulation!”.
As easy as with cigarette boxes, the marketing department will then tell the production guys what kind of sticker it wants… 🙂
Jeff • October 9, 2014 5:17 PM
I wish these articles would specify what they mean by “PC.” Is it any computer running Windows, or does it include Mac OS and others?
Sancho_P • October 9, 2014 5:46 PM
“If you plug a USB thumb drive into a port on the same controller as your keyboard or wifi card, it can snoop all of that information or spoof the device (e.g. it can steal your password, and then run commands with root access on your machine by spoofing your keyboard).”
Are you sure?
I thought USB is always broadcast from host to ports, but unicast from single port to host, so the thumb drive could not directly listen to the keyboard data. However, data sent to a USB WLAN adapter can be seen by the thumb drive or keyboard – assuming they all share the same chip (e.g. a one to four USB hub / controller chip like the TUSB2046B) ?
Clive Robinson also hinted at the ”… “shared bus” architecture, without mandated hardware device selection”, which is IMO correct for the host sending to a port, because there are only logical pipes to separate the ports. Impersonation wouldn’t work (or isn’t the correct term), as answering as an existing device would result in garbage. You can have several “keyboards” at once, of course.
”switched” – I’ve heard of some routers with special switches. So encryption is the only way out of that tar-pit.
“so no common stacks etc as well” is excellent, but I’m afraid ….
@ Paul (fusible):
Because you’d have to touch the device anyway after production, even if it must not be updated. $$
And it’s lost when a problem arises at the shop / customer, during warranty. $$$$
One problem here is we’re talking about mass production and super sophisticated malware devices in one pot.
The topic is “USB”, so if your “PC” supports USB it is included.
name.withheld.for.obvious.reasons • October 9, 2014 5:49 PM
@ Nick P
If warrantied, the quality would be high to reduce their liability. We see this in the DO-178B market.
Having spent some time in high-assurance manufacturing and systems integration, I see two downsides to ISO 15408 and DO-178B: complacency, and a lack of continuous SPC and robust QA/QC departments and personnel. Some of the problems are in the architectural and design engineering process. I was aghast at an engineering and design meeting at the implementation on an aircraft platform that uses real-time Java (oxymoron of course). The primary vendor could not be questioned in the meeting as I’d piss the customer off (I do that a lot).
We need some independent but cross-verified standards and quality process, where a spirited and robust third party counter-verifies process and standards, say against a NIST body. ISO isn’t enough when you consider just how the audit procedures and methods lead to less than a clear understanding of where an organization is at. In fact, my new business model provides for a robust, “through-the-organization” view at any time, a kind of process control structure as part of the business model.
Larson • October 9, 2014 5:56 PM
At least the Kanguru USB drives seem to be protected against BadUSB…
Concerned about “BadUSB?” Don’t Be. Kanguru Has You Covered
By design, Kanguru’s firmware on Defender® secure hardware encrypted flash drives, hard drives, and solid state drives are inherently protected with what is called digitally signed secure firmware. This fundamental feature makes it nearly impossible for any firmware-based attack to be successful on Kanguru’s secure USB drives, making them the most trusted USB devices on the market.
Kanguru’s hardware encrypted drives are designed in compliance with NIST requirements of digitally signing the device firmware, and is verified through a rigorous process known as FIPS 140-2 certification.
Because the secure firmware is verified with a self-test on start-up, if any attempt were made to tamper with the firmware on a Kanguru secure drive, the USB device simply would not function. Kanguru’s FIPS 140-2 Certified Kanguru Defender 2000 and Defender Elite200 have even more advanced protections that make them perfect for government, financial and enterprise organizations.
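Kanguru does not publish the details of its scheme, but the boot-time self-test the press release describes has a simple shape. The sketch below uses an HMAC purely as a stand-in for the RSA signature the brochure mentions; a real device would verify an asymmetric signature against a public key in mask ROM, so holding the verification key does not let you sign.

```python
import hashlib
import hmac

# Conceptual sketch only. Real signed firmware uses an asymmetric
# signature (e.g. RSA) verified against a key baked into ROM. HMAC
# stands in here just to show the boot-time self-test flow; it is NOT
# a substitute, since anyone holding this key could also sign.

VENDOR_KEY = b"vendor-signing-key"  # hypothetical

def sign_firmware(image: bytes) -> bytes:
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def boot(image: bytes, signature: bytes) -> str:
    """Self-test on start-up: refuse to run tampered firmware."""
    if hmac.compare_digest(sign_firmware(image), signature):
        return "running"
    return "bricked"  # "the USB device simply would not function"

good = b"\x90\x90legit-firmware"
sig = sign_firmware(good)
print(boot(good, sig))               # running
print(boot(good + b"patched", sig))  # bricked
```

Note this only prevents *unauthorized* firmware from running; it does nothing against firmware the vendor itself signs, and nothing if the verification routine can itself be bypassed.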
Sancho_P • October 9, 2014 6:09 PM
I do not share the sentiment of “USB is evil”.
We want UNIVERSAL devices with UNIVERSAL extensions. For all gadgets.
From keyboard and LAN adapter to host to host … and cheap (the latter should be mentioned in the first place !!!).
For the “one fits all is cheap” most USB devices must be field programmable.
A jumper or such is wasted money as the “evil” USB device / firmware will ignore it anyway.
You never know what is inside the device you are going to connect, this is not a USB problem only.
So the only conclusion is: The master + host must be cautious.
In case of an HID keyboard: I’m really glad when the BIOS accepts it as input device, otherwise I could not boot my simple machine using a pwd.
However, booting from a thumb drive must be possible (ehm, not so easy nowadays) into any so called OS – but not without warning from the BIOS.
But those who boot from USB must not expect to flash their BIOS without any notice and request of confirmation from their existing BIOS.
Master (human) + host (device + so called OS) are responsible in case of a breach.
For the host I see the manufacturer accountable, in contrast to its “perfectly legal” fine print (yes, liability would be necessary – but this is too late now. Remember “Too big to fail”).
Now the just attached USB device identifies as a keyboard and starts to send keystrokes into that so called “OS”.
This is the same situation as the user would start any executable from the device.
It is not wrong for a device to behave like a keyboard with storage, it might be legit, keep in mind this is a UNIVERSAL interface as we urged it.
However, USB does not have DMA, as FireWire and TB do – a dangerous idea for any machine.
And please, do not ask my neighbor “May ‘Christmas Tree’ also behave like a keyboard? Y/N ”
Yes, he would confirm with pwd when asked for (he’s 82).
For the USB we’d have an OS to watch and guard – if we only had an OS.
We need an OS, not only USB firewalls (I fully agree with the rest @ Man with Clue).
Anura • October 9, 2014 6:28 PM
You are probably right, I wasn’t thinking about the direction of the traffic.
paul • October 9, 2014 6:42 PM
There are two (at least) attack scenarios we have to think about here. In one, the bad device was “born bad” and intended for attack. In that case, the attacker is limited only by their own ingenuity and hardware resources: they can get a copy of the device they’re attempting to impersonate, make a copy of its firmware, analyze its security and so forth. And even if the device (say, a mouse or keyboard) doesn’t normally have megabytes of nonvolatile memory to store impersonating information in, they can add that to their evil counterfeit fairly easily.
The other scenario, where an infected or malicious PC converts a “good” USB controller to the dark side, will be limited by the resources available in the device it’s corrupting. For a keyboard or a mouse, that means the controller’s code and data memory (which may still be enough to carry out some very interesting attacks). For a thumb drive, that’s the controller’s memory and most of the memory of the flash chips, but no sensors or actuators (unless you can figure out how to read something interesting from the status LED).
Adjuvant • October 9, 2014 7:11 PM
@Larson Good catch. Looks like they also feature a physical write-protect switch (which I haven’t really seen anywhere else since the Imation Clip disappeared). I see they also offer a less exorbitantly-priced alternative for those who could take or leave the hardware encryption: Kanguru FlashTrust™ USB 3.0 flash drive is the world’s first unencrypted, USB 3.0 flash drive with onboard, trusted secure firmware. Useful to know as this list of drives with write-protection becomes increasingly stale. (Now I’ll just wait for the first report of someone circumventing it.)
Adjuvant • October 9, 2014 8:09 PM
Hmm. This is distressing (from comments on the link I just shared):
Citing this complaint:
[Q:] When I slide the write-protect switch to the locked position while the SS3 is connected to the computer, I am still able to write/delete data on the drive.
[A:] The SS3’s write-protection switch must be set in either the locked or unlocked position BEFORE connecting it to your computer. Once the device is connected to a computer, it will remain in whichever state it is set in regardless of whether you change the switch position.
The commenter states:
I’m no EE, electronics hobbyist, etc – I’d been operating under the (naive) assumption that write-protect switches enable/disable a conductor/line that a write signal is sent over. Seems I’m wrong.
Sounds like an overly complex implementation, with potential unknown weaknesses. Maybe the only way to be sure is to disassemble and physically inspect — or do it yourself.
John Hardin • October 10, 2014 1:28 PM
The SS3’s write-protection switch must be set in either the locked or unlocked position BEFORE connecting it to your computer. Once the device is connected to a computer, it will remain in whichever state it is set in regardless of whether you change the switch position.
Ok, so their “hardware write protection” is actually only a suggestion to the firmware. The only “hardware” involved is the switch.
RonK • October 12, 2014 6:39 AM
@ Dev Null
Exactly what I thought, but with a totally different meaning: I’ve been waiting for something like this to pop up, so that I’d be able to use cheap USB fobs as embedded devices. Bunnie Huang’s idea of removing the encapsulation to get at the controllers wasn’t exactly what I was longing for. (And actually the original disclosure claimed that the vulnerability was developed in order to use these fobs as a hardware-accelerated A5/1 rainbow table.)
Of course, the vulnerability itself would probably limit the usefulness… I’ll have to do some research.
nota • November 8, 2014 3:48 AM
Moderately paranoid solution to having a computer know if it’s you at the keyboard: One time “password” captcha-like authentication. You go on your computer and plug in your keyboard, it asks you to type out “FfyanAp” or something. Once you do that, the device is authenticated until you leave. You can repeat that process each time. A normal password would not be a good idea because a malicious keyboard could just sniff it, but there’s no (realistic) way a malicious keyboard could know what’s on the screen, unless they are given access and can load malware to do screencaps.
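The one-time challenge idea above can be sketched in a few lines. The flow is hypothetical host-side logic, not any shipping OS feature: the host displays a fresh random string, and the newly attached keyboard is trusted only after the user types it back. A malicious keyboard cannot precompute the answer because the challenge never repeats, and it cannot read the screen.

```python
import secrets
import string

# Sketch of one-time "captcha" authentication for a new USB keyboard.
# A static password would be sniffable by the keyboard itself; a fresh
# random challenge shown on screen is not.

def new_challenge(length: int = 8) -> str:
    """Generate a fresh random challenge to display on screen."""
    alphabet = string.ascii_letters
    return "".join(secrets.choice(alphabet) for _ in range(length))

def authenticate(challenge: str, typed: str) -> bool:
    """Trust the keyboard only if the user typed the challenge back."""
    return secrets.compare_digest(challenge, typed)

challenge = new_challenge()
print(f"Type this to enable the keyboard: {challenge}")
print(authenticate(challenge, challenge))  # correctly typed: True
```

As the comment notes, this still fails if malware already on the host can capture the screen and drive the fake keyboard in real time.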
For the more paranoid, grsecurity’s “deny new USB devices” toggle sysctl (kernel.grsecurity.deny_new_usb) is a better solution, albeit one that, once toggled to 1, denies all new USB devices at the kernel level until reboot. Since this disables all ehci/xhci drivers and has the kernel completely ignore any USB devices, rather than refuse them, it ensures that more advanced exploits which compromise the driver directly to gain instant ring 0 access would also fail.
Adam Danischewski • February 2, 2015 2:42 PM
“A jumper or such is wasted money as the “evil” USB device / firmware will ignore it anyway.”
That is not necessarily true; the hardware can be designed such that no software-based firmware update is possible unless the jumper is set appropriately.
@Clive “Put simply the “security” of a jumper would add a couple of dollars to the consumer price of the item”
I agree with Grauhut, the cost is practically negligible in the low cent[s] not dollars range.
A jumper would probably solve most of these problems rather easily yet it would make firmware based remote espionage comparatively expensive. It is extremely convenient for state espionage agencies to be able to deploy and access firmware based surveillance.
If you think its paranoia to consider this angle, please read the available documentation with regards to how many huge well-known tech companies are funded by In-Q-Tel / CIA.
Clive Robinson • February 2, 2015 5:05 PM
@ Adam Danischewski,
I agree with Grauhut, the cost is practically negligible in the low cent[s] not dollars range.
And that statement tells me that neither you nor Grauhut has worked in FMCE or another very cost-sensitive manufacturing market.
Whilst the pins and jumper link would be cents as component parts, for a security application it cannot just be connected to a CPU I/O line, as that would be software-only and could thus be ignored. It actually needs to work in the hardware side of the write circuitry so it cannot be bypassed by software, and there are implications to this with modern data rates.
But even if that was not an issue, it requires space on the PCB, which means quite a few things: the board will be larger or more difficult to lay out, and it requires extra work for the “pick and place” machine or assembler. It also means you have to consider where on the PCB it will be with regard to external access through the case; to be approved for CE this means having a cover that’s not removable, as the jumper could be indirectly connected to a voltage that brings it into the LVD requirements. This means considerably more expensive tooling etc. Then there is the increase in rework costs, as jumpers increase the fail rate down the production line. Further, there are replacement/returns costs, as such jumpers do cause an increase in failures to the consumer in transit. Oh, and don’t forget it will need written instructions printed somewhere in or on the packaging…
These add tens of cents to the BOM that multiply up due to various mark ups and taxes etc to ten or twenty times that on the retail price the consumer pays. Which is why I very specifically said “…would add a couple of dollars to the consumer price…”.
Now personally I really don’t care if you believe me or not, because you are not working as a designer in that market. If you were, you would know that those sorts of details decide if the product is going to be profitable or not in a very, very competitive market. I’ve designed for that FMCE market and thus have had to go through the process several times, which is why I know.
And odd things happen, such that it’s less expensive over all to fit a slide switch that actually costs more as a component part than the link pins and jumper, because it reduces the cost of case tooling, placing and testing down the line as well as reducing the cost of rework and returns.
Nick P • February 2, 2015 6:18 PM
Re USB drives and jumpers
Clive’s right if the device is very tiny. Larger ones, from PCs to industrial boards, might add things like jumpers if they see a real marketable benefit to justify less profit. Plus, a rule of thumb of mine says the market almost always defaults to security theater if it’s cheaper than the real thing.
Combine these principles and you get “secure” USB sticks with insecure crypto implementations, firmware, and so on. The write protect being firmware-implemented lets them keep costs down by reusing an existing, cheap SoC. “Authenticated firmware” stopping all attacks just… makes no sense. That they think lower-level FIPS 140-2 proves anything supports that they’re probably bullshitters.
The easiest way to secure USB is to put the write-protect feature in either the USB physical IP or a physical switch in front of it. This would be in all the licensed IP cores, with it off by default. Those making use of it could have a pin and physical switch connected to it. The switch also connects to the microcontroller so it knows WP is active. Once such a USB block is ASIC-proven and gets into SoCs, the security feature starts spreading to every device that uses it.
Elvis • June 26, 2015 7:17 AM
Kanguru has FlashTrust; signed with 2048-bit RSA firmware or something or other; not sure how that’s supposed to work though. Try going to an airport and saying “Good evening, sir! I’m not a terrorist! You can trust me, because I said so!”
A much simpler way is to have a 2-pin header (they don’t even have to be large either; 2.54mm pitch shunts are very small) physically present. Depending on whether the two pins are shorted (e.g. with a jumper shunt), the firmware update will be allowed or disallowed.
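The firmware-side half of that two-pin header can be sketched as below. `jumper_shorted` is a hypothetical stand-in for sampling a GPIO wired to the header; as Clive Robinson argues above, in real hardware the gate must sit in the flash write circuitry itself, not merely in software, or malicious firmware could simply ignore it.

```python
# Sketch of jumper-gated firmware updates: updates are permitted only
# while a shunt physically shorts the two header pins. The boolean
# argument is a hypothetical stand-in for reading a GPIO; a robust
# design gates the flash write-enable line in hardware as well.

def apply_update(jumper_shorted: bool, image: bytes) -> str:
    """Accept a firmware image only if the update jumper is fitted."""
    if not jumper_shorted:
        return "update refused: jumper not set"
    # ... verify and write `image` to flash here ...
    return "update applied"

print(apply_update(False, b"new-firmware"))  # normal use: no updates
print(apply_update(True, b"new-firmware"))   # deliberate physical act
```

The point is that reflashing then requires a deliberate physical act by whoever holds the device, which is exactly what a remote BadUSB-style attacker lacks.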
But nope, corporate IT mindset is “we need bells and whistles for our security!”