The Fundamental Insecurity of USB

This is pretty impressive:

Most of us learned long ago not to run executable files from sketchy USB sticks. But old-fashioned USB hygiene can’t stop this newer flavor of infection: Even if users are aware of the potential for attacks, ensuring that their USB’s firmware hasn’t been tampered with is nearly impossible. The devices don’t have a restriction known as “code-signing,” a countermeasure that would make sure any new code added to the device has the unforgeable cryptographic signature of its manufacturer. There’s not even any trusted USB firmware to compare the code against.

The element of Nohl and Lell’s research that elevates it above the average theoretical threat is the notion that the infection can travel both from computer to USB and vice versa. Any time a USB stick is plugged into a computer, its firmware could be reprogrammed by malware on that PC, with no easy way for the USB device’s owner to detect it. And likewise, any USB device could silently infect a user’s computer.

These are exactly the sorts of attacks the NSA favors.

EDITED TO ADD (8/14): Good writeup. Slides from BlackHat talk.

Posted on July 31, 2014 at 2:31 PM

Comments

Spaceman Spiff July 31, 2014 3:03 PM

AFAIK, this is how the Stuxnet virus was propagated to the Iranian nuclear enrichment facilities. So, not new, but now better understood.

Name (required) July 31, 2014 3:16 PM

So now I have to start using CDs for my sneakernet xfers, which is all xfers between my online machine and my work machine.

I swore off email 100%. I turned off cookies altogether. Occasionally I turn them on for a few minutes to DL something, before manually deleting them. I’ve gone back to using NoScript. Soon I’m going to reformat my online machine b/c the registry got corrupted by a bad Tor de-install – then I’ll go back to mostly Tor, no more regular Firefox. I always use HTTPS everywhere and Private Browsing mode. I don’t comment on any site that wants me to create a profile, or use cookies or scripts (thanks Bruce for not doing that). AdBlock: of course.

I’m just a home user. And I’ve discovered that I’m not losing out on very much with the above measures. The biggest thing I’m losing out on is being trackable and that’s a good thing. Because in all seriousness, screw the NSA. Calling them pigs is an insult to real pigs all over the world. Sorry for getting political.

About a year ago I moved and voluntarily went without the internet for over a month, during which I was ALSO house-bound. It was no big deal, despite that I call myself an internet addict (looking at it most of the day, every day). Next step is to stop using the internet altogether. It gets more and more tempting every day. Big change? Change, schmange. Done it, liked it.

Seriously I predict barely noticing a difference. Mostly I will feel more sorry than ever for people who are tied to the internet by their jobs, or by their youth (and naïveté). Though most of them are probably happier than I am.

André Mello July 31, 2014 3:33 PM

Like I commented on the Wired article, I can’t see how what they describe is possible. I only have a basic understanding of the USB protocol, but, even if they can completely rewrite the firmware, how can the device take over the system and do whatever it likes with it, in a non-detectable, unfixable way?

What seems to be the case, exchanging ideas with other people, is that the device pretends to be of HID class and uses keyboard and/or mouse input to exert control, inject code, etc. But no matter how fast it happens, it can still be detected and counteracted by an antivirus, or by the OS, and even if it’s not, it’s possible to create some form of hardware CAPTCHA to make sure a human is controlling the input. So that would still not be impossible to fix, even if not in an ideal way.

As far as I know, the device has no active control of its host; the OS is entirely responsible for taking care of everything. So code execution would only be possible via an implementation flaw in the kernel driver, which is how it’s been done so far. The Wired article implies the researchers achieved a way to do that using just the USB specification, but if that’s really it, it blows my mind.

egeltje July 31, 2014 3:39 PM

@Name (required): have a look at Qubes OS (hxxp://qubes-os.org). They try to make a secure computer by running a microkernel and having all contexts (e.g. banking, work, untrusted) fully isolated in their own OS stack with clear visual feedback. Networking is done through a separate unprivileged stack, as are the USB connections.
Not production-ready yet, but they’re getting there.

Porsupah July 31, 2014 4:10 PM

I’ll admit to being intrigued. I’ll check back through my composite device firmware (HID/microphone/speaker), and see if I can identify potential avenues, particularly within the spec, rather than the implementation.

If it’s along the lines noted by Ian Mason above, exploiting insecure (but not per se part of any USB standard) vendor firmware-updating mechanisms, I’ll be rather disappointed, even though that’s certainly an attack vector.

Jamie Bliss July 31, 2014 4:11 PM

I think this is not as big a deal as they’re making it out to be. It sounds like there’s a security problem in USB devices, not hosts.

I would speculate that the new security hole is that most USB controllers are reprogrammable from USB, allowing you to add secret functions to it. (Which is, frankly, dubious.)

The problem, assuming there are no driver vulnerabilities, is that you still have to go through USB channels to actually perform the attack. Meaning that a hybrid mass storage and HID device could insert keystrokes to open and execute files it stores.

It’s not like a PC’s USB controller is all that in-depth. It’s not a magic “all IO goes through here!” chip. The OS still does all the heavy lifting. A USB device could pretend to be a network connection, but it’d still have to deliver packets to make anything work.

My guess is that it amounts to “USB devices can be programmed to have additional, malicious functions.” I don’t think this is like the firewire “any device can have access to all your memory” bug, or the heartbleed bug. This doesn’t sound like a primary threat, but another vector for payloads.

Oh, and this attack is totally detectable. You just have to look at what interfaces the device advertises and see if they match expectations.
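
As a rough illustration of that check, here is a minimal Python sketch that walks Linux sysfs (the same tree shown in the udev dump further down the thread) and flags any device whose advertised interface classes mix mass storage (08) with HID (03). The class codes come from the USB spec; the paths and the warning heuristic are assumptions for illustration, not a real defence, since a malicious device can simply wait and re-enumerate with the extra interface later.

#!/usr/bin/env python3
# Sketch: list the interface classes each connected USB device advertises and
# warn when a single device exposes both mass storage (08) and HID (03).
import glob, os

CLASS_NAMES = {"03": "HID", "08": "Mass Storage", "09": "Hub", "0e": "Video"}

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return ""

for dev in glob.glob("/sys/bus/usb/devices/[0-9]*-[0-9]*"):
    if ":" in os.path.basename(dev):        # interface nodes look like "1-4:1.0"; skip them here
        continue
    classes = set()
    for iface in glob.glob(dev + "/*:*"):   # each bound interface has its own directory
        cls = read(iface + "/bInterfaceClass").lower()
        if cls:
            classes.add(cls)
    product = read(dev + "/product") or os.path.basename(dev)
    names = ", ".join(CLASS_NAMES.get(c, c) for c in sorted(classes))
    print(f"{product}: {names or 'no interfaces'}")
    if "08" in classes and "03" in classes:
        print("  WARNING: storage device also advertises a HID interface")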

For a primer on how USB operates, I would recommend Beyond Logic’s USB in a NutShell.

Roland July 31, 2014 4:15 PM

OK, so there’s a microcontroller on a thumb drive. Why would it have EPROM (writeable) firmware? Seems to me that sort of fixed-function device is what us old-timers would control with a ROM, containing code that is fixed as part of the original chip design. What’s the point of providing the ability to reprogram a chip that sells for a few pennies?

luckyluke July 31, 2014 4:17 PM

That’s not the only firmware-related device that could be manipulated.

With enough knowledge of assembler and programming, a lot of reverse engineering, plus a few ‘dead-modded’ sample firmwares, it’s feasible with other devices as well. Not so doable for the everyday Joe, though…

The scary part is that nothing at the user level helps here.
The layer of ‘doom’ is so deep below the filesystem level that reformatting is just so damned useless.

Just bad.
No matter who tries this kind of attack for whatever reasons.

John Galt III July 31, 2014 4:22 PM

An FPGA interposer board could be based on the BE Micro product and used to filter the data/instructions in both directions to ensure that no misdeeds were being perpetrated by spookware. It would make a nice open-source hardware/open-source firmware/open-source software project that could extend to encrypting data from common USB devices like webcams, providing an extra layer of security for Skype. There is going to be a lot of money made and lost on surveillance measures, counter-measures and counter-counter-measures. This would make a nice Kickstarter effort.

Nick P July 31, 2014 4:33 PM

It’s an unsurprising vulnerability. Any device which has DMA or sends data to trusted processes is part of the TCB. One must either show it’s trustworthy or mediate it. My general rule is to consider every device untrustworthy. This is supported by the large amount of undocumented functionality, increasing use of programmable chips guiding the DMA, and recent discovery of a backdoor in a security-critical FPGA. The chips must be assumed to have functionality useful to the attacker.

So, what to do? Well, you essentially have to mediate. This might be an IOMMU integrated in SOC or inlined in PCI bus. There’s memory crypto schemes. There’s also my old strategy of offloading each I/O device onto a separate chip which has a safe interface to the main chip. That preserves COTS hardware compatibility, while allowing you to choose what chips to put trust in for mediation.

There’s also the economic model to consider. In a previous thread here on USB issues, I pointed out that users are largely to blame as the incentives they give manufacturers work against quality or security. Most that use high assurance methods won’t sell crap. Medium assurance barely sells. So, most stuff is low assurance. The profit motive of the private corporations might also increase the likelihood of them putting in backdoors for profit, as we saw with RSA (and ad-driven services in general).

An economic solution is to have several standards: low, medium, and high. This is essentially what the CIA did in its ratings system. High should be considered secure. Low and medium should be considered insecure, yet medium puts substantial effort into reducing risk or making recovery easier. Reference implementations of useful functions (e.g. a link encryptor) sponsored by public and/or private funding will give potential developers a head start on their products and an idea of the costs of each level of assurance. The reference implementation will itself be usable, with hardware sold just over cost. Private parties will differentiate with features they integrate into such models. An independent body (with plenty of accountability) will certify each design against its goals and assurance level similar to Common Criteria’s Protection Profiles, but with a more practical than paperwork focus. Each company or individual can then look at a list of products, choose what assurance they want, and then get what they pay for. Lastly, the whole system’s rating is the rating of the lowest security component in the system in accordance with the “weakest link” principle, although advanced users can look at ratings of each component.

dafydd July 31, 2014 4:41 PM

I’m with Roland. Why in the world are USB devices using writable PROMs?! Make that thing read-only!

supergeil July 31, 2014 4:52 PM

So as a first step they should implement warnings in operating systems when a USB device impersonates a keyboard or a network card. Generally cut down on plug’n’play. Also an interesting question is how to prevent actual keyboards from typing commands themselves.
Booting from USB is a different matter though…

Alex July 31, 2014 4:54 PM

Using a PS/2 keyboard & mouse while disabling USB in the BIOS is a known safety trick.
I’d also suggest using a CF-type memory card with a SATA adapter instead of a USB stick.

sl149q July 31, 2014 5:14 PM

This appears to be a problem with devices leveraging class matching in Windows to get a malicious function connected (e.g. HID device).

This can be locked down using (for example) group policy to restrict hardware devices that are allowed to be used.

It might still be possible to hack an existing device that has an existing valid HID interface so that the OS will allow it to connect and be used. E.g. making that USB Keyboard do something unexpected.

But since (most?) USB keys don’t have existing HID interfaces, adding one would be noted and not allowed.

Evan July 31, 2014 5:41 PM

I think the idea is that you have a virus in the USB firmware that propagates itself to executables that are copied over and which then, once inside host system memory, implants whatever malware payload it carries – including, of course, code to overwrite the firmware of new USB devices that come along.

It’s a nasty problem because there’s not really a good way to set up a trust system for USB devices short of whitelisting a few manufacturers, and apart from the problems that causes I’m not sure it can ever completely exclude tampering on a sufficiently smart device.

Nick P July 31, 2014 6:09 PM

@ Roland, dafydd

Writeable firmware has an obvious advantage: problems might be fixed with an update rather than a recall. It’s the same reason Intel likes microcoding complex functions and a fair amount of hardware uses FPGAs. Flexibility for reuse, potentially lower design costs, and less risk of recall.

Old fart who remembers clipper and skipjack July 31, 2014 6:10 PM

It may sound trite, but the root problem here is the U, in USB – “Universal”.

We’ve delegated damn near every conceivable peripheral function of modern computers (HID, mass storage, network devices, media devices, game controllers, sound cards, video input and output, etc etc etc) to this overloaded, in the computer science sense, interface specification that has few intrinsic security protocols.

Then, we act all surprised when it is discovered that a malicious party can fabricate, or in this case, modify poorly designed, off-the-shelf USB devices that can potentially gain elevated privileges on the target machine by spoofing one or more different classes of device.

The USB specification was drawn up around twenty years ago, when most engineers and IT specialists still regarded such things as Thompson’s “Reflections on trusting trust” as ivory tower foppery.

If a memory stick, costing pennies and containing a simple Turing machine, is allowed to present itself as a keyboard and be recognised as such by the host OS, then the interface specification and protocols need a major reworking, ASAP.

In the absence of the above, a useful band-aid might be BIOS or OS level access control limits on the class of USB devices allowed on a given interface, e.g.: only HID devices on USB 0 and nowhere else, and mass storage devices will only be recognised on USB 3 and 4.

A further protection might be the ability to lock USB x to, e.g., the keyboard with serial number ‘12345678’.

We managed for years with purple and green PS2 sockets for keyboards and mice – perhaps the time has come for a rainbow of colour-coded, access-limited USB sockets, too.
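
Linux already has a crude hook for this kind of per-port lock-down: the authorized and authorized_default attributes visible in the sysfs dump later in the thread. Below is a minimal Python sketch assuming root and a hypothetical allow-list keyed by physical port path; note it is device-level authorization only (a stock kernel of this era cannot restrict individual interface classes this way), and the allow-list entry is illustrative.

#!/usr/bin/env python3
# Sketch: refuse new USB devices by default, then re-authorize only the device
# expected on each physical port (matched by vendor, product and serial).
import glob

ALLOW = {                                   # hypothetical allow-list keyed by port path
    "1-4": ("1949", "0004", "B00AD0B11322188J"),   # e.g. the Kindle from the dump below
}

def read(path):
    with open(path) as f:
        return f.read().strip()

def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# 1. Refuse newly connected devices by default on every host controller.
for ctrl in glob.glob("/sys/bus/usb/devices/usb*/authorized_default"):
    write(ctrl, "0")

# 2. Authorize a device on a listed port only if it matches the allow-list entry.
for dev in glob.glob("/sys/bus/usb/devices/*-*"):
    port = dev.rsplit("/", 1)[-1]
    if ":" in port:                         # skip interface nodes like "1-4:1.0"
        continue
    if port not in ALLOW:
        continue
    vendor, product, serial = ALLOW[port]
    try:
        ok = (read(dev + "/idVendor") == vendor and
              read(dev + "/idProduct") == product and
              read(dev + "/serial") == serial)
    except OSError:
        ok = False
    write(dev + "/authorized", "1" if ok else "0")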

Old fart who remembers clipper and skipjack July 31, 2014 6:26 PM

@nickp “Writeable firmware has an obvious advantage: problems might be fixed with an update rather than recall.”

I agree with you in principle on the general case but, as far as mass-market USB memory sticks (the most likely vector for such an attack) are concerned, the likelihood of a manufacturer-initiated firmware update is strongly correlated to Old Nick’s use of ice-skates as commuter transport. 8)

Buck July 31, 2014 6:40 PM

@Old fart

Agreed! Unexpected firmware updates for cheap flash drives are far more likely to result in warranty-related losses than bugs in code that have been well tested for a decade or more…

Old fart who remembers clipper and skipjack July 31, 2014 6:56 PM

@buck: I don’t think f/w updates for inexpensive USB devices are likely – so the ability of such devices to potentially receive updates is unnecessary and dangerous.

The continued existence of such update mechanisms in, e.g. mass-market flash drives, is therefore mainly of significant utility only to those who would subvert the intended operation of the USB interface for undesirable ends.

LessThanObvious July 31, 2014 7:38 PM

This is good research and I have no doubt this will be a threat in the wild in years to come. It’s just another reason it’s critical that we have security throughout the supply chain. It would be pretty easy for these kinds of devices to come from the factory already compromised. Same reason that, as much as I like Motorola Droid devices, I’ll never buy another one now that Lenovo has control. I also strongly urge anyone with a Lenovo Thinkpad laptop to uninstall as much of the pre-installed vendor software as possible, not to pick on Lenovo too much specifically as it’s possible any manufacturer could have compromised products knowingly or otherwise. After a virus infection used “Lenovo Access Connections” to encrypt exfiltrated data from my laptop, I can’t trust it. That is not to say that all superfluous software shouldn’t be nixed as standard practice.

Scott "SFITCS" Ferguson July 31, 2014 7:39 PM

@sl149q

This appears to be a problem with devices leveraging class matching in Windows to get a malicious function connected (e.g. HID device).

Sort of how it works. But it almost certainly doesn’t just affect Windows. It’s unlikely it wouldn’t affect all operating systems that support the USB protocol. If the device can spoof a manufacturer and product id (which is possible) then Linux’s udev (like Windows HAL) will treat the device accordingly. The Windows USB device handling process is functionally the same.

This can be locked down using (for example) group policy to restrict hardware devices that are allowed to be used.

I think you’ll find that belief is incorrect.
If Windows (or Linux/UNIX, and likely Apple) “identifies” the device as a keyboard, camera, external video card, whatever, on the basis of how it identifies itself using the USB standard (manufacturer and device id) – even if you create complicated device property rules that use additional characteristics of the device – those characteristics can be spoofed too (e.g. power usage, or any of the other values given by the device, as in the example output below).


'/devices/pci0000:00/0000:00:02.2/usb1/1-4/1-4:1.0/host8/target8:0:0/8:0:0:0/block/sdb/sdb1':
    KERNEL=="sdb1"
    SUBSYSTEM=="block"
    DRIVER==""
    ATTR{partition}=="1"
    ATTR{start}=="16"
    ATTR{size}=="6410672"
    ATTR{ro}=="0"
    ATTR{alignment_offset}=="0"
    ATTR{discard_alignment}=="0"
    ATTR{stat}=="     222     3087     4450      732        1        0
      1        0        0      544      732"
    ATTR{inflight}=="       0        0"

  looking at parent device
'/devices/pci0000:00/0000:00:02.2/usb1/1-4/1-4:1.0/host8/target8:0:0/8:0:0:0/block/sdb':
    KERNELS=="sdb"
    SUBSYSTEMS=="block"
    DRIVERS==""
    ATTRS{range}=="16"
    ATTRS{ext_range}=="256"
    ATTRS{removable}=="1"
    ATTRS{ro}=="0"
    ATTRS{size}=="6410688"
    ATTRS{alignment_offset}=="0"
    ATTRS{discard_alignment}=="0"
    ATTRS{capability}=="51"
    ATTRS{stat}=="     228     3087     4498      740        1        0
       1        0        0      552      740"
    ATTRS{inflight}=="       0        0"
    ATTRS{events}=="media_change"
    ATTRS{events_async}==""
    ATTRS{events_poll_msecs}=="-1"

  looking at parent device
'/devices/pci0000:00/0000:00:02.2/usb1/1-4/1-4:1.0/host8/target8:0:0/8:0:0:0':
    KERNELS=="8:0:0:0"
    SUBSYSTEMS=="scsi"
    DRIVERS=="sd"
    ATTRS{device_blocked}=="0"
    ATTRS{type}=="0"
    ATTRS{scsi_level}=="3"
    ATTRS{vendor}=="Kindle  "
    ATTRS{model}=="Internal Storage"
    ATTRS{rev}=="0100"
    ATTRS{state}=="running"
    ATTRS{timeout}=="30"
    ATTRS{iocounterbits}=="32"
    ATTRS{iorequest_cnt}=="0x12b9"
    ATTRS{iodone_cnt}=="0x12b9"
    ATTRS{ioerr_cnt}=="0x1"
    ATTRS{evt_media_change}=="0"
    ATTRS{queue_depth}=="1"
    ATTRS{queue_type}=="none"
    ATTRS{max_sectors}=="240"

  looking at parent device
'/devices/pci0000:00/0000:00:02.2/usb1/1-4/1-4:1.0/host8/target8:0:0':
    KERNELS=="target8:0:0"
    SUBSYSTEMS=="scsi"
    DRIVERS==""

  looking at parent device
'/devices/pci0000:00/0000:00:02.2/usb1/1-4/1-4:1.0/host8':
    KERNELS=="host8"
    SUBSYSTEMS=="scsi"
    DRIVERS==""

  looking at parent device
'/devices/pci0000:00/0000:00:02.2/usb1/1-4/1-4:1.0':
    KERNELS=="1-4:1.0"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb-storage"
    ATTRS{bInterfaceNumber}=="00"
    ATTRS{bAlternateSetting}==" 0"
    ATTRS{bNumEndpoints}=="02"
    ATTRS{bInterfaceClass}=="08"
    ATTRS{bInterfaceSubClass}=="06"
    ATTRS{bInterfaceProtocol}=="50"
    ATTRS{supports_autosuspend}=="1"
    ATTRS{interface}=="Mass Storage"

  looking at parent device '/devices/pci0000:00/0000:00:02.2/usb1/1-4':
    KERNELS=="1-4"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{configuration}=="Self-powered"
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bmAttributes}=="c0"
    ATTRS{bMaxPower}=="100mA"
    ATTRS{urbnum}=="9867"
    ATTRS{idVendor}=="1949"
    ATTRS{idProduct}=="0004"
    ATTRS{bcdDevice}=="0100"
    ATTRS{bDeviceClass}=="00"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bDeviceProtocol}=="00"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{speed}=="480"
    ATTRS{busnum}=="1"
    ATTRS{devnum}=="9"
    ATTRS{devpath}=="4"
    ATTRS{version}==" 2.00"
    ATTRS{maxchild}=="0"
    ATTRS{quirks}=="0x0"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{authorized}=="1"
    ATTRS{manufacturer}=="Amazon"
    ATTRS{product}=="Amazon Kindle"
    ATTRS{serial}=="B00AD0B11322188J"

  looking at parent device '/devices/pci0000:00/0000:00:02.2/usb1':
    KERNELS=="usb1"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{configuration}==""
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bmAttributes}=="e0"
    ATTRS{bMaxPower}=="  0mA"
    ATTRS{urbnum}=="164"
    ATTRS{idVendor}=="1d6b"
    ATTRS{idProduct}=="0002"
    ATTRS{bcdDevice}=="0302"
    ATTRS{bDeviceClass}=="09"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bDeviceProtocol}=="00"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{speed}=="480"
    ATTRS{busnum}=="1"
    ATTRS{devnum}=="1"
    ATTRS{devpath}=="0"
    ATTRS{version}==" 2.00"
    ATTRS{maxchild}=="6"
    ATTRS{quirks}=="0x0"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{authorized}=="1"
    ATTRS{manufacturer}=="Linux 3.2.0-4-686-pae ehci_hcd"
    ATTRS{product}=="EHCI Host Controller"
    ATTRS{serial}=="0000:00:02.2"
    ATTRS{authorized_default}=="1"

  looking at parent device '/devices/pci0000:00/0000:00:02.2':
    KERNELS=="0000:00:02.2"
    SUBSYSTEMS=="pci"
    DRIVERS=="ehci_hcd"
    ATTRS{vendor}=="0x10de"
    ATTRS{device}=="0x0068"
    ATTRS{subsystem_vendor}=="0x1458"
    ATTRS{subsystem_device}=="0x5004"
    ATTRS{class}=="0x0c0320"
    ATTRS{irq}=="20"
    ATTRS{local_cpus}=="ffffffff"
    ATTRS{local_cpulist}=="0-31"
    ATTRS{dma_mask_bits}=="32"
    ATTRS{consistent_dma_mask_bits}=="32"
    ATTRS{enable}=="1"
    ATTRS{broken_parity_status}=="0"
    ATTRS{msi_bus}==""
    ATTRS{companion}==""
    ATTRS{uframe_periodic_max}=="100"

  looking at parent device '/devices/pci0000:00':
    KERNELS=="pci0000:00"
    SUBSYSTEMS==""
    DRIVERS==""

E.g. a device that identifies as a flash drive (bulk storage device) is seen as a bulk storage device – all I can do is change the rules on how it’s handled by the system.


Dec 15 22:08:05 vbserver mtp-probe: checking bus 1, device 9: "/sys/devices/pci0000:00/0000:00:02.2/usb1/1-4"
Dec 15 22:08:05 vbserver mtp-probe: bus: 1, device: 9 was not an MTP device
Dec 15 22:08:06 vbserver kernel: [37231.122454] scsi 8:0:0:0: Direct-Access     Kindle   Internal Storage 0100 PQ: 0 ANSI: 2
Dec 15 22:08:06 vbserver kernel: [37231.127116] sd 8:0:0:0: Attached scsi generic sg2 type 0
Dec 15 22:08:06 vbserver kernel: [37231.131048] sd 8:0:0:0: [sdb] 6410688 512-byte logical blocks: (3.28 GB/3.05 GiB)
Dec 15 22:08:07 vbserver kernel: [37231.234688] sd 8:0:0:0: [sdb] Write Protect is off
Dec 15 22:08:07 vbserver kernel: [37231.344693] sd 8:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Dec 15 22:08:07 vbserver kernel: [37231.565991]  sdb: sdb1
Dec 15 22:08:07 vbserver kernel: [37231.844747] sd 8:0:0:0: [sdb] Attached SCSI removable disk
Dec 15 22:08:08 vbserver kernel: [37232.678814] FAT-fs (sdb1): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!

I can change the rules that automount the file system. Which is fine and good when the device identifies as a drive.
But if the device mimics my keyboard… I can’t create a rule that can tell the difference between my real keyboard and the spoofed one. With access to the USB stack, exploits that escalate privileges through the firmware or driver layers of the OS are a demonstrably realistic possibility.

If I had to come to a conclusion at this time I’d strongly suspect that the only workable, mitigating solution is a reliable cryptographic device identification trust system (perhaps a variation of the old IBM MCA/RS Bus card identifier system?). And a continued healthy level of distrust when it comes to devices you attach to your computer.
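
To make that concrete, here is a purely hypothetical sketch of what such a device-identification scheme could look like: a shared-key challenge-response run by the host before it binds a driver to the device. Nothing in the current USB spec or in shipping firmware supports this; every name and step below is an assumption for illustration only.

#!/usr/bin/env python3
# Hypothetical sketch only: a shared-key challenge-response that a host could
# run before trusting a device's claimed class. Not part of any USB standard.
import hmac, hashlib, os

def host_challenge():
    return os.urandom(16)                          # random nonce the host sends to the device

def device_response(shared_key, challenge):
    # would run inside trusted device firmware, keyed at manufacture or enrolment time
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def host_verify(shared_key, challenge, response):
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Illustrative round trip: only after this passes would the host bind, say, a keyboard driver.
key = os.urandom(32)                               # enrolment step, purely illustrative
nonce = host_challenge()
assert host_verify(key, nonce, device_response(key, nonce))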

Chris Abbott July 31, 2014 7:46 PM

Is there any way around this? I work on infected machines daily and (I know, this is really stupid, but in all the years I’ve worked I’ve never had a problem) I use a USB drive to install software on my customers’ machines, then I occasionally do a SHA-512 hash comparison on the executables on the drive. I often scan it too with AV. If the drive’s firmware can be pwned, should I start installing my usual software on machines with a CD/DVD? That would be such a pain in the ass!

@anon @Name (required):

Sometimes I worry that all of the security vulnerabilities in everything these days are going to force us back into the dark ages…

Nick July 31, 2014 7:52 PM

In the absence of the above, a useful band-aid might be BIOS or OS level access control limits on the class of USB devices allowed on a given interface, e.g.: only HID devices on USB 0 and nowhere else, and mass storage devices will only be recognised on USB 3 and 4.

A further protection might be the ability to lock USB x to, e.g., the keyboard with serial number ‘12345678’.

We managed for years with purple and green PS2 sockets for keyboards and mice – perhaps the time has come for a rainbow of colour-coded, access-limited USB sockets, too.

Agree with this sentiment, my first reaction to hearing about this yesterday was

1) USB firmware should be cryptographically signed.
2) Operating systems should prompt users to enable specific USB functions. When I pop in my badusb thumbdrive the OS should pop-up asking me to allow the USB Storage Device, and USB Keyboard. I think most users would be able to navigate, and be on notice, that their storage shouldn’t be a keyboard.

The problem with both of these obvious solutions is
1) Change to the USB spec is probably politically impossible or so far down the track as to be useless
2) Change to operating system behaviour is difficult to do correctly and is only slightly quicker to execute.

HiTechHiTouch July 31, 2014 7:52 PM

Do you remember the flak about printers (and rogue microcode)?

There are lots of USB-connected devices, not just sticks and memory cards, open to this vector/exposure…

koanhead July 31, 2014 7:58 PM

Sounds like FUD and sensationalism to me. I don’t believe, for example, that USB devices are as homogeneous as they claim: all USB devices have writable firmware, really? And no trusted firmware exists for any of them? ‘Cause I’m pretty sure that the Debian repositories contain some fairly trustworthy open-source code for some USB devices, usb-dux in firmware-linux-free for example.

Until it’s released, I say it’s puffery. Yes, USB devices are not secure. It’s always been the case and will continue to be so. To say that an ownable device “must be regarded as owned”, though, is foolishness. Firmware images can be verified, and USB host adapters can be turned off.

It does underline the idea that only Free or at least open-source software is trustworthy from a security perspective. I would think that readers of this blog would already have received that memo. Non-free and proprietary firmwares are a problem in general, and not solely in USB land or relating to this specific class of vulnerability.

Chris Abbott July 31, 2014 8:28 PM

@Nick:

Let’s say we have a USB 4 that does that. It seems like that could present backward compatibility problems. Would it? Most people would probably use the old cheap flash drives anyway I would guess. People are lazy when it comes to security.

koanhead July 31, 2014 8:37 PM

OK, the Register’s coverage points out that it’s “not all USB devices”.
It’s not clear to me from their article that the research itself makes this caveat, and I don’t see any reference to it at the SRLabs link so I think that caveat is the Register’s (and my) own interpretation.
The Register further points out the security-vs-cost tradeoff of securing against BadUSB by making microcontrollers accept only firmwares signed by one of a hard-coded list of keys. I don’t think this will work any better than the Trusted Computing Initiative has, that is, a small increment in security for a massive increase in cost and inconvenience to users. I don’t think any solution that doesn’t involve open-source firmware is going to work well. It’s better to sign a hash of another, known-good image than to sign the image you’re sending.

Gweihir July 31, 2014 8:44 PM

I expect this is a non-issue on most devices, because the flash storage with the firmware will have the protection bits set. (I have never seen any USB thumb-drive where you could upgrade the firmware.) At that point, a software-only attack gets infeasible or extremely hard.

Also notice that if this works, it has a high probability of getting noticed, because this attack has a high probability of bricking the USB chip. It needs to get everything right in order not to and there are numerous different controllers with numerous different firmware revisions out there.

And finally, there is not much space in there that could hold attack code. In particular, there is not enough space to be able to work with a lot of different sticks. In practice, this will likely be two-hop, i.e. PC->USB-stick->PC, and lose its propagation capability to USB in the first hop.

Incidentally, the claim this could work on USB keyboards is complete hyperbole for normal keyboards. For example, I have a Cherry keyboard that has a non-erasable ROM plus 256 bytes of EEPROM to store the keyboard map. Sure, the one-time programmable ROM is likely flash, but the device does not have the high-voltage generator and bus lines required to erase the cells. It physically cannot do it. It is also a whopping 4kB of ROM. Nobody will be able to fit malware in there and even trying will break the keyboard, which would be rather noticeable. And once again, this requires a specific attack for each different keyboard hardware and firmware revision. As most USB keyboards and mice are HID, there is a large component selection space.

So my guess would be “neat research, likely irrelevant in practice due to numerous issues, except for targeted attacks where the hardware to be attacked is precisely known in advance”.

Buck July 31, 2014 9:58 PM

@Nick D

While I agree with much of what you have to say here, there is at least one gaping exception…

I think most users would be able to navigate, and be on notice, that their storage shouldn’t be a keyboard.

From my own personal experiences, I would expect most users to simply not understand, not care, or click through (and accept all defaults).

Thoth July 31, 2014 10:01 PM

One huge take away…

As with all programs, functions, devices and utilities designed by humans, it’s prone to errors and irresponsible implementation/design/testing in an opaque environment.

It’s not just USB that’s dangerous. Anything is dangerous if security is not part of the design/implementation/testing, especially in an opaque environment.

Disabling write is not the solution to all problems. The solution to all problems is being responsible and transparent.

The mindset of most program creation (about 90% +/-) is simply, “well I just wanna push out this product and get the Wows and $$$ and I don’t really care”. That’s the attitude that killed us all.

That’s the very key that allows malware and bugs to creep in… because people don’t care… because people don’t want to be accountable… because people want to make systems opaque.

Essentially, it’s the human issue right at the core and it’s very hard to fix unless the entire culture of humanity on the approach of building systems can be changed to be a more responsible and transparent one.

Buck July 31, 2014 10:10 PM

@Chris Abbott

Luckily, for near-future script-kiddie assaults, you could change your habits now! 😉
Unfortunately, if you’re dealing with well-resourced bleeding-edge attackers, your efforts are nil… :-\

Clive Robinson July 31, 2014 10:51 PM

@ Buck,

We know users cannot tell the difference when their USB device changes its function.

The reason we know this is the broadband GSM etc. mobile network devices from the likes of ZTE.

The devices contain firmware for two and often three different USB classes: a “read only” memory device such as a CD-ROM, an “AT command set modem”, and a “flash memory” thumb stick device.

Basically, when you plug it in and it powers up, it defaults to being the CD-ROM, which uploads a small executable program that then runs and checks for service-provider software on the PC; if it finds the software it simply tells the device to switch to being the modem. If it cannot find the software on the PC, it installs it prior to telling the device to switch to being a modem. The reason it appears as a CD-ROM is that there is a fairly standard trick for a CD-ROM to appear as a native device on Apple Mac, *nix and MS Windows systems. Thus the broadband modem device has enough “hidden” storage to contain three different file systems with different OS software on each.

The process of installing the service provider software is fairly standard, but I’ve yet to see an automatic uninstaller, which means your PC hard drive will get littered with such software, as each different broadband modem you plug in stuffs on another few meg of service provider software.

It’s the way things are supposed to work from the USB consortium, device manufacturer and service provider point of view. And, although they are unaware of the mechanics behind it, it’s how the majority of end users want it to work as well.

You can blame much of this on the MS-led/inspired “Plug-n-Play” ethos where the user would not have to worry about installing drivers etc. – they’d just plug it in…

Way back then, security professionals warned it was the equivalent of getting drunk and then practicing “unsafe sex”, but as is normal with humans, the bad consequences have to happen to the average individual before they will take heed, by which time it may be too late and they have the equivalent of mind-destroying tertiary syph or worse…

Organon July 31, 2014 11:03 PM

@Thoth: “Essentially, it’s the human issue right at the core and it’s very hard to fix unless the entire culture of humanity on the approach of building systems can be changed to be a more responsible and transparent one.”

The ‘responsible’ part applies to the ethical layer, but the problem is deeper than ethics.

There’s an epistemological problem relating to reduction, which is the logical process of tracing a concept back to the concretes that it stands for.

For most folks abstraction is done by other people, and the foundations of ideas are not to be questioned. Products built under that regime are naturally not going to be transparent, and people wouldn’t know what to do with them if they were. Under Christian Fascism, there’s a cultural bias that favors believing over knowing.

With Google lowering the cost of knowing, it is becoming feasible on an ever larger scale to make transparent systems, which is to say systems that are reducible.

Now the NSA has provided the motivation for building reducible systems.

Mark H July 31, 2014 11:04 PM

I wonder how hard it would be to patch sudo to only allow passwords from a specific bus/port/keyboard combo, i.e. to only authenticate with my laptop’s built-in keyboard. Or to disallow HID functions on specific USB ports.
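
The bus/port/keyboard identity needed for that already surfaces on Linux as /dev/input/by-path symlinks; a sudo or PAM patch would still have to read keystrokes from the chosen event node rather than the tty, which is the hard part. A short Python sketch just to show where the information lives (the device names in the comments are illustrative):

#!/usr/bin/env python3
# Sketch: enumerate input devices by their physical attachment point.
from pathlib import Path

# Each symlink name encodes the physical path of the input device, e.g.
# "platform-i8042-serio-0-event-kbd" for a built-in PS/2 keyboard or
# "pci-0000:00:02.2-usb-0:4:1.0-event-kbd" for a keyboard on USB port 4.
for link in sorted(Path("/dev/input/by-path").iterdir()):
    print(f"{link.name:55s} -> {link.resolve().name}")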

Nick P July 31, 2014 11:15 PM

@ old fart & Buck

An update on a USB flash drive is unlikely. However, I suspect that it’s not actually the cause. I mentioned another driver in my post: reuse. Let’s say you develop all kinds of peripheral devices. You need a USB controller in most of them. The main difference in them will be the software on the controller and/or its performance. So, you now have two choices: design a bunch of separate USB chips (or ROMs) with separate production, or design one programmable USB chip with a production step selecting what firmware to put on it. The latter makes a hell of a lot of sense to a company focused on the bottom line. Now, repeat this process for many chip designs with maximizing reuse being a key goal and you get the modern IC marketplace.

Note: They might also be using a SoC they use in other embedded products which just happens to be field-upgradeable. They might keep their embedded RTOS, timing analysis, etc the same across product lines.

Another good example is hard disks. There’s many models available with different amounts of storage. Consumers assume that higher end models have more of some physical thing that results in a higher price. Actually, they figured out that it’s cheaper to design just one hard disk, then do something in the firmware that changes how much space is available. This saves them plenty of money. They also might need to support firewire, USB, eSATA, and so on. The next step that might benefit would be putting them all on the controller or buying an existing chip on the market that supports them. Then, just like configuring size, you tell the firmware what to enable. And now your new product just needs a different connector soldered onto it.

@ Chris Abbott

I have no idea. Any change could cause BC problems. Like I said above, I think it was functionality that was reused either due to being a COTS chip they bought or them using that functionality in another device.

@ Old Fart

Re Universal

It’s a good observation as it would seem to be the root problem. It’s not, though. There are existing methods to handle general I/O with quite a bit of security. I even posted a few new ones on this blog. The thing is that the I/O mechanism must be designed for secure operation, which mainstream I/O typically isn’t. Combining insecure I/O architectures, insecure devices, and the universal attribute predictably leads to insecurity.

@ Gweihir

I’m mostly in agreement that there’s only so much one can do on microcontrollers. Yet, early computer viruses were written in the hundreds-of-instructions range, and an injection over DMA for a specific OS might not take up much space. Removing the password, changing existing code to admin/kernel/hypervisor mode, or leaking a key in memory would take even fewer instructions than a rootkit. So, size matters but I’ve seen too much black hat innovation to build assurance on such an attribute.

@ egeltje

The Qubes founder’s blog has a nice writeup on USB security issues:

http://theinvisiblethings.blogspot.com/2011/06/usb-security-challenges.html

I doubt Qubes has gotten to the point of truly isolating damage USB can do. They do use an IOMMU to restrict the device’s DMA and allow you to assign a device to a VM. Those two alone are quite beneficial. Yet, anything you send to or receive from the device still poses a risk as with any isolation method. And as we’re talking storage devices that’s a significant risk.

@ Thoth

Like a general purpose device, a secure microcontroller requires mechanisms to ensure loading of authorized code, protection of internal memory, control flow integrity, and prevention of data becoming code. That it might lose power at any moment presents additional issues shared by smart cards. That the software is controlled by the manufacturer, special purpose, and rarely changes means it’s actually easier to secure this kind of device. That the device must be as cheap as possible to ensure profitability pushes against security. So, the solution must use little power, be very cheap, be adaptable to other stuff they do (reuse), meet all performance requirements, and be secure.

The best solution I see is starting with something like the Sandia Secure Processor design, then adding trusted boot/upgrade and optionally an I/O coprocessor. The result would be cheap, performant, easy to modify, and unlikely to be hit with code injection. A transactional, versioning approach to persistent storage might also help with failing safe on power failure. Stateless design is the alternative (and default) for that kind of thing.

Buck July 31, 2014 11:21 PM

@Clive

Automatic installer… 😛 Don’t think I’ve run one of those (overtly) since Windows 3.1! Though yes, user education and the techie/non-tech gap remains a major problem… (plus it seems that the OS publishers always go about mucking with SOP for disabling insecure default settings)-:

Clive Robinson July 31, 2014 11:28 PM

The general take away from this is,

    Security is much harder than it should be, because joe average wants an easy life.

It’s the old Security-v-Usability argument that has been around since the 1960s, and if people think back they will see all new attack classes were preceded by new changes in user behaviour. For instance, Boot Sector malware followed the advent of the “sneaker nets” of the late 70s and 80s, which in turn gave us the AV industry. Likewise other malware infection vectors, via modems, LANs, Plug-n-Pray, removable media, removable devices… and more recently the problems with BYOD smart phones. The cycle repeats over and over, with users not learning easily from history in living memory.

Whilst users practice a “Party hard, die young” mentality to their computing, we also have to remember the industry bears quite a bit of responsibility. It’s the ultimate in consumerism: product life cycles are measured in months not years, way faster than less expensive “white goods”, which is why for many years the sector has been the leading light of FMCE (fast moving consumer electronics).

But with FMCE there are issues: the profit margin is slim and competition is high. It is set up as a “push system” where “returns” have to be eliminated, as the cost of a single return can exceed the profit of up to several hundred units, and far far exceed even the retail cost of individual items. The joke of it is it’s now got so bad that for some retailers the items are sold at cost and the profit is made on post and packing…

When viewed this way it can be seen why things are the way they currently are. This leaves us with the question of “Who is going to pay for security?”, and the resounding answer in well over 99.999% of user level ICT purchases is “Not Me, No way, No how!!!”.

Buck July 31, 2014 11:29 PM

@Nick P

I’d be willing to wager that any company hella focused on the bottom line would definitely be able to track specific SOC providers to highly rated RMA customer locales… 😉

Figureitout August 1, 2014 12:52 AM

Hmm, running off a USB-stick now…such handy little f*ckers. Already knew it was infected as it was a 16GB drive but plug into and moving on to another drive I lose 2GB and there suddenly pops up 2 devices…Everywhere for me it’s been a separate device installed in the USB, w/ its own filesystem…it’s infected all my computers and all my accounts, so to remove it I need a complete purge which I’m not ready to do yet…verified this w/ my infected USB sticks many times, you should let me stick one in your computer if you want to see just how secure it is….

Not to mention my Windows laptop I’ve been leaving out to pick up more infections…they came of course (not from the internet). Now I’m suddenly running “illegal windows software” after leaving it dormant all summer. “Let the framing begin”. Oh, my Temp directory got filled w/ garbage as usual, more encrypted garbage on a newly formatted disk (total joke), now I’ve got more smiley faces showing up on encrypted partitions. A part of me wants to “accidentally break” this PC but another part says wait a minute it’s still capable of processing…can’t use it for my school work as it’s got some hardware module in it and I don’t feel like ripping this laptop apart.

Just got an Ecrix VXA-1 external tape drive… uses the SCSI-SE protocol. Will investigate more later…

I think this really is a problem overall w/ just memory…who’s to say a little USB-chip isn’t in your hard disk w/ the power, clock, ground, signal line all just taking power from the computer power supply? That takes some analysis to find.

Also, /r/netsec on “BadUSB”.

65535 August 1, 2014 4:38 AM

@ Ian Mason

I think this subject has come up in other posts. I did review the video at your link.

I do see the clear problem with having a re-programmable micro-controller on a flash drive connected to your computer. It presents a large attack surface.

http://www.youtube.com/watch?v=r3GDPwIuRKI
30C3: Exploration and Exploitation of an SD Memory Card

Here are the key points from the YT video on Exploration and Exploitation of an SD Memory Card:

19:10 Go to Baidu [Chinese version of Google] and download the SD controller programming tool [screenshot of the programming tool download site, in Chinese]

34:21 Explanation of matching controller to actual flash chip “big fan of running any code on any hardware you own including your SD Card… no method of verifying what is running on these controllers”

34:59 Attack Scenarios

- Eavesdropping
* Report smaller than actual capacity [report the SD card as 4 GB when it is 8 GB, hide copied data].
* Data is sequestered to hidden sectors that are un-erasable.

- ToC/ToU
* Present one version of the file for verification; another for execution
* Bootloader manipulation, etc.
* Selective-modify
* Scan for assets of interest, e.g. security keys, binaries, and replace with insecure versions.

37:21 Samsung MMC
* Samsung pushed firmware patch to eMMC cards in Android [Note?]
* Contain ARM7 code [hyperlink]
* Uses “class 8” instructions reserved for manufacturer

39:59 Wrap-up slide
* SD cards contain fully programmable micro-controllers
* Controller program modifiable via special host commands; potential MITM attacks – extremely cheap microcontroller for fun projects. [Explanation of why the manufacturers don’t hard-code the micro-controllers: too many mix-and-match flash chips]

42:32 Demonstration of an SD card attached to a computer to reprogram the SD micro-controller.

47:59 Question about USB sticks and SD cards [not much difference].
* Question about various attacks [various micro-controllers in the laptop and so on]. Sandisk controllers may have micro-controller protection from the factory, but the user cannot protect it.
* Flash with WiFi controllers are attack targets.

@ Nick P

I think you nailed it. Any device with DMA capabilities is a risk. The reprogrammable microprocessors on these flash devices are for economic reasons [mix and match different flash chips cheaply as possible while managing the wear factor].

@ tz

I would think the rubber ducky USB would be really handy if you could make pirated exterior cases of major manufacturers [preferably the exact case of your target], place the rubber ducky inside the fake case with a persistent virus and trick the user into putting it into his machine.

@ Clive

You seem to be knowledgeable about this USB device function-switching trick. I gather it has been done before 😉

Daniel August 1, 2014 5:47 AM

Why is code allowed to be written/changed on a storage device?

Can someone explain the data flow? Does the driver read/execute firmware written to the USB device in order to detect its type, writing mode, etc.? Why do that if there are still drivers on the computer that can identify the USB type?

Something is very badly made here (read: backdoor).

ATN August 1, 2014 5:58 AM

Any device with DMA capabilities is a risk.

Not if the host computer programs the DMA (source address + length for reads, target address + length for writes); yes if the external device decides those addresses itself or can modify them.

Tim Bradshaw August 1, 2014 8:12 AM

Although the risk from USB is, perhaps, higher because people tend to plug USB drives into their computers without thinking, it seems to me that there are a lot of peripherals attached to a modern system with a lot of field-upgradeable firmware in them. How much do you trust your disk drives? Your keyboard(-controller), your mouse? Anything?

Jeremy L August 1, 2014 8:16 AM

Signing the firmware doesn’t make it secure.

It just ensures the code was written by someone with access to the keys of a manufacturer somewhere on the planet. It also restricts open-source firmware, assuming that a signed key requires some payment and identity validation.

The fundamental issue is the hardware and the protocol. We have well-established infrastructure for ethernet/tcp connections. USB is just another bus. But most people know very little of how USB works.

One solution: ethernet/PoE peripherals. Unlike USB, we have tools to make that secure and monitor it.

Andrew Yeomans August 1, 2014 8:44 AM

Webkeys have been around for some time. These can look like a USB flash drive, but emulate a keyboard. They are cheap enough to be given away as a marketing promotion.

See http://my-key.co/webkeys/usb-web-keys or http://www.digital-key.co.uk/ for some examples.

One I looked at pretended to be an Apple keyboard, presumably so OS X would not complain when plugged in. “lsusb” gave “ID 05ac:020b Apple, Inc. Pro Keyboard [Mitsumi, A1048/US layout]”.

It was amusing to see the expression of the person configuring USB lock-down software to block all access when, on plugging one into his computer, a web page popped up.

Security risk? Comparable to letting a stranger type on your keyboard. And maybe we should not trust any un-checked inputs from any device. Just as the industry had to turn off USB auto-run by default, maybe we need the user to type a challenge PIN before accepting a newly connected keyboard by USB, Bluetooth, etc.
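
A sketch of what that keyboard challenge could look like on Linux, using the python-evdev library (assumed installed). The event node path is illustrative, and a real implementation would hook device enumeration and keep the new keyboard de-authorized until the PIN matches; this only shows the challenge loop itself.

#!/usr/bin/env python3
# Sketch: show a random PIN and accept the new keyboard only if that PIN
# is typed on it. Requires the python-evdev package and read access to the node.
import random
import evdev
from evdev import categorize, ecodes

def challenge_new_keyboard(event_node="/dev/input/event5", digits=4):
    """Return True only if the displayed PIN is typed on the given device."""
    pin = "".join(random.choice("0123456789") for _ in range(digits))
    print(f"To trust this keyboard, type: {pin}")
    typed = ""
    device = evdev.InputDevice(event_node)          # the newly attached keyboard
    for event in device.read_loop():
        if event.type != ecodes.EV_KEY:
            continue
        key = categorize(event)
        if key.keystate != key.key_down:
            continue
        name = key.keycode if isinstance(key.keycode, str) else key.keycode[0]
        if name.startswith("KEY_") and name[4:].isdigit():
            typed += name[4:]
        if len(typed) >= digits:
            return typed == pin                     # caller would then mark the device authorized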

However there is still the possibility of bugs in the USB drivers – I’ve heard stories of car electronics crashing when some types of phones are plugged into the entertainment system.

USB fuzzing has been around for several years, maybe this latest research will bring renewed work to detecting and fixing USB driver bugs.

Incidentally, is anyone aware of recent research on re-flashing USB controllers with open software? Most of the work reported seems to end up pointing to http://flashboot.ru/ which often relies on use of arbitrary Windows executables to do the re-programming. Hardly verifiably safe and secure!

Clive Robinson August 1, 2014 9:31 AM

@ Tim Bradshaw,

    How much do you trust your disk drives? Your keyboard(-controller), your mouse? Anything?

Simple answer, not at all.

A friend demonstrated an interesting prototype hack to me a while back. A well known manufacturer of printers has in some of the mid range and low end models both USB and WiFi hardware fitted. It is fairly easy to turn the printer into what is effectively a WiFi dongle and covertly transmit copies of the printed documents out to a suitable access point. Some of the models also have scanners and other interesting bits such as memory card readers as well…

What my friend is trying to do is get the WiFi interface on the printer to also act as a “repeater” where it bridges two or more WiFi networks together, thus forming a covert wireless mesh network which could shift data across many links before reaching the final exfiltration destination or link out onto the Internet.

He pointed out a couple of salient facts we should all consider,

1, Infinite monkeys would stand a better chance of writing secure code for peripheral devices (especially for those on ARM u-procs).

2, There are no AV or other security products for end users to put on their peripherals to stop even the simplest of attacks.

Thus it’s like stepping back to the days of Win95, but without the “Dr Solomon’s” to keep the idiots out. He also demonstrated some time ago a way for a webserver to find out not just the model but a whole bunch of other details about printers connected to the computer…

@ ATN,

The problem with DMA is it works below the CPU level in the computing stack, a little above bus level. This means you need a hardware solution to stop a peripheral DMA going rogue on you, because software cannot see it (only its results). As others have noted above, the OS manufacturers don’t even try to lock down hardware even where it can be, which makes the problem worse.

The solution is to put some kind of MMU between the peripheral DMA controller and the system memory –and other devices– such that fine grained lock down can be achieved by the OS (providing the OS supplier supports/allows it).

@ 65535,

Yes it’s been done before.

If you hunt back in this blog you will find that I have mentioned I was trying to develop a very secure USB memory drive that also had RTC, GPS and GSM interfaces in it, such that it could be remotely erased, or self-erase when it went outside a geographic area or time window, or the user pressed “the big red button” due to stress, or even just shouted etc. near it (it used the trick for various things including becoming a secure GSM modem).

The problem is it’s not reliable in the ordinary sense, and thus you have to do key management in real time from a remote control point. Whilst this can be done relatively easily, it had other issues…

One of which was it turned out there were no GSM chips/modules that could even remotely be secure… That is, they all had large amounts of RAM/ROM, flaky Java implementations, and were designed or manufactured in very untrustworthy places such as China, France, Israel and the US. And as has been observed by others on this blog recently, one of the Israel/US manufacturers has now come under the Chinese sphere of influence…

Nick P August 1, 2014 1:11 PM

@ Jeremy L

“Signing the firmware doesn’t make it secure.”

As I said, it’s one of a few components necessary in a secure device. The property is that only authorized code can run. Firmware signing is one of many technologies that might be used for this. It has the advantage of allowing firmware to change. If hardware never treats data as code and has firmware signing, it’s not going to run malicious code via a mere software attack. An extra property or two adds protection against attacks by the CPU or other hardware.
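
A minimal sketch of that “only authorized code can run” property, using the pyca/cryptography library (assumed available). In a real device the public key and the verify step would have to live in mask ROM or an immutable boot loader on the controller itself; host-side checking alone proves nothing about what the device will actually run.

#!/usr/bin/env python3
# Sketch: accept a firmware image only if its signature verifies against the
# manufacturer's public key. Uses Ed25519 from the pyca/cryptography package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def accept_update(public_key, image, signature):
    """What the device's boot code would do before flashing a new image."""
    try:
        public_key.verify(signature, image)
        return True
    except InvalidSignature:
        return False

# Illustrative round trip (the signing half stays with the manufacturer).
manufacturer_key = Ed25519PrivateKey.generate()
firmware = b"\x00" * 4096                          # placeholder image
signature = manufacturer_key.sign(firmware)
assert accept_update(manufacturer_key.public_key(), firmware, signature)
assert not accept_update(manufacturer_key.public_key(), firmware + b"X", signature)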

Changing the protocol itself doesn’t seem to be an option at this point, although the protocol handler can be modified for increased security.

Wael August 2, 2014 5:36 AM

@unsecure,

USB drive to go the way of the floppy disk

Only to be replaced by something, of course, more “secure” like the smog — AKA the cloud.

Mike the goat (horn equipped) August 2, 2014 7:46 AM

Wael: I think it is fair to consider that the designers of USB didn’t really consider security to be a priority. Although when you look at some of the alternatives that were in vogue at the time – like FireWire which gave you DMA access to the host – it wasn’t nearly as bad as it could have been.

Wael August 2, 2014 10:09 AM

@Mike the goat (horn equipped),
True! USB had no security considerations at design time. Security came in as an afterthought, if it did at all. To some extent, the same can be said about Bluetooth. PS/2, serial ports and parallel ports don’t have much security either, to be fair. DMA (master mode) can be a hole, as @Nick P repeatedly stated, although I have a feeling his words are falling on designers’ deaf ears.

M.V. August 2, 2014 10:46 AM

USB devices don’t have control over the DMA inside the USB host controller, so the DMA hole doesn’t apply.

Also, USB up to version 2.0 is based on polling, which means the device can’t send unexpected data, just malformed data to exploit driver and SW bugs.

For external devices the DMA hole currently only applies to Firewire, ExpressCard, Thunderbolt and some exotic stuff based on cabled PCI express.

Zig Fiedorowicz August 2, 2014 12:33 PM

I have some questions about this badusb vulnerability.

My understanding is that (a) USB firmware is executable code which runs on the microcontroller within the USB device, NOT code which runs on the host computer, and (b) the USB standard specifies a communication protocol between USB devices, NOT the architecture of the microcontrollers within these devices.

If my understanding is correct, then I don’t see how you could write BadUSB firmware which would work across a wide range of devices. The firmware would not be binary-code compatible between devices, any more than Intel machine code could run on an ARM processor.

If I am wrong, I would be glad to be enlightened.

Herman August 2, 2014 4:16 PM

This is just another trojan/virus delivery method. The effect on a system will be limited if there is proper separation between users and administrators and if RBAC or MAC is enforcing.

So, Enterprise Linux and Mac users do not have much to fear from this and Windows Enterprise users with a properly configured security profile and RBAC will probably be OK too.

It is therefore mainly the el'cheapo Windows versions without Rights Management and RBAC that are endangered, and the big issue is that most people have no idea what the differences between el'cheapo Windows and Enterprise Windows are.

M.V. August 2, 2014 4:45 PM

@ Nick P

They don’t claim that they used an USB device to launch the attack. Their DAGGER firmware requires what they call first party DMA, ie. PCIe card. They talk about USB only in the context of finding the URB (USB Request Block) to locate keystrokes.

USB uses third party DMA with the USB host controller as the 3rd party.

Using an ExpressCard is a realistic example for their attack. Just insert it into the notebook and power it up. No need to actually boot the machine, just inject the malware and hide it away in the Intel ME.

Nick P August 2, 2014 4:57 PM

@ M.V.

Oops, my bad. There's still the problem that one must trust the implementation of the USB functionality. Experience with write protection shows that what a vendor says it does and what it actually does can be very different.

So, back to my mantra: don't trust the device in the design unless it proves it's trustworthy.

Eric August 2, 2014 4:58 PM

I remember reading about a G20 summit last year in Russia – how phone chargers were handed out to people right and left, but the chargers themselves were USB devices containing exactly this sort of malware that could in turn infect your phone.

Thoth August 2, 2014 7:59 PM

What would be the most efficient and safe way to transfer files if USB methods are now considered bad and unsafe ?

  • Secure network transfers ?
  • Rewritable optical disk ?
  • Back to floppy era ?
  • Manually copy hexcodes or base64 by hand….

Scott "SFITCS" Ferguson August 2, 2014 10:06 PM

@Herman


The effect on a system will be limited if there is proper separation between users and administrators and if RBAC or MAC is enforcing.

No. DAC and MAC are layers above the USB subsystem. If the USB device ID database matches the device characteristics of your keyboard, the system will treat the device as a keyboard. If the device leverages insecurities in the USB subsystem, RBAC will have no effect. Neither will SELinux.


So, Enterprise Linux and Mac users do not have much to fear from this and Windows Enterprise users with a properly configured security profile and RBAC will probably be OK too.

So, no – that is incorrect.

Scott "SFITCS" Ferguson August 2, 2014 10:28 PM

@Zig Fiedorowicz


If my understanding is correct, then I don’t see how you could write
badusb firmware which would work across a wide range of devices. The
firmware would not be binary code compatible between devices, no more
than Intel machine code could run on an ARM processor.


Correct – the “firmware” would not work across a wide range of devices. That’s a correct assumption even though it’s based on false logic.

Different dog to Herman's misunderstanding (he "assumed" that the malware would be running as a user process and therefore be subject to role-based permission restrictions), but same leg action: the possible reality is not bound by the false logic of the assumption.

USB attacks are not restricted to “one piece of malicious firmware” – there’s no reason they can’t use a library of firmware attacks.
There’s no reason to assume attacks simply consist of replacing firmware with malicious code – it could overwrite sections of existing firmware replacing or adding functionality (even with space limitations code can be reorganised to add more functions), it could leverage undocumented functions/flaws in firmware and/or the USB subsystem, or (random culprit) the Windoof EHCI library, or a combination of those methods.
I wouldn’t rule out buffer overflow exploits as part of the attack library either (coughError -200361 and cousinscough).

Scott "SFITCS" Ferguson August 2, 2014 10:38 PM

@André Mello

I can’t see how what they describe is possible. I only have a basic understanding of the USB protocol, but, even if they can completely rewrite the firmware, how can the device take over the system and do whatever it likes with it, in a non-detectable, unfixable way?


What seems to be the case, exchanging ideas with other people, is that the device pretends to be of HID class and uses keyboard and/or mouse input to exert control, inject code, etc. But no matter how fast it happens, it can still be detected and counteracted by an antivirus, or by the OS, and even if it’s not, it’s possible to create some form of hardware CAPTCHA to make sure a human is controlling the input. So that would still not be impossible to fix, even if not in an ideal way.


Your keyboard is detected long before your antivirus is started.
What rules do you propose that will prevent your OS from allowing a keyboard, while simultaneously (magically?) disallowing the fake keyboard?


As far as I know, the device has no active control of its host,

Unless it leverages that control using flaws in firmware, drivers, or other parts of the OS….
The absence of knowledge is not proof of absence.


So code execution would only be possible via an implementation flaw in the kernel driver, which is how it’s been done so far. The Wired article implies the researchers achieved a way to do that using just the USB specification, but if that’s really it, it blows my mind.

It’d “blow my mind” if the drivers, sub-system, and firmware did not have many major exploitable flaws. Some of them deliberate.

Nick P August 2, 2014 10:43 PM

@ Thoth

There isn’t a provably secure way to transfer files via software on an architecturally insecure machine. However, there are methods that reduce risk. The classic option is using a guard. They’re used, for example, on moving data between physically isolated networks of different classification levels. The guard just needs I/O to computers, maybe do crypto, run checks on the data, scrub potential covert storage channels, and is special-purpose enough to be built to high assurance standards. There’s been mail, network, and even web application guards with first two done to high assurance by some products. One can remove the DMA risk by using a high-speed PIO device that has no DMA. I once hacked a solution with IDE cables in PIO mode. Or one can leverage a microcontroller, FPGA, etc to build a custom I/O chip on the guard and each endpoint. You can ensure it has DMA, but what’s controlling it is secure. Or has an IOMMU.

The simplest guard is an embedded board with PIO, connections to the computers, and running a microkernel OS with carefully written drivers. OpenBSD on a more powerful embedded board, or even an old-school server with certain Open Firmware changes, can do as well. Worst case scenario, a regular computer with an open BIOS and Linux with assurance-enhancing extensions (eg SVA-OS, SMACK). Remove any piece of code from the system you don't need. If it has to be there to compile, just change the code to return successfully instead of doing something. 😉 Just make sure that the application level of it uses no dynamic memory allocation, gets size/hash beforehand, consistently checks timing to know if the device is stalling on transfer, and does bounds checks on arrays/buffers.

Now, if we’re talking just normal computers, you have to decide what you’re use case is. Are we talking computers that were connected in some way to the Internet? If they were, then assume they might get hit with malware at some point. We’re mostly back to “no way to say they’re secure.” However, there’s still some risk reduction here. One is to apply an integrity check to what you’re sending, then send it via a reliable UDP-based protocol (eg UDT) while blocking TCP in the kernel. IP and UDP implementations rarely have flaws compared to TCP and similarly complex protocols. Plus, the UDP-based protocols tend to execute at application layer, meaning they’re in user-mode and you can write the code in a safe way yourself.

Last idea is one I came up with and posted here relatively recently [I think]. The method is to create a computer that’s essentially an external device you plug into a normal PC that can access its memory. There’s various PCI and USB embedded boards that might be used for this. The system runs simple, hardened, security focused software that basically moves files. It might have IOMMU-backed DMA or PIO. Like with the guard, it can be told to receive a file while doing it carefully with custom, highly-assured code. It will deliver that file on another machine similarly. This is more like a user’s normal experience with moving files than a typical guard. It might also be fairly cheap for the hardware and perform decent. The slower PIO’s I did a while back were faster than 10Mbps Ethernet by almost double. So, it’s not Gigabit but you’ll live.

Note: And if it’s an air gapped machine, assuming you’ve done that correctly, you can move files from it using write once CD’s or DVD’s. Moving files to it you’re trusting quite a bit of code you might not even know exists. Hence me bringing up things such as guards and secure pluggable transfer devices. If they sound complex, you’d be overwhelmed seeing what people do to securely receive input that might affect anything from microcode to application. Guards are child’s play in comparison to that job. 😉

Nick P August 2, 2014 10:50 PM

@ Scott

Nice comments. The layer below a given piece of software that can affect its operation is always a link in that software’s security chain. Which we INFOSEC pro’s call TCB. Also…

“I regret that I am currently too busy to take on more clients, watch this page (and the News page) for updates. In the meantime I am offering domain registration services – and will continue to provide existing clients with the same high levels of service (of course!)”

…I see you’re other work is going quite well. I think that’s the best apology I’ve ever seen on a web site. 😉

Note: There was once a site whose "hamster-powered servers" went down occasionally with pictures of exhausted and adorable hamsters. I think that was the first site I really laughed at which wasn't a hacker forum or something like that.

Buck August 2, 2014 10:52 PM

@Thoth

If it’s a simple answer you’re after…
Use a custom combination of your own and others’ techniques! Abandon all hopes of ‘one-size-fits-all’ security solutions (magic boxes and services). Reassume your own authority! If one is to cede their own control (in any aspect of life), it seems to be very hard to reattain…

Nick P August 2, 2014 11:13 PM

@ Buck

He could always hand-enter the stuff by using a hex-editor on both machines. That he coded himself in assembler. On an assembler live CD version of KolibriOS. You get machine level control and safe data movement!

Buck August 2, 2014 11:32 PM

@Nick P

Seems appropriate enough 😉
Or perhaps we should ponder more on the types of files that Thoth wishes to transfer… If we’re talkin’ keymat here, I could certainly come up with plenty more guarded possibilities than the options we’ve been provided… Perhaps an air-gapped printer/scanner combo with some home-brewed recognition software could provide for a more fruitful solution..? Even a Morse-coded message sent in the clear has more of my trust than any ‘trusted’ method of ‘secure’ mass communication.

Scott "SFITCS" Ferguson August 3, 2014 12:05 AM

@Nick P

@ Scott

Nice comments. The layer below a given piece of software that can affect its operation is *always* a link in that software’s security chain. Which we INFOSEC pro’s call TCB. Also…

“I regret that I am currently too busy to take on more clients, watch this page (and the News page) for updates. In the meantime I am offering domain registration services – and will continue to provide existing clients with the same high levels of service (of course!)”


…I see you’re other work is going quite well. I think that’s the best apology I’ve ever seen on a web site. 😉


Note: There was once site whose “hamster-powered servers” went down occasionally with pictures of exhausted and adorable hamsters hamsters. I think that was the first I really laughed at which wasn’t a hacker forum or something like that.

Thanks. 🙂

Then you may find this informative:-
https://scottferguson.com.au/cloud/cloud.htm

Or the (Delphic) Oracle function. Ask a question phrased as a web page and receive an answer. e.g. When is Bruce wrong? is:-
http://scottferguson.com.au/when_is_bruce_wrong.html

;p

Scott "SFITCS" Ferguson August 3, 2014 12:29 AM

@Nick P


@ Thoth


There isn’t a provably secure way to transfer files via software on an architecturally insecure machine.

(IMO)

Secure yes. Secret no.
Any secure encryption mechanism is the basis of moving data securely between machines by any method. The integrity of the transfer mechanism should not reduce the security of proper encryption. Relying on the mechanism is to trust the messenger instead of trusting the message (outsourcing responsibility with its attendant failings).

Trying to select a transfer mechanism that can be trusted to preserve the privacy of the data being transferred is probably impossible using current protocols and hardware found on PCs. But securely encrypted data can be securely transferred even with untrusted transfer methods/on untrusted hardware.

Either trust is ‘proven’ or it’s ‘assumed’ (conflating ‘faith’ and ‘fact’). We can ‘place’ faith in something…. but if that faith is misplaced it can bite at some point. So it’s safer to place your faith (unless you have proven the security – it’s just faith) in secure encryption than secure transfer mechanisms.

Verification of encryption is less difficult than verification of hardware/drivers/firmware etc.

That’s not to say we shouldn’t explore more secure transfer mechanisms, only that even with secure transfer methods it would be a security mistake to rely on them alone to protect the data.
If the data is a vehicle and the transfer mechanism is the yard – Lock the vehicle and the yard. But start by securing the vehicle and don’t rely on the yard alone to secure the vehicle.

Clive Robinson August 3, 2014 4:37 AM

@ Q,

The problem with data diodes, like any other guard mechanism, is that they too may have "bugs" by design or lack of design.

Clive Robinson August 3, 2014 6:48 AM

@ Thoth,

What would be the most efficient and safe way to transfer files if USB methods are now considered bad and unsafe ?

Firstly what do you mean by “efficient” and “safe”?

As I’ve pointed out one or two times on this blog there are issues with efficiency when it comes to security. Whilst it is possible to have a secure system for many things in general the more efficient you try to make it the more likey it is to leak information via the opening of inadvertent side channels.

These side channels can be time based or power based or several other domains, the more efficient a device is in any given domain then the more transparent it is to side channels in that domain.

Further, what is often overlooked in designs is that even transmit channels from a secure device can be used as input channels to it, by the downstream receiver simply manipulating the error-handling mechanism. Thus all reliable communications channels are bi-directional: a transmit channel also receives and a receive channel also transmits. Not just at the device interface, but, due to the device's transparency, through it to other devices two or more steps away in either direction.

All guard mechanisms are susceptible to these issues; it's how you manage them which reduces the problem, but generally the management of them limits the likes of bandwidth or increases delay, both of which most users find unacceptable… It's this user issue that the likes of skilled adversaries rely on to exploit guards and similar systems.

I have seen a system which was "locked down" where it was supposedly not possible to send information upstream, only downstream. However I found it was possible to manipulate the downstream channel such that the error-handling mechanism in the downstream device could be activated. This in turn got reflected back into the device and activated its error correction mechanism, which in turn affected the upstream device's error correction mechanism. Thus, by using this reflection, it was possible to send information back upstream…

This demonstrates another issue that even most EmSec designers do not think about, which is that an active device is also a transducer and can take information leaked in one channel domain that is pushed into it and push the information out in a different channel domain. Thus activating the error correction mechanism would affect not just the time domain but the power domain as well. This is similar to the issue in the RF analogue domain where limiting amplitude information translates it into phase information, thus the information still gets past the limiter…

It can be shown that you cannot shut down these covert channels or stop the transparency or translation of information from one domain to another, all you can actually do is limit the amount of information they leak (bandwidth) and how long it takes to get out (delay).

Obviously knowing this reflects on your view / interpretation of “safe”.

But you also have to ask not just what you mean by safe but at what level in the Human-Computer-Communications (HCC) stack.

For instance, you meet somebody in person and as covertly as possible exchange credentials to establish a trust mechanism. Once that person is out of your sight you have no knowledge of what they do with the credentials. Thus reliable trust cannot be established; you always have to suspect the other person has either accidentally or deliberately disclosed the credentials. Thus you cannot consider the trust mechanism safe by a lot of human values.

We have seen this failure with “code signing” if the holder of the private key loses control of it or inadvertently signs bad code with it or any one of a number of other security failures then you end up with bad code on your system.

Similar considerations apply at all levels of the HCC stack, thus “safety” is at best a relative measure at all those levels.

As I’ve mentioned before I’ve a number of old systems including 486s, and I have a number of different microcontroler development platforms all of which have serial communications on them. I can and do configure these to create guards and store and forward pumps with various checking and oversight mechanisms in them. I further restrict any information to be transfered to “plain human readable text” and subject this to certain sanitation processes (such as white space and non alpha char sanitation). Is this safe, well not entirely but it’s safer than many other methods when trying to keep issues off of an otherwise air gapped and fairly physicaly secure system (it is as I’ve mentioned befor in a safe, in a RF cage, in a secure room in guarded premises).

Whilst this is overkill for most purposes, there are some things such as KeyMat generation and management where it is a sensible precaution. However for some people this is almost irresponsibly unsafe for their requirements, and they will have all sorts of other mechanisms both physical and otherwise on top of this.

Obviously such mechanisms get geometrically more expensive and difficult to use, so knowledge of this further affects your view / interpretation of "safe" or what is "acceptably safe".

Czerno August 3, 2014 9:11 AM

@Clive re: back channels, information leaks…

Wow ! A 700+ words Clive post, the usual style and type of content but without the usual spelling/grammar mistakes ! Talk of a back channel : I’d infer you switched from hand held devices to a real computer+keyboard !

That, or switched drugs ;=)

Clive Robinson August 3, 2014 10:44 AM

@ Czerno, Moderator, Bruce,

Err neither,

I’ve had some probs with posting causing time outs at the server end.

Well I played around with the browser and phone including turning off the mechanical keyboard and gone to on screen kby, which for some reason has to have the spell checker on (all of which slows things down 🙁

Well none of it had any effects on the time out issue…

Further investigation with an O'scope and wideband receiver showed there were issues arising from timing in the network. I lodged a complaint along with the relevant timing information and told them to chuck it to third line support not the phone ape hangers and sort it out…

Well the problem has gone away without changes at my end and I assume the server as well. So it appears the service providers have sorted their act –or new snoopware– out. But so far…. no response from either the technical or turd line support at EE (which was T-mobile a moon or so ago).

So the question is do I turn the mechanical KBY back on again and speed things up by turning the spell checker off 😉

Zig Fiedorowicz August 3, 2014 3:35 PM

@ Scott

So the malicious firmware code would need to use some other vulnerability in the host to inject "alien" machine code to execute somewhere on the host system. If it is the usual kind of vulnerability in the OS or some other software running on the host CPU, then this is nothing really new and would be mainly of interest to 3-letter agencies trying to penetrate an air-gapped high value target.

The only way this would be something radically different is if the Berlin researchers found a vulnerability in the software in the microcontroller running Intel's Manageability Engine in the host computer, which has direct memory access to the host's RAM. In that case the usb firmware might be able to inject malicious code into the Intel microcontroller and such an exploit would be invisible to the OS or any other software running on the host CPU. In that case the only remedy might be to replace the motherboard with one whose ME microcontroller has been fixed.

Scott "SFITCS" Ferguson August 3, 2014 11:37 PM

@Zig Fiedorowicz • August 3, 2014 3:35 PM


So the malicious firmware code would need to use some other vulnerability in the host to inject "alien" machine code to execute somewhere on the host system.

In many cases, no.


If it is the usual kind of vulnerability in the OS or some other software running on the host CPU, then this is nothing really new and would be mainly of interest to 3-letter agencies trying to penetrate an air-gapped high value target.

Lacking your experience I would call it something new. Perhaps you could post some references to this “not new attack”. 🙂

I’m struggling to understand why “it” would only be of interest to “3-letter agencies” (and why ASIO/ASIS wouldn’t be interested).

Why do you believe this method would only target “air-gapped” machines of high value?

If you don’t understand the attack how can you declare it “nothing new” – or set parameters on it’s usefulness. Downplaying the unknown is illogical.


The only way this would be something radically different is if the Berlin researchers found a vulnerability in the software in the microcontroller running Intel's Manageability Engine in the host computer, which has direct memory access to the host's RAM. In that case the usb firmware might be able to inject malicious code into the Intel microcontroller and such an exploit would be invisible to the OS or any other software running on the host CPU. In that case the only remedy might be to replace the motherboard with one whose ME microcontroller has been fixed.

The Berlin researchers have not made a full analysis of the attack, so it’s a little premature to state what the attack cannot use – don’t you think?
At no point do they emphatically declare “there is no way this sort of attack could utilize iAMT DAGGER-style exploits”.

FYI there are other DMA based exploits. coughNorthbridgecough

Nor is it the only “invisible exploit”. Exploits of the Southbridge firmware by USB devices are theoretically feasible and would be a logical inclusion in the arsenal of a USB spoofing “smart” USB device.

To me it only strengthens the reasons to take NSA recommendations to use Intel with a large glass of IPECAC (and to continue to take a skeptical view when reading nationalistically fervorous Gish Gallop defenses of the NSA, no matter how synchronistically ironic the pseudonym of the poster).

Thoth August 4, 2014 3:17 AM

What I meant was transferring data between host computers knowing that no surprises would be injected into the data I attempt to transfer, that the data would not be modified, and that nothing would give me a nasty bite.

Imagine you want to transfer your emails from your internet PC to your air-gapped PC: you download the email text data, compress it, and transfer it to the other PC by whatever means, ensuring the means you use doesn't betray you in any way possible.

Transferring keymats is also another possibility.

Switching on both PCs and manually copying hexcodes is one way (probably much safer but more error prone ?).

Of course the fanciful guard devices and other fanciful techniques or even rewriting and rebuilding your own hardware and software stack and creating your custom hardware is another possibility but how practical is it ?

The question I want to bring up is what options everyone has left on hand to go about their daily business and occasionally transfer sensitive or private materials safely. Now we know USBs can host a variety of problems, what options do we have left ?

From the responses I believe our only option left is just hand copy the hexcodes so that no surprises would spring up (some kind of malware … etc …) unless a malware modifies the hexcodes subtly ?

I guess, for the average user who is security conscious and tries to protect their own privacy and doesn't like malware and surprises springing on them … it's over for security for the general public ?

Is security and safety only exclusive for those who understand the underlying mechanics and not available for the security minded average joe ?

I am guessing even for those of us here who have strong technical knowledge and know-how, it's impossible to keep ourselves safe and secure too, considering the uphill battle one faces trying to secure one's own boundaries.

Scott "SFITCS" Ferguson August 4, 2014 4:53 AM

@Thoth


What I meant was transferring data between host computers knowing that no surprises would be injected into the data I attempt to transfer, that the data would not be modified, and that nothing would give me a nasty bite.


Imagine you want to transfer your emails from your internet PC to your air-gapped PC: you download the email text data, compress it, and transfer it to the other PC by whatever means, ensuring the means you use doesn't betray you in any way possible.

Verify the integrity of the data being transferred and obscure the data. Encrypt it.

If you can’t unencrypt the transferred data is was altered during the transfer process.
If you can unencrypt the transferred data it was not altered and even if the hardware or the process was compromised you know the unencrypted data remains secret. Of course if the air-gapped machine is compromised so is the unencrypted data.
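A minimal sketch of that workflow with GnuPG (the key name and filenames are illustrative; signing as well as encrypting adds an explicit origin check on top of decryption failing if the file was mangled):

    # On the internet-facing box
    gpg --sign --encrypt --recipient airgap-key mail.tar.bz2
    # ...move mail.tar.bz2.gpg across by whatever medium...
    # On the air-gapped box: decryption fails loudly on tampering and reports the signature status
    gpg --decrypt mail.tar.bz2.gpg > mail.tar.bz2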


Transferring keymats is also another possibility.

You have two boxes.
One is a medium-security box connected to the internet. It's susceptible to malware and other forms of spying.
The other is a high security box kept disconnected from the internet (air-gapped).

Transferring classified data (keyrings etc) from a box that is known to be less secure, to a box that is more secure has obvious risks. If you must move classified data from untrusted to trusted then you must encrypt it (and reassess the security rating of the air-gapped box). SOP101.


Switching on both PCs and manually copying hexcodes is one way (probably much safer but more error prone ?).

It’s one way of transferring data – not practical (though it sounds good, to the ignorant). It’s not safer (than what?). Probably or no probably.


Of course the fanciful guard devices and other fanciful techniques or even rewriting and rebuilding your own hardware and software stack and creating your custom hardware is another possibility but how practical is it ?

As practical as buying special clothes and booking a haircut for the Rapture?

Your question has flawed logic – as it presupposes hardware is the answer to the question (sophistic).

The correct question is why don’t you encrypt classified data?

It’s a simple question that everyone who loses control of their security fails to adequately answer.


The question I want to bring up is what options everyone has left on hand to go about their daily business and occasionally transfer sensitive or private materials safely. Now we know USBs can host a variety of problems, what options do we have left?

The same options you’ve always had – those you’ve ignored to your peril.

  1. Plan your security processes.
  2. Implement your plan.
  3. Practise proper Operating Procedures.
  4. Encrypt your classified data.

I’m constantly astounded by the amount of energy people will invest in complicated schemes to try and mitigate the risks they moronically take on by refusing to encrypt their data. Sadly, laziness and stupidity don’t adequately explain the phenomena.

Thoth August 4, 2014 8:27 AM

@Scott Ferguson

You said:
“I’m constantly astounded by the amount of energy people will invest in complicated schemes to try and mitigate the risks they moronically take on by refusing to encrypt their data. Sadly, laziness and stupidity don’t adequately explain the phenomena.”

That’s the common idea many of us down here have but not for the people out there who dont have any awareness, interest nor patience with security. If you ask them (non-security people) to key in a password, it is as good as a thorn in the side for them… let alone doing anything more secure.

I do not think hardware or software alone is the problem. In fact, I think the problem is riddled everywhere. Software has its problems (insecure code and all that backdoor stuff), hardware isn't always that secure (mostly insecure), and human processes can be rather complicated, especially if you begin to scale processes into organisations and departments where each portion of an organisation may have its own policies, or even on a national level. On the individual level, the effort to install and set up your secure environment is far more tedious (not just a little more tedious).

I guess amongst all these revelations of security vulnerabilities, have we not asked ourselves what we can change and adapt to comfortably enough, and the level of security we are willing to trade off? A totally secure solution is possible but not going to be easily deployed widely and adopted by many.

I think you somehow miss the point, because if you ask a journalist (e.g. Glenn Greenwald) to follow your SOP101, he might simply scratch his head yet again. Let alone hardware and software customizations, which are out of the question for Glenn Greenwald.

What I am asking for are possible easy-to-use turnkey solutions for people (especially non-techies) to quickly deploy, without wading through the mess of technicalities reserved for technical people like us.

Here’s my first shot at it:
1.) Use writable optical disk. Hash data content and use visual checks for correct data hash. Of course, zip before encrypting data content if needed.
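For the visual check, comparing the tail of a digest on both machines is about as simple as it gets (a sketch; the filename is illustrative):

    sha256sum bundle.zip.gpg   # on the source machine before burning: note the last 8 characters
    sha256sum bundle.zip.gpg   # on the destination after copying from the disc: eyeball the same 8 characters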

More contributions and comments are welcomed.

Clive Robinson August 4, 2014 8:28 AM

@ Thoth, Scott,

As I said the problem is multi-level in the HCC stack, and any solution needs to address all the relevant levels.

Just saying,

    Verify the integrity of the data being transferred

Is easy to say; in practice it's actually very hard to impossible to do.

There are three basic forms of message that you need to deal with,

1, Unverified.
2, Verified by a semi trusted or untrusted source.
3, Verified by a supposedly trusted source.

From the above it can easily be seen that you have to verify or reject any message prior to copying it for being moved onto the secure system.

You not only have to check the message as a whole but also the individual components within it; thus the process can be fraught with difficulties.

Take case 2 above: this covers the likes of patches from software vendors; like it or not, they are a necessity of modern computing. Some vendors assume you have never installed their patches, so each patch they supply is a monolithic archive of all previous patches. The reality is it is beyond just about everyone to break these 20MByte archives up and verify each individual component.

Even when a modest message arrives from a supposedly trusted source, you still have to verify what you get; not to do so is asking for trouble. After all, the credentials could have been stolen or the person subject to duress. Dealing with this is difficult if not impossible unless certain precautions are in place prior to the trust relationship being established. Some of these precautions are purely human in nature; as one gangster was heard to remark, he would only trust people who had a lot lot more to lose than he did.

As for the communication link, either you only send plain text with white space and non-alphas sanitized, or you encrypt or encode the incoming message such that if it's been booby-trapped in some way to target the low-level comms hardware it won't work.
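As a trivial example of the "encode it" option (a sketch; base64 would do just as well as hex), armouring the payload as printable text means the low-level comms handling never sees the raw bytes:

    xxd incoming.bin > incoming.hex      # hex-encode before it touches the restricted channel
    xxd -r incoming.hex > incoming.bin   # decode on the far side once it has passed the checks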

I could go on at length about what you should consider doing at the various levels, but this would be not a blog post but the basis for around five chapters in a book.

But as I said before, the required resources of this sort of security suffer from exponential increase for even a moderate increase of security, so your first steps are to analyse your situation not just technically but in terms of loss if the technical solutions fail.

Q August 4, 2014 9:46 AM

@Clive Robinson • August 3, 2014 4:37 AM

Quote: The problem with datadiodes like any other guard mechanism is that they to may have "bugs" by design or lack of design.

This is not a (scientific) argument. Any system always may or may not have "bugs".

In the future a more fundamental theory than Quantum Theory may be discovered.

This does not imply that Quantum Theory does not hold, even after a more fundamental theory than Quantum Theory is discovered.

Scott "SFITCS" Ferguson August 4, 2014 10:13 AM

@Thoth


@Scott Ferguson


You said:

“I’m constantly astounded by the amount of energy people will invest in complicated schemes to try and mitigate the risks they moronically take on by refusing to encrypt their data. Sadly, laziness and stupidity don’t adequately explain the phenomena.”


That’s the common idea many of us down here have but not for the people out there who don’t have any awareness, interest nor patience with security. If you ask them (non-security people) to key in a password, it is as good as a thorn in the side for them… let alone doing anything more secure.

Security is not magic. Without proper Operating procedures there is no security.
Laziness will always be a major market segment for those that profit from the gullible.


I think you somehow miss the point because your SOP101 if you ask a journalist to do it (e.g. Glenn Greenwald), he might simply scratch his head yet again. Let alone hardware and software customizations (out of question) for Glenn Greenwald.

Speaking of missing the point….
Failure to practise proper OPSec is the problem. That Glenn, in your example, can’t be bothered learning or practising it, just confirms my point. No need to make excuses for him.

Wirewalking is dangerous. Practiced properly it’s not particularly dangerous. Just because someone can’t be bothered doing the training and exercising appropriate discipline doesn’t make it less dangerous.
If you want to walk a wire across the Grand Canyon – research it, plan it, and if that plan proves too risky – do not do it.
Just because “everyone” is wire walking the Grand Canyon doesn’t mean we have to try and develop technology to make it safe – or lobby the government to remove all sharp corners and potential impacts from the world (that would just lower the standards).

Unfortunately everyone wants to surf the web – in their underwear – and they want to extend the privacy they expect in their bedroom to anywhere they connect with their browser. Did I mention they want to do it now (actually now was twenty mouse clicks/taps ago)? Poor impulse control is another factor in bad security. Change control is not.


What I am asking are possible easy to use turnkey solutions for people (especially non-techies) to quickly deploy and not wade through the mess of technicalities reserved for technical people like us.

No. And not likely to be any in the near future (though many companies are betting they can sell something as “that” real soon).

Security is a chain. The weakest link in many cases is the meatbag at either end of it.
Of course the “iwanna” factor will ignore that, and search endlessly for magical solutions (that validate their hard-earned ignorance).

It’s too hard. I don’t want to learn. I don’t have the time (to learn what I don’t know and therefore cannot know how long it will take to learn). etc. etc.
These are not reasons. They are excuses. The problem is psychological – not physical or technical. Sort of a “dear interweb do my homework”/”I demand you make Open Source my way” thing, where impulse and selfishness trumps logic, discipline, and effort.


Here’s my first shot at it:
1.) Use writable optical disk. Hash data content and use visual checks for correct data hash. Of course, zip before encrypting data content if needed.

If you’re encrypting the data, later hashing and visually checking is redundant. But don’t forget to check the CD itself (md5sum or gpg signature).

RAR/Zip is only necessary to get around Joliet limitations and allow gpg encryption of multiple files. Compression is part of the encryption process (e.g. with gpg/pgp).

  1. Tar your data (or zip/rar the directory it’s in)
  2. Encrypt the archive: gpg --clearsign classified.tar.bz2
  3. Make a note of the last few characters of the fingerprint: tail -n 2 < classified.tar.bz2.asc | head -n 1
  4. Transfer the encrypted data to a CD image
  5. Write the last few characters of the archive fingerprint on the CD
  6. Image the burnt CD: dd if=/dev/sr0 of=~/cd.iso
  7. You can md5sum or create a detached gpg signature for the iso image if you wish. Write the last 4 characters of either onto the CD.
  8. Burn the CD
  9. Move the CD to the air-gapped box
  10. Image the burnt CD and check its md5sum or the fingerprint of a detached signature against the original iso image.
  11. Copy the encrypted data off the CD.
  12. Check the gpg fingerprint of the encrypted data.
  13. If all the checks done on the air-gapped box match the original checks – unencrypt and enjoy.

Scott "SFITCS" Ferguson August 4, 2014 10:43 AM

@Clive Robinson


@ Thoth, Scott,


As I said the problem is multi-level in the HCC stack, and any solution needs to address all the relevant levels.

Yes – a solution to attacks that use the stack (and/or attacks that use built-in exploits in the Southbridge (knock knock?)). So that attacks can’t intercept unencrypted data on disks or from mice and keyboards….

Just saying,

Verify the integrity of the data being transferred


Is easy in practice it’s actually very hard to impossible to do.

I didn’t just say “it”.
I said:-


Verify the integrity of the data being transferred and obscure the data. Encrypt it.

and I said it in a specific context. Transferring data from one machine to another.


Is easy in practice it’s actually very hard to impossible to do.

How is gpg encrypting data for transfer to another machine – then decrypting the same data after transferring – “very hard to impossible to do”??

Or are you saying that decrypting gpg is not verification that the data hasn’t been altered during the transfer??

The only hard or impossible part is trusting the box you encrypt and decrypt on. Presume it is compromised (like a house that's been burgled) – that doesn't mean the encryption process has been exploited (so even though my house can be burgled I'll continue to use a gun safe). Nor is it grounds for not using encryption.

Clive Robinson August 4, 2014 10:45 AM

@ Q,

It might not be a scientific argument, but it is currently a “fact of life” few with any knowledge of hardware and software industries would argue against. And there is a reasonable body of evidence produced by research that bears it out.

Personally I think "correct by design" is like a search for the Holy Grail, mainly because we have imperfect knowledge. If you could change that then yes, you might stand a chance of bug-free, but I suspect a "meatbag in the loop" will still keep making mistakes…

As for the theories of natural philosophy, we know Newton was wrong, but his laws will still get you around the solar system with few problems. Many many years ago I was told "Physics is a series of lies, each more accurate than its predecessor…". And if you care to look at cosmic inflation theory, you can see at least one notable cosmologist questioning if the speed of light has always been a limiting constant.

As a Frenchman of note once commented “the more things change the more they stay the same”.

Scott "SFITCS" Ferguson August 4, 2014 11:08 AM

@Thoth


@Scott Ferguson

You said:

“I’m constantly astounded by the amount of energy people will invest in complicated schemes to try and mitigate the risks they moronically take on by refusing to encrypt their data. Sadly, laziness and stupidity don’t adequately explain the phenomena.”

[snipped]

Correction, this should have said:-

  1. Tar your data (or zip/rar the directory it’s in)
  2. Encrypt the archive
  3. Delete the unencrypted data.
  4. Make a note of the last few characters of the fingerprint
  5. Burn the encrypted data to a CD
  6. Write the last few characters of the archive fingerprint on the CD
  7. Image the burnt CD
  8. You can md5sum or create a detached gpg signature for the iso image (if you wish). Write the last 4 characters of either onto the CD.
  9. Move the CD to the air-gapped box
  10. Image the burnt CD and check its md5sum or the fingerprint of a detached signature against the original iso image.
  11. Copy the encrypted data off the CD.
  12. Check the gpg fingerprint of the encrypted data.
  13. If all the checks done on the air-gapped box match the original checks – unencrypt with a high degree of trust that the data has not changed and remains as private as when you first encrypted it.

(Movable Type doesn’t allow code quoting… sorry).

Scott "SFITCS" Ferguson August 4, 2014 11:14 AM

@Thoth


@Scott Ferguson

You said:

“I’m constantly astounded by the amount of energy people will invest in complicated schemes to try and mitigate the risks they moronically take on by refusing to encrypt their data. Sadly, laziness and stupidity don’t adequately explain the phenomena.”

[snipped]

Another correction!:-

  1. Tar your data (or zip/rar the directory it’s in)
  2. Encrypt the archive; do not use --clearsign (I've been drinking).
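For anyone wanting the corrected sequence in one place, a minimal consolidated sketch (the device names, filenames and choice of genisoimage/wodim are assumptions; any equivalent tools will do, and the digest stands in for the fingerprint checks above):

    tar cjf classified.tar.bz2 secret-dir/                    # archive the data
    gpg --encrypt --recipient airgap-key classified.tar.bz2   # encrypt (no --clearsign)
    shred -u classified.tar.bz2                               # delete the unencrypted archive
    md5sum classified.tar.bz2.gpg                             # note the last few characters
    genisoimage -o transfer.iso classified.tar.bz2.gpg        # build the disc image
    wodim -v dev=/dev/sr0 -data transfer.iso                  # burn it; write the digest tail on the disc
    dd if=/dev/sr0 of=burnt.iso && md5sum burnt.iso           # image the burnt disc and checksum it
    # On the air-gapped box: image the disc again, compare the checksums,
    # copy the .gpg file off, re-check its digest, and only then decrypt.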

Nick P August 4, 2014 5:20 PM

@ Thoth

“What I meant was transferring data between host computers knowing that no surprises would be injected into the data I attempt to transfer, modify data or give some nasty bite.

Imagine you want to transfer your emails from your internet PC to your air gap via downloading the email text data, compress them and transfer them to another PC by whatever means and ensuring the means you use doesn’t betray you in anyway possible.”

Thanks for clarifying. Scott's many posts' main recommendation is to compress and GPG the data. It's a good idea, but far from adequate. We must always remember that attackers will exploit any link in the chain. You are trying to send data from a possibly malicious machine to a trusted machine. If it's truly one way, then your requirement is one of the easier problems to solve. Here are your problems:

  1. The data itself might be leaked or modified.
  2. The medium used to move the data might be attacked, such as protocol or firmware.
  3. The data might contain a malicious payload that executes on your trusted machine. And this might be inserted in a way where you don’t realise it, nullifying the advantage of crypto. This has been done by black hats with PDF’s, Word files, music, movies, and so on. Easier with binary.
  4. If the airgapped machine is taken over, its communications methods might be re-activated to stealthily leak data. That was in the NSA TAO catalog.

So, this is the problem in a nutshell. As it always has been, obscuring and tamperproofing the data itself is the easy part. And it’s rarely what they attack. Hackers will instead hit protocols, OS, viewer apps, firmware, and so on. So, the solution must be a total solution which also involves security features for your air gap machine.

CD/DVD Solution

I’m going to build on this first. The hosting computer shouldn’t have any wireless communications at all. Anything non-essential should be disabled in the BIOS, the BIOS locked, and ideally a flash protection feature (eg jumper based) built-in. Auto-run should be disabled if the system has it. The media itself should be write-once and finalized. The main drawbacks are that it costs a disc each time, it doesn’t allow useful two-way communication (eg update service), it’s very slow (CD/DVD writes), and it’s quite manual. The crypto is unnecessary with this design except to keep you from having to destroy the discs. Of course, it provides the advantage where you can have a dedicated password for these transfers that’s saved on each machine.

Note: The untrusted computer sending the files is assumed compromised. The crypto still blocks random third parties with the disc from getting the data. Dumpster diving is main threat vector here.

Network Solution (Low Assurance)

I mention this solution because you wanted a turnkey solution. There are commercial solutions specifically designed to do this. They're called "cross-domain solutions." They usually run on guards. They can be quite easy to use and sometimes support (modified) protocols such as FTP or Windows Update. The commercial ones are quite pricey.

The basic, cheap solution is to build your own. Fortunately, there are already plenty of firewall distro's specifically designed to do this. Take a firewall distro, put it on a cheap board, configure it for one-way networking over UDP, and then you just need an app that will send the packets. A more carefully written policy might allow TCP acknowledgements, but still be one way. The NRL Network Pump works this way, albeit with an extra technique to prevent the ACK's from being used as a timing channel. So, this is a basic solution that's mostly point and click. I'm also sure even the policies and commands to execute to do what I describe (for UDP) are already online somewhere.
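On a Linux-based firewall distro the one-way UDP policy might reduce to something like this (the interface names and port are illustrative, IP forwarding has to be enabled separately, and a real policy would be tighter):

    iptables -P INPUT DROP                                               # default deny
    iptables -P FORWARD DROP
    iptables -A FORWARD -i eth0 -o eth1 -p udp --dport 9000 -j ACCEPT   # untrusted side (eth0) to trusted side (eth1) only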

Firewall is easiest option due to use of existing software and potential automations with scripts. The next easiest (and more secure) option is a data diode. Again, there are commercial options of varying price and features. The good news is there are DIY data diodes for ethernet and fiber online. They essentially modify the cable to only send data one direction, then ensure the apps use them that way. The code receiving the data, the firmware of the medium, the apps executing it, and the layers below are still vulnerable, though.

Many also use serial cables to avoid DMA risk. Certain modifications must be made but the drivers are so simple to modify. I used an IDE because it had working drivers in about every OS, is cheap, can operate in non-DMA mode, and is over 100 times faster than serial. My recent work is basically a dedicated chip (or PCI board) containing and running only what I determine to be trustworthy.

Network or Diode Solution (Medium Assurance)

So, how do we eliminate (or reduce) those risks while avoiding all kinds of complexity in design, installation, etc.? The absolute simplest strategy is to put OpenBSD on a simple embedded board. Connect both computers to it with serial ports. Configure OpenBSD's firewall correctly. On the trusted system, use OpenBSD, Linux with SELinux/SMACK, FreeBSD with Capsicum, or Solaris with Trusted Extensions. The point is you want an OS on the trusted machine that's open, has reasonable protections, has been source audited for years, fixes problems, has a simple app isolation method, and has online guides for about everything.
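If the serial links are brought up as point-to-point network interfaces (e.g. with PPP), "configure OpenBSD's firewall correctly" might boil down to a pf.conf along these lines (the interface names and port are assumptions, and a real ruleset would need tuning):

    # /etc/pf.conf sketch: deny everything, pass one way only, keep no state so replies are dropped
    block all
    pass in  on ppp0 proto udp to port 9000 no state
    pass out on ppp1 proto udp to port 9000 no state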

The effective TCB of the transfer is OpenBSD's networking code, serial driver, and the parts of the kernel they use. This is the highest quality and most secure code in all of UNIX, so that's a good confidence rating. The serial port gives you no DMA and simplicity of driver, reducing risk. With more effort, you can carefully write apps that move the files through the serial port directly, bypassing the network stack. The applications on the trusted PC that use the data should be restricted with MAC policies, dedicated user accounts or sandboxes at the least. That reduces risks of attacks via the data itself.
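Even without custom apps, the crude version of a direct serial transfer looks something like this (Linux-style stty syntax; the device names, speed and filenames are illustrative, and a real tool would add framing, a length header and a checksum):

    stty -F /dev/ttyS0 115200 raw -echo     # both ends: raw mode at a matching speed
    cat /dev/ttyS0 > received.tar.gz.gpg    # on the trusted machine (start this first)
    cat report.tar.gz.gpg > /dev/ttyS0      # on the untrusted machine
    sha256sum received.tar.gz.gpg           # compare against the sender's digest, then stop the receiver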

Note: The Cambridge CHERI capability processor is designed for security and legacy compatibility. They've already put FreeBSD on the prototype. There are FreeBSD firewall distro's. I plan to combine them with open Ethernet (and DMA) I.P. later on for a turn-key solution that just needs a cheap FPGA board with networking. Clive just supplied links to boards so that knocks out one obstacle. 🙂

Network or Diode Solution (High Assurance)

Obviously tradeoffs here are too much for most users so I’ll leave this off. If they have money, there are numerous vendors (ex Fox, Nexor, Tenix) offering data diodes with supporting software. These were rigorously analyzed and pentested in order to achieve their EAL7 certifications. So, that covers the transfer part at least and is turnkey.

Conclusion

There are your options. One must consider the risks. Hopefully, a BSD/Linux with app isolation, a serial cable, and some Googling will do for you. Otherwise, there’s the firewall distro’s, diodes, sandboxes, and so on. You can choose the security and convenience tradeoff you want with the information I’ve given you. Just remember that each link in the chain must be secured. Primarily, the app/kernel on trusted machine receiving data, the data transmission medium, and extra requirement of ensuring no other mediums on trusted machine can be enabled.

Readers following along wanting to know where all the risks are can get a thorough treatment here. That's the framework I've used for years in high assurance security work, which you can freely distribute so long as you give credit. It also discusses secure code vs secure systems, as that was the original topic in that thread.

Figureitout August 4, 2014 10:02 PM

Thoth RE: post on 8/4/14 @ 3:17AM
–I like this post b/c it strikes some core problems, head on. Security, like winning, comes to those who want it the most and will obsessively work for it.

First things first, any data you want to keep more secure than even encrypted data on a machine (subject to transmissions via the electrons buzzing around the keyboard, and the likely LCD/VGA/HDMI combo display you got going on), you keep it on paper. Write on a mobile glass sheet that you wipe after each page, also scribble manically on a few sheets and tuck the data in between those in case there's a way to chemically read the ink/graphite remotely (I bet there is…). Never write the full message in one location, multiple ones. You can communicate a meeting if you want to exchange via insecure methods (so they can find you), but due to mobile creation of the data it should be hard to get that w/o attacking you physically, in which case you need a quick destruction method or back-up plan…

I know this all sounds comical, and it is…I can count on my 2 hands the number of “bro’s” that’d follow this kind of OPSEC to exchange data. They don’t get awkward w/ it, nor sad, it’s a reality and they’re willing to fight it. Such is the reality today. Now w/ the IoT, and tinier chips that use such low power, you’ll have harder times detecting them, insane signals, and they’ll be everywhere. So a “secure” area to write becomes rarer, and only the older people who established certain protocols and codes w/ other people can exchange data, the younger people are f*cked and they’ll be too naive until they find out the hard way.

Otherwise, you can download that infected ISO from that infected website, w/ that malware that spoofs the check-sum, from your infected router, w/ infected programs to burn the CD/DVD, for your infected computer, infected keyboard, infected USB-radio mouse, infected USB flash drive.

That is, unless all self-respecting engineers/coders/cryptographers take back control of their domains from marketing and banks, and make our hardware/protocols, and most especially the software, respectable and trustworthy. As in, openness all the way, knowledge spreading, and mistakes need to be scrutinized such that you’re a potential enemy if you make such a dumb mistake.

Nick P
–One of your better posts in a long, long time. Very practical, and w/in your expertise/comfort zone.

I’m operating from an infected environment, but I still can confirm from multiple places that webpages aren’t entirely corrupted, as my AP is toasted. All I can do is train and prepare for when I make my best effort for all-around secure.

I’m in the process of setting up a better protection scheme, which involves OpenBSD and pfSense; but it’s in an unsafe area…

Again though, I’ll criticize one part, in the interest of making you better and forcing you to be better, which is an extremely common problem in the security community:

“Configure OpenBSD’s firewall correctly.”

This does almost nothing to help anyone besides someone’s ego, and more likely than not they don’t even have the correct configuration…It’s just like a sound-bite, like a media one. It’s just to protect certain niche areas, which I guess is your way of surviving…on people’s ignorance.

/r/netsec had a small decent link on this phenomenon which I can't find now…, no technical info, just saying that it's pretty worthless to not GUIDE people through security setups.

It’s why my blog is dedicated solely to bare-bones walk-thru’s of some things. Like Thoth’s, which was great. When I get a really good tutorial posted I’ll link it, w/ all the data backed up “in multiple places”, in case I need to move the blog as it gets hacked on a security forum and displays porno for the employers looking at my resume…

Nick P August 4, 2014 10:21 PM

@ Figureitout

“One of your better posts in a long, long time. Very practical, and w/in your expertise/comfort zone.”

Why thank you.

re OpenBSD firewall

It has proven value in stopping attacks at the network level. It's used by numerous governments and even in guard solutions pentested/approved by the German government for their use. Their professional firewall hackers couldn't bypass those without a 0-day. So, now you've limited the attackers wanting total control to those smart enough to develop a firmware attack or find a 0-day in the world's most security-audited code. I also made a provision for making the firmware part tiny and the app deprivileged. So, it's quite a noose around the attacker's neck.

Personally, I think top TLA’s probably can find a 0-day due to huge budget for that. Still stops tons of attackers, including lesser TLA’s, in practice. And recall he wanted a solution you can throw together, not the extra work (eg OPSEC/obfuscation) needed to slow/stop major TLA’s. So, a well-configured OpenBSD on embedded hardware with serial ports is one of the better options for quick and dirty mediation.

If I had a guess about the future, I’d say the next easy design for all these issues will leverage OpenFlow. You can easily build almost anything networking with all kinds of policies in that. Still too young to recommend for security purposes, though.

Figureitout August 4, 2014 10:35 PM

Nick P
–You didn’t address the configuration issue, so I’m assuming I’m correct that you just say that to charge people consulting fees and not really have the correct configuration.

Just clarifying.

Thoth August 4, 2014 11:14 PM

@Figureitout

Thanks for your appreciation :).

@Nick P, Clive, Scott

Thanks for your contributions on ideas and posts.

As you may have noticed, I have not posted simple tutorial guides on the use of security software; I wanted to let the dust settle around Truecrypt, and also to reassess some simple yet effective combinations of measures the general public can quickly learn, so they can set up simple barriers first and ramp their security up later once their bases are settled.

The reason I have gone to great lengths to bring up this discussion is that we have seen so much supposed security breaking all over the place (Heartbleed, Apple’s goto fail, USB attacks) and so many uncertainties (NSA attack vectors and policies, Truecrypt going poof all of a sudden, etc.). Those who know security and can secure themselves, and those who don’t know security but are trying to introduce themselves to it, face a huge gap between them, and it shows us the reality of the conflict between the practical and the theoretical.

We cannot assume the users would know how to find some board and solder or script their programs while they are being watched (without a secure base to start off with).

Scott and I have provided a quick-and-dirty CD/DVD secure file transfer mechanism, with Nick P enhancing it (via the use of one-time-use CDs/DVDs).

Nick P has provided a custom-made “guard” using serial ports, with an embedded OpenBSD mini computer to transfer data.

More ideas would be welcome. If anyone has instructions and tutorials on the setup procedures, posting links to the instructions here would be a good idea. Listing the required technical expertise would be useful too.
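
As one concrete piece of the CD/DVD approach, here is a minimal sketch (an illustration only, not the exact procedure described above) of generating and checking a SHA-256 manifest, so the receiving side can verify that what came off the disc is exactly what was burned:

```python
#!/usr/bin/env python3
"""Minimal sketch: SHA-256 manifest for one-way CD/DVD transfers.

Run `manifest.py make <dir> > MANIFEST` before burning, carry MANIFEST with
the disc (ideally over a separate channel), and run
`manifest.py check <dir> < MANIFEST` on the receiving side.
File names and layout are illustrative only.
"""
import hashlib, pathlib, sys

def digest(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def make(root: pathlib.Path) -> None:
    for p in sorted(root.rglob("*")):
        if p.is_file():
            print(f"{digest(p)}  {p.relative_to(root)}")

def check(root: pathlib.Path) -> None:
    bad = 0
    for line in sys.stdin:
        want, name = line.rstrip("\n").split("  ", 1)
        if digest(root / name) != want:
            bad += 1
            print(f"MISMATCH: {name}")
    print("all files match" if bad == 0 else f"{bad} mismatches")
    sys.exit(1 if bad else 0)

if __name__ == "__main__":
    cmd, root = sys.argv[1], pathlib.Path(sys.argv[2])
    make(root) if cmd == "make" else check(root)
```

A real setup would also sign the manifest and carry it separately from the disc, but even this bare version catches silent corruption of the transferred files.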

Nick P August 4, 2014 11:29 PM

@ Figureitout

I might be misremembering, but I thought I implied a person could Google them easily enough. Each piece can be found, and there are whole books on the subject with step-by-step instructions, example policies, and so on. Those are also available for free on the torrentz networks for people with less cash and morals. If I built one, I’d just use Google and the books for most of the steps, so a consulting fee would just be covering labor rather than knowledge.
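
To give a flavor of the firewall part, here’s a minimal default-deny pf.conf sketch. The interface names and the trusted subnet are placeholders, and a real box needs its own policy, testing, and the other hardening steps I mentioned; treat it as a starting point, not “the correct configuration”:

```
# Minimal default-deny sketch for a two-interface OpenBSD guard box.
# Placeholder names/addresses: adjust for your hardware and network.
ext_if = "em0"                     # untrusted side
int_if = "em1"                     # trusted side
table <trusted> persist { 192.168.10.0/24 }

set skip on lo0
block log all                      # default deny everything, log drops
block in quick from urpf-failed    # basic anti-spoofing
pass in  on $int_if from <trusted> to any keep state
pass out on $ext_if proto { tcp udp } from <trusted> to any port { 53 80 443 } keep state
```

Check it with `pfctl -nf /etc/pf.conf` before loading it with `pfctl -f /etc/pf.conf`.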

@ Thoth

“We cannot assume the users would know how to find some board and solder or script their programs while they are being watched (without a secure base to start off with).”

It’s more like you buy a board online from Soekris, get an old PC on Craigslist, or whatever. Then you download the free CD. Then you follow the steps you find in Google for the different parts I mentioned. No soldering required. Just time and effort for each extra bit of assurance. The best way to deal with the scripts and such is to crowdsource some experts to build all that for free or for profit. Then, as long as the hardware exists (or similar hardware does), the users can buy/download that and simply run it.
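
As a toy illustration of the serial-port transfer itself (assuming the third-party pyserial package; framing, checksums, and rate limiting are left out, and a real guard would need them):

```python
# Toy sketch of pushing a file across the guard's serial link (assumes pyserial).
# No framing or error handling here; a real setup would add length prefixes,
# checksums, and a receive-side whitelist of what it will accept.
import serial  # third-party: pip install pyserial

def send_file(path: str, port: str = "/dev/ttyU0", baud: int = 115200) -> None:
    with serial.Serial(port, baud, timeout=5) as link, open(path, "rb") as f:
        while chunk := f.read(1024):
            link.write(chunk)

def receive_file(path: str, port: str = "/dev/ttyU0", baud: int = 115200) -> None:
    with serial.Serial(port, baud, timeout=5) as link, open(path, "wb") as f:
        while chunk := link.read(1024):   # stops after a 5 s gap in traffic
            f.write(chunk)
```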

“More ideas would be welcome. If anyone has instructions and tutorials on the setup procedures, posting links to the instructions here would be a good idea. Listing the required technical expertise would be useful too.”

I’ll at least add it to my list of stuff to get done if I have funding or time. Might even release the non-TLA version free as in beer and OSS.

Figureitout August 4, 2014 11:54 PM

Nick P
–Yes, they can Google them, then get identified as a potential terrorist trying to secure themselves, and have malware injected into their internet computer, which is the only computer they use, and which they use very insecurely until they learn better; by then it’s too late, as the malware has infected all their accounts, memory devices, and other computers.

So, in other words, there’s little reason to pay you for consulting services…?

Torrentz are illegal, and it’s unwise to use them unless you are essentially pushing the risk onto someone else, whether that’s the risk of a ridiculously stupid lawsuit or of straight malware in the torrent.

Not good enough from the security community. The “solutions” all still suck, are untrustworthy, and require people to learn it all themselves. What if the medical community required that? We’d all be dead w/ pin worms crawling out our asses…

Q August 5, 2014 2:11 AM

@Clive Robinson • August 4, 2014 10:45 AM

In the reference at https://wuala.com/FreemoveQuantumExchange/Aspects/Tools/ a proof is given of the correct implementation of the Datadiode FFHDD2+. It is evaluated at assurance level EAL7+.

Given the simple optical separation and how easily it can be verified, it is widely believed to be provably correct.

If you have proof to the contrary, then you have to produce that proof. Otherwise your remarks will not be taken seriously.

Clive Robinson August 5, 2014 8:04 AM

@ Q,

You are obviously new around here, otherwise you would know that saying what you have said is going to make quite a few around here laugh at your belief in the infallibility of the EAL7 process.

Further the page you give times out with at best a partial load, and the home page indicates that it belongs to a Cloud Provider that uses SSL for security… that is more of a sick joke than a funny one, as others around here will confirm.

Further, a cursory glance –I really do not have the time to waste on pages that time out– does not show the page you have given.

Thus many around here will probably suspect, from your comments, that you are related to the organisation in some way, as the equivalent of an “S&M droid”.

Further if you hunt back on this web site you will find I have a low opinion of much that is Quantum Crypto, and have suggested quite a few attacks on such systems that others, including undergrad students, have subsequently verified. This is despite the designers talking about provable security due to “guarantees of quantum physics”…

Quite a few of this blog’s readers are well aware of many other security flaws I have identified that many “pooh-poohed”, only to see them become major security flaws a few years down the line, such as the code-signing flaws and air-gap-crossing techniques that Stuxnet much later came along to prove. So I’m not exactly worried about people saying my credibility is under threat by suggesting I won’t be taken seriously.

I would suggest that rather than talking about the EAL processes, you go and study them and then “start thinking hinky”. If you are any good at it, your eyes will open up and you will, if you wish, be able to earn a good living from it.

The choice as they say is yours to take.

Q August 5, 2014 8:52 AM

@Clive Robinson • August 5, 2014 8:04 AM

Quote: ´Further if you hunt back on this web site you will find I have a low opinion of much that is Quantum Crypto, and have suggested quite a few attacks on such systems that others, including undergrad students, have subsequently verified. This is despite the designers talking about provable security due to “guarantees of quantum physics”…´

It would be interesting if you could provide a security flaw in the information-theoretically provable true-quantum-randomness encryption which is used by the example group wuala.com/pgpstore.

Quote: ´You are obviously new around here, otherwise you would know that saying what you have said is going to make quite a few around here laugh at your belief in the infallibility of the EAL7 process.´

I note that you have not produced any proof of security flaws in the Datadiode FFHDD2+.

Nick P August 5, 2014 8:55 AM

@ Q

The link you gave is a bunch of unrelated stuff thrown together. The few that are EAL7 are for the Fox Data Diode. A data diode lets data flow in one direction. The Fox Data Diode does this for optical Ethernet. It doesn’t support two-way computation or USB devices, so it’s not relevant in a discussion of USB. There’s no USB security product evaluated to EAL7 that I’m aware of.

Nick P August 5, 2014 12:05 PM

@ Q

Your own linked paper proves my point: it pushes a data diode over Ethernet as an alternative to USB sticks for just one-way sharing. Most use cases need two-way communication. USB is a two-way protocol, and it’s not EAL7-evaluated either.

So, my point stands that a security proof for unidirectional Ethernet has nothing to do with securing USB device operation. Nor can one even compare them as it’s apples vs oranges.

Clive Robinson August 5, 2014 3:54 PM

@ Q,

“I note that you have not produced any proof of security flaws in the Datadiode FFHDD2+.”

You obviously have reading issues; I told you quite clearly that, for whatever reason, the page you posted was timing out.

That is, no information on which an opinion could even be attempted was available.

I did google the model number and found it was a product of Fox-IT, but the foxit.com site was closing connections when I tried it at UK lunch time. The only info I could get was not design related; the only thing of note was that apparently someone had put it through the NATO TOR process, which complicates things.

I think even you should realise that you need appropriate level details to make an assessment.

What I can say, based on what Nick P has said of it, is that by breaking the way Ethernet is supposed to work, you can make a data diode just by cutting wires in a Cat5 lead. However, you have to ditch error correction, which has knock-on effects, and also make modifications to driver code, which can be fraught with issues. On the face of it most would not consider it overly difficult to do, and to get away with it for undemanding instrumentation, but without error correction you can lose data, which would cause problems for other uses. Such “chop-a-channel” data diodes are generally considered unreliable and not suitable for many applications where reliability and timeliness are important.
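
To make the trade-off concrete, here is a toy sketch of the receive side of such a link (my own illustration, not anything from the FFHDD2+ design). Only connectionless traffic like UDP can cross a diode, because nothing can flow back; any error handling has to be forward error correction or plain repetition added by the sender.

```python
# Toy receive-only listener for a one-way (data diode) link. UDP only, since
# TCP handshakes and ACKs cannot cross the diode. Lost or corrupted datagrams
# are simply gone unless the sender adds redundancy (repeats or FEC) itself.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))           # port number is an arbitrary example
with open("received.bin", "ab") as out:
    while True:
        datagram, _addr = sock.recvfrom(65535)
        out.write(datagram)            # no way to request a resend
```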

So if this unit is for enterprise or above usage, one of the first places I would look at is how the diode performs error correction or mitigates its use.

As I mentioned the other day, I have seen units where upstream channels –supposedly– did not exist; however, I found a way of bouncing errors off of downstream units that activated the error-correction mechanism in the unit, which in the process opened a low-bandwidth upstream channel.

The reality is either you have error correction or you don’t; if you don’t, then most engineers look at mitigation by rate limiting etc. They then forget that malware can take the limiting off or cause other issues, and the consequence allows for low-bandwidth back channels, driven by human activity.

You have to consider “the whole system”, not its individual parts, when investigating covert channels in a given security system. Failing to do this correctly will allow such channels to be opened in various interesting ways.

I will however have another hunt for the appropriate level of documentation over the weekend, but I already have the feeling it may not be available to the level required. In which case you will have to provide it without encumbrance, and if you want me to do more than a cursory read through it I can make my contract rates and TOCs available to you.

As I’ve already said to you “The choice as they say is yours to take”, mine is not to do charity for those that don’t need it.

Secret Police August 6, 2014 9:59 AM

There are fundamental insecurities in the Linux kernel when dealing with USB automounts as well. If you don’t patch your own kernel with the grsec patches, it’s trivial to escalate to root.

USB is just all-around bad: bad for anti-forensics, bad for the kernel, and full of exploitable firmware.
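
One coarse mitigation, separate from grsec and offered only as a sketch, is to tell the kernel not to authorize newly plugged USB devices by default and then whitelist the ones you trust by hand; the sysfs attribute used below exists on mainline Linux kernels:

```python
# Sketch: refuse to authorize newly plugged USB devices by default (Linux).
# Requires root. Already-connected devices keep working; new ones must be
# enabled by hand via their per-device "authorized" attribute in sysfs.
from pathlib import Path

for ctrl in Path("/sys/bus/usb/devices").glob("usb*"):   # root hubs: usb1, usb2, ...
    attr = ctrl / "authorized_default"
    if attr.exists():
        attr.write_text("0")
        print(f"default-deny set on {ctrl.name}")
```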

Nick P August 6, 2014 11:22 AM

@ Q

Common Criteria 101: What does Owl’s evaluation actually say?

They haven’t delivered a proof that the device is secure: they delivered a security target and an assurance argument that the product meets it. The security target lists specific features and considerations. Anything outside that, like whether the chips they use have insecure extra functionality, might be used against it. Likewise, EAL7 requires an abstract (high-level) design, a formal security policy, a mathematical proof that the abstract design upholds the security policy, a simplified implementation, and an informal correspondence argument that the implementation matches the abstract design.

The implementation itself isn’t mathematically verified for security: not the hardware, the firmware, the software, the effect of the covert-channel analysis on the implementation, the testing/proving tools themselves, the compilers/assemblers/linkers, the integration, and so on. Vulnerabilities have been found in other products in each of these areas. NSA, GCHQ, Russia, and China attack all of them. The good news is that pros typically review and pentest an EAL5+ product, so it gets good code reviews. Yet pros also review and pentest things like SSL, and it’s had plenty of design and implementation flaws.

Due to issues like this, the industry puts the burden of proof on the company claiming security. The company must, in each aspect/component, provide arguments that certain invariants always hold despite malicious input, failure modes, etc. It’s not easy. EAL7 combined with the right security target does a ton of good. Matter of fact, data diodes are so simple (and almost uselessly so) that I’m reasonably sure the Fox diode is probably secure for that one property: for software attacks and strictly one-way configurations, the device can ensure traffic is one-way. And that’s all it can do: the traffic can still try to do things on the other end, like enable the built-in wireless functionality of receiving devices.

So long as you use it one way, its implementation has no problems, it has no backdoors (see BULLRUN), the enemies never have physical access, the administrator makes no mistakes, and so on. All of these are assumptions their security relies on, many listed in their own Security Target. Change one, and the evaluation result no longer applies, per both common sense and the Common Criteria. So, let’s look at Fox’s CC documents.

http://www.commoncriteriaportal.org/search/?cx=016233930414485990345%3Af_zj6spfpx4&cof=FORID%3A11&ie=UTF-8&q=Owl+dual+diode&sa=Search

Owl’s certification history indicates they wisely started with EAL2 for the data diode because it’s cheap, proves almost nothing (i.e. it’s quick), and allows revenue to flow. This let them sell it to pay for more development and a better evaluation, EAL4+. Most of Owl’s products, including the “Dual-Diode” line that references USB, are evaluated to EAL4+. Just like Windows 2000 and Linux… Security engineer and OS developer Jonathan Shapiro summed up nicely what an EAL4+ evaluation means for security. One earlier, more primitive data diode did get certified to EAL7 in its original configuration and design. The hardware, drivers, etc. are un-evaluated in all of them (per their evaluation reports), with the designers claiming they can’t impact security. (evil grin)

So, they started with a very simple product. They kept evaluating it repeatedly to design new products. The basic diode got evaluated to EAL7 at one point. They kept changing and upgrading it in different ways, getting each version evaluated at “strength of function – medium” assurance. And they ignore key aspects of their own product (which NSA attacks, per TAO) in all evaluations. Common Criteria says the overall security of a product is what’s in its security target combined with its evaluation assurance level. All but one of the security targets are certified to low-medium assurance with some extra evaluation components. All the security targets, including the rigorous EAL7 one, leave out stuff attackers can hit, to simplify their evaluation.

Conclusion: Based on the Common Criteria documents, Fox has rigorously designed at least one product to do its job excellently, and the rest are merely tested while reusing the abstract design/policy. Of course, people don’t hack abstractions: they hack hardware, firmware, and software. So, the current Fox products that claim to protect USB (somehow…) can’t be considered secure unless those exact products undergo an EAL7+ evaluation against a security target representing what hackers are actually going to attack. And the hardware/firmware/drivers need to be in that, as the system’s TCB might depend on them regardless of what promises they made to evaluators.

Note: As I write this, you should know I’ve actually worked on EAL6+ designs, broken an EAL7-equivalent due to “out of scope” items, and done years of pen testing. There are almost no products, despite certifications, that have proven secure over time. A few came close, due to hardware-up verification and tough usability tradeoffs. The common link is that every component needs to be documented/verified, their interactions must be verified, failure modes must be verified, the security method must be verified, and everything in the TCB must be shown secure in all states. It’s so hard to do this that only a few commercial products have tried, and the EAL4+ evaluations show even Owl doesn’t do it anymore.

anykeylogger August 11, 2014 10:55 PM

At the end of the day, any malware that happens to be on a USB device has to be able to make it into the target computer. The article talks a lot about PCs which, historically, have been quite easy to compromise.

Just suppose I stuck one of these nasty devices into my Mac. OK, it’s fiendish: it looks like an empty gadget. And then its bad firmware kicks into life and tries to persuade my Mac that files are available. Any such file still has to make it onto my Mac, and it has to be executable to do any harm.

I believe OS X’s inbuilt defences against malicious files – wherever they come from – would not be circumvented by a gadget like this.

Figureitout August 11, 2014 11:08 PM

“I believe OS X’s inbuilt defences against malicious files – wherever they come from – would not be circumvented by a gadget like this.”
anykeylogger
–Well, I have a USB stick that I could plug into your machine if you want to test that… The only reason I’m keeping it is to eventually get a machine I feel comfortable plugging it into and finding out what all is in it…

flxkid August 12, 2014 11:37 AM

I’ve seen this legitimately used. Western Digital ships a USB stick with their Black2 Dual Drive that, when plugged in, acts as an HID and actually fires off the key presses to open the “Run” dialog in Windows, then enters a URL to open the support page where you can download the software for the hard drive!

It was stunning and a bit scary at the same time to see this in action.

sauna August 18, 2014 5:10 AM

A few have mentioned that SD cards are also vulnerable, but nobody has addressed the fact that SD cards have a mechanical switch to make them “read only”. Is it still possible to access and reprogram the firmware?

Railgun_Sniper October 15, 2014 9:05 AM

@Sauna

Flat out said: I’m a long-time reader, and maybe a third-time poster. There are plenty of knowledgeable folks here, and I’d often just be restating an already established opinion, or re-fielding a question that someone else already fielded. Redundancy is only good for storage media and mission-critical components (we could all benefit from a second pancreas or a second brain or heart… if properly implemented. Ettins need not apply.)

A word to the wise on the “read-only switch” technology on cards: these are rarely (as far as I know, never) actually mechanical interlocks, beyond the fact that a finger or screwdriver has to physically move the switch from one position to the other. I have more than once opened such a device in write mode before the OS I was using had proper support for that read-only bit.

So if your hardware host does not “respect” the read only switch… the device is writable at all times. Also, please note that read only for the data storage area and read only for the entire device are unlikely to be the same thing. There may be vendors who do more than just “set a bit” but I doubt it will be at the $1.00 per gig range.
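
For what it’s worth, you can at least see what the host believes about the switch. A quick sketch (the device name is an example; a “1” here only means the host has agreed to honour the tab, not that the card enforces anything):

```python
# Sketch: ask Linux whether it considers a card read-only. This reflects what
# the reader reported for the write-protect tab; it says nothing about what
# the card's own controller would actually accept.
from pathlib import Path

dev = "mmcblk0"   # example device name; substitute your card's block device
ro = Path(f"/sys/block/{dev}/ro").read_text().strip()
print(f"{dev}: {'read-only (as far as the host is concerned)' if ro == '1' else 'writable'}")
```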

As a side note: for that switch to “mechanically” disable reads and writes, it would have to MOVE part of the PCB out of alignment when write mode is disabled. This would require TWO busses, one for reading, one for writing. That’s a lot of extra logic to pack onto what normally costs 50 cents to produce and ship, 1 or 2 bucks to comply with all the government fees, taxes and regulations and sells for 5 bucks or more.

Case in point: I’ve taken a few apart in the past, and all the components are fixed on the PCB; the switch is nothing more than a cheap DIP/toggle switch affixed to the PCB. At best it might be designed to disable the flow of electricity across a channel which, when polled, tells the bus whether it should consider the device to be in read-only mode. This is a typical “is the read-only bit set?” type of protection, which the bus or the software, or both, have to respect. To make it so the CARD itself disables this, in hardware, ON THE CARD, there would have to be extra logic on the PCB to disable writes to the storage area of the unit whenever voltage through the “lock” line is disabled (or enabled, depending on design). Short of the $4.00+ per gig cards, I doubt anyone can do it and still turn a profit. And that’s assuming the Chinese company making them isn’t making more profit from your tax dollars being spent to circumvent your security (read: any TLA) so the people paid by tax dollars (read: any TLA) can get access to your naughty pics to make fake Facebook pages… which apparently has prior art, as of recently. (See the iPhone racy-pics story in the bottom section of the current Crypto-Gram issue, October 2014 to be exact.)

Some time ago I got an evaluation sample of a BenQ camera, and I still remember it storing videos to a Samsung stick I had set to RO with that hardware switch in the lock position. Ironically, the same thing happened with a PNY stick. 16 to 32 GB sticks, for those who are curious. The camera itself was a fantastic piece of equipment, and I still sell its later-date compatriots (which coincidentally sport the exact same behavior, which I consider a failsafe for stupid lusers. Unless you’re a casual spy transporting secrets on your camera flash because you’re too broke to buy those USB cufflinks for sale these days, the actual legit users probably want to write to their camera storage no matter what; they probably value the pictures they can take more than the “oops, lost that one perfect picture moment because I forgot to turn off the lock switch” issue.) Again, we’re in the business of catering to the lowest common denominator, and PEBKAC is the most common threat to security there is. An idiot or ignorant individual at the keyboard, with proper access, can destroy some of the best-laid security systems there are, if the security systems do not account for and deal with said stupidity. Those lock switches are, at best, a comforting illusion.

Equation group February 17, 2015 1:47 PM

I know you’ve heard this many times, but you were right all along: the NSA did use the firmware exploit.

Henny October 7, 2016 7:21 AM

It seems that Sauna and Railgun_Sniper have good points. Preventing software from contaminating the firmware is the main issue. A switch or a physical sensor (Hall effect, ultrasonic, optical, a pin-overvoltage condition); any of these could be engineered so that the customer can both block software access to the firmware and grant it to upgrade or check the firmware. This function has to be hard-wired into the design.
Without this, malware can spread from one USB device to another, and possibly reach the internet if an internet-connected device is on the USB bus.
The ability to read back the firmware would also be useful, again hard-wired, not implemented in firmware. One could then check a newly bought device. Open-sourcing the design and software loses virtually nothing for the manufacturer, as they have patents, and both Russia and China are quite capable of reading any firmware using electron microscopes and micro-manipulation of opened chips.
I found a whole batch of contaminated chips in a shop recently, all reporting 32GB capacity while actually being 8GB. Clearly there are already people able to reprogram these devices in the supply chain. Next they will probably move on to more lucrative collaborations with scammers.

It seems that the firmware falls back to reporting a read-only mode if there is certain tampering with the code or there are device errors. I had Stuxnet on a laptop and it damaged several old and new microSD cards. They give a read-only error in Windows, and Linux shows a cache page error when trying to mount them.

There is a vulnerability to specially engineered devices that isn’t so easy to fix: yes, it is possible to block ordinary supply-chain tampering, but purpose-engineered chips could in theory be slipped into targeted orders. This is worrying; random sampling might help, but there is a lot at stake if genetics laboratories, nuclear plants, factories, banks, etc. were targeted with data editing at the storage-chip level. We have already had viral genes introduced into GM crops inadvertently.
