Jumping Airgaps with a Laser and a Scanner

Researchers have configured two computers to talk to each other using a laser and a scanner.

Scanners work by detecting reflected light on their glass pane. The light creates a charge that the scanner translates into binary, which gets converted into an image. But scanners are sensitive to any change of light in the room, even when paper is on the glass pane or when the light source is infrared, and those changes alter the charges that get converted to binary. This means signals can be sent through the scanner by flashing light at its glass pane, using either a visible light source or an infrared laser that is invisible to human eyes.
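
To make the channel concrete, here is a rough sketch, not taken from the paper: it assumes a simple on-off-keying scheme in which the laser is held on or off for fixed-length time slots, and the receiving malware averages the brightness of the scan samples in each slot to recover the bits. All names and numbers are invented for the illustration.

    /* Hypothetical on-off-keying decoder (illustration only, not the
     * researchers' protocol).  Transmit side: laser on = bit 1, laser
     * off = bit 0, one fixed-length slot per bit.  Receive side: the
     * malware triggers a scan and averages brightness per slot. */
    #include <stdio.h>
    #include <stdint.h>

    #define SLOT_SAMPLES 64      /* assumed scan samples per bit slot */
    #define THRESHOLD    128     /* brightness threshold, 0..255      */

    /* Recover bits from a buffer of 8-bit brightness samples. */
    static int decode_bits(const uint8_t *samples, size_t n_samples,
                           uint8_t *bits_out, size_t max_bits)
    {
        size_t n_bits = n_samples / SLOT_SAMPLES;
        if (n_bits > max_bits)
            n_bits = max_bits;

        for (size_t b = 0; b < n_bits; b++) {
            uint32_t sum = 0;
            for (size_t i = 0; i < SLOT_SAMPLES; i++)
                sum += samples[b * SLOT_SAMPLES + i];
            bits_out[b] = (sum / SLOT_SAMPLES) > THRESHOLD; /* 1 = laser on */
        }
        return (int)n_bits;
    }

    int main(void)
    {
        /* Fake scan data: two "laser on" slots followed by one "off" slot. */
        uint8_t samples[3 * SLOT_SAMPLES];
        for (size_t i = 0; i < sizeof samples; i++)
            samples[i] = (i < 2 * SLOT_SAMPLES) ? 200 : 30;

        uint8_t bits[8];
        int n = decode_bits(samples, sizeof samples, bits, 8);
        for (int i = 0; i < n; i++)
            printf("%d", bits[i]);
        printf("\n");    /* prints 110 */
        return 0;
    }

A real channel would also need synchronization, framing, and error handling on top of this basic modulation idea.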

There are a couple of caveats to the attack—the malware to decode the signals has to already be installed on a system on the network, and the lid on the scanner has to be at least partially open to receive the light. It’s not unusual for workers to leave scanner lids open after using them, however, and an attacker could also pay a cleaning crew or other worker to leave the lid open at night.

The setup is that there’s malware on the computer connected to the scanner, and that computer isn’t on the Internet. This technique allows an attacker to communicate with that computer. For extra coolness, the laser can be mounted on a drone.

Here’s the paper. And two videos.

Posted on April 28, 2017 at 12:48 PM

Comments

Who? April 28, 2017 1:17 PM

There are a couple of caveats to the attack — the malware to decode the signals has to already be installed on a system on the network, and the lid on the scanner has to be at least partially open to receive the light. It’s not unusual for workers to leave scanner lids open after using them, however, and an attacker could also pay a cleaning crew or other worker to leave the lid open at night.

I would say this attack is too complicated.

If you are able to infect the airgapped network with malware (a prerequisite for this attack to succeed), then you only need to pay a cleaning crew or other worker to plug a small USB device into a computer on the airgapped network. Malware can exfiltrate information using that USB drive. The USB key can even be the vector that infects the computer!

Do you think plugging in a USB drive might be noticed? Then imagine how noticeable a laser targeting a scanner would be!

A much easier attack would be pointing a laser at one of the multiple webcams attached to computers these days and receiving the replies from the webcam activity LED. This attack is even easier on Apple computers, as the webcam LED is software controlled.

I think this is another piece of uninspired drone-targets-airgapped-network research.

Who? April 28, 2017 1:26 PM

Another possible attack would be plugging a wireless keyboard dongle into a computer on the airgapped network and controlling it from the drone. You could ask the cleaning crew or worker to turn on a display too. Data could then be exfiltrated by reading the computer display.

Tatütata April 28, 2017 1:42 PM

The air-gap exploits reported are progressively delving deeper and deeper into Rube Goldberg competition territory (*). Is that stunt terribly useful? My wide-agape mouth is exhaling large amounts of CO2…

The attacker who would be able to pull this off might just as well steal the coveted device, or plant a bug somewhere.


  (*) or Heath Robinson if your drink is tea.

Clive Robinson April 28, 2017 2:17 PM

Hmm “energy” jumping the “air gap” what a quaint idea “who’d have thunk?” 😉

Back in the 1980s and 90s there were "personal organisers" that had IR links to talk to each other and to PCs. Today we have smartphones, pads, tablets, and laptops with light sensors in them, which not only talk to the OS to control screen brightness but also, in some cases, make themselves available to remote web servers etc…

Then there are sensors in web cams etc that likewise talk to the drivers lurking beneath the OS.

In other words enough for at least ten papers in the publish or die world of academia.

What I would be interested in seeing is someone developing malware for the microcontrollers to turn status LEDs into photodiodes.

r April 28, 2017 5:28 PM

@Who?,

Too complicated?

Too difficult?

If there's no other way to exfil, the method may be worth one's while.

Maybe you or I couldn't find any specific use or instance to which this could be applied, but others, perhaps operating at a more global scope, could certainly find some niche.

Ben Nassi April 29, 2017 4:41 AM

Hi Guys,

I read your comments regarding the post.
I think some of you didn't understand some basic things about covert channels.
Let's try to make things clearer.
There are three purposes for establishing a covert channel:
1. Exfiltration of data
2. Infiltration of data
3. Full communication channel

While 1 and 3 are well understood by the security community, 2 is somewhat less known. Infiltrating data into an isolated organization can serve different use cases, among them:

  1. Triggering a red-button activity (starting cyber warfare, e.g. launching a missile)
  2. Self-destruction of the malware
  3. Shutting down a protocol, service, etc.

In all of these use cases, the attacker wants to communicate with and control pre-installed malware. While there are many methods of establishing a covert channel for exfiltration of data, there are only a few methods of infiltrating data into an organization, among them thermal methods and acoustic methods.

The problem with the thermal methods is their low transmission rates and some of the permissions that the malware/attacker has to obtain (e.g. one piece of research used the organization's HVAC to infiltrate a command). The problem with the acoustic methods is that security-conscious organizations are fully aware of them and forbid connecting speakers and microphones to their computers, preventing an attacker from establishing this covert channel.

One of you even mentioned paying a worker over and over to plug in a USB device to do the things you want, instead of communicating with the malware. This is probably the worst thing you can do as an attacker, because it is not under the radar at all. Each time the USB device is plugged in, the activity gets logged, making it easy for the IT team to trace the connection back to the worker who did it. There is a big difference between using a worker once and using a worker repeatedly.

To summarize the attack's contributions:

  1. It uses an organization's scanner, which organizations do not consider a means of infiltrating data (unlike microphones and speakers)
  2. It provides higher transmission rates compared to any other method whose purpose is infiltration
  3. It can be done under the radar using an IR laser
  4. The malware does not require any special permissions, other than being able to launch a scan and receive the output

To the person who wrote that it is too complicated: I think you don't read enough about attacks in the real world if you think this attack is too complicated. Was Stuxnet an easy attack? The security attacks I have read about in the last few years keep getting more and more complicated. This is the result of trying to bypass IDS, IPS, firewalls, and other security measures.

To the person who wrote "it is just drone research": I suggest you look at the second video, which shows how an IoT device can be hijacked by an attacker who is not even connected to the organization's network in order to modulate the commands to the malware (it was demonstrated using a smart bulb).

I suggest understanding the entire topic better before criticizing the work.

For any other questions you can contact me at nassidt@gmail.com

Regards,
Ben Nassi

Who? April 29, 2017 4:49 AM

@ r

Just look at the requirements to make this attack feasible.

This attack requires infecting a computer on the airgapped network before it can be effective. If that network is so well protected that only "energy jumping the airgap" (in Clive's words) is possible, then how does a computer on that network get infected in the first place? If you can introduce malware onto the airgapped network, then there are much easier ways to exfiltrate data, or to control the airgapped systems, using the malware itself and, possibly, the same device that served as the infection vector. One can even connect a wireless keyboard dongle to a USB port (managed from a drone) to control the airgapped systems. Remember, another requirement for this attack to be successful is getting help from insiders, not only to open the scanner lid but also to infect some systems on the airgapped network.

Seriously, this research looks like a typical academic paper whose only goal is getting some temporary notoriety in the press or securing a job at a university.

Physically it is possible to turn an LED into a photodiode; the two devices are mostly the same, just as microphones and speakers are. Like Clive, I would be interested in seeing research that turns an innocuous activity LED not only into an exfiltration channel (that would be easy, right?) but also into a channel for receiving commands.

r April 29, 2017 7:21 AM

@Who?

Let’s break ‘airgapped malware’ down,

Do you really believe Stuxnet's preprogrammed behavior was controlled? It was an automaton AT BEST; once released, it seems to have expanded beyond its initially intended target.

Malware is dumb (at this current point in time). No matter what you think of its inclusions, airgap-jumping malware is like a well-trained paratrooper: its chances of success go way up the better prepared both of you are with respect to support and training prior to deployment, and whatever post-deployment support is provided.

Further, I avoided mentioning ingress of further tweaks behind this glass door because I thought it would be moot to a thoughtful researcher.

I guess not. I did see and read this article last month, and complicated is not a complication for long when resources and necessity are there.

If you don't think your printers, scanners, and cameras are capable of being coerced, leave them on as unattended company. 🙂

r April 29, 2017 7:23 AM

ANY method of communicating behind enemy lines is a valuable asset in the right hands.

r April 29, 2017 7:27 AM

You might guard your ports, both the ephemeral and the physical.

But how well do you guard the office equipment next to the coffee machine?

Clive Robinson April 29, 2017 7:42 AM

@ Ben Nassi,

I suggest understanding the entire topic better before criticizing the work.

Hmm… Some of us here not only discovered air-gap-jumping methods over thirty years ago, we also worked out ways that do not require malware on the gapped machine (look for "EM fault injection" attacks). Oh, and we also gave information to those who went on to write the very, very few books that cover the subject. Some of us discussed air-gap-crossing techniques here, long before Stuxnet used exactly the same idea, when discussing how to attack voting machines.

I've frequently complained about just how far behind academia is and how little academic research has been carried out on the subject, despite dropping fairly big hints to people about how to inject faults impressed on energy signals. Oddly, perhaps, it appears from the little information we have that the western IC and SigInt agencies are also behind the times. Which is odd, because for most of these things you only need K12 physics to understand why they work.

It would be nice if occasionally the academic community actually looked out the window of their ivory tower and at the very least acknowledged those who have broken the trail for them to follow. I know from experience, just as Duncan Campbell does, that the IC and SigInt agencies will simply steal the ideas and give them to others to profit by, but then they are reputed to have no honour, along with no sense of justice or morals.

Oh, by the way, remember your basic rules of physics about the transfer of energy by radiation, convection, and conduction. When it comes to the mechanical vibration that is often called sound, many forget it travels far better in many mediums than in air, and usually much, much further (it's why in the movies people put their ear to the rail to hear a train coming). Further, remember that all inductors, due to their magnetic fields, generate very small amounts of electrical energy when subjected to a mechanical wave. It's especially bad in radio-frequency oscillators, where design and other engineers call it "microphonics". In fact it's important to note that many (but not all) transducers are bidirectional in their energy conversions. Thus a speaker can be a microphone, and a DC motor can be and is a generator, a useful fact when you are designing electronic speed controllers.

Anyway, that should give you a few more ideas for setting up other covert channels, especially ones that do not require the infiltration of malware either directly or via the supply chain.

But something else for you to consider: most systems with both inputs and outputs are "transparent", in that you can send signals through them without the system being aware of it. The obvious one is a lowish-bandwidth forward channel from a keyboard to a network, as Matt Blaze and his students demonstrated a decade or so ago. But less obvious is the back channel from the output to the input, caused by generating error conditions at the output. This sort of thing can work back upstream through quite a few things, including a number of data diodes.

Have fun investigating further.

Who? April 29, 2017 9:41 AM

@ r

If you don't think your printers, scanners, and cameras are capable of being coerced, leave them on as unattended company. 🙂

Please do not put words in my mouth, and read my posts in this thread.

Who? April 29, 2017 9:57 AM

@ r

I am talking about this paragraph:

A much easier attack would be pointing a laser at one of the multiple webcams attached to computers these days and receiving the replies from the webcam activity LED. This attack is even easier on Apple computers, as the webcam LED is software controlled.

So, indeed, I am aware of the risks related to these devices.

I understand the authors of this research are under pressure to publish; it is the way academia works right now. However, as I said before, there are no new ideas in this work, only an overly convoluted way to solve a problem.

Fellow Bunny Rabbit April 29, 2017 11:00 AM

Jumping airgaps?

Apparently I missed the memo defining what an “airgap” is or why it is significant to computer security. To me there is no logical significance as to whether or not digital communication of bits (0s and 1s) occurs over wired or wireless “air-gapped” networks. The same malware just as easily propagates over both, it would seem to me.

Yet I remain puzzled because “airgap” is quite a buzzword in the computer security snake-oil sales and consulting field.

albert April 29, 2017 11:23 AM

@Ben,

Stuxnet itself is complicated, but it was useless against the Iranian air-gapped system without one of the 'good guys' plugging a USB drive into the system. Eliminating USB ports would have closed that attack vector. Do you recall how long the centrifuge system was operating before the attack?

@Clive,

Regarding status LEDs, I've wanted to try some tests, but never have the time. There are different ways to drive LEDs; most are probably driven by 'driver chips'. It would be trivial for manufacturers to use bicolor LEDs (better IR/color). Once those traces disappear into the chip, GOK what happens to the signals. If they are software controlled, then Al Betzerov.

@E.T. Cetera,

Re: drones. Drones can stick a tiny sensor on a window at 3 AM and Bob's your uncle. I would disguise it as something natural, like the Dead Fish camera. Guess where the lens was :)

. .. . .. — ….

Ben Nassi April 29, 2017 1:06 PM

Hi Again,

I want to make some things clearer:

@Who – Covert channels – This entire area of research is based on the assumption that malware has been installed on one of the two parties (the party that transmits/modulates the signal or the party that receives/demodulates the message). Do not confuse this with side-channel attacks.

@albert – Regarding how to infect a computer on an isolated (air-gapped) network:

I have a few things to mention about it:

A. There are methods other than USB. A supply chain attack is one of them, and it doesn't require any insider (worker) to plug in a USB device.

B. While some of you may say that it is not practical to infect an air-gapped computer using the methods we are discussing, you can read about the following malware, some of which was found on isolated networks: Stuxnet, Duqu 2.0, the Equation Group APT, Conficker, Mahdi, etc. They were found on the networks of militaries, governments, diplomatic services, telecoms, nuclear research facilities, and others. You can find some of the links in the article.

@Who – "plugging in a wireless keyboard dongle". I think it is bad to leave traces. I don't know any attacker who will use a method that leaves traces. Do you?

@Who – It seems to me that you don't understand what kinds of means governments use these days. You need to stop with your USB example. Even a six-year-old attacker would not leave traces.

Finally, regarding the open lid: this is the only logical objection I have heard here. In the last few days we sampled organizations with dozens of scanners to see how many were left fully or half open. Guess what? Most of them tend to be left open. Scanners are not part of any organization's security policy. Scanners, unlike cameras, microphones, and speakers, are not a known vector for an attacker.

Nick P April 29, 2017 1:50 PM

@ Clive Robinson

There's been a bunch of work with LEDs. I thought I posted that before. Anyway, here's one on LEDs that may come from Joe Loughry, who posted here. Here's one on a DEFCON badge. The group behind the OP has one for hard drives and LCDs. People are all over this stuff.

@ Ben

Good work on your research. All the stuff in Elovici’s lab is interesting. Clive predicted quite a few of them in the abstract long ago with the concept that any ability to move matter or energy between two machines is a potential side channel. His concept is energy gapping: preventing as many forms of energy as possible from moving between two machines. How to do that is an open question almost nobody is looking into since they’re still thinking in the dated form of “air” gaps. That’s useful but changing the word can change the mindset to find more stuff.

I encourage you to drop this idea on people in your research group to see what kind of inexpensive shielding or side-channel safes (per PC, rack, or room) might counter the vast majority of them. Selling them to non-defense customers would be a huge enabler, as it's still illegal to buy TEMPEST gear from defense contractors here. Your group is probably the most qualified to do it, as far as the combination of skill and willingness to publish publicly goes. Defense contractors in Five Eyes usually don't like to do that second part. 😉

grgarchr April 29, 2017 3:12 PM

Hackers were able to listen to me in my room even though I unplugged my microphone and webcam. How is this possible? I also received follower requests from fake Twitter accounts. There are a lot of fake Twitter accounts created by cybercriminals.

Wael April 29, 2017 4:22 PM

@Bruce,

Good heavens, man! The medicine stopped working!

Joking aside…

@ Clive Robinson,

What I would be interested in seeing is someone developing malware for the microcontrollers to turn status LEDs into photodiodes.

I doubt that's doable at a large scale. This definitely has hardware configuration dependencies. While, through the principle of reciprocity in electromagnetism, a transducer can be either a transmitter or a receiver with varying degrees of efficiency, that piece of malware may need a pre-designed piece of HW ready for software malware updates. In other words, if you say you'd be interested in seeing it, then chances are it's not doable; there, I gave you the bottom line.

So… on air-gap terminology: It doesn’t mean that a wirelessly connected device is “air-gapped”…
If you enclose a device in a hypothetical enclosure, then that enclosure needs to meet certain properties for that device to be considered properly air-gapped, or energy-gapped as @Clive Robinson likes to say. I suggest we distinguish between several ascending levels of air-gaps.

Level 0: System is isolated from the Internet
Level 1: System is Level 0 + sound, light, electromagnetic, and kinetic-vibration isolation (energy-gapped)
Level 2: System is Level 1 + controlled physical access of human operators
Level 3: System is stored inside the earth. Heh, and you didn't think we knew about that? In your face, TLAs 🙂
Level 4: System doesn't exist; it's a figment of the attacker's imagination. (This is a valid level, by the way.)

albert April 29, 2017 4:35 PM

@Ben,
I know there are many ways to bypass air-gapped systems. In reference to -the- Stuxnet attack on Iran, IIRC, the USB drive was brought in by the vendor to update the PLCs. I don’t know the details, but it does raise a -lot- of questions that have nothing to do with Stuxnet or its operation. Now I find that the worm could have been introduced anywhere along the supply chain, rendering air-gapping useless. Someone in the plant with Siemens PLC expertise could have seen the problem in the code; it’s not that difficult. My bet is that they farmed out everything, and that turned out to be dangerous.

. .. . .. — ….

Anura April 29, 2017 9:02 PM

@Wael

Level 4 should be light gapped. Light gapped systems make use of speed of light delay to ensure data obtained from signals are too old to be useful to the attacker.
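
As a back-of-the-envelope aside (my own numbers, not Anura's): at roughly 3 × 10^8 m/s, even an hour of pure propagation delay requires the signal to cross about 10^12 metres, several times the Earth-Sun distance, so a "light gap" of any useful size is astronomical rather than architectural. A quick calculation:

    /* Back-of-the-envelope: distance needed for a given speed-of-light delay. */
    #include <stdio.h>

    int main(void)
    {
        const double c = 2.998e8;                        /* m/s */
        const double delays_s[] = { 1e-9, 1.0, 3600.0 }; /* 1 ns, 1 s, 1 hour */

        for (int i = 0; i < 3; i++)
            printf("delay %10.3e s -> distance %10.3e m\n",
                   delays_s[i], c * delays_s[i]);
        /* ~0.3 m, ~3e8 m (most of the way to the Moon), ~1.1e12 m
         * (about seven times the Earth-Sun distance). */
        return 0;
    }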

Wael April 29, 2017 9:34 PM

@Anura,

signals are too old to be useful to the attacker.

Won't the bad guys still be able to reconstruct the needed information? Is a time delay on the order of a few nanoseconds a formidable barrier?

I mean, you are experiencing time delays when you're reading this text. I must be misunderstanding something, unless you're talking about keying material that's valid only under specific temporal (and maybe spatial) constraints or frames. Interesting! I fired this neuron a few years ago, and it keeps haunting me every now and then.

Wael April 30, 2017 1:57 AM

@fifteen_billion_years,

The future is starting to look bright again 😉

It’s a matter of perspective! One man’s “dim” is a TLAs “bright” 🙂 Zero-sum game, one may argue.

By the way, what do you mean “again”? When, in the past, has the future ever looked bright? It’s downhill from now on, ma man. You know, the second law of Thermodynamics…

Ben Nassi April 30, 2017 5:56 AM

Hi All –

@Albert – Please read about the Equation Group APT: malware that was found on an isolated network and that used a supply chain attack for infection.

@ Nick P – Let me just see if I understood you correctly. Do you suggest an anti briding the airgap technique?

@Wael – Where did you come up with the described levels of isolation? Are they taken from the SCADA security definitions?

Thanks
Ben

Clive Robinson April 30, 2017 10:16 AM

@ Fellow Bunny Rabbit,

Apparently I missed the memo defining what an “airgap” is or why it is significant to computer security.

If you are under fifty, then yeh you probably missed the memo.

Back then, the only way to get data into a computer was by magnetic tape or over a serial data line that had grown out of the old telex machines.

Thus the way to make the computer secure from intrusion was to isolate it electrically, and, if you had the appropriate security clearance, to read the attachment to the memo that talked about TEMPEST.

In the mid-1970s there was a BBC television programme called "Tomorrow's World" that showed some equipment reproducing the image on a VDU screen from a considerable distance away, which caused quite a kerfuffle in some circles, especially the insurance/banking sector. Then, a decade later, another memo issued from outside the government security clique gave rise to Van Eck phreaking.

Around this time the PC was getting quite widespread, and information, whilst still going down serial lines, was mainly transported by floppy disk via what became known as "sneakernet". It was only entering the 1990s that local area networks started making their way down through corporate culture.

But the “air-gap” name had stuck and we still call it that (just like “Robin Red Breast” even though it’s orange).

During the 80s I independently discovered some interesting tricks with low-power CPUs and their susceptibility not just to EM fields but to modulated EM fields, where you could cause the execution of a CPU to change in predictable ways. Which is great if you can do it to an electronic wallet in a tamper-proof/evident case… With hindsight I can see why certain management types in the company that made the wallets did not want to know, and similarly why those attached to a SigInt agency feigned a lack of further interest after seeing it demonstrated.

Anyway, as any K12 student should be able to work out from their physics lessons, any energy source can have information impressed on it and, by conduction, radiation, and even convection, carry that information into or out of a communications, processing, or storage system.

Thus I started calling the higher type of isolation that various government agencies try to achieve "energy gapping", hence "energy-gap". The question now becomes: will technical correctness overcome historical shorthand? 😉

@ Albert,

There are different ways to drive LEDs; most are probably driven by 'driver chips'. It would be trivial for manufacturers to use bicolor LEDs (better IR/color). Once those traces disappear into the chip, GOK what happens to the signals. If they are software controlled, then Al Betzerov.

Read on to my reply to @Wael next 😉

@ Wael,

I doubt that’s doable at a large scale. This definitely has hardware configuration dependencies… …that piece of malware may need a pre-designed piece of HW ready for software malware updates

Think about PC keyboards and those three LEDs: there are sure one heck of a lot of them and, like audio chips in PCs, things tend to be oh so standard…

To do it, however, you need to be able to do two things:

1, Control the diode bias point.
2, Read in the small changes caused by the photovoltaic effect.

There are many ways you can do each of these with discrete components, but there would be quite a few of them and it would be fairly obvious at a glance.

However, as with power supplies, you can strip out a lot of components and get greater precision by using pulse-width modulation. With just the LED, a resistor, and a capacitor, you switch the driver line of the microcontroller not just to output a pulse waveform but also to act as an input and measure the integration slope. Thus, in a similar way to how DC motor controllers measure speed of rotation from the back EMF, the circuit can measure the light level from the change caused by the photovoltaic effect.

The chances are most people would ignore it if told the PWM is just a way to drive the LED.
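
A minimal sketch of that arrangement in C, with the caveat that the gpio, adc, and delay helpers below are hypothetical placeholders for whatever the target microcontroller actually provides; it is meant to show the shape of the idea, not a working driver.

    /* Sketch: abuse a status LED (plus its series resistor and a small
     * capacitor) as a crude light sensor.  The HAL calls below are
     * hypothetical; every real microcontroller names these differently. */
    #include <stdint.h>

    void     gpio_mode_output(int pin);
    void     gpio_mode_input(int pin);   /* high-impedance */
    void     gpio_write(int pin, int level);
    uint16_t adc_read(int pin);          /* 0..4095 */
    void     delay_us(uint32_t us);

    #define LED_PIN 5

    /* One PWM period: drive the LED as usual, then float the pin briefly
     * and sample the RC node twice to estimate its decay slope, which
     * shifts with the photocurrent the LED generates under illumination. */
    static uint16_t led_sense_one_period(void)
    {
        gpio_mode_output(LED_PIN);
        gpio_write(LED_PIN, 1);          /* normal "LED on" part of the cycle */
        delay_us(900);

        gpio_mode_input(LED_PIN);        /* float the pin: let the RC node drift */
        uint16_t v0 = adc_read(LED_PIN);
        delay_us(100);
        uint16_t v1 = adc_read(LED_PIN);

        return (v0 > v1) ? (uint16_t)(v0 - v1) : (uint16_t)(v1 - v0);
    }

    /* Recover one bit by averaging the slope over many periods and
     * comparing against a threshold calibrated for ambient light. */
    int led_receive_bit(uint16_t threshold)
    {
        uint32_t acc = 0;
        for (int i = 0; i < 32; i++)
            acc += led_sense_one_period();
        return (acc / 32) > threshold;
    }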

Nick P April 30, 2017 10:38 AM

@ Ben

“Let me just see if I understood you correctly. Do you suggest an anti briding the airgap technique?”

I don't know the meaning of the word anti-briding. What I'm saying is that the emanation attacks that others did for decades led to shielding research and TEMPEST-certified computers. Certain things were hard to shield at the individual component level. That led to TEMPEST safes that stored the computers or equipment. Likewise, they made whole, windowless rooms to hold in the emanations, with sound proofing/masking too. That knocks out some channels in RF, sound, and light. Most of that is classified or of limited availability, though. We need more methods like that, we need them described publicly so people can use them, and we need them to provably block more side channels.

As far as the terms go, "air gap" specifically means there's air between computers to block the risk of connections. We know audio and light travel through air; side channels based on those bypass air-gapped computers. That's why Clive invented the term "energy gapping", since it's more honest: any form of energy that one device can send and another can receive is a potential side channel. So we've been suggesting that those discussing side channels adopt the new, more accurate term. As a side effect, it might get people thinking about every form of energy and interaction between things, to exhaustively find side channels.

r April 30, 2017 10:44 AM

@Wael, Ben, Nick P, Anura

Off the grid
Off the radar
Off the network
Restricted access
Enclosed
Isolated

In no certain order or depth; it's interesting that Ben specifically asks how (or where) those levels are defined.

As for time delays (@Anura): delaying a pulsed signal would do nothing for waiting ears except delay the piquing of their interest. Delaying and injecting noise, ECC'd constructs excluded, should be effective against all but the shortest or most overpowering of signals.

ECC can overcome noise; patience and multichannel listeners can overcome time.

That’s all my little brain can contribute at this point.

Wael April 30, 2017 11:39 AM

@Nick P, @Ben Nassi,

I don’t know the meaning of the word anti-briding…

Ben forgot the ‘g’; the word is ‘bridging’. Not surprising given the meaning of the handle name (son of forgetful.) 🙂

@+=@r,

Classification was relatively arbitrary. Wanted to distinguish between levels of ‘air-gapping’ just to clarify what people really mean when they use that word.

Wael April 30, 2017 11:59 AM

@Clive Robinson,

If we don't have a pre-designed piece of HW ready for software malware updates, then the 'attack' becomes uninteresting. The attacks we care about are ones that can be mounted remotely, without physical access to the HW.

However, all these studies of data extrusion techniques should be classified under 'demonstration of an idea'. The majority of these ideas are nothing more than a demonstration of a 'tool'. These tools are not suitable for 'simple' attacks, but may be used in a 'compound' attack, where several of the tools are used to accomplish a specific task.

The idea that an attack depends on manipulating the hardware to allow LEDs to act as receivers of PWM signals is not practical in most situations, because if the attacker has that level of access to the hardware, then surely they can mount much more efficient attacks.

albert April 30, 2017 12:08 PM

@Ben,
Re: Equation Group APT
That’s why I said “Now I find that the worm could have been introduced anywhere along the supply chain, rendering air-gapping useless.”

Stuxnet for the Iranian centrifuge control system needed to have a -very specific- payload, so -someone- along the chain needed that expertise. The 'worm' could have been developed anywhere, but the payload (the PLC part) required -very specific- information about the control system. Here's where I got my information: https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet/

BTW, I don't consider supply-chain infection to be a problem with air-gapping per se; it can overcome -any- security system. Well-designed air-gapped systems -can- be quite secure -if- they are in secure environments.

. .. . .. — ….

Sancho_P April 30, 2017 6:16 PM

@Wael

”The idea that an attack depends on manipulating the hardware to allow LEDs to act as receivers of PWM signals is not practical in most situations, because if the attacker has that level of access to the hardware, then surely they can mount much more efficient attacks.”

No HW access needed; some uCs can use the same GPIO pin as a binary (in or out) or analog (in) channel. Software could read the LED's voltage while blinking the same LED; the issue is sensitivity/bandwidth.

@albert

I’d say the PLC part wasn’t that specific as it is used in many systems to simulate I/O for service purposes and for process simulation.
But the technological knowledge about operation (sensitivity) of these centrifuges combined with the PLC internals / possibilities is very special.

Wael April 30, 2017 6:26 PM

@Sancho_P,

No HW access needed, some uCs…

That’s true. Keyword is “some”. Also, that’s not a sufficient condition! How many motherboards are wired to allow that, even if they use a Microcontroller as you described? Then there is the need to install a device driver (not hard, but may require user’s consent, if the user has root or admin access.) Which means the user will get a notification, which means the operation may not be as stealthy as desired.

Clive Robinson April 30, 2017 10:27 PM

@ Wael,

If we don't have a pre-designed piece of HW ready for software malware updates, then the 'attack' becomes uninteresting.

The fun of using the keyboard microcontroller in this way is that the circuit with the resistor, LED, and capacitor is what you would end up with if you took the normal resistive LED driver circuit and added a small capacitor for EMC noise reduction. And, as it happens, I've seen LED circuits with an appropriate arrangement in electronic toys and other places, including IoT devices. So pre-designed does happen by accident.

As for,

The attacks we care about are ones that can be mounted remotely, without physical access to the HW.

Yes and no. With interdiction and supply-chain poisoning, we need to know and care about all potential channels so we can assess items correctly. But it might not be "interesting", which is more the perspective of a scientist than an engineer.

The engineering point is to ensure that the discrete component arrangement is one where small changes are made such that making a covert channel at this level becomes too difficult.

Which brings us to your point of,

because if the attacker has that level of access to the hardware, then surely they can mount much more efficient attacks.

We have to go back to the SigInt Agency data collection models of bulk collection and targeted attack.

In the case of bulk collection, supply-chain poisoning is likely to be an attack vector as other vectors become less effective. The point to note is that if they are going to put one piece of modified code in, then they will put in others at the same time to protect the investment of having such an asset in place.

With targeted attacks that might use the last stages of the supply chain, putting software in a peripheral chip has many advantages. That is, the user will not be able to wipe it or find it, with the added advantage, in the case of a keyboard, that if another device such as a keylogger is used, it will point the finger of suspicion towards an "insider or traitor".

We've seen with the TAO "radar bugs" in video cables that this peripheral attack, by either an "evil maid" or a "black-bag job", has been used and presumably was still used for a while after it became public knowledge. However, they would have started looking for other peripheral attack vectors to replace the radar bugs.

Wael May 1, 2017 12:58 AM

@Clive Robinson,

I think we are in agreement, with one observation: interdiction and supply chain poisoning are a way to manipulate HW/FW/SW.

supersaurus May 1, 2017 3:54 PM

wrt physical access to poison the machine inside the wall: I once worked in a facility that had a room within a room for testing new hardware; you needed a keycard with whatever the access codes were to get in, you were logged, etc. sure, that’s easy enough to get by, but here is my point: guess who else could get into that room? the lowest paid people in the building, namely the custodians. organizations can be amazingly stupid…that could easily have been plugged by requiring the engineers to bring out the trash, but they didn’t think of it.

Sancho_P May 1, 2017 6:21 PM

@Wael

Um, @Clive was tinkering with the idea of malware abusing a microcontroller's status LED as a binary input. So device driver, root, admin? The smaller uCs (e.g. on a motherboard) don't know what those are, and the bigger ones don't drive simple LEDs.
Often LEDs are driven directly by the uC pins, using the internal current limiter, to save even the resistor's cent, with the LED PWM'd to reduce power dissipation.

For the "stealthy operation" we are back at the chicken-and-egg problem. If it wasn't there from the beginning, let's say it's done by a signed update (compulsory, as with Win10); the user's consent was already given by using the software (EULA).
The latter would avoid the need for interdiction and supply-chain poisoning, using the standard HW + SW.

Following @Clive's idea, I'm thinking less about a peripheral chip in a standard PC and more about a dedicated "security" device (like an encryptor).
Important uCs are IP-protected (locked, read-disabled) and will accept only encrypted code updates; it is very tricky to get hold of the code, if ever.
I'd say remote targeted tampering without a trace.

For an 8-bit (!) example see: http://www.atmel.com/images/doc2589.pdf
(also read chapter 5 “Summary”, the “Table of content” is chapter 7, however)
and:
http://www.atmel.com/Images/Atmel-42141-SAM-AT02333-Safe-and-Secure-Bootloader-Implementation-for-SAM3-4_Application-Note.pdf

The interesting question is: What could it be used for?
What could a tainted uC do with the status LED as hidden input?
E.g. could the Evil Maid use a tv remote to initiate an upload of the user keys to a CC server while the regular user of the device is at the bathroom?

Wael May 1, 2017 8:04 PM

@Sancho_P,

Several LEDs: power, disk access, network packet and connectivity indicators; what else? The drive LED is controlled by the HD FW, and we know HD FW has been subverted in the past (it was discussed here some time ago). Network LEDs are controlled by the network card (discrete or integrated), etc…

Um, @Clive was tinkering with the idea of malware abusing a microcontroller's status LED as a binary input. So device driver, root, admin?

So suppose the malware was successfully deployed, through whatever mechanism, to the µController. What can the malware do? Switch GPIO directions (turn a transmitter into a receiver, back and forth), encode data for light transmission using any scheme that doesn't give signs to the victim (the LED seems always on, but is actually blinking too fast to see). OK, I give you that. Now how will the µController get to the data it needs to exfiltrate? How do you think this additional functionality could be achieved? OK, suppose the malware knows where to find the data and sends it to the µC for transmission. Doesn't that imply FW updates to the µC to enable this behavior? Device driver, root, or admin. There could be other ways of achieving this, but my original question remains valid: is this a large-scale data-loss vector? Targeted? Maybe.
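
A sketch of that transmit side, using the same hypothetical HAL calls as the receive-side sketch earlier in the thread: the LED is driven at a PWM rate far above flicker fusion so it looks steadily lit, and each bit nudges the duty cycle by an amount a fast photodetector could distinguish but an eye cannot.

    /* Sketch: exfiltrate bits through an apparently steady status LED.
     * gpio_write() and delay_us() are the same hypothetical HAL calls
     * used in the receive-side sketch. */
    #include <stdint.h>

    void gpio_write(int pin, int level);
    void delay_us(uint32_t us);

    #define LED_PIN        5
    #define PERIOD_US      1000   /* 1 kHz PWM, far above flicker fusion */
    #define CYCLES_PER_BIT 50     /* 50 ms per bit, roughly 20 bit/s     */

    static void pwm_cycle(uint32_t on_us)
    {
        gpio_write(LED_PIN, 1);
        delay_us(on_us);
        gpio_write(LED_PIN, 0);
        delay_us(PERIOD_US - on_us);
    }

    /* Send one byte, MSB first: bit 1 = 80% duty, bit 0 = 60% duty.
     * Both look equally "on" to a person; a photodiode sampling at a
     * few kHz can tell them apart. */
    void led_send_byte(uint8_t byte)
    {
        for (int bit = 7; bit >= 0; bit--) {
            uint32_t on_us = ((byte >> bit) & 1) ? 800 : 600;
            for (int c = 0; c < CYCLES_PER_BIT; c++)
                pwm_cycle(on_us);
        }
    }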

For the "stealthy operation" we are back at the chicken-and-egg problem. If it wasn't there from the beginning, let's say it's done by a signed update (compulsory, as with Win10); the user's consent was already given by using the software (EULA).
The latter would avoid the need for interdiction and supply-chain poisoning, using the standard HW + SW.

Then someone is in on it! Either the manufacturer (or their subcontractor, if they delegate or subcontract / outsource FW development) or the operating system provider that may add other functionalities (or allow it) under some conditions. That means it’s not a piece of malware that some outsider developed.

Following @Clive's idea, I'm thinking less about a peripheral chip in a standard PC and more about a dedicated "security" device (like an encryptor).

And most PCs have them in the form of a TPM, although they may not be fully utilized. If they are, then again, the manufacturer is likely involved. This isn't malware, right? It's an unadvertised, stealthy remote-access functionality: a backdoor. It's not the most effective, because it requires a person in close proximity to the victim's machine. But there are also ways to jump multiple hops in a network of so-called "air-gapped devices" until payloads reach the mothership, or CC end destination.

The interesting question is: What could it be used for?
What could a tainted uC do with the status LED as hidden input?

Anything from data theft to device destruction.

E.g. could the Evil Maid use a tv remote to initiate an upload of the user keys to a CC server while the regular user of the device is at the bathroom?

Unlikely! The more likely scenario is the LEDs will take the command from the remote control and then send the data to the same remote control or another receiver (a smart watch, for example.)

Besides! There are no evil maids here! It's an evil scanner maintenance person 🙂

oliver May 2, 2017 11:43 AM

Dear Bruce,
why do you keep posting these movie-plot-threat posts? Why two in a row about optical methods that are not even remotely practical to exploit?
Why are you posting this BS?
Seriously, that is way beneath you!
Yours sincerely, oliver

albert May 2, 2017 12:30 PM

@supersaurus,
I’m going to ask Bruce to make that a requirement.

@oliver,
Practicality has nothing to do with exploits. It’s the -goal- that’s important. Take Stuxnet. Very expensive and time-consuming, but they did it anyway. You may not have separation centrifuges in your basement, but optical techniques are -already- in use, in the general population.

@Clive, et al,
Re: PWM. I was called upon to "look at" (i.e. fix) a string of 'garden lights' (the 'solar powered' ones), and I noticed the FCC mark. Why must they meet FCC regs? It's the driver circuits: a simple chopper to reduce the duty cycle, power draw, and stress on the LEDs. Especially useful on the super-high-brightness LEDs for house and vehicle lighting systems. Imagine what can be done with auto lighting.

. .. . .. — ….

Who? May 2, 2017 1:04 PM

@ Ben Nassi

@Who – It seems to me that you don't understand what kinds of means governments use these days. You need to stop with your USB example. Even a six-year-old attacker would not leave traces.

That is only your opinion. You should refrain from drawing conclusions about other members of this forum (not just me) after reading a single paragraph.

Sancho_P May 2, 2017 6:15 PM

Probably I can't follow some of your thoughts; the reason might be that we are thinking of different scenarios.
So let me try to fix some points:

  • You seem to focus on a PC-like architecture: big CPU, some uCs for peripheral functions, user controlled (screen, keyboard), an OS, universal function, network connected.
    I wouldn't exclude that, but my focus would be on simple devices with mainly one dedicated function and limited user activity; thus a "simple" encryptor (not HAIPE, more like a modernized KIV-7) is the thought model: 2-3 uCs, NFC for chip cards, 2 networking ports, 2-3 LEDs.
    [So I don't know what you wanted to say in your first paragraph, but we fully agree here]

Your second paragraph is interesting:
"So suppose the malware was successfully deployed, through whatever mechanism, to the µController. What can the malware do? Switch GPIO directions (turn a transmitter into a receiver, back and forth), encode data for light transmission using any scheme that doesn't give signs to the victim (the LED seems always on, but is actually blinking too fast to see). OK, I give you that. Now how will the µController get to the data it needs to exfiltrate? How do you think this additional functionality could be achieved? OK, suppose the malware knows where to find the data and sends it to the µC for transmission. Doesn't that imply FW updates to the µC to enable this behavior? Device driver, root, or admin. There could be other ways of achieving this, but my original question remains valid: is this a large-scale data-loss vector? Targeted? Maybe."

"What can the malware do" was my primary question, and I think the uC with the LED will not be the same one that holds the crown jewels. This hidden input, however, may be used to trigger the malware, to enable/disable some extra functionality of the device (or chip), only when certain prerequisites allow it to operate stealthily:
!!! This is my main concern: a "secure" chip that, by any signal (a bright light flash, magnetism, mechanical pulses, whatever), suddenly turns insecure.
To say it in the clear:
A chip that will suddenly "forget" about my read-disable (fuse) setting.
A very convenient way would be @Clive's LED input, because beforehand the innocent chip need not be tampered with: no one could read the program code (of this chip, e.g.), and no one could check the (now updated) code, until the (TV) remote commands it to "now, spit out the code/keys via the network or the status LED".

Am I clear on this point: subverted code, triggered by the perfectly hidden input, may compromise my (code) kingdom?

"Then someone is in on it! Either the manufacturer (or their subcontractor, if they delegate or subcontract / outsource FW development) or the operating system provider that may add other functionalities (or allow it) under some conditions. That means it's not a piece of malware that some outsider developed."

We have to assume this as a fact whenever we buy a networked device that is allowed to auto-update via the Internet.
The manufacturer is legally bound to the secret service (in the clear: they have the manufacturer's keys to download whatever they want for "national security").
The secret service (plus the manufacturer) will lose the keys, as no secret can be kept once it is digitized.
-> All the bad boys (whether they are legal depends on nationality) will have the keys, sooner or later.

In a machine as complex as a PC, all the uCs are connected by buses. A "legal" update may influence everything and propagate through the whole system; any dumb peripheral with an LED could then trigger the (main) malware in the main CPU/OS.

”Anything from data theft to device destruction.”
Now here we concur; this is why we should follow @Clive's concern and see LEDs as dangerous inputs, depending on the HW configuration, even if there is no working exploit known to the public.

Wael May 2, 2017 7:55 PM

@Sancho_P,

Oh, forgot @…, sorry!

No worries, I read all your comments.

Probably I can’t follow some of your thoughts…

Strange… I can’t follow my thoughts, either!

Am I clear on this point: subverted code, triggered by the perfectly hidden input, may compromise my (code) kingdom?

Of course!

this is why we should follow @Clive's concern and see LEDs as dangerous inputs

To some extent, yes. In practice, it’s an unlikely scenario because:

One: The manufacturer was able to force a stealthy FW upgrade. What's to stop them from collecting what they need using the same upgrade channel? Two: Have you ever tried to change the TV channel with your remote control, and it didn't work? Keep in mind that the TV photodiode is designed and optimized (through circuitry, and lenses too) to receive signals! What are the chances of a regular LED working in a halfway decent manner as a receiver? It's doable; after all, the trigger signal could be a few simple pulses from a laser pointer, for example. Three: The LED, transmitting, now: how efficient will it be? The receiving end will have to be super sensitive.

Me? I believe it’s doable, but I wouldn’t worry about it. There are much more efficient ways of accomplishing the task. We agree, though! If the HW / FW is subverted, then it’s best to isolate the sensitive device from the ambient environment.

Clive Robinson May 2, 2017 10:19 PM

@ Wael, Sancho_P,

Me? I believe it’s doable, but I wouldn’t worry about it. There are much more efficient ways of accomplishing the task.

If you add the word “currently” after “task” you would be correct.

Humans, even of the spook variety, are inherently lazy; thus the "low hanging fruit" metaphor. That is, they tend to do the minimum that gets the job done.

Which means as defenders up their game the spooks have to look for new attack vectors.

In general, for an attack type to remain viable, the more novel it is the less likely people are to see it, deliberately or accidentally.

We have seen this happen before with BadBIOS. Somebody started chasing down effects they could not account for and suggested that a channel might exist via sound, using the PC microphone and speaker. We never did find out whether that was the cause of the effect he was seeing, because he was leapt on by a whole variety of people who thought he should be making cuckoo clocks or some such. However, some of us had not just thought up using sound for PC-to-PC communications some years beforehand; we had also experimented and knew it was viable, if problematic. We actually said it was possible and gave reasons why. Still people went on saying "no", this time not "no, not possible" but "no, too complex". Then a couple of university academic types demonstrated two laptops talking down a corridor, and the next thing you know every malware "gun for hire" is adding it to his skill base.

Now, I know this trick with LEDs as photodiodes works, for the same reason I knew the sound channel worked: because back in the 80s/90s I had experimented, as had others, with cheap handheld-to-handheld communications on what were called "personal organisers" at the time.

The real problem personal organisers had was communications: it left them limited in capability and was the main reason they never really gained market traction, unlike today's smartphones, pads, and tablets, which do have good communications via GSM or WiFi and are arguably successful to the point that they are killing the old desktop and newer laptop PC as the user device of choice.

Us old, now grizzled engineers knew back then that communications was the kingmaker, not the fancy icons, applications, or displays the limited-thinking marketing types were pushing. Thus we looked at how we could get two devices to communicate cheaply and easily with minimum effort for users.

The big problem, as I've told Nick P in the past, was the limited lifetime of connectors. Back then the best sort of guarantee you would get was as little as "50 operations" before the connection quality became too poor to send data. Worse, such connectors brought other problems, such as the potential fire hazard of shorting out in people's pockets, or connector holes getting filled with dirt, or the tripping hazard from leads, leading to physical damage not just to the lead but to the organiser and the computer it was connected to.

Thus we looked into both sound and light as communications channels and got them working, as others did. You may remember that the likes of "Cherry Keyboards" brought out IR-diode keyboard and mouse sets, which had a few problems: although they worked fine on a table or desk, they did not work so well when people tilted their chairs back and put the keyboard on their lap. Now, with Bluetooth and WiFi, we don't have those communications-channel problems; instead we have security problems from the RF communications.

But the work us now grizzled engineers did with sound and light still lives on; we broke the trail and made it easily navigable. The outcome of BadBIOS was that it brought the idea of sound channels back into the general consciousness. What makes you think the spooks have not started looking at light as a replacement, or are not doing so now that people's defences have been upped a notch by BadBIOS?

Because, as I say from time to time, "If the laws of nature allow…".

Wael May 2, 2017 11:08 PM

@Clive Robinson, @Sancho_P,

But the work us now grizzled engineers…

If it only were that! Cataracts, floaters, retina problems… aaaand not much hair to turn grey either. I have a foot in the grave, baby. Perhaps a foot and a half. Lol

What makes you think the spooks have not started looking at light as a replacement…

Two reasons: You mentioned that spooks are lazy, and Einstein never believed in “spooky action at a distance”. Poor Einstein didn’t know he was under surveillance by the FBI. If he knew that he was being monitored from afar by the spooks, he wouldn’t have said that, it would have made him an instant believer in entanglement and quantum mechanics. (Can you follow my thoughts now, Sancho_P?)

are not doing so now that people's defences have been upped a notch by BadBIOS?

My defenses haven’t changed! They aren’t dependent on “advances in attack vectors”. I’m a man of principles, get it? C-v-P 🙂

TS May 5, 2017 9:52 AM

It would work the other way as well: the malware could "flash" the screen, or cause an electrical charge to occur somewhere, which could be scanned by a drone with a scanner.

The difficult part, generally, is getting the malware where you need it to be.

Clive Robinson June 24, 2017 2:49 PM

@ Ben Nassi,

To all of the skeptics about the pre-installed malware on an air-gapped network

As I told you above,

    Some of us discussed air-gap-crossing techniques here, long before Stuxnet used exactly the same idea, when discussing how to attack voting machines.

With regard to "Brutal Kangaroo": as far as I can see from a cursory glance via the link you gave, it's little different from what was discussed here years ago. At the time I outlined in some detail how to use "fire and forget" malware to locate the laptops of voting-machine technicians and then infect any USB sticks that got inserted, and thus the voting machine. Which, we found out much later, was the way Stuxnet got around, a year or so after I posted the details. I also mentioned at the time that I had a way to exfiltrate data etc. in reverse, from work I'd been doing on setting up a "headless" command-and-control system for botnets. I quite deliberately withheld details on that because of my concern that people would use the idea. Which, surprise surprise, has happened, in much the same way as with WannaCry…

Thus, having put the idea in the public domain, I was surprised just how long it took before it was picked up (previous ideas I have made public have taken around eight years to get used in anger). Which is why, as time goes on, I get less and less impressed not just with the IC agencies but also with the academic community, who are frequently up to a couple of decades behind the curve… Which is also true of the idea of using light or other energy channels such as sound and even mechanical vibration, hence the notion of "energy gapping" rather than the now clearly outmoded "air gap".

The thing is, the current papers coming out are just rehashes of things mentioned on this blog or that were done at the UK's Cambridge Computer Lab a decade or more ago.

I know that is tough, but engineers, unlike scientists, tend not to worry too much about publishing. We outline what is novel, new, and useful and tend to pass it on to other engineers, in the same way as the original "hacker ethos".

Thus, if you take a dispassionate view, this all goes back to the idea of information impressed or modulated on energy, which forms the basis of the Shannon channel, which in turn was based on the work of Harry Nyquist and Ralph Hartley ninety-odd years ago in the late 1920s…

JG4 June 25, 2017 6:37 AM

speaking of government agencies behind the technology curve, I was poking around the USPTO yesterday to try and understand their relatively new electronic filing paradigm. the system works only with Windows and Apple OS, running Java. to set up a user account, you have to fax in two forms. I haven’t had a fax machine for five years. the rest of it is equally appalling, including the notices about outages. for a tenth of what they’ve spent, Amazon or Google could set up a robust “ecommerce” site for them in an afternoon that would never experience an outage. ironic that they are stuck in the 1990’s. I concluded that the paper filing system with the $200 surcharge is a safer bet than spending $2000 on the hardware and software to fit their ecosystem.
