Scott November 1, 2013 6:07 PM

Kerry admits surveillance went too far:

Though I would be surprised if you heard anything else from any executive official; this is a special time and place, a conference on open government, in which complete arrogance and denial wouldn’t be tolerated. Politicians have a tendency to become completely different people in front of different audiences; it’s amazing how Reagan, Bush, Clinton, and Obama have spoken so differently but acted so much the same.

SpaceTruck November 1, 2013 6:16 PM

This audio air gap jumper might not be ultrasonic. It could be pseudo-random spread spectrum audio below the noise floor, in audible frequencies. It might just sound like barely audible, or even inaudible, quiet static. Pseudo-random spread spectrum would be especially tricky because unless you were careful you might dismiss the signal on an oscilloscope or in a microphone recording as just background noise. GPS is a common example of digital data transfer by a signal far below the noise floor.

Alternatively, the audio signal need not be spread all over the spectrum. It could for example operate only on frequencies very close to the fan or hard drive noise, thus blending in.
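To make the below-the-noise-floor idea concrete, here is a minimal sketch (my own, not from any badBIOS analysis) of direct-sequence spreading and correlation recovery; the code length, amplitude, and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters: a 1023-chip pseudo-random code per data bit
# (the same length as the GPS C/A code alluded to above).
CHIPS_PER_BIT = 1023
pn = rng.choice([-1.0, 1.0], size=CHIPS_PER_BIT)   # shared PN sequence

data_bits = np.array([1, 0, 1, 1, 0])
symbols = 2.0 * data_bits - 1.0                    # map {0,1} -> {-1,+1}

# Transmit: spread each bit across the chips, then bury it in noise
# roughly 14 dB louder than the signal itself.
tx = np.concatenate([b * pn for b in symbols])
rx = 0.2 * tx + rng.normal(0.0, 1.0, size=tx.size)

# Receive: correlate each chip window against the known PN code;
# the correlation gain pulls the signal back out of the noise.
windows = rx.reshape(-1, CHIPS_PER_BIT)
decided = (windows @ pn > 0).astype(int)
print(decided)  # [1 0 1 1 0]
```

To anyone listening (or looking at a spectrogram casually), `rx` is indistinguishable from faint static; only a receiver holding the same PN sequence can despread it.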

Petréa Mitchell November 1, 2013 6:57 PM

For those who skip URLs that aren’t accompanied by an explanation: badBIOS is a yet-to-be-isolated piece of malware which theoretically uses ultrasonic signalling to talk to other installations even on air-gapped computers.

I think this sounds spooky enough that the investigation would benefit from the expertise of someone like James Randi.

kashmarek November 1, 2013 7:18 PM

It seems that politicians have overlords. The rule from the overlords is that politicians may act any way they want AS LONG AS they do what the overlords want.

For different politicians, acting differently in front of different audiences while doing the same thing behind the scenes is exactly the behavior the overlords expect.

Rip November 1, 2013 8:04 PM

Based on the coverage I have seen so far, the LAX attack occurred in the TSA shooting gallery — the cattle chutes where people queue at U.S. airports before being processed through security screening. If correct, it is the manifestation of a long-ignored warning by Bruce, and perhaps others, that the TSA created a target for terrorists, psychopaths, et al.

It will be interesting to see what the TSA’s inadequate, too-late (over)reaction will be now.

tz November 1, 2013 8:31 PM

I think a H/T to me might be in order. It appears to be the one I linked to a month ago. But sleep on it.

jackson November 1, 2013 8:41 PM

@gmeista – other similar reports were discussed here recently. Don’t use ordinary flash drives if that’s the culprit.

65535 November 1, 2013 9:16 PM

First, I wonder what Bruce’s take is on this speaker/microphone transfer over the air gap. Most modern notebooks and smartphones have both.

Second, I wonder if these Smartphone “charging pack spying devices” are widespread in the USA (and I wonder exactly how they work).

“Russian hosts of the Group of 20 summit near St. Petersburg in September sent world leaders home with gifts designed to keep on giving: memory sticks and recharging cables programmed to spy on their communications, two Italian newspapers reported Tuesday.”

Third, why is Diane Feinstein doing a 180-degree turn on the NSA scandal, which she downplayed during her September 26, 2013 NSA hearing and in her editorials defending the NSA (one editorial as late as October 20, 2013 in USA Today)?

It would seem she is the main idiot responsible for this ghastly out-of-control black box agency. She is the one who blindly trusted the NSA. It is her responsibility!

Flipper November 1, 2013 10:09 PM

badBIOS jumps not just the air gap but the species gap.
Dolphins are the only ones who would program in high frequency squeaks.
So long and thanks for all the fish? No way.
They … are … coming …

Mike Amling November 1, 2013 10:38 PM

The acoustic modems were limited to what? 300 baud? 600 baud? They had to operate over a POTS line that was limited to what? 2 kHz? 5 kHz? Getting more bandwidth because the carrier audio frequencies are over 17 kHz should at least partially make up for the poorer response and sensitivity of the speakers and microphones at ultrasound frequencies. And if it takes 4 hours to transmit 1 megabyte of data, maybe that’s fast enough.
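The back-of-the-envelope numbers above are easy to check (figures taken from the comment; a decimal megabyte is assumed):

```python
# Quick check of the rates mentioned above.
def transfer_time_hours(payload_bytes, bits_per_second):
    """Hours to move a payload at a given raw bit rate (no protocol overhead)."""
    return payload_bytes * 8 / bits_per_second / 3600

MB = 1_000_000  # 1 megabyte, decimal

# A 300-baud acoustic-coupler era link vs. the hypothetical 4-hour/MB figure:
print(round(transfer_time_hours(MB, 300), 1))  # 7.4 hours per MB at 300 bps
print(round(MB * 8 / (4 * 3600)))              # ~556 bps needed for 4 h/MB
```

So even acoustic-coupler era rates are in the right ballpark for the “4 hours per megabyte” scenario.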

Figureitout November 1, 2013 10:52 PM

Thank you for reading.
–Thanks for posting. Can tell right away you like big pictures, me too. Agree on how so many supposed security experts are calling badBIOS BS straight up. They probably get pwned on a daily basis, or have nil experience w/ targeted investigations and witnessing the effects of them. I have another type of evidence that something like this is out there, but it’s not the kind I want. What’s funny is I recently got another virus on my laptop that disabled my speakers; don’t think it’s related, but it doesn’t matter on a rooted machine. I’ve also got some rootkit that took over a Ubuntu OS installation pretty quickly after some calls home, plus a hidden encrypted volume. Next I’m going to try Puppy Linux, but I’m guessing the OS doesn’t matter at this point. I could just brick my PC and take off components/smash some of them, but I kind of want to see what this is.

I’m frustrated too (as if it’s hard to tell from my posts) at all this chaos with no clear solutions; it’s a hard problem, it’s going to take a lot of time, and I’m not very patient. It’s time (in my opinion) to go back to simpler circuits that humans can actually follow and verify; simplify your life. But really that does no good unless you follow the metals from the smelter to the molds and make sure components hidden in components aren’t in your PC. With today’s manufacturing, I think that once it’s a final product I would destroy the evidence of backdoors while trying to find them.

I don’t think much of a “solution” will exist in the political/legal realm, sorry. Too many old people, old judges, old politicians, and when I say old, like in the last years of their lives. Bureaucrats doing what they do best, nothing. Nothing will get done, more talking about issues no one cares about. Do you remember Senator Byrd? Dude can’t even walk himself, probably wearing a diaper, it’s just sad to watch. Old people clinging onto positions of power that younger people should have right now, that is a major problem.

Mike the goat November 1, 2013 11:36 PM

Figureitout: I guess I too am in the camp of “theoretically possible, but will not be convinced until there is a full analysis”. Extraordinary claims require extraordinary evidence.

Mike: yes, we would be talking well and truly sub-300 bps at such a narrow frequency range, given the quality of consumer hardware and the distance between the two machines being, say, 5 m. That is not a lot of bandwidth to play with.

Brian M. November 2, 2013 12:05 AM

About “badBIOS” malware:

It’s absolutely possible that there is a rootkit that can communicate via an audio network.
It’s absolutely possible that a rootkit can hide in the various writable spots of flash memory in a system.
It’s absolutely possible that a rootkit can infect USB drives.
It’s absolutely possible that a malware payload can have code to infect multiple OS targets.

Here’s the kicker, though: if Dragos Ruiu has been working with this for three years, why hasn’t it been seen by anybody else, and why hasn’t he shared a sample with any of the big companies, like Kaspersky, etc? Why is he the only source of this news?

Some parts of his story are a bit odd. For instance, “observing encrypted data packets being sent to and from an infected laptop…” What has he been using to observe those packets? If those packets are encrypted, then how does he know, with certainty, that it’s communicating, and not just blasting garbage on the IP stack?

For a rootkit to contain so many drivers and exploits, it would have to be huge. As in, really noticeable.

I have written software that did updates to flash memory, and I worked with Award source back in the early 1990s. Here’s the thing: flash memory used for controllers is written as a big block. So the malware has to copy out the flash, make appropriate patches, and then write all of that back into the chip. It’s not a question of hiding, it’s a question of whether the computer will boot at all, ever again. The only common point among any of the BIOS versions is where the CPU initially picks up the first few instructions, and after that everything is different from version to version, even between, say, v1.01a and v1.01b. And to do this successfully with motherboards from different manufacturers is a bit implausible.
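The read-modify-write dance described above can be sketched as a toy model; the block size, the all-ones erased state, and the program-can-only-clear-bits rule are typical NOR flash behaviour, while the class and function names are mine:

```python
# Toy model of whole-block flash semantics: erase sets every bit to 1,
# programming can only clear bits, so patching means read/patch/erase/rewrite.
BLOCK_SIZE = 4096  # illustrative erase-block size

class FlashBlock:
    def __init__(self):
        self.cells = bytearray(b"\xff" * BLOCK_SIZE)  # erased state: all 1s

    def erase(self):
        self.cells = bytearray(b"\xff" * BLOCK_SIZE)

    def program(self, data):
        """Programming can only clear bits (1 -> 0), never set them."""
        assert len(data) == BLOCK_SIZE
        self.cells = bytearray(a & b for a, b in zip(self.cells, data))

def patch_block(block, offset, new_bytes):
    """The whole-block dance: read out, patch in RAM, erase, reprogram.
    A power failure between erase() and program() leaves the block blank,
    which is why in-place BIOS patching risks a machine that never boots."""
    image = bytearray(block.cells)
    image[offset:offset + len(new_bytes)] = new_bytes
    block.erase()
    block.program(bytes(image))

blk = FlashBlock()
blk.program(b"\x90" * BLOCK_SIZE)   # pretend this is existing BIOS code
patch_block(blk, 16, b"\xeb\xfe")   # patch two bytes at offset 16
print(blk.cells[16:18])             # bytearray(b'\xeb\xfe')
```

The surrounding bytes survive only because the patcher carried a full copy of the block through the erase; get that copy wrong on any one board layout and the machine is a brick.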

Until badBIOS has been shared with other researchers, I’m not going to get paranoid about this one.

Mike the goat November 2, 2013 12:08 AM

Apparently the quality of built in audio hardware has improved a lot over the last decade. Some consumer grade audio cards have professional grade sampling rates. With the right choice of modulation you might be able to do 2400+ over a short distance.

Have a look at blurt
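As a rough illustration of what “the right choice of modulation” might look like, here is a minimal binary-FSK sketch. The 48 kHz sample rate and the tone frequencies are my assumptions (chosen as integer multiples of the baud rate so symbols stay orthogonal); a real modem like blurt also needs framing, synchronisation, and error correction:

```python
import numpy as np

SAMPLE_RATE = 48_000      # Hz; assumed consumer-card rate
BAUD = 2400               # symbols per second
F0, F1 = 14_400, 16_800   # mark/space tones: 6 and 7 cycles per symbol

def fsk_modulate(bits):
    """Binary FSK with continuous phase so symbol edges don't click."""
    samples_per_bit = SAMPLE_RATE // BAUD   # 20 samples per symbol
    phase, out = 0.0, []
    for b in bits:
        f = F1 if b else F0
        for _ in range(samples_per_bit):
            out.append(np.sin(phase))
            phase += 2 * np.pi * f / SAMPLE_RATE
    return np.array(out)

def fsk_demodulate(signal):
    """Decide each symbol by correlating against each candidate tone."""
    samples_per_bit = SAMPLE_RATE // BAUD
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    bits = []
    for i in range(0, len(signal), samples_per_bit):
        chunk = signal[i:i + samples_per_bit]
        e0 = abs(np.sum(chunk * np.exp(-2j * np.pi * F0 * t)))
        e1 = abs(np.sum(chunk * np.exp(-2j * np.pi * F1 * t)))
        bits.append(1 if e1 > e0 else 0)
    return bits

payload = [1, 0, 1, 1, 0, 0, 1, 0]
print(fsk_demodulate(fsk_modulate(payload)))  # [1, 0, 1, 1, 0, 0, 1, 0]
```

Because each tone completes a whole number of cycles per symbol, the correlations for mark and space are exactly orthogonal over a clean channel; over a real acoustic path you would trade baud rate for robustness.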

Brandioch Conner November 2, 2013 12:44 AM


Yes, governments develop sophisticated malware and have entire programs for leaping over airgaps.

The problem is that computers communicate via protocols.

So while one computer can beep boop with its speaker, nothing will happen unless the other computer is running an OS that supports an application listening on the microphone and translating the beep boop into 0’s and 1’s. So it is a specific app that is vulnerable.

Without that, the OS itself needs to be listening on the microphone, and the OS needs to be able to translate the beep boop into 0’s and 1’s. Or nothing will happen. So it is an OS that is vulnerable.

Without THAT, the hardware needs to have the microphone active WITHOUT the OS’s or an application’s intercession. Beep boop, 0’s and 1’s. And then the microphone needs to be able to access some firmware on the computer that can be compromised. So it is the firmware that is vulnerable.

Now remember that those components are NOT built in the USofA.

And at that point you are not dealing with a “virus” or whatever. You are dealing with a backdoor that is written into the app (or the OS (or the firmware)).

So why go to the extra step to accomplish something that is already in the firmware (or OS, or app)? The only reason I can think of would be sending data out of the air-gapped system. But that should be easy to identify and monitor, and it isn’t what he seems to claim he observed.

Third Hand Witness November 2, 2013 12:56 AM

All this talk of transmitting through speakers is silliness – it would be audible! However, someone who is not me suggested that perhaps someone should put the market-leading CPU in an RF-quiet environment and take a look at the RF spectrum emanating from the CPU (using a spectrum analyzer) and check for pronounced spikes, especially out-of-band spikes occurring when the system is otherwise “idle”. If Mr. Ruiu is to be believed, his machines may be especially interesting to investigate in this manner. He worries, however, that “spread-spectrum” RF transmissions would be virtually impossible to detect with this or any other method.

My friend – who works as an engineer at a major CPU company – recently read a critique of #badBIOS that relied on a firmware memory examination tool to conclude that there’s no possible way that the firmware is infiltrated. This ignores the fact that the BIOS is not actually the lowest layer of firmware. Microcode resides below even the BIOS macrocode, and microcode can access many parts of the hardware. My friend said it is even possible that microcode could present a “false picture” of memory (including firmware memory) to macrocode. He isn’t saying this is the case, merely that the rabbit-hole goes a lot deeper than most people probably realize.

RobertT November 2, 2013 12:59 AM

On the BadBios issue of speakers/microphones communicating on air gapped systems, I have no trouble believing that you can create a network of computers that communicate information over the audio pathway. I played around with this a little some years ago as a simple point-to-point system and found that 1kbps was easily achievable across a room without anyone hearing anything.

My target was linking smartphones but the idea is exactly the same just different hardware and OS, once the acoustic link is established the rest of the network protocol can be easily done in software. You just need to get the software on to the machines somehow.

BTW, if you really care to shut down all acoustic channels, then removing the microphone and de-soldering the speaker wires are essential steps, but insufficient. Unfortunately there are still lots of ways to create acoustic noise with a laptop (such as fan ON/OFF keying), and many ceramic filter components make great unintended loudspeakers, especially in the 50 kHz to 100 kHz region. If the “noise” from these components can be controlled, then they can be used to transmit data (receiving acoustic signals is fortunately a little more difficult without an intentional microphone).
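The fan ON/OFF keying idea above can be sketched as a toy encoder; the Manchester coding, the symbol time, and all the names here are illustrative assumptions (a real fan is far slower to spin up and down than this suggests):

```python
# Sketch of on/off keying (OOK): any switchable noise source (fan, ceramic
# filter) becomes a slow transmitter. Manchester coding gives the receiver
# a level transition in every symbol, so it can recover timing.

def ook_encode(bits, symbol_ms=500):
    """1 -> ON then OFF, 0 -> OFF then ON (each half lasting symbol_ms/2)."""
    schedule = []
    for b in bits:
        first, second = (True, False) if b else (False, True)
        schedule.append((first, symbol_ms / 2))   # (source_on?, duration_ms)
        schedule.append((second, symbol_ms / 2))
    return schedule

def ook_decode(schedule):
    """Invert the encoding: read the first half-state of each symbol."""
    return [1 if schedule[i][0] else 0 for i in range(0, len(schedule), 2)]

msg = [1, 1, 0, 1, 0]
print(ook_decode(ook_encode(msg)))  # [1, 1, 0, 1, 0]
```

At 500 ms per symbol that is 2 bits per second, which sounds useless until you remember that a key or password only needs a few hundred bits.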

PS: while on the topic of air-gapped systems, I just saw an IT boss charging his smartphone from the USB of an air-gapped computer. So it appears stupidity knows no bounds.

Secret Police November 2, 2013 2:00 AM

Here’s a vid for a proof of concept, to be shown later, of an air-gap jump using LEDs as an antenna.

Seems strange that all of his various hardware would be immediately infected; he needs to test this with 3 different machines, all running different BIOSes, from different manufacturers, and with different OSes.

An air-gap jump across Apple products seems plausible, because they are all uniform machines, so your jump will work on all of them. But throw in a Lenovo, a random Taiwanese laptop, and a US laptop, all running different BIOSes, and that shouldn’t happen unless a nation state with huge resources made this and spent the time to pwn every single major hardware manufacturer.

If you want an air-gapped system you need a SCIF tent: remove all LEDs, physically remove wifi/mic/bluetooth, use an independent power source, and don’t use USB drives.

Sp November 2, 2013 2:35 AM

BadBios (if it’s not a hoax… who knows at this point) blocks Russian-language websites with firmware-flashing guides and software, so this is either Russian nation-state spyware in the wild, or some hackers deployed it against dragosr to pwn his system. Since he organizes upcoming security conferences and most likely privately receives papers and submissions, it seems logical that a sophisticated criminal operation would want to target him to get their hands on new exploit methods before they are even released.

Mike the goat November 2, 2013 3:16 AM

SecPol: Indeed! I wrote a brief article on the recent kerfuffle about using a speaker and mic as a side channel. It is possible, it has been done before, and there is nothing in Dragos’ claims that would be technically impossible.

Implausible, perhaps… but not impossible. People said the same thing about Stuxnet until all the facts came out. I remain agnostic and will sit on the fence until all the data is available. That said, I think it is sad that some people in the tech media are calling it a hoax. He may have got it wrong, that remains to be seen, but the guy is legit. He has credibility, and I doubt he would risk that just to perpetuate a hoax. It just doesn’t stack up.

It seems like the name might be misleading. The badBIOS portion of the malware seems like just a means to an end. My gut feeling is that an already-infected system reflashes the controller of any inserted thumb drive to do something evil – perhaps triggering a buffer overflow via the device ID during enumeration. Somehow it gets its code to execute, which drops itself onto the machine.

From this point on the machine is infected, and on the next reboot the BIOS shim ensures that it is still present and executes before the real MBR. There is obviously a win32 component which does all the fancy stuff (audio communication etc.). Although Dragos said that putting the thumb drive into an OpenBSD box caused its BIOS to be modified, he didn’t say that an OpenBSD box was actively transmitting using its audio card. I suspect that the BIOS side of it is merely the dropper portion. Given that thumb drives generally use only a few different brands of flash controller, it would not be hard to write a generic exploit that works on most USB thumb drives.

This is all just my gut feeling – if the info he has supplied is correct. I have no reason to doubt the guy at this point.

AndrewS November 2, 2013 3:20 AM

Hi Bruce,
Just want to know what the changes at the Guardian mean for your agreement in terms of using the Snowden materials and your future publishing schedule. Have enjoyed your articles so far, so hopefully they keep on coming.

Mike the goat November 2, 2013 3:32 AM

SecPol: as an aside, do you remember the paper where they put a photodiode on a switch’s activity light and managed to sniff the interface? I believe it was only a 10 Mbit/s switch, but nonetheless it is pretty amazing.

Thomas_H November 2, 2013 4:01 AM


Apparently the guy targeted the TSA itself, not the queue of people waiting for them to do their thing:

I will not be surprised if more incidents like this one follow, especially in the light of the NSA scandal.

In any case, it has shown that the security provided by the TSA and other services is mostly for show. You have to wonder how many people would be dead if his relatives hadn’t phoned the police.

Clive Robinson November 2, 2013 4:07 AM

@ Rip,

With regards the shootings at LAX, whilst it’s currently not getting much media attention in the UK (I guess due to TZ differences) it’s certainly hit some US news blogs.

And the comments do not make nice reading… One trend that has emerged is gun ownership, with the anti side blaming the easy availability of assault weapons and the pro side saying that the shooter was mentally disturbed and on anti-psychotic meds, and that it’s the taking/not taking of the meds that is to blame (I’m not aware of any official reports released at this point in time that indicate he was on any kind of medication).

What worries me is that some people are saying that anyone on anti-psychotic meds or even antidepressants should be “registered”, in effect making the knowledge available to so many people that it would effectively be public, and thus almost certainly used as a basis for prejudice. This would in turn stop people from seeking medical assistance for a whole load of conditions, some of which are physical, not mental, conditions.

There is currently a lack of information, and this in itself is causing people to fill in the blanks with their own prejudices; it’s not just guns but racism and immigration, and no doubt others, such as religion, will be seen fairly soon.

All in all it’s got all the hallmarks of a mediafest/conspiracyfest build-up; hopefully it will not give rise to some of the excesses we have seen in the past, if for no other reason than the detrimental effect it would have on the friends, families and loved ones of those hurt or killed.

Clive Robinson November 2, 2013 5:57 AM

With regards BadBIOS, as I’ve commented in other places, it’s not just technically doable; in some respects it’s been done before.

The earliest publicly published account of air-gap crossing with sound I can find is Peter Wright’s “Spycatcher”, where he discusses MI5 using spike mikes and RF-flooded phones to listen in on embassy crypto equipment manufactured by Crypto AG. This was back in the 1950s, and this sound info was passed to GCHQ to provide “base setting” information to break high-level diplomatic traffic. Due to the end-of-war BRUSA (UKUSA) “Special Relationship”, all of the recovered traffic would have been given to the NSA. I also assume the US would have been party to the British “methods and sources”.

Later documents on TEMPEST released in the US under FOI requests, although in redacted form, give clear indications that “sound energy” was considered; I’ve pointed this out on this blog numerous times in the past.

The speculation about using DS spread spectrum is a sound one, not in this case to give Low Probability of Detection (LPD) but because of the effect deconvolution has of reducing the background noise in the room by the “coding gain”.
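The “coding gain” here is just the despreading processing gain, 10·log10 of the number of chips integrated per symbol; a quick check (the code lengths are chosen for illustration, 1023 being the familiar GPS C/A length):

```python
import math

# Correlating over N chips per symbol improves SNR by a factor of N,
# i.e. 10*log10(N) dB of gain pulled out against uncorrelated room noise.
def processing_gain_db(chips_per_symbol):
    return 10 * math.log10(chips_per_symbol)

for n in (31, 127, 1023):
    print(n, round(processing_gain_db(n), 1))
# 31 -> 14.9 dB, 127 -> 21.0 dB, 1023 -> 30.1 dB
```

So a 1023-chip code buys roughly 30 dB, enough to lift a signal sitting well below ambient room noise back above the decision threshold, at the cost of a 1023-fold reduction in data rate.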

With regards speakers and ultrasonics, I would expect the small speakers in laptops and smartphones to be considerably better than external speakers, because those small speakers’ frequency response significantly favours the region just above human hearing; likewise those small electret microphones. In times past the required analog filtering components would have stopped them being used for this; however, for the past ten years or so digital filtering using DSP technology has been cheaper, so the analog filtering components are no longer used.

As I and others have pointed out repeatedly in the past, it’s not just the BIOS semi-mutable memory you have to worry about; it’s also that on IO cards. For instance, both the original PC-BIOS and PCI IO card specs have clearly documented methods of loading IO driver code from the IO card into system memory. As this code is run before the OS is pulled in from the HD, it is fairly free to do what it likes with all the PC memory and other resources available to it. Thus the driver code could contain a very modest (think tens of bytes) lever loader to pull in hundreds of K if not megabytes of code from other semi-mutable memory.

Which could in turn write code to the HD swap space or other files for the OS to run when it is loaded; this code, having made temporary changes to the kernel and drivers, could then delete all traces from the HD long before the user gets the system prompt. Similar techniques have been used in computers going back to the earliest IBM boxes to load system-test code into the microcode RAM to do hardware-level diagnostics; once done, the system-test code was overwritten with the microcode required to run as a computer. This was well documented in application notes of bit-slice ALU manufacturers as a recommended process to best utilise highly expensive microcode RAM/ROM back in the early 1970s.

This only leaves the question of where BadBIOS finds semi-mutable memory to hide in.

Well, as I’ve pointed out before, there are very few actual HD controllers, and these tend to be SoC chips with two or three ARM cores on them and lots of flash memory. Likewise, in practice there are very few sound chips, and they too are SoCs with standard CPU cores and flash memory. The code for these cores tends to be very inefficiently written, for various reasons I’ve discussed in the past; therefore it would be relatively easy for a skilled programmer with the required data sheets to write their own much more efficient code, freeing up much of the semi-mutable memory. Also, as supplied, many features are implemented on these SoCs for marketing purposes but in reality never get used by either the standard OS drivers or applications; thus the skilled coder can basically downgrade the functionality and the user would never know.

My money would be on these standard SoCs, especially the “sound card” one, because the code on it needs to be modified anyway to change the DSP filters to open up the audio bandwidth above the human hearing range and, importantly, to hold the TX/RX code.

Which brings us around to the question of what this TX/RX code might do. For those not that familiar with DSP techniques it sounds complex and difficult. However, there are two things to consider: the first is “dither”, the other is so-called “oversampling”. Both are fairly standard “noise shaping” techniques needed with delta-sigma converters, which produce high-frequency noise that, with little or no modification, could be used for a DSSS system.
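A first-order delta-sigma loop is small enough to sketch; this toy version (all parameters mine) shows the property being pointed at: the 1-bit output stream averages to the input, while the quantization error ends up concentrated at high frequencies rather than spread evenly:

```python
import numpy as np

def delta_sigma(x):
    """First-order 1-bit delta-sigma: integrate the error between the input
    and the previous output bit, then quantize the integrator to +/-1."""
    integrator, bits = 0.0, []
    for sample in x:
        integrator += sample - (bits[-1] if bits else 0.0)
        bits.append(1.0 if integrator >= 0.0 else -1.0)
    return np.array(bits)

bits = delta_sigma(np.full(1000, 0.5))  # constant 0.5 input
print(round(float(bits.mean()), 2))     # 0.5: the bitstream tracks the input
```

For a DC input of 0.5 the loop settles into a repeating +1, +1, +1, −1 pattern; the “error” lives entirely in that fast toggling, which is exactly the shaped high-frequency noise that could double as a spreading carrier.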

Mike the goat November 2, 2013 6:49 AM

Clive: or it could all be done in software – i.e. a rogue win32 service hidden by a bootkit that the BIOS component injects. Dragos remarked in one of his twitter posts that the laptops would on occasion make an annoying high-pitched whine, which is what drew his attention to that aspect of it. I suspect it is not so much using ultrasonic comms as just transmitting at the edge of human hearing and slightly above. If you spread your comms across 18–22 kHz (22.05 kHz being the Nyquist cutoff at a 44.1 kHz sample rate, assuming you’re not modifying the audio card firmware), I guess those above maybe twenty-something wouldn’t hear anything, right? (Like that Mosquito anti-teenage-loitering device they trialed at UK stations.)

Aspie November 2, 2013 7:49 AM

WRT “airgap” jumping and this forum in general: I see three categories:

(1) All this is tinfoil hat speculation: the IC cannot and does not implement
these “extreme” measures.

(2) The IC has read these posts and gleaned many new ideas from them to
implement some derivations.

(3) The IC was doing this all along and is just monitoring its eventual
discovery to bring new strategies into play.

I’d pay even money for (2).

When an entity can listen in on the speculations of its enemy, it stands a very
great chance of learning weaknesses and countermeasures to its attacks. Hence
the phrase ‘tight-lipped’.

Perhaps this effectively promotes security-through-obscurity but surely a
(b)leading-edge approach is better than none. After all, THEY won’t post and
tell us why our ideas are flawed. (Indeed some posters probably are shills of
one stripe or another.)

This is no surprise to any of you I’m sure but it effectively accords with the
“giving aid to the enemy” mantra that the IC itself pushes so hard in defence
of its shadowy nature.

If they have a right to shade, so do we.

Mike the goat November 2, 2013 7:57 AM

Robert: this guy’s implementation is very simple and is nothing more than a PoC. Have a look at blurt for one with added error correction; it uses the 802.11a PHY.

There was a stego project a few years back that encoded a whole heap of data into an MP3 without any additional human-discernible artifacts. Human hearing is not infallible at the best of times. If you used the high-frequency part of the audible spectrum, right up to the limit of the audio card’s DAC (and the receiving card’s ADC, not to mention the physical speaker and mic and any limitations they may introduce), you could make something that is almost imperceptible to an adult human in an office environment. You could have an initial negotiation so both sides could determine what frequencies they are able to attain and the relative bandwidth they have available. Perhaps the malware could listen and only transmit when the noise level is sufficient to avoid detection (which also implies it avoids sending when noise is too high to allow reception). So many ways you could do this.

By using something like AX.25 you could even have a scenario where – pretend you have three computers, all of which have had the evil USB stick inserted at some point and are infected. PC1 has internet access but 2 and 3 do not. PC1 sends out a probe, finds PC2 and negotiates a connection. PC2 also hears PC3’s probe and they negotiate. PC2 can then tell PC1 that it will act as a gateway for PC3.

In this way multiple hops could be used to breach well into a supposedly air gapped system.
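The three-machine scenario above can be sketched as a toy routing search; the machine names and the connectivity come from the comment, while the audibility table and the function itself are my illustration:

```python
# Toy simulation of the acoustic-relay idea: only PC1 is online, but PC3
# can still reach the internet through PC2 acting as a gateway.

RANGE = {  # which machines can hear each other acoustically (assumed)
    "PC1": {"PC2"},
    "PC2": {"PC1", "PC3"},
    "PC3": {"PC2"},
}

def route_to_internet(node, online, hops=None):
    """Depth-first search for a chain of audible neighbours ending online."""
    hops = hops or [node]
    if node in online:
        return hops
    for neighbour in sorted(RANGE[node]):
        if neighbour not in hops:
            found = route_to_internet(neighbour, online, hops + [neighbour])
            if found:
                return found
    return None

print(route_to_internet("PC3", online={"PC1"}))  # ['PC3', 'PC2', 'PC1']
```

Each extra hop costs latency and bandwidth, but for low-rate exfiltration that hardly matters; the point is that one internet-connected machine is enough for a whole room.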

Of course this is all in theory. In practice nobody would be silly enough to put a USB stick into an air gapped machine. Uh, would they? of course they would!

vb November 2, 2013 8:41 AM

According to this article, a member of the Belgian parliament sent an email to an IT expert to ask his advice, mentioning the words cyberattack and cybersecurity in the subject. The expert says the email headers show it transited via Fort Huachuca, home of the 111th Military Intelligence Brigade. The politician concluded that she too is being spied upon.
It seems hard to believe they would leave such easy-to-track evidence of their interception.

herman November 2, 2013 9:09 AM

The LAX shooting just shows again how useless the NSA spying is and how the TSA creates new easy targets. These are two organizations that should be abolished/shrunk.

The money would be better spent on good old-fashioned police work, but pounding the pavement is hard and strenuous. It is so much nicer to sit and watch the useless internet data flow by on a computer screen than to actually go out and do something…

Bryan November 2, 2013 9:34 AM

What are your thoughts on:

I would not be surprised to find out that badBIOS is using the unused reserved pages on flash sticks for storing its code to infect other systems. Furthermore, if the firmware on the USB stick is compromised, it could effectively hide its presence against discovery via the USB interface. The only telltale signs would be what it communicates to the host. You’d need JTAG-type access to the controller chip to even diagnose the changes, because the firmware could provide the correct responses, and even make it look like it was writing new firmware when it wasn’t.

Bryan November 2, 2013 10:05 AM

“Based on the coverage I have seen so far, the LAX attack occurred in the TSA shooting gallery — the cattle chutes where people queue at U.S. airports before being processed through security screening. If correct, it is the manifestation of a long-ignored warning by Bruce, and perhaps others, that the TSA created a target for terrorists, psychopaths, et al.”

It could also provide a tripping point for somebody who is already on the edge mentally.

“Mike: yes, we would be talking well and truly sub 300bps at such a narrow frequency range and given the quality of consumer hardware and the distance between the two machines being, say, 5m. That is not a lot of bandwidth to play with.”

When you execute as many NOPs as needed and can handle interruptions to your transmission, 300bps or less will do. 😉 I’ve let many huge transfers run in the background as I did other stuff on the computer.


“Apparently the quality of built in audio hardware has improved a lot over the last decade. Some consumer grade audio cards have professional grade sampling rates. With the right choice of modulation you might be able to do 2400+ over a short distance.”

Don’t forget the built-in DSPs in audio chips, and all those computers that were made with the modem using the audio system’s DSP.

name.withheld.for.obvious.reasons November 2, 2013 10:49 AM

Taxpayers could save a significant amount of money. Instead of the telephone/network companies operating the communications infrastructure, why not have the NSA buy them up in a big Fed-funds spending spree… at least the cost of acquiring the metadata would be built into the operating costs – the telcos already collect the data, so why do that job twice? It escapes me how the politicos can stand there with a straight face and pitch the dung at us without thinking we aren’t going to pitch something back. Congress acts as though they are footing the bill – they’re not capable of shoeing a foot.

NoSec November 2, 2013 12:59 PM

One thing is for sure: whoever made this will alter it to not be so obvious, since not being able to boot from CD is the signature that you’re infected.

Anybody know where mere mortals can obtain NATO-spec equipment, with hardened, TEMPEST-proof laptops/boxes? Of course I could social-engineer a few, but surely there is a civilian market somewhere.

Clive Robinson November 2, 2013 1:57 PM

@ Mike the Goat,

A quick scan of the internet shows quite a few simplistic methods of transferring files with audio generation. However, most of the ones linked to here so far are not that good.

Without going into details: using audio in a room or open space with fixed and mobile objects exhibits many of the same problems as are found on long-haul H.F. circuits with simple H.F. antennas, as used by the military and diplomatic communities.

A little history for you: after WWII the teleprinter, or telex, had taken over from Morse code as the preferred way to send messages, be it by land line or radio link. The problem was that the standard 5-bit (Baudot) teleprinter code, whilst good for DC line signalling, was fairly useless on H.F. radio circuits with their restricted audio bandwidth. Various methods of superimposing the teleprinter code onto an audio circuit were tried, including multiple-level signals from very high power AM transmitters (have a google for the Aspidistra transmitter at Crowborough in the UK). Most systems had significant failings…

At the UK’s Diplomatic Wireless Service this problem was causing some head scratching, until three engineers there (Harold Robin, Don Bayley and J.D. Ralphs) came up with the idea of Multiple Frequency Shift Keying (MFSK) using orthogonal tones and quenched resonators, which they called Piccolo due to the noise it made. Originally it used thirty-two tones (Piccolo Mk 1), one for each teleprinter code, but the technology back in the 1950s meant it required quite a bit of re-tuning. However, it gave very good H.F. circuit performance and would give highly reliable communications around the world using as little as ten watts of transmit power.

They changed the design [1] to use six tones (Piccolo Mk 6), used in pairs to give thirty-six possible codes: thirty-two for teleprinter codes and the other four for synchronisation and “engineering order wire” (EOW) signalling. This system, like its bigger brother, worked remarkably well.

The idea was based on a simple observation: if you have a perfect resonator and you excite it at its resonant frequency, the energy in the resonator will build steadily with time; if you excite it with an off-frequency tone, the energy will build up and decay at a rate proportional to the frequency difference. So if you have six quenched (emptied of energy) resonators tuned 20 Hz apart and apply a tone for fifty milliseconds, the resonator at the frequency of the tone will have built up to a maximum, whilst all the others will have decayed down to a minimum, making the sent tone easy to distinguish even in high-noise and multipath/fading conditions. If you do the maths you will find this corresponds to a seventy-five bits/sec teleprinter transfer rate (i.e. ten five-bit chars with one start and one and a half stop bits per second).

The point is that it does not matter at what base frequency these six tones are centred, and pushing them into a slightly modified DFT will do the job of the quenched resonators. Thus, using tones 200 Hz apart will give a signalling rate of 500 bits/sec with only a 1.2 kHz bandwidth which, placed up around 20 kHz, gives quite reliable communications at 62.5 bytes/sec at around 10 dB S/N at the microphone.
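The quenched-resonator trick maps directly onto a single-bin DFT per tone. A minimal Python sketch of that idea follows; the 8 kHz sample rate, 1 kHz base frequency and six-tone alphabet are illustrative assumptions, not Piccolo’s actual operating parameters:

```python
import math

SAMPLE_RATE = 8000           # assumed; anything comfortably above 2x the top tone works
SYMBOL_SEC = 0.05            # 50 ms symbol -> 20 Hz orthogonal tone spacing
TONES = [1000.0 + 20.0 * i for i in range(6)]   # six tones, 20 Hz apart (illustrative base)

def make_symbol(freq):
    """Generate one 50 ms tone burst."""
    n = int(SAMPLE_RATE * SYMBOL_SEC)
    return [math.sin(2.0 * math.pi * freq * t / SAMPLE_RATE) for t in range(n)]

def bin_power(samples, freq):
    """Energy in a single DFT bin -- the digital stand-in for one quenched resonator."""
    w = 2.0 * math.pi * freq / SAMPLE_RATE
    re = sum(s * math.cos(w * n) for n, s in enumerate(samples))
    im = sum(s * math.sin(w * n) for n, s in enumerate(samples))
    return re * re + im * im

def detect_symbol(samples):
    """The on-frequency bin builds to a maximum; the 20 Hz-off bins integrate to ~zero."""
    return max(range(len(TONES)), key=lambda i: bin_power(samples, TONES[i]))
```

With a 50 ms window, tones 20 Hz apart each complete a whole number of cycles, so the off-frequency bins sum to (near) zero — exactly the orthogonality the Piccolo engineers exploited.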

I was involved in the design and construction of a digital Piccolo modem using an 8-bit Z80 CPU back in 1988/9, which fitted quite nicely into a case with an A5 footprint. Sadly, due to MoD constraints, little paperwork survives [2]; however, compared to the Racal LA1117, which used two 16-bit 68K CPUs and was built into a 4U 19-inch rack-sized unit, its performance was only marginally worse. Racal built three types, the Kaynard Mk 1 / Mk 6 / Mk 12; the latter was a twelve-tone unit designed to carry 7-bit ASCII at various effective data rates. The Kaynard also came in several briefcases for use in diplomatic missions, and it’s often incorrectly stated as being an MI6 item [3] rather than a DWS item.




Nick P November 2, 2013 2:11 PM

All this talk about audio air-gap jumping takes my mind back to the wisdom of the old days, which solves the problem. The first thing is POLA: the hardware should only have the components needed for the job, and access to them must be restricted by the TCB. If this isn’t possible, then unnecessary devices should be disabled in the BIOS, at driver level, physically, etc. whenever possible. As far as identifying these issues goes, a covert channel analysis that considers every piece of hardware as a potential communication channel would probably have identified this risk. But people aren’t interested in boring and paranoid covert channel analyses these days, are they?

Trustworthy systems must be architected with this stuff in mind from the ground up. It’s why I prefer designs with all of memory protected from peripherals or tagged/typed software from kernel up. Every time we see another story about an attack via peripheral devices or a new control flow bypass it just goes to show that the fundamentals must be extremely strong. Otherwise my old mantra: “Castles built on foundation of quicksand.”

Nick P November 2, 2013 2:31 PM

People interested in audio networking should also read the paper “Audio Networking: The Forgotten Wireless Technology” (2005) by Madhavapeddy et al.

(Madhavapeddy’s work is always a good read.)

They talk about many aspects of it, including splicing it into a voice call, IIRC. Submarines have long used acoustic signals for underwater communication. There’s also a very clever key-exchange method for smartphones (a Bluetooth alternative) that sends the exchange over audio by briefly holding the phones’ microphones and speakers against each other. Being able to hear the exchange and control its circumstances might make MITM much harder than with RF-based methods. So, seeing all the work in the audio networking field, from embedding to key management, it’s only one logical step to use it as a covert channel to steal data (e.g. keys) from unsuspecting users. The setup described is much more complicated than that, though.

Honestly, though, I hope that what one commenter wrote isn’t true: that he’s been the sole guy aware of it while dealing with it for years. There are quite a lot of smart malware researchers under regular attack by (and luring in) top malware writers. If none of them noticed it for years, then I begin to worry that Dragos might have made it himself (the story or the malware) for publicity/money. I’m not accusing him of it necessarily, but it will be in the back of my mind until a full disclosure happens. Creating and “finding”/fixing clever attacks has long been a lucrative hacker business model. One I wouldn’t know anything about, of course. 😉

Chad November 2, 2013 2:40 PM

I have a theory about how the Silk Road server was found.

All the NSA has to do is look for an address that gets lots of traffic through Tor and little or none directly. Or even worse: only a limited set of relays know how to reach a hidden service, so you just need to look for a server that gets all of its traffic funnelled through a few points. Hard to do unless you happen to be monitoring most internet traffic. Encryption wouldn’t help; the mere presence of the traffic would give it away.

I could be wrong, but if not, then there’s a serious flaw in operating [popular] Tor hidden services. You could set up a non-hidden site on top of the hidden one to mask it a little, but the hidden one will still rely on those few relays, which will be transmitting through a monitored backbone.

RobertT November 2, 2013 3:31 PM

@Mike the goat
“If you spread your comms from 18-22khz ”

This is the frequency range that I used for the smartphone trial. From my limited experience, 15 kHz is the point where average people start to hear something.

I didn’t spend much time on the project, but we were able to achieve 30 dB SNDR, which logically means the channel is sufficiently robust to support 8 bits/s/Hz, giving us a theoretical channel throughput of 32 kbit/s.
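As a sanity check on those numbers (my arithmetic, not part of the original trial): the 18–22 kHz band is 4 kHz wide, and the Shannon limit at 30 dB is about 10 bits/s/Hz, so the 8 bits/s/Hz working figure sits plausibly below the theoretical ceiling:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1.0 + 10.0 ** (snr_db / 10.0))

band_hz = 22_000 - 18_000                      # near-ultrasonic band from the post
limit = shannon_capacity_bps(band_hz, 30.0)    # theoretical ceiling, ~39.9 kbit/s
claimed = band_hz * 8                          # the 8 bits/s/Hz figure -> 32 kbit/s
```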

As Clive also mentioned, the trend toward ceramic speakers (as used on Apple iPhones) makes the job easier, because traditional moving-coil speakers are way beyond their “pistonic” limit at these frequencies. This means that although amplitude may increase linearly, the phase is all over the place, so the throughput is significantly reduced.

I don’t think they would need to spread the information, because the acoustic channel is fairly robust AND it can easily be used in burst-transmit mode. Bursting alone gives you a low probability of detection, especially if you communicate only a few kilobits per hour.

The “multihop internet link” that you talk about is exactly what we wanted to do with smartphones.

WRT using USB memory sticks on air-gapped computers: you MUST make sure the memory is completely filled with your own data. This is why I recommended filling the device with random data and then XORing bits of the file to be saved with the “random” stored data. At the other end you know what the original random data was, so you can reverse the XOR step. By making the fill data random we know that it cannot be compressed by any known method, and the XOR of random data with any other data is also incompressible. The difficult part is knowing (for certain) how much storage space is actually available on the USB stick.
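The fill-and-XOR scheme can be sketched in a few lines. This is my own toy illustration: the seed expansion stands in for the pre-shared random fill (a real deployment would pre-share true random data rather than derive it), and the payload string is made up:

```python
import hashlib

def fill_stream(seed, n):
    """Deterministic stand-in for the pre-shared random fill data."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

SEED = b"pre-shared-secret"                # hypothetical shared secret
payload = b"file contents to carry over"   # illustrative data to hide

# Sender: the stick always holds fill XOR payload -- incompressible noise.
fill = fill_stream(SEED, len(payload))
on_stick = xor_bytes(payload, fill)

# Receiver: regenerate the same fill and reverse the XOR.
recovered = xor_bytes(on_stick, fill_stream(SEED, len(on_stick)))
```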

@Clive :
Thanks for mentioning the problem of chip-level functionality not necessarily matching top-level product functionality. This can be a huge problem these days.

Unfortunately, from a security perspective, it is cheaper to ship a software-downgraded chip than to build a new specialty chip. So a semiconductor vendor might have a WiFi + Bluetooth + GSM chip that he wants to sell, but the customer says he only wants 10% of the product to ship with GSM (2G cell phone) functionality. The vendor then needs to either make a special chip, develop new firmware and prove the whole system, OR sell the chip with all three systems but only enable two.

This practice is very common and means that the on chip /system memory that was intended for GSM functions can be happily used by any malicious application.

The problem is that a full GSM cellphone + WiFi + BT chipset from a vendor like Spreadtrum or Rockchip sells for under US$3. This is much cheaper than the individual parts purchased from, say, Maxim or Broadcom. Many times the PC maker might not even know what chip functionality he is actually receiving. In the case of a downgrade to GSM + WiFi only, I have seen the PCB actually ship with the BT antenna printed and connected (it was obviously too much work/risk to redo the Gerbers and remove the BT completely).

The problem is so bad that I’ve seen full comms chipsets ship with only their ADC/DAC functionality enabled. The package had lots of do-not-connect pins that, if connected, would enable the full device functionality. Today we have the silly situation where a 50 MHz 12-bit ADC sells for over $10 as a stand-alone part, so why not simply use the ADC/DAC from the WCDMA chip and disable all other functions? Unfortunately, disables implemented in vendor-supplied firmware are easily reversed by a determined hacker.

Even when the chip/module does not have undocumented hardware it is common to include firmware functions on chip that nobody EVER uses. Sometimes these firmware functions are demanded by the PC vendor to support things that they “might” do.

To understand this you need to understand how chips absorb other, similar functions. This process is vital to the survival of a chip maker. So the audio-chip maker will include RF functions and the RF-chip maker will include audio functions. Eventually one or the other gets the whole combined system (RF + audio) figured out and gets to sell both functions… the other guy goes out of business.

NoSec November 2, 2013 3:32 PM

@Chad: I’m surprised these guys don’t use a custom botnet and buy up 60 servers into a pool, dropping/adding them on a random basis. They are, after all, making $80 million doing this. They could also host beside a major exit node to camouflage all that Tor traffic.

Actually, I’m surprised they use Tor in the first place instead of a static Freenet service just listing their safe escrow service.

Not Rob Ford November 2, 2013 4:23 PM

About the usefulness of metadata:

The current scandal in Toronto surrounding Mayor Rob Ford and his unsavory friends became explosive last Thursday (Oct 30, 2013) when the court released a document in support of a search warrant. It contains no intercepted communications, but lots of information about who called whom when, who met with whom when, and the like. The resulting media frenzy should dissuade anyone from believing that “metadata does not matter.”

If you need a blow-by-blow example, this is a good one.

Not Rob Ford November 2, 2013 4:25 PM

The Toronto scandal is also a good reminder that what is erased from a hard disk is not necessarily gone forever.

Skeptical November 2, 2013 5:19 PM

Regarding the NY Times article linked above, this paragraph (among others) stood out:

The spy agency’s station in Texas intercepted 478 emails while helping to foil a jihadist plot to kill a Swedish artist who had drawn pictures of the Prophet Muhammad. N.S.A. analysts delivered to authorities at Kennedy International Airport the names and flight numbers of workers dispatched by a Chinese human smuggling ring.

There has been a lot of speculation, in here and elsewhere, about why more NSA analysts haven’t leaked information, with dire (or optimistic, depending on your view) predictions of the volume of leaks in the future.

Bruce has a generational hypothesis – that younger generations, accustomed to information transparency and with less hope of permanent employment, will not respect the same codes of silence as older generations. And I’ve heard other hypotheses discussed as well, such as the increasing reliance on ever growing numbers of contractors subject to shallow background checks.

I do not think any of them really pan out (yet). Younger generations know the dangers of information transparency at least as well as they know the benefits; and uncertain employment can make one more, not less, careful not to jeopardize one’s security clearance or relationships. Contractors are frequently former military with long records of service, and can be of no less reliability and integrity than a permanent employee.

But lost in all those arguments is something else: the people in those positions see triumphs and benefits which have been rarely reported in the confused storm of leaks. Even when they are reported, they are reported very skeptically, far more so than the leaks themselves are.

For the analyst devising better ways to sift through the unending avalanche of noise to find those precious few crystals of valuable intelligence; for the technician experimenting with new devices to capture and read the signals of highly protected networks; for the operators who venture to effect collection in hostile and non-permissive environments of every kind: that paragraph in part explains the level of dedication. They know, and feel, that they’re on the team that does those good deeds.

And so as long as the NSA is focused on foreign intelligence and international terrorism, and so long as the leaks reveal merely capabilities and not clear abuses, that dedication will not waver. An intelligence analyst is not going to read the speculation present in these press articles with the same degree of credulity that many in the public have. He’s going to weigh that speculation against what he knows of his colleagues and teammates, against the culture of which he forms a part, and will make a more informed and cautious judgment.

To be sure, no individual is perfect, and no collection of individuals is perfect. Abuses happen, and mistakes happen. But to ransom the system in demand for perfection is the act of a child who never outgrew the world of comic book heroes and Hollywood productions.

Those who think that pouring techniques and operations across the pages of papers, much less exposing the names of personnel to journalists, will encourage additional leaks, or excite internal alienation within the NSA, have (in my uninformed speculation) dramatically missed the mark. The more likely reaction is disgust and anger at the source of the leaks, at the leaks themselves, and a renewed dedication to accomplishing silently the mission at hand.

As to the pressure for “reform” from those long opposed to the NSA, in my view it would take one dramatic speech from President Obama to sway public opinion against such “reform”. Until now the Obama Administration’s approach has been one of extreme caution, responding narrowly and quietly to allegations, and rarely taking initiative. That strategy has allowed public opinion to be shaped mostly by the noise surrounding the press reports of leaks. The time for that strategy has almost passed, and a true push for public opinion, on behalf of the public institution most trusted by the American public (the US military), would shatter the uneasy coalition attempting to add more restrictions to the functioning of the NSA.

As always, all views expressed here subject to revision in light of new information or better analysis.

Blog Reader One November 2, 2013 5:23 PM

David Wheeler’s “open security” proposal:

For the Drupal social publishing platform, a whitepaper on its security was released:

Dan Wallach talked about how an e-mail system could be made resistant to insider attacks (basically by local storage of crypto software and keys):

Mike the goat November 2, 2013 7:07 PM

Clive: interesting links. I know some spy equipment uses the cache-and-burst paradigm to reduce the likelihood of detection, so it is reasonable to think that a device working in the near-audible spectrum would do the same thing. Years ago I worked on a project that required us to decode ACARS messages over VHF. Before I familiarized myself with the system, it stunned me just how much data could be transmitted in a quick burst. Of course, ACARS would be considered “slow” by today’s standards.

RobertT November 2, 2013 7:07 PM

#Chips Not actually being just what they claim.

Just to finish off on this issue that Clive raised.

From a security perspective you have to understand the design, production and supply processes before the security weaknesses are obvious.

The basic problem is that chip making is a batch process, and a VERY expensive batch process to get right. To put some numbers to it: a chip provider pays the fab (the guy making the chips) somewhere between 3c and 10c per mm² (the exact price depends on a lot of factors, but volume is the main one). So a cell-phone chipset vendor selling, say, 10M units/month will be able to negotiate a very low price, whereas a high-performance ADC vendor selling, say, 10K units per year will happily pay 10c/mm² or even more.

OK, so let’s assume the whole cell-phone chipset is 5 mm × 6.5 mm (32.5 mm²), so the actual production cost (pre yield, test, etc.) is about $1.00 (32.5 × 3c). A low-end cellphone chip sells for about $2, so if they are lucky they make maybe 50c gross profit (package and test are the rest).

OK, if the vendor finds an opportunity to sell 100K ADCs at say $10 each, that’s $1M gross with unit cost still about $1.50, so gross profit is about $850K. To make that much gross profit at 50c per chipset, the vendor must sell around 1.7M cell-phone chipsets. If you look at the chip vendor’s net profit, the situation is even more extreme: supporting the ADC line might cost only three engineers part time, say $100K/year, so net profit is about $700K/year. To make $700K net from a cell-phone chipset the vendor must sell a heck of a lot of chips (at say 10c per chip net profit, that’s 7M cell-phone chips).

IF the vendor already has a cell-phone chipset then all of these costs are fixed costs; the variable is the choice: do I sell 100K ADCs or 7,000K cell phones? The ADC/DAC is typically an integrated function in the cell-phone chipset, so it makes sense to do both and just use the same chip, BUT never tell the customer what you are actually doing.

The last sentence is VERY important for anyone interested in security, because an ADC/DAC alone is hardly a security weakness if deployed in an air-gapped system; however, a full-featured cell-phone chipset sold as an ADC/DAC needs to be avoided. The trouble is that nobody except the chip vendor really knows what’s in the package, and furthermore nobody is paying them to care about the security aspects of their processes.

Sometimes the decision to use a dumbed-down chipset for low-end machines is made because the motherboard maker initially gives the chip vendor some idea of what the breakdown in sales for different features will be. The chip vendor usually tries to make more on the newly released full-featured chipset and always hopes for upside: instead of the vendor’s estimates being 1M cheap, 3M medium, 1M high end, they hope the reality is 1M cheap, 1M medium, and 3M to 10M high end. Consequently they push the full-featured chip VERY hard. They accept the lower margin for shipping unused (un-enabled) functionality in the low-end parts because IF they get the high-end upside their profits will sky-rocket. The sales guy can make a heck of a bonus if he gets the equation right, whereas it’s not his money if all the business comes in at the low end. The asymmetric nature of the reward does affect decisions and creates invisible security weaknesses.

Mike the goat November 2, 2013 7:13 PM

RobertT: yes, as we know from the Russian counterfeit USB thumb drives often sold on eBay (which report a large size and just junk anything written beyond the actual physical size of the device), USB devices can’t be trusted. I don’t expect that situation to change any time soon.
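The classic countermeasure to such counterfeit sticks (what tools like H2testw and f3 do) is to write an address-derived pattern across the whole claimed capacity and read it back. A small self-contained sketch, with a simulated counterfeit drive standing in for real hardware; the wrap-around write model is an assumption about how these fakes typically behave:

```python
import hashlib

class CounterfeitDrive:
    """Toy model: claims `claimed` blocks but only has `real` blocks of
    flash; writes past the real end simply wrap around."""
    def __init__(self, claimed, real):
        self.claimed, self.real = claimed, real
        self.flash = [b""] * real
    def write(self, lba, data):
        self.flash[lba % self.real] = data
    def read(self, lba):
        return self.flash[lba % self.real]

def pattern(lba):
    # Block contents derived from the address, so a wrapped write is detectable.
    return hashlib.sha256(b"probe" + lba.to_bytes(8, "big")).digest()

def verified_capacity(drive):
    """Write every claimed block, then count how many read back intact."""
    for lba in range(drive.claimed):
        drive.write(lba, pattern(lba))
    return sum(1 for lba in range(drive.claimed)
               if drive.read(lba) == pattern(lba))
```

For the fake modelled above, only the real number of blocks survive the read-back; a genuine stick passes every block.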

Re “downgrading” and disabling chip functions: two good examples in consumer products would be the Nook Color having Bluetooth functionality on its WiFi chip (although no antenna trace was connected) and the band 4 LTE modem existing on the Nexus 4.

Chad: Tor hidden-service functionality is believed to be broken due to the way Tor periodically changes the guard node. Eventually the hidden service will connect to an “evil” guard, and probing can reveal its existence.

BP November 2, 2013 7:28 PM

To the guy who posted the first item: one place to look for this is the cache on the DVD or CD drive. Although the drive itself has only a small cache, I was once able to play old video discs that were scratched almost beyond repair, and played terribly, by using a Debian-based OS with a hard drive as the DVD player’s disc cache. The drive had to try to read the whole disc, but apparently if the CD or DVD player catches the correct digit once, it’s stored in the disc cache, which then remembers the correct digit even if the machine later reads it wrong. It might take 25 tries to read that spot on the DVD, but once it gets it, with a large enough cache, it’s coded into the /dev section, and voilà: that disc played like a brand-new disc. Actually better than new, as it was about 15 or 20 years old and disc players weren’t as good then as they are now.

I’ve noticed malware that uses the CD cache as a loopback device to get at info when you’re using hard disks, then feeds that back into the OS, presumably sending it out if you’re online. My experiment was done offline, but it worked incredibly well. Restarting the OS took a long time with the DVD-playing experiment, though, as it had to process a gobsmacking number of disc errors and put them into the Debian store. I’m still not absolutely sure that’s how it worked, but it didn’t work by trying to make a copy of the DVD onto the disk, and what I saw seemed to be the process that was going on. Copying only the readable parts of the disc was useless in trying to reach the results I finally did.

Also, what do you think of the NSA possibly using this kind of technology?

We used to hear a lot about it being in development for power companies to sell broadband, but I don’t hear anything about it anymore.

Daniel November 2, 2013 7:33 PM


The underlying problem is: since when did the NSA become Batman? It’s not their job to fight crime; there is an FBI for that. And I’m deeply skeptical of the idea that information collection which is justified for one reason (stopping terrorists) can also be used for another reason (stopping human traffickers).

I do not think that any American in their right mind is willing to spend hundreds of millions of dollars to stop human traffickers. Those people may be evil, but they represent no threat to America’s national security.

Clive Robinson November 2, 2013 8:21 PM

OFF Topic :

Various people have claimed that the Ed Snowden revelations about NSA activities will/should harm US technology firms.

Well, it looks like one sufferer already is AT&T, which has been told diplomatically not to bother trying to buy EU-based telcos, because any such attempt will be “extensively scrutinised”, which is the diplomatic way of saying the bar will be set too high to jump over. Further, there will need to be cast-iron guarantees that EU and EU member-state data protection legislation is properly adhered to…

And it would appear the UK “Security Service” (MI5) apparently decided, prior to his travelling from Berlin, that Mr Greenwald’s partner David Miranda was in effect (by their description) a terrorist…

And a question has come up about the Adobe breach and whether it affects Ed Snowden…

What the Adobe breach shows, yet again, is just how badly supposedly major companies keep getting password storage wrong…

Fluffy November 2, 2013 8:47 PM


An intelligence analyst is not going to read the speculation present in these press articles with the same degree of credulity that many in the public have. He’s going to weigh that speculation against what he knows of his colleagues and teammates, against the culture of which he forms a part, and will make a more informed and cautious judgment. (emphases mine)

You have far too low an opinion of the press & the average citizen, and far too high an opinion of those whose expertise has been paid for by their tax dollars. Most in the intelligence community do good work, yes. However, their expertise and dedication do not excuse your low regard for the public. (For whose good this whole security edifice exists. Ostensibly.)

The fact they are mostly “good” is not good enough if they are not constrained by some decent respect for the citizenry. No beautifully written collection of veiled slurs can overcome this fact.

This is a real, consistent problem among the intelligence elite. Long term, it’s inimical to democracy.

Clive Robinson November 2, 2013 8:48 PM

@ RobertT,

Out of curiosity, at what range were you getting 30 dB SNDR?

@ Mike the Goat,

Yes, “burst transmission” has been around for a while; I believe, from what I remember, the CCCP (USSR) KGB were the first to use it with field agents. Their system used Morse code sent fast enough to be outside the audio range of a normal H.F. receiver. A modified commercial receiver (IF and audio bandwidth opened wide) was connected to a paper-tape unit with a pen, similar to those seen on polygraphs, which scratched out the Morse envelope so that it could be read by eye quickly and then easily burnt.

I must admit I’ve been thinking about what data would be most valuable and easiest to send across an air gap with sound, and it occurs to me it would most likely be the output from the keyboard if the link is unidirectional, or a simple command-line interface if bidirectional.

In both cases the data rate does not need to be high, thus giving an increase in range, possibly out to over a hundred feet or so (i.e. about twice the distance of an “ultrasonic tape measure”).

Mike the goat November 2, 2013 10:07 PM

Clive: I was thinking the same thing. If this is the work of a nation state then either they have a working theory in mind and this is a tool to prove it (i.e. they are looking for, say, a configuration file for an industrial control system, and if found the variables will be TXed over the “mesh”), or it is just a fishing expedition where an operative will manually use a shell to explore the remote machine(s). Neither scenario requires any more than 75 bps. Painfully slow, perhaps, but doable, particularly if you use compression and send your commands a line at a time (i.e. don’t use remote echo or interactivity). I suspect that if this was designed by a nation state, a negotiation would take place and a suitable baud rate selected. It could likely be adjusted on the fly: if a few packets are lost or their CRC is bad, the baud rate is knocked down a notch and the transfer attempted again. You would periodically beef up the TX rate if the environment was favourable, as the noise floor in an area can shift dramatically depending on the time of day.
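The knock-down-on-bad-CRC, probe-up-when-clean negotiation described above could be as simple as a small state machine. A hedged sketch; the rate ladder and the two thresholds are invented for illustration, not taken from any observed malware:

```python
import zlib

RATES_BPS = [75, 150, 300, 600, 1200]   # illustrative rate ladder

class RateController:
    """Drop a notch after consecutive CRC failures; probe a higher
    rate again after a long run of clean frames."""
    def __init__(self, fails_down=2, oks_up=20):
        self.idx = len(RATES_BPS) - 1    # start optimistic at the top rate
        self.fails = self.oks = 0
        self.fails_down, self.oks_up = fails_down, oks_up

    @property
    def rate(self):
        return RATES_BPS[self.idx]

    def frame_received(self, payload, crc):
        ok = zlib.crc32(payload) == crc
        if ok:
            self.oks += 1
            self.fails = 0
            if self.oks >= self.oks_up and self.idx < len(RATES_BPS) - 1:
                self.idx += 1            # noise floor may have dropped: probe up
                self.oks = 0
        else:
            self.fails += 1
            self.oks = 0
            if self.fails >= self.fails_down and self.idx > 0:
                self.idx -= 1            # back off to a more robust rate
                self.fails = 0
        return ok
```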

Mike the goat November 2, 2013 10:14 PM

Re ACARS: Wikipedia has a capture of a burst. It goes for less than half a second and carries the message given in the file’s description, along with the metadata. ACARS is a bad example, as it isn’t designed to be covert and is slowish (2.4 kbit/s), but it should give some idea of what’s possible. Imagine something like this bursting on the edge of human hearing.

Figureitout November 2, 2013 10:18 PM

Mike the goat
–Yeah I guess that’s on him; for me I don’t care. If I prove my claims I give up my methods, so I get skeptical of anyone asking how I do what I do…If people can stomach really simplifying their life, so many attacks expose themselves…That’s probably the best easy “panacea” practical advice I can give at this time in regards to security. My goal, is a step by step implementation and providing secure components to my buddies and maybe a small fee. And of course double check your 6 o’clock often 🙂

BTW, I do appreciate all your technical posts. Seems like your mind works a little too fast (but I like it), might make some simple errors. I know I do in something like math.

RobertT November 2, 2013 10:41 PM

@Clive R
I don’t have access to the exact measurements anymore, but the system was designed for in-room communications, so maximum distances of 5 metres. An SNR of 30 dB is not difficult across a room, but the SNDR is difficult because of room reflections.
Burst transmitters that send packets shorter than the typical reflection (multipath fading) distance are easy to do in acoustic systems due to the low speed of sound in air. Low phase distortion is also why the ceramic speakers were SO much better than normal moving-coil speakers for high-frequency acoustic comms purposes.

herman November 3, 2013 1:46 AM

BT vs NSA: it is a bit ironic that BT also helps the NSA and GCHQ snoop, as detailed in the latest Snowden papers. Looks like there is bound to be some friction at work, Bruce?

65535 November 3, 2013 5:42 AM

@ n.w.f.o.r.

NSA’s yearly numbers:

Budget of $10.8 billion / 35,000 employees ≈ $308,571 per employee.

Gee. What a bargain.

Maybe the NSA should buy AT&T/Verizon/etc. It would cut out the middleman.

@ Others

It does look like the audio air-gap jump could be possible.

Now, how about the smartphone-charger thing and its ability to crack said smartphones?

Mike the goat November 3, 2013 9:28 AM

Nick: I still haven’t given up on the blogsig idea we were chatting about a while back. My current PoC code is a bit of script hackery that uses wget to dump the page you wish to validate, then trawls the page looking for blogsig footers. Each footer found gets pumped into a routine that pulls the blogsig into its component parts: the metadata and the digital signature.

The first block is the key identifier, which is encoded in five characters of the blogsig (and is thus limited to an alphabet of 94 chars, i.e. 7-bit printable minus the space and control chars). The key ID is actually hex but is encoded in 7-bit printable to save space. The sixth character of the blogsig metadata stores whether to verify the entire block as teXt (strip all HTML), full HTML, or strip all but Links. It can also instead be set to K, which means the public ECDSA key follows. I will likely be able to avoid the sixth char of metadata entirely if the client software is smart enough to try the three methods until a good sig is found (and of course notify as to which method was used). An embedded key could easily be differentiated from a sig, obviating the need for the sixth char entirely. Directly following the six-char metadata is the length of the signed post in characters (with all whitespace ignored and the HTML-stripping settings of the chosen mode enforced), immediately followed by a ! and then the signature proper (or the key in the case of mode K). Obviously I could move char six to this location and have a relatively predictable delimiter rather than wasting a character. The blogsig ends with a % sign, giving a blogsig a pretty unique layout that can be found with a regex.
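Piecing that description together, the footer grammar looks regular enough to match with a single expression. A speculative Python sketch; since the format isn’t published, the exact character classes here (e.g. base64-ish for the signature body) are my guesses rather than the real spec:

```python
import re

# Speculative grammar from the description above:
#   5-char key ID | 1-char mode (X/H/L/K) | decimal length | '!' | sig-or-key | '%'
BLOGSIG_RE = re.compile(
    r"(?P<keyid>[\x21-\x7e]{5})"     # 94-char printable alphabet, no space
    r"(?P<mode>[XHLK])"              # teXt / Html / Links / Key-follows
    r"(?P<length>\d+)!"              # signed-post length, then delimiter
    r"(?P<body>[A-Za-z0-9+/=]+)%"    # signature (or embedded key), then terminator
)

def find_blogsigs(page_text):
    """Return (keyid, mode, length, sig-or-key) tuples found in a page."""
    return [(m["keyid"], m["mode"], int(m["length"]), m["body"])
            for m in BLOGSIG_RE.finditer(page_text)]
```

Anchoring the footer with the trailing % and the ! delimiter keeps the regex from eating ordinary comment text, which is presumably the point of the “pretty unique layout”.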

Anyway, my PoC can take a chunk of blog and verify sigs against my test keyring without any hassles. Stripped HTML with the exception of links (mode L) is the default, as link forgery is a possibility with plain-text strip mode. I have tested it with WordPress and the comment section of Drupal without any problems. Of course, this is just a proof of concept.

The next obvious piece of the puzzle is key servers. My idea of being able to push out your public key on a blog sucks, as not everyone will see it and people might request it time and time again. A key server is the logical solution.

I am considering whether I can “dress up” a blogsig key so that the existing PGP key servers can do this job for us. It would be trivial to change my key identifier to PGP’s system. How it would be done remains to be seen; the servers might refuse abnormal-looking keys. The easiest way would be to use the metadata of a PGP key (like the comments field) to publish our public key. It is short and shouldn’t pose an issue. People who wish to use both blogsig and PGP could generate a subkey with the required info and publish it to the key servers. This is just an idea I am toying with.

So that’s what I got up to on a boring Saturday evening. Obviously it is all just a test and my mind is made up about nothing. But I think the concept of a short, low-security key for signing blog posts is a good one. If I end up hiding the blogsig data in a link then it would annoy readers of the blogs less (although a one-line blogsig has to be less annoying than a multi-line PGP sig). If you had the link point to the blogsig website, then those with the browser plugin installed would get instant verification, but those without the plugin could simply click the link. If it is a public blog, a CGI script on the server could fetch the page, parse it and verify the blogsig. How is that for graceful fallback behavior?

As we have discussed the aim of the blogsig is for a short digital signature to provide a low to medium level of assurance that the user is the one who owns the public key. The key size is small enough that an attacker with time, money and resources could potentially forge it, but in a way that’s the point – non-repudiation is not a good feature to have in a system like this. Really all it is doing is providing a mild level of assurance. For anything serious – use PGP.

Nick P November 3, 2013 10:55 AM

@ Mike the goat

Cool. I’m still waking up so I can only give preliminary review. 😉

  1. What exactly is the “key identifier” in this scheme? I see it’s around 40 bits and you might also include a public key (the real identifier). So, what’s the KI do here?
  2. ” I will likely be able to avoid the sixth char of metadata entirely if the client software is smart enough to try the three methods until a good sig is found (and of course notify as to what method has been used)” Don’t do that. It’s better to go ahead and send the char than force the client to do a bunch of work. Many clients will be low spec machines, esp if people use obsolete hardware for anti-subversion.
  3. “Directly following the six char metadata is the length of the signed post in characters (with all whitespace ignored and the HTML stripping settings of the mode chosen enforced) immediately followed by a ! and then the signature proper (or the key in the case of mode K). ” Seems like a good idea. I’m always for putting the length in up front to allow for optimizing memory safety & efficiency. If sigs are fixed length, then the “!” shouldn’t be necessary.
  4. “giving unique layout to be found with a regex” Ok, so that’s why you included the “!”. Might be a good idea, then. 🙂
  5. “I am considering whether I can “dress up” a blogsig key so that the existing PGP key servers can do this job for us. It would be trivial to change my key identifier to PGP’s system. How it would be done remains to be seen. The servers might refuse abnormal looking keys. “

DJB would tell us: whenever we run into a problem, fix it at its root. In this case, you have a choice of updating the keyserver or the app. Hard to say which is superior. I’ve noticed even commercial offerings are improving key mgmt to include symmetric keys, public keys and others; more versatile. However, the idea of causing users the least problems and us the least work means it might be a better idea to use an existing key server. So, I’d say do whichever is easier for you.

(Note: I’m not worried about key management at this point as it is a whole can of worms itself. Let’s focus on the sigs/verification process itself rather than managing keys so fundamentals get done. Can optimize on key types/sizes later I would think.)

  6. “But I think the concept of a short low security key for signing blog posts is a good one.”

It can be short and high security. Might be extra work but I think you really need to look at this. It’s fast, keys are 32 bytes, sigs are 64 bytes and security is high. That would have been doable even on AOL bandwidth. 😉 Course, as GPG runs on so many platforms, it’s my current tool for our scheme and I gather yours as well. I’m keeping ED25519 in back of my mind, though, as I develop the ideas so I can possibly embed it into a future version.

Another relevant link:

(Seems we’re not the only ones thinking of these problems. They might have useful info in there on solutions or unforeseen problems. Or not. Who knows.)

  7. “If you had the link point to the blogsig website, then those with the browser plugin installed would get instant verification but those without the plugin could simply click the link. If it is a public blog a CGI script on the server could fetch the page, parse it and verify the blogsig. How is that for graceful fallback behavior?”

That is excellent thinking. 🙂 It’s a signature, it’s a page, it’s potentially secure, it’s usable, and it’s legacy compatible all in one. It actually reminds me of the Gibson QR code thing as he does something similar with a URL.

One improvement that immediately comes to mind. We don’t want to cram too much in the URL, we want to be able to change the protocol easily, we want it to be fast, and so far we’ve been putting the sig in HTML. So tweak it so the URL string is a key (key/value-pair style) sent to a server app (not slow CGIs) that uses it to retrieve an HTML page, sign it, embed the signature, and send it to the browser. This can be part of an HTTP cache, so this is usually just a memory lookup of a signed page, with memory contents updated whenever the page changes. If the user has the plugin, it sees the identifier in the HTML and does verification. If not, the identifier is ignored because it’s a comment or unrecognized META tag. Protocol details can be upgraded or swapped out while keeping this interface (and hyperlinks) the same. Web site owners just have to install it, with it being transparent from there, maybe even speeding up their site due to caching.
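A toy sketch of that cache-of-signed-pages idea. Everything here is illustrative: HMAC stands in for the real public-key signature, and the comment-based embedding is just one of the placements discussed above:

```python
import hashlib
import hmac

SIGNING_KEY = b"demo key"   # stand-in; a real deployment would use ECDSA/Ed25519

class SignedPageCache:
    """Serve signed pages from memory; re-sign only when the content changes."""

    def __init__(self):
        self._cache = {}    # url_key -> (content_digest, signed_html)

    def _sign(self, html):
        # Append the signature as an HTML comment so legacy browsers ignore it.
        tag = hmac.new(SIGNING_KEY, html.encode(), hashlib.sha256).hexdigest()
        return html + "\n<!-- blogsig:" + tag + " -->"

    def get(self, url_key, current_html):
        digest = hashlib.sha256(current_html.encode()).hexdigest()
        entry = self._cache.get(url_key)
        if entry is None or entry[0] != digest:     # page is new or changed
            entry = (digest, self._sign(current_html))
            self._cache[url_key] = entry
        return entry[1]                             # usually a memory lookup
```

The content digest acts as the cache-invalidation trigger, so unchanged pages are served without re-signing, which is what makes the "maybe even speeding up their site" point plausible.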

“As we have discussed the aim of the blogsig is for a short digital signature to provide a low to medium level of assurance that the user is the one who owns the public key.”

That was your aim. Mine was a stronger security property for authentication/integrity of content without relying on SSL. I consider SSL protocol as-is to be unreliable or outright compromised. I’d rather have a secure[able], native app producing the signed pages, then they’re sent to an untrusted transport (web stack). So, I think we have different goals here but the core of it (HTML signing) can solve both problems.

I also intend to use whatever comes out of our discussion in my own site if I make one. Its main content will be edited and distributed from an air gapped machine. In that context, the tools we’re brainstorming are much more secure than either SSL sessions or pages dynamically generated from web apps. Further, our HTML signing idea has tweakable security in that you can ignore it, use low robustness browser code, use NativeClient, use an application firewall style proxy in front of browser, or even batch pages over a one-way line to an air gapped machine that batch verifies them before viewing. Security embedded in content offers many such choices compared to an online protocol.

Nick P November 3, 2013 11:22 AM

@ kashmarek

Thanks for the links. The pieces were well-written. Especially the character assassination of Senator Feinstein. 😉

Clive Robinson November 3, 2013 3:48 PM

OFF Topic :

If this article is true then your home / hotel “white goods” may well be attacking your PC via WiFi…

Now, I don’t know about other people, but my suspicious mind thinks “how difficult would it be to make a store-and-forward device” with such chips. So it could attack weakly air-gapped laptops / smart devices. Or be used as the equivalent of an electronic “dead letter box”, similar in function to the (supposedly) British MI6 “Russian Rock” device…

65535 November 3, 2013 7:38 PM

@ kashmarek

I am concerned about Senator Feinstein and her husband Blum (not to mention their net worth of $40 million which is enough to influence people in high places).

If these two pieces are halfway accurate then Dianne Feinstein should not be co-chairing the Senate Select Committee on Intelligence.

‘Senator Warbucks’

[The Byrne Report]

[and your links to this article]

The combination of financial conflicts and intelligence access gives off the odor of insider dealing via the Blum/Feinstein/URS/Perini/NSA/military contracts. Those dealings look unethical and maybe illegal.

I watched Feinstein conduct the “open hearing” with Gen. Alexander, Clapper and their lawyer. I found it to be choreographed and slanted in the direction of the NSA.

Given all of the current disclosures of massive NSA surveillance, “the least untruthful answer” and the overall deception by the Feinstein/NSA gang (plus the fact that nobody has been fired), I have to conclude that Feinstein should not be co-chairing the Senate Select Committee on Intelligence.

Skeptical November 3, 2013 8:52 PM

Daniel –

Sure, my point isn’t that the NSA should be first to call when encountering any crime. I agree with you that they should not be.

As to the intelligence on human traffickers, I’m speculating, but two obvious possibilities are that either the information was acquired incidentally in the course of collecting against other targets, or that the traffickers actually were considered a national security target. Not too hard to guess why the latter might be the case: networks that successfully smuggle human beings from abroad into the United States for a price == vulnerable point in US defenses. I think such networks are themselves susceptible to deterrence, but better that they are simply rendered unsuccessful.

Fluffy –

I don’t think it’s disrespectful to say that the average member of the public does not bring the same level of critical analysis to NSA related news reports as do, say, readers of this blog. The average member of the public finds his energies outside his work better spent elsewhere, and is too worried about issues closer to his welfare (bills, earnings, health, family) to focus much on this type of issue. And let’s face it: for us, reading this blog and thinking about these issues can be fun and enjoyable. This isn’t work for us. Can we reasonably expect the average member of the public to read through leaked FISC orders and memoranda, or parse the exact wording used by reporters concerning material that provides itself a narrow window, bereft of context, into a complicated world?

The obstacles faced in causing the public to be well informed about a complex issue are the same here as in other areas such as health-care, taxes, international trade, climate change, and so on. I’m as fervent a believer in democracy as anyone, but faith doesn’t make those obstacles go away.

Snurt the Dog November 4, 2013 1:49 AM

Nadgers, I’ve been rumbled!
(And we’re not humping we’re just sniff-buddies.)

Aspie November 4, 2013 2:19 AM

In the UK, possibly the most surveilled country in the world, there are plans to install facial-recognition cameras at checkouts in supermarkets ostensibly to recognise who is shopping there and deliver targeted advertising. And I suppose if there’s a warrant out for a particular person … well, it could always notify the thin blue line at the same time.

The full article can be found here.

Try walking into a UK TESCO with a bandanna over your face and dark glasses (and probably lifts and lots of baggy clothing to disguise your height and build) and see how far you get before being thrown out.

Really, really don’t want this kind of thing going on.

Clive Robinson November 4, 2013 3:31 AM

@ Figureitout,

I think the second one would look good on a T-Shirt 🙂

@ Aspie,

Now we know why the Queen wears so many headscarves and sunglasses when out and about…

But maybe we should take a leaf out of Paul Hogan’s film and disguise ourselves to look like famous personalities, not, as in his case, to rob banks, but just to go shopping.

Oh, and in the UK news this morning, apparently some “suspected” Somali terrorist went into a mosque, put on a burka and has disappeared off the authorities’ radar… I expect the talking heads will bring up legislation in other EU countries that have banned the wearing of face veils in public places as an example of how to foil terrorist plots…

Winter November 4, 2013 3:58 AM

“In the UK, possibly the most surveilled country in the world, there are plans to install facial-recognition cameras at checkouts in supermarkets ostensibly to recognise who is shopping there and deliver targeted advertising.”

The question the UK public has to ask is “Did giving away our privacy make the UK the safest country on earth?” Or even “Did it make the UK any safer at all?”

If the answer is “No”, the next question should be “Then why have we given away our privacy (and tax money)?”

But we all know few people want to read about difficult questions. And half the readers of the popular press in the UK go directly to page three anyway.

(For the humor impaired, this was sarcasm and I think people outside of the UK are not brighter or better informed than those inside the UK)

Charlie Dobbie November 4, 2013 4:00 AM

The following article on ArsTechnica talks about the Adobe password breach:

They say that the 3DES data came from an old backup system, while the new system is protected by SHA-256 – and that they iterate this hash over 1,000 times.

IANACryptographer, but given that hashing necessarily loses data, isn’t hashing multiple times a very bad idea? How does it affect the chances of collisions?

Aspie November 4, 2013 4:17 AM

@Clive Robinson

I recall that a Hollywood production company specialising in realistic latex face-masks (of various face-types) was doing a brisk trade with members of the criminal fraternity predisposed to making unauthorised withdrawals from banks (validating their requests with firearms). Apparently these masks were of such high quality that one had to look very closely to see they were masks at all. Maybe, if lifted whilst wearing one, a person could convince a psychologist that they have “Phantom of the Opera” syndrome and cannot roam outside comfortably without a mask.


Sarcasm aside, the Brits were not consulted and the few that were alarmed weren’t able to raise the profile of this erosion of rights enough to start a movement for change.

As it was, a million people marched in the UK against the Iraq war and it made not one jot of difference – except that the more vociferous found their way onto watchlists. The 9/11 event has proved to be a super-coup for all security-related industries that deal with governments. Look at the police. They’re equipped like the military now and the only thing they’re using it for is to keep anyone who complains “on message”.

As for the “safer” argument, the security services can simply say that we are safer because 7/7 didn’t happen again, and if we ask for evidence of thwarted plots we get stonewalled with national-security secrecy arguments. We don’t know what plots didn’t take place as a result of surveillance and action; it might be none.

Now … where did I put my bobble-hat and that Groucho Marx nose, glasses and moustache combo …

jacob November 4, 2013 4:35 AM

@Charlie Dobbie

Hashing is not tasked with preserving data. It is used only as a “fingerprint” of a message (in this case, passwords), so if 2 hashes match, you can be all but certain that the messages are identical.

From a security standpoint, a server keeps hashes of passwords, not the original passwords “in the clear”. When a user presents his PW as a credential, the server hashes the offered PW and compares it to the stored hash of the original PW. If they match, it means that the offered PW is identical to the originally generated PW, and the user is let in.

A hacker who is trying to find a valid PW would hash guessed PWs, testing for a match to the hashes he pulled from the server. To make his life more difficult, you can store on the server not just the first hash of a PW, but a hash of a hash of a hash, repeated 1000 times. This will force the hacker to spend 1000× the effort on his hash-testing operation.
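The hash-of-a-hash scheme jacob describes, sketched (SHA-256 and the function names are my choices; a real deployment would also mix in a per-user salt):

```python
import hashlib

def iterate_hash(password, rounds=1000):
    """Hash the password, then hash the hash, 'rounds' times in total.
    The server stores only the final digest, so every password guess
    costs the attacker 'rounds' hash computations instead of one."""
    digest = password.encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def check_password(offered, stored, rounds=1000):
    """Server-side check: re-run the same chain on the offered PW and compare."""
    return iterate_hash(offered, rounds) == stored
```

This is the naive construction under discussion; the posts below explain why real key-derivation functions also fold a counter and the password back into every round.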

This will not change the chance of collision, which is negligible.

Winter November 4, 2013 5:26 AM

“IANACryptographer, but given that hashing necessarily loses data, isn’t hashing multiple times a very bad idea? How does it affect the chances of collisions?”

The probability of a hash collision is (birthday paradox):
P = 1 / e^[-k(k-1)/2N]
where k is the number of hashes compared and N the number of hash values. So, for a 128 bit hash:
N = 2^128

Incidentally, from the formula it follows that P ≈ 0.5 for k ~ SQRT(N)

For 1000 rounds, the probability of a hash collision, P_1000 is 1 minus the probability of no hash collision
1-P_1000 = (1-P)^1000 ≈ 1 – 1000P for small P
That is, P_1000 ≈ 1000P

But P decreases exponentially for k < 2^64 (if N=2^128)
So, I assume that it will only remove some 10 bits from the hash “strength”

Scott November 4, 2013 5:27 AM


It is likely that repeating hashes will increase collisions. With a k-bit hash, the probability that any two passwords will hash to the same value is 2^-k. The more you hash, the more likely that the hashes will equal each other at some point (but not a specific point); if this occurs then every subsequent hash will also be equal, increasing the probability of a collision. It’s also possible to get stuck in a loop, where a_1 = H(a_0), a_2 = H(a_1), a_3 = H(a_2), …, a_1 = H(a_n), further increasing the likelihood of a collision, as they don’t have to get into the loop during the same round of hashing for a collision to result.

Now, passwords are probably going to be many many orders of magnitude faster to brute force than collisions, given we are only talking about 1000 rounds, but it’s still very easy to protect against, which any good key derivation function does. You can include a counter with the output of the previous hash, guaranteeing all hash inputs are unique for any round of hashing for a given password, and you can include the original password, guaranteeing that no two passwords have the same input for any round. PBKDF2 does both of these, preventing both scenarios.
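Python’s standard library ships PBKDF2, so the construction Scott points to can be exercised directly. The salt and iteration count below are illustrative values only:

```python
import hashlib

# PBKDF2-HMAC-SHA256 from the standard library. The password keys every
# HMAC round; the salt enters only the first round of each block.
salt = b"per-user-random-salt"   # illustrative; use random bytes in practice
dk = hashlib.pbkdf2_hmac("sha256", b"correct horse", salt, 1000)
```

The derived key is deterministic for a given (password, salt, iterations) triple, which is exactly what lets the server recompute and compare it at login time.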

Mike the goat November 4, 2013 5:58 AM

Nick: thanks for your rapid response. The keyID is a throwback to a few things I was trying: in my first implementation the keyID was derived from a hash of the public key. Obviously collisions would be an issue that I have not yet considered. The second idea was to use the keyID (lengthening the field) to identify the PGP key that carries our blogsig key embedded in its metadata, as a way for the client to know where to go and fetch the public key. Not sure exactly where I am going with this yet.

Re ED25519, I’d be happy enough to use it. I just chose ECDSA as there was reference code available and it was a “known quantity”.

Re the URL idea: thanks. I thought it might at least remove the signature blob from public view and stop people going “huh? What’s that line of crap at the end of your blog posts?” I may have misunderstood you, but are you stating that the server would, in effect, act as a proxy, searching for signatures and verifying them if they are present?

Agreed re your comments on SSL. While we have different aims, I have no doubt the solutions could be engineered to be similar, or indeed solved in a single implementation. I think the key here is lots of discussion, simple proofs of concept etc. before going to the spec/RFC stage. You can tell that some very popular internet protocols were thought up on paper without ever being implemented as a proof of concept; IPSEC comes to mind.

The benefit of writing some hacky proof-of-concept code (even if it’s just a bit of Perl, or even shell, that, say, takes your message and signs it, plus another script that you can pump an HTML page into to find and verify any embedded sigs) is that you encounter some of the problems an actual implementation would face, without going to all of the trouble at such an early stage. Such code is never intended to actually be used.

Aspie November 4, 2013 6:01 AM

Sort of security related; for any of you who, as young people, also had a picture of the beautiful SR-71 on your wall … now there’s an SR-72.

At Mach 6, this proposed (unmanned) drone would probably need to slow down to launch standard missiles lest they disintegrate. Theoretically able to circle the earth in 6 hours (assuming it has the fuel), it could take an awful lot of pretty pictures too.

This drone thing has really fired-up the DoD hasn’t it?

Bryan November 4, 2013 8:26 AM

@Clive Robinson

If this article is true then your home / hotel “white goods” may well be attacking your PC via WiFi…

The Register also has an article on it.

It is interesting that they mention small 220V power supplies. That has been possible on a single power-converter chip for ages. All it takes is adding a few resistors, capacitors, and an inductor for a power supply that could power a device. It might take upwards of a cubic cm to house it all. If one wanted to, it could likely be integrated right onto the chip to be powered, with the inductor built into the PCB as a spiral trace.

As for the rest, it can easily be put on one chip and stuck in a small package. Which, if somebody has done it, means there are likely tens of thousands of them at a minimum, as those numbers would be needed to justify production. BTW, in those numbers the price per unit would easily drop under $2 per chip, and $4 for the whole assembled device. Given some of the system-on-a-chips that I vaguely remember, it should be doable using standard off-the-shelf chips, so it may even be possible to make them cheaper yet. None of those pesky non-recurring engineering costs. Just stick the chip in a cheap package, please. All I need are JTAG programming and WiFi ports.

Charlie Dobbie November 4, 2013 1:50 PM


I’m largely following – but doesn’t the probability of collision increase with the number of hashes compared? I suspect that formula should be: P = 1 – e^[-k(k-1)/2N]. That gets me P=0.393 for k=sqrt(N).
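Plugging the corrected formula into code confirms the figure (a quick numeric check, not part of the original exchange):

```python
import math

def birthday_collision_prob(k, n_values):
    """P = 1 - e^(-k(k-1)/2N): the chance that k random hash values
    drawn from N possibilities contain at least one repeat."""
    return 1 - math.exp(-k * (k - 1) / (2 * n_values))

# For k = sqrt(N) the exponent is about -1/2, so P is about 1 - e^(-1/2).
p = birthday_collision_prob(2**64, 2**128)
```

With N = 2^128 and k = 2^64 this evaluates to about 0.393, matching the value quoted above.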

I wasn’t actually considering hash collisions inside the same chain of 1000, but that’s a very interesting point, because it would lead to a looping sequence of hashes, which if detectable would reduce the attacker’s work somewhat. Thank you!

What I was considering was collisions in the resulting thousand-fold hash from different input plaintexts. I was trying to understand how the repeated hashes affect the probability of collisions further down the chain, but from reading your response and playing with the numbers: for hashes 2 through 1000 the sizes of the plaintext domain and the hash domain are of course identical, so you don’t have the cascading reduction in message-space that I was concerned about.

Very interesting!

Scott November 4, 2013 2:35 PM

The birthday problem is appropriate if you want to see getting stuck in a loop (i.e. the probability that in 1000 iterations, at least two of the hashes for one password will be the same value) but for whether or not the hashes for two given passwords will equal each other at any given point in the chain, not taking the loops into consideration. That’s a much simpler formula (to be consistent with my previous post, n is the number of hash iterations and k is the hash length in bits):

P = 1 - (1 - 2^-k)^n

It’s an approximation, since hash values are not completely independent. As stated before, the possibility of a loop may increase the probability of a collision, and also the possibility that H(b_i) = H(a_{i+c}), where c is a non-zero integer, reduces the probability of a collision (e.g. if H(b_1) is equal to H(a_2), they will never equal each other at any point unless there is an i such that H(a_i) = H(a_{i+1}) where i >= 2).

Scott November 4, 2013 2:54 PM

Reading my previous post, I think I just invented “Security by incomprehensibility.”

The birthday problem is appropriate if you want to see getting stuck in a loop (i.e. the probability that in 1000 iterations, at least two of the hashes for one password will be the same value) but not for whether or not the hashes for two given passwords will equal each other at any given point in the chain, not taking the loops into consideration. That’s a much simpler formula (to be consistent with my previous post, n is the number of hash iterations and k is the hash length in bits):

P = 1 - (1 - 2^-k)^n

It’s an approximation, since hash values are not completely independent. As stated before, the possibility of a loop may increase the probability of a collision, and also the possibility that H(b_i) = H(a_{i+c}), where c is a non-zero integer, reduces the probability of a collision (e.g. if H(b_1) is equal to H(a_2), they will never equal each other at the same iteration unless there is an i such that H(a_i) = H(a_{i+1}) where i >= 2).

Scott November 4, 2013 9:41 PM

I figure the actual probability of collisions for a nested k-bit hash function, with n iterations, and a database of m passwords is approximately the following:

P = 1 – e^[-m(m-1)/(2/(1-(1-2^-k)^n))]

For SHA-256, with 1000 iterations, and 153 million salted passwords, the probability of a collision is the following:


If you use a proper key derivation function, the probability of a collision is the following:


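For what it’s worth, the formula above can be evaluated numerically under a small-p approximation (my own sketch; the function name and the approximation step are mine, not Scott’s):

```python
import math

def nested_hash_collision_prob(m, k_bits, n_rounds):
    """Birthday bound with Scott's per-pair probability
    p = 1 - (1 - 2^-k)^n, approximated here as n * 2^-k (valid while
    n << 2^k; the exact form underflows doubles at k = 256)."""
    p_pair = n_rounds * 2.0 ** -k_bits
    # -expm1(-x) computes 1 - e^-x accurately for tiny x.
    return -math.expm1(-m * (m - 1) / 2 * p_pair)

# 153 million salted passwords, SHA-256, 1000 naive iterations:
prob = nested_hash_collision_prob(153_000_000, 256, 1000)
```

Under this approximation the naive-iteration case comes out around 10^-58, i.e. vanishingly small either way, which is consistent with the earlier estimate that iteration only shaves a few bits off the hash "strength".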
Mike the goat November 5, 2013 1:16 AM

Scott: not sure why they would roll their own rather than using a standards-based known quantity for stretching, like PBKDF2. It seems that when people roll their own, thinking they have some deliciously clever scheme, they invariably screw something up (me included).

Scott November 5, 2013 2:24 AM

@Mike the goat

I’m not suggesting one way or the other; I’m just commenting on whether nesting hashes reduces collision resistance, and showing that if you don’t use a good function like PBKDF2, it does (but probably not enough to matter).

That said, previously they were using 3DES in ECB mode, so who knows what they are doing now?

Scott November 5, 2013 4:17 AM

That said, I have to admit I’ve made some overly clever schemes myself; I had one where I repeatedly hashed password + salt/previous hash, concatenated all outputs and hashed the original salt, password, and final concatenated string of hashes (well, I really piped the output to the hash function; there’s no need to store the whole string of hashes in memory). It’s overengineered and I wasn’t exactly sure why I was doing it; it just sounded like a good idea. It’s probably not insecure, but there’s no method behind the madness.

PBKDF2 has always struck me as having a lot of that, though. Using a keyed hash function like HMAC for no apparent reason, other than it’s there, XORing every hash together with no justification. It’s like they just threw a bunch of things together without really thinking about it. It seems like it would be much simpler to just do H(Iteration | Salt/PreviousHash | Password) and then just take the final output. There’s nothing wrong with PBKDF2, as far as we know, but I just like things to be simple and well justified.
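The simpler construction sketched above, as hypothetical code (H is SHA-256 here, "|" is byte concatenation, and the names are mine):

```python
import hashlib

def simple_kdf(password, salt, rounds=1000):
    """H(Iteration | Salt-or-PreviousHash | Password), final output only.
    Every round's input is unique: the counter differs across rounds and
    the password differs across users, so hash inputs never repeat."""
    state = salt
    for i in range(rounds):
        state = hashlib.sha256(
            i.to_bytes(4, "big") + state + password).digest()
    return state.hex()
```

Because the counter and password are folded into every round, this satisfies the "inputs never repeat except for identical (password, salt, iteration)" rule discussed later in the thread, without HMAC or the XOR step.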

Mike the goat November 5, 2013 6:29 AM

Scott: I had a devious scheme for remembering my online passwords when travelling: I set them all using the following:

echo "secret sitename" | md5sum | cut -c1-12

… so I needed to only remember the secret. 🙂

Mike the goat November 5, 2013 6:32 AM

Point being, we often come up with “great” concepts that turn out to be pretty crap. Ignoring the discovery of the secret and the lack of sufficient entropy in a 12-char hex password (I guess I could pipe the md5 into base64 first… ahh, a revolution), the system wasn’t /that/ bad, but it is symbolic of the kinds of hack we all dream up in an instant and don’t think much more about.

I guess that’s why peer review is such a powerful tool.

Bryan November 5, 2013 11:37 AM

echo "secret sitename" | md5sum | cut -c1-12

Back when my 41CV still worked, I entered the keyword in, and it computed a set of hashes with a secret the 41CV knew. The output was 8+ printable characters including symbols and numbers. I had the code output characters until it had 8 that were in [0-9A-Za-z] range because some systems didn’t allow symbols.

Later I used spreadsheets on Palm and Windows CE devices. With them I had to enter characters into individual cells. You could see the hash output evolve as characters were entered. I consider my tablets too insecure to use that method.

Mike the goat November 5, 2013 1:41 PM

Nick P: you know I was going to great lengths to condense things to 80 printable characters so it could be used in band as a single footer line of a blog post? How about this little gem I came across, called unibinary. Yeah, it’s like base64 but uses a Unicode alphabet to encode more data into less text. Extra points as people think you’ve just got a foreign-language sig. Haha! (No, I am not seriously considering this.)

Bryan: good to see I am not the only one to have come up with the idea in the past! Nonetheless I think that the security provided was probably much greater than reusing an English(ish) password across many different sites.

Scott November 5, 2013 1:50 PM

I have a handful of easy to remember throwaway passwords for things that I don’t care about, like forums, some much stronger, much harder to remember passwords for financial institutions, as well as a couple fairly hard to remember passwords I reuse for things like and other services that have my personal information.

It’s far from perfect, but it works until I can have flash storage and a microprocessor implanted and attached to my brain.

Nick P November 5, 2013 4:12 PM

@ Mike the Goat

That’s a cool trick, but I’d rather not use Unicode. The reason is that Unicode interpretation is somewhat complex and has resulted in bugs before. ASCII and base64 are much more straightforward to code and understand. And I believe they’re already implemented on most platforms, including JavaScript.

Scott November 5, 2013 10:19 PM

@Nick P

I played around with the idea of allowing UTF-8 passwords in the past, and in my research I realized that it could be very problematic, especially since, through the use of combining characters, there are often multiple ways to represent one character. However, many Unicode libraries implement conversion to the Unicode-specified normalized forms that guarantee unique encoding; this can be either fully composed or fully decomposed. In fully composed form, a single codepoint that represents a character with an accent is favored over a codepoint representing a character followed by a codepoint representing an accent combining character; fully decomposed is, of course, the opposite.

Hopefully if the site is UTF-8, the characters you send shouldn’t be changed at all by the software. Unfortunately, you have no way to guarantee it.

Nick P November 5, 2013 10:35 PM

@ Scott

I appreciate the info. To be clear, though, Mike and I are brainstorming on a digital signature scheme for online content embedded in the content and taking up almost no space. An HTML document or blog comment here is an example. So, he was mentioning it more as an encoding for a signature to save space (or reduce clutter).

I’m sure your comment will come back to me, though, if I try to build it. I’ll be worrying “how is this being processed behind the scenes and will that mess up my signature?”

Scott November 5, 2013 11:13 PM

@Nick P

Yeah, I was aware; I just went off on a tangent (since I had no experience encoding binary in Unicode). If you want to keep it simple, make sure each byte maps to a single Unicode codepoint, normalize the signature on input, and keep a two-way mapping between bytes and codepoints.

If instead you want bonus points: design an encoding scheme that encodes to fully decomposed form, in which combining characters are used to further reduce the character count. E.g., if the previous character is “e”, you can look at the next few bits and pick a valid combining character based on them. You can then normalize to fully composed form to reduce storage.
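The simple variant could be sketched like this, using a hypothetical 256-character alphabet of NFC-stable printable codepoints (my construction, not a standard):

```python
import unicodedata

# Build a 256-character alphabet: printable codepoints that NFC
# normalization leaves untouched, scanning up from U+0021. The scan
# stops well before the combining marks at U+0300, so normalization
# can never merge adjacent alphabet characters.
ALPHABET = []
cp = 0x21
while len(ALPHABET) < 256:
    ch = chr(cp)
    if ch.isprintable() and unicodedata.normalize("NFC", ch) == ch:
        ALPHABET.append(ch)
    cp += 1
DEC = {c: i for i, c in enumerate(ALPHABET)}

def encode(data: bytes) -> str:
    return "".join(ALPHABET[b] for b in data)

def decode(text: str) -> bytes:
    # Normalize on input, as suggested above, then map back to bytes
    return bytes(DEC[c] for c in unicodedata.normalize("NFC", text))
```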

Scott November 6, 2013 12:51 AM

I’ve been thinking about it and I realized PBKDF2 is not ideal. The problem is the salt is only in the first hash, so there is an increased risk of collision if the same password is used (and people choosing the same password is a problem). There are several opportunities for collision:

A collision on the output is obvious; it’s a simple birthday problem. But if you use the same password with different salts, and the first block happens to collide, then you will also have a collision on every subsequent block, including the last, which doubles the probability. Furthermore, if the previous blocks XORed into the intermediate are ever equal to each other, then the next block has an additional chance of collision, so the probability grows with each iteration. These events are still extremely unlikely, but I think any key derivation function should abide by the following rule to minimize the probability of collision:

The input to a hash function should not ever be repeated except for when the password, salt, and iteration are the same.

Another issue is that if you are using PBKDF2 for key storage and the requested output length exceeds the output length of the hash, then an attacker can check a password with half (or less) of the work it costs you to compute the full output, making it also less than ideal (not that there is really any gain from a key longer than the hash output for password storage).
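To make the structure concrete, here is a sketch of PBKDF2’s first output block using HMAC-SHA256 (per RFC 2898); note that the salt enters only the very first HMAC call, exactly the property discussed above:

```python
import hashlib
import hmac

def pbkdf2_block(password: bytes, salt: bytes, iterations: int) -> bytes:
    """First output block of PBKDF2-HMAC-SHA256."""
    # U_1 = HMAC(P, salt || block_index): the ONLY place the salt appears
    u = hmac.new(password, salt + (1).to_bytes(4, "big"), hashlib.sha256).digest()
    t = u
    for _ in range(iterations - 1):
        # U_j = HMAC(P, U_{j-1}): the salt and index are gone from here on
        u = hmac.new(password, u, hashlib.sha256).digest()
        t = bytes(a ^ b for a, b in zip(t, u))
    return t
```

This matches the stdlib implementation for a single 32-byte block, so the sketch can be checked against `hashlib.pbkdf2_hmac` directly.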

I propose the following key stretching function where H is the hash function, n is equal to the total number of iterations, and i is the current iteration:

k_0 = H(0 | Length of Key | Length of Salt | Salt | Length of Pass | Pass)

k_i = H(i | Length of Key | k_(i-1) | Length of Salt | Salt | Length of Pass | Pass)

K = LEFT(H(n+1 | Length of Key | k_n | Length of Salt | Salt | Length of Pass | Pass) | H(n+2 | Length of Key | k_n | Length of Salt | Salt | Length of Pass | Pass) | … | H(n+CEIL(Length of Key / Length of Hash) | Length of Key | k_n | Length of Salt | Salt | Length of Pass | Pass), Length of Key)
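A Python sketch of the scheme above; the fixed field widths for the counter and lengths are my choices (the spec leaves them open), and SHA-256 stands in for H:

```python
import hashlib
import math

def polliwog(password: bytes, salt: bytes, iterations: int, key_len: int) -> bytes:
    # Every hash input carries the counter, key length, chaining value,
    # salt length, salt, password length, and password, so no hash input
    # repeats unless password, salt, and iteration all match.
    def h(counter: int, chain: bytes) -> bytes:
        data = (counter.to_bytes(8, "big")
                + key_len.to_bytes(4, "big")
                + chain
                + len(salt).to_bytes(4, "big") + salt
                + len(password).to_bytes(4, "big") + password)
        return hashlib.sha256(data).digest()

    k = h(0, b"")                        # k_0 has no chaining value yet
    for i in range(1, iterations + 1):   # k_i chains in k_(i-1)
        k = h(i, k)
    # Final expansion: counters n+1 .. n+CEIL(key_len / hash_len)
    blocks = math.ceil(key_len / hashlib.sha256().digest_size)
    out = b"".join(h(iterations + 1 + j, k) for j in range(blocks))
    return out[:key_len]
```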

I think I’ll write up a spec and call it “Polliwog.” Not that it will ever get attention; I’m a bit of a nobody.

Figureitout November 6, 2013 12:58 AM

    Not that it will ever get attention; I’m a bit of a nobody.

Hey, you know what? F*ck that mentality: you are somebody. Your posts remind me of Applied Cryptography, lol. I realized this knowledge isn’t helpful if I can’t implement it and understand it myself, down to hardware that I trust. If you can contribute to implementing an open-source cryptographic scheme on SECURE hardware, then you can be a part of a massive project that I think is going to happen.

Mike the goat November 6, 2013 5:12 AM

Nick P: I wasn’t being serious, just thought it was pretty cool. I think using printable ASCII only is essential. Base64 is a bit wasteful, though; how about Base85 (aka Ascii85)? Thinking about it, there are 128 low ASCII chars, of which 33 are control chars, which leaves us with 95. Take away the space and we’ve got 94. So potentially we could have a base94 system that was more efficient, right?
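The efficiency difference is easy to quantify: each character of a base-b encoding carries log2(b) bits. A quick check:

```python
import math

# Bits carried per printable character by each encoding radix
for base in (16, 64, 85, 94):
    print(f"base{base}: {math.log2(base):.2f} bits/char")
# base16: 4.00, base64: 6.00, base85: 6.41, base94: 6.55
```

So base94 beats base64 by roughly half a bit per character, a saving of about 8%.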

Clive Robinson November 6, 2013 6:51 AM

@ Mike the Goat,

    Take away the space and we have got 94. So potentially we could have a base94 system that was more efficient, right?

Err, possibly not… you need to think about errors and how they propagate. Base16, being four bits (half a byte) per character, means a single error affects only one visible char. Base64, at six bits, is harder: after unpacking, one bad char affects several bytes. As for base94, it doesn’t pack along easy binary boundaries but by mathematics that is complex to human eyes, and a single error affects even more chars…

As always it’s a game of Snakes-n-Ladders. What you gain one way you lose another 🙁

Scott November 6, 2013 7:18 AM

@Clive Robinson

It’s actually about 5 lines of code with an arbitrary-precision number library. Treat the signature as one large number; conversion to any base is then just repeated division and modulus.


string b;                          // digits accumulate here
while (a > 0) {                    // a holds the signature as a big number
    b = ALPHABET[a % radix] + b;   // remainder indexes a digit alphabet
    a = a / radix;
}
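For comparison, a runnable version of the same division-and-modulus idea in Python, which has arbitrary-precision integers built in (the alphabet and the length-preserving marker byte are my choices):

```python
# Base94 alphabet: printable ASCII 0x21-0x7E, space excluded
ALPHABET = "".join(chr(c) for c in range(0x21, 0x7F))

def b94encode(data: bytes) -> str:
    # A leading 0x01 byte preserves leading zeros across the round-trip
    n = int.from_bytes(b"\x01" + data, "big")
    out = ""
    while n > 0:
        n, r = divmod(n, 94)
        out = ALPHABET[r] + out
    return out

def b94decode(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * 94 + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:]  # strip the 0x01 marker byte
```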

Figureitout November 6, 2013 11:08 PM

//Shout out to Dirk Praet
Hackaday (one of my favorite sites) ran a hackerspace tour of Europe, and 4 of the 13 places were in Belgium; I’ve visited most of the featured towns in the Netherlands (those damn Dutchies!) and Germany. There are some legit hackers and cheeky bastards there for sure. Of course everyone knows Vincent Rijmen and Joan Daemen, the creators of AES, are Belgian.

Hopefully they stopped by your place. 🙂

Figureitout November 7, 2013 9:21 AM

Mike the goat
–Remember when I told you to come to my college and an underground computer lab to pick up an account? Well, the other day some freshman engineer dumbass left his account logged in… I could’ve been a real dick, but I just logged it off and, no, did not save his work.

Dirk Praet November 7, 2013 11:42 AM

@ Figureitout

//Shout out to Dirk Praet

Finish your education and move back here. Any IT degree will get you a work permit in no time, and once you register for our free courses in Dutch and wrap your head around the language, you’ll be in business. And we’ll throw in very affordable health care. The main downside of living in Belgium is the horrible tax rates and the weather. But you already knew that.

The Squirrel November 10, 2013 2:38 PM

Has anyone considered the ham radio modes of data transmission? You can cram plenty of data through 3 kHz of unstable audio bandwidth. Just grab fldigi and see how much data you can transmit with your speakers, even at a very poor SNR…

As far as the feasibility of implanting an application for this in a BIOS, I have no idea. But if the code has enough processing power to encode and decode these audio signals, and can do PWM and CCP with an I/O pin connected to a transducer or speaker somewhere, it’s very possible! The same goes for generating and receiving an RF carrier.
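As a rough illustration of how little code tone generation takes, here is a minimal phase-continuous 2-FSK sample generator; the sample rate, baud rate, and tone pair are my assumptions (a Bell-202-style pairing), not anything taken from fldigi:

```python
import math

RATE = 8000                   # sample rate in Hz (assumption)
BAUD = 300                    # symbol rate; fits easily in 3 kHz of audio
MARK, SPACE = 1200.0, 2200.0  # tone pair for 1 and 0 bits (assumption)

def fsk_samples(bits):
    """Phase-continuous 2-FSK: one audio tone per bit, 16-bit samples."""
    samples, phase = [], 0.0
    per_bit = RATE // BAUD
    for bit in bits:
        freq = MARK if bit else SPACE
        for _ in range(per_bit):
            # Advance the phase rather than restarting each tone,
            # so there are no clicks at symbol boundaries
            phase += 2 * math.pi * freq / RATE
            samples.append(int(32767 * 0.5 * math.sin(phase)))
    return samples
```

The resulting samples could be written to a WAV file or fed straight to a sound device; decoding is the harder half of the problem.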
