Dennis October 12, 2012 5:02 PM

This link shows photos of people being scared out of their wits at a haunted house. The text next to the gallery says: “Much like snowflakes, no two people seem to react to a good scare the same way…”

That got me thinking. What about a new security system? Instead of a fingerprint scan or iris scan, your laptop pops up a terrifying photo and takes your picture as you react. How many people would be able to imitate someone else’s instinctive fear reaction?

Of course I am not serious, but what do you think?

Vlad October 12, 2012 5:11 PM

One flaw will be that one will not be able to repeat the same reaction every time. The problem is that the act of authenticating actually changes the response to the authentication challenge – in other words, it will work once, and after that you’ll not be scared the same way, or at all after a while.

martino October 12, 2012 8:02 PM

Ok… it just dawned on me… what’s with the squid fascination? I mean, I get that they’re cool and smart and all, but why not octopus then? Why squid? What’s with the whole squid thing?!? I’m a long-time reader and it never occurred to me till now…

Clive Robinson October 13, 2012 4:03 AM

OFF Topic:

It would appear the “US China APT” mob are up to their usual political shenanigans.

Now let me be clear: there is a lot of difference between taking sensible security measures and using faux security scares as a smoke screen for politically inspired protectionism.

The US made a political choice many years ago to let go of its US-based secure fabrication plants etc. and go down the COTS route, to save a few dollars and look good come election time.

In the process they handed over much in the way of IP, skills, job security and national security. They let the genie out of the bottle and in the process also opened a whole Pandora’s box of problems. There is no excuse: they were told very clearly that this would be the result.

Now the “Chinese War Hawks” and others are jumping on a political bandwagon. It’s not just the Chinese: any nation that produces base component parts is a threat to US national security, and at the end of the day, “when the chips are down” on the work bench, no matter where you got them from, if there was no security custody chain from design to the bench you cannot implicitly trust the chips.

Thus you have two choices: the first is the hugely expensive option of setting up US-based secure fabrication again; the second is to find some other technical way to mitigate the risk.

A politically inspired idea such as this one will not and cannot provide the security it claims to be for, because at the end of the day supply chain security is horribly, horribly complex and inordinately difficult; thus it has more gaps and holes for individuals with ill intent to worm their way into than most could even attempt to imagine.

I suspect that, after rational, non-politically-biased thought, the proposed solution will involve a slow migration back to secure design and manufacture of chips and other key components, and the use of these in appropriate technical mitigation strategies for the bulk of components.

But it’s a big “IF”, simply because there are too many rice bowls / pork barrels to be filled, and there will be no change other than more political rhetoric.

In theory the US people will have a chance to make a choice on such matters fairly soon. But the reality is that the Dog and Pony show of the Monkey-in-a-Suit political process of “representative democracy” that most Western and other nations have will be completely subverted post-election by the holders of the rice bowls and pork barrels, who bought the “people’s representatives” a long, long time ago with the previous fillings of their bowls and barrels.

Vatos October 13, 2012 7:33 AM

With my latest UK-equivalent of “TV Guide”, there was an ad for a wallet that thwarts RFID attacks.

Is this sort of thing worth the money? How much damage can an attacker do if you are not protected?

Civil Libertarian October 13, 2012 12:53 PM

New Election System Promises to Help Catch Voting-Machine Problems

When voting system activists in the U.S. managed to get many paperless electronic voting machines replaced a few years ago with optical-scan machines that use paper ballots, some believed elections would become more transparent and verifiable.

But a spate of problems with optical-scan machines used in elections across the country has shown that the systems are just as much at risk of dropping ballots and votes as touchscreen voting machines, whether due to intentional manipulation or unintentional human error.

A new election system promises to resolve that issue by giving election officials the ability to independently and swiftly audit the performance of their optical-scan machines.

[. . .]

So far, Moore says they do not intend to make their software open source so that it can be examined by others, but he did not rule this out for the future. The company does, however, make its source code available to election districts that have requested it.

Any preliminary opinions on whether this might be promising?

Figureitout October 13, 2012 2:51 PM

@Petréa Mitchell–that was great.

$5.8 mil “Text Against Terror” gives no intelligence or leads.

Yet over on the Homeland Security Newswire, they are calling the program a success and want to expand.

Leaves me wondering what some of the “tips” were like…

Clive Robinson October 14, 2012 5:12 AM

@ Vatos,

How much damage can an attacker do if you are not protected?

It’s not a simple question to answer, and currently we are only at the beginning of what people are thinking of doing with “Near Field Communications”, which covers RFID and other “contactless technology”.

The first question to ask is what the technology actually does and how, as this puts a bound on what is practical to achieve using it. It is important to remember that the technology is agnostic; it is just a tool like any other, such as a knife, and thus can be used for good or bad, and can also cause accidents.

Historically the technology started out to solve a simple problem: how to identify an object from something inside it as opposed to outside it. The reason for this is that things on the outside are generally the first to get damaged, obliterated, removed or changed. Thus by putting it inside the object they made the identity much more secure.

Also the method chosen helps to prevent mis-identification of one object as another due to human transcription errors.

The original RFIDs were made by companies like Dallas Semiconductor and consisted of a tiny silicon chip, a capacitor and a coil wrapped around it, all sealed in a small package, some small enough and safe enough to go down a hypodermic needle and be embedded in livestock in areas where they were not likely to enter the food chain. Each device supposedly had a unique serial number and would respond when energised by repeating the serial number over and over whilst it remained energised. Unfortunately the number of bits assigned to the serial number was too few for serial numbers to be really unique.

The purpose of the coil was to act as the secondary of a tuned transformer, by which energy would be received from the primary coil in the scanning device; it also provided the back path for the data that was the serial number. As such these devices generally ran at very low frequencies and low data rates, had very short ranges of just a few inches, and had no mechanism to deal with more than one device in the scanner’s “field”.

Various things were done to solve these issues, including working with scanning frequencies up to the bottom of the UHF band, use of a varying time delay before sending data and then repeating it (which solved the multiple-devices-in-the-scanner’s-field issue), and also modification of the way data was sent back to the scanner such that more data could be sent from the device.
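That varying-time-delay trick is essentially slotted-ALOHA anti-collision: each tag picks a random slot, and a slot containing exactly one tag yields a successful read. A toy simulation (the function names, slot count and tag labels are all invented for illustration):

```python
import random

def aloha_round(tags, slots):
    """One framed-ALOHA round: each tag picks a random slot;
    a slot with exactly one tag in it is a successful read."""
    chosen = {}
    for tag in tags:
        chosen.setdefault(random.randrange(slots), []).append(tag)
    # Slots with two or more tags collide and read nothing.
    return [ts[0] for ts in chosen.values() if len(ts) == 1]

def inventory(tags, slots=8):
    """Repeat rounds until every tag in the field has been read;
    returns how many rounds it took."""
    remaining, rounds = set(tags), 0
    while remaining:
        for tag in aloha_round(sorted(remaining), slots):
            remaining.discard(tag)
        rounds += 1
    return rounds

rounds_needed = inventory(["tag%d" % i for i in range(5)])
```

With a single tag the reader always finishes in one round; with several tags in the field, collisions force extra rounds, which is exactly the behaviour the random delay was added to manage.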

However, other uses for “contactless technology” became envisioned, including programmability, and as such the chips have now become comparable in function to the contact-based smart card chips or SIMs you find in your mobile phone or credit card.
As such the use has changed from passive unidirectional transmission of a serial-number tag to bi-directional data communications, with very large amounts of data that can be cryptographically protected and used for carrying out financial transactions for such things as public transport fares.

But the development has not stopped there. Others have realised that “smart devices” could actually act as many RFID devices, and thus mobile phones are now getting the backend “near field” technology built in and can look like one of many RFID devices, including all your credit and other payment cards, the identity systems used to provide security in buildings etc., as well as some proposed national identity cards.

So in effect all of your financial and identity data, including medical records etc., will be put on these near field devices, which will stand as a witness for all your societal requirements. And when put on a smart phone, the phone becomes, for practical purposes, you…

The reality is of course that any of these RFIDs can be cloned, and thus they are totally reliant on other security measures such as PKI certs, shared secrets etc., all of which are known to be vulnerable in many ways as they are dependent on other security technology; likewise smartphones and other smart devices are known to be vulnerable because of their reliance on other security technology.

In all cases this “other security technology” was designed with totally different security goals in mind, and thus can be, and is, inappropriate for the new uses it is put to with the likes of RFIDs.

My solution to the problem is not to have RFID / contactless / near field devices on my person. This already causes me problems, as it affects my use of public transport. Currently in London you pay about twice as much for a bus ride if you pay by cash as you do if you use the insecure Oyster card; the alternative of paper-based annual travel cards is being phased out fairly rapidly, and cash payment is being made extremely difficult, with ticket offices being run on reduced hours or closed entirely. Some banks are including near field technology in existing contact-based credit cards; from what has been said, it appears you will have to move banks if you want to opt out, and as it’s an industry drive, just as Chip-and-PIN was, even that won’t remain possible for long.

Now when you add the mismatched security aims of the base systems to those of the financial systems overlaid on top of them, you are going to have more security holes than a second-hand pair of string underpants, and a consequent loss of support from those who sold you the systems (i.e. they will, as with Chip-and-PIN, externalise the risk back onto the “customer”). And with potentially large amounts of money to be made, you can guarantee that organised crime is going to get involved, just as we currently see with online banking.

So as all these near field contactless RFID systems can be made to work at considerably more than arm’s length, you have to ask how long before you get “walk-by contactless mugging”?

Such crime will become a numbers game, just as it is with ATM skimming of people’s current bank cards. If you are in a high-risk area you need to take more precautions than in a low-risk area to have the same level of risk. But the risk is always there as long as society pushes you to use the technology and opting out is not possible. The fewer precautions people take, or the easier it is to use the attacking technology, the more widespread the crime will be. Thus sensible people will use whatever precautions they can, and if they work, RF-shielding wallets will become a sensible precaution.

However they have downsides both personal and societal.

On the personal side you have the convenience, or more correctly the inconvenience, factor to consider. In London we have the Oyster contactless card for buying bus / train / underground / overground / tram and other public transport services. Standing on a bus near the card reader you can observe people’s habits; it’s got to the point where women press quite sizable handbags up against the reader rather than get the card out. In many cases the time it takes them is longer than it would take them to pull the card out of a pocket and put it back again. You also see people holding their purses and wallets up to the reader and it failing, because they either have two Oyster cards or they also have one or more of the newer bank cards with near field contactless technology built in. Basically, two or more near field cards confuse the readers, which are not that reliable anyway, which is why quite a few people have two Oyster cards…

Think about how the convenience / inconvenience of an RFID-proof wallet would affect you.

However, you may not get the choice, because of society, aka the Gubbymint and its money-grabbing ways of paying the voter bribes that get them elected to their personal gravy trains.

The very profligate previous UK government (many of them caught in illegal activities involving money) wanted compulsory ID cards with near field comms in them, and heavy fines for not carrying them on you at all times.

It is an idea that is not going to go away, as the tax take from big businesses carries on dropping in various ways (google HMRC and “Dave Hartnett” and the outrageous sums he allowed various big businesses such as Vodafone, Tesco and a couple of hundred other major companies to avoid over a good dinner and a handshake). So we are moving from a tax-on-business-based government revenue system to a “personal tax and fine” revenue system.

Some years ago I predicted that in the UK what we would have is a “Big Brother”-style system where there would be random mandatory checking of ID and on-the-spot fines for not carrying ID… Well, we’ve started well along that path by “conditioning the plebs”.

In East London the Border and Immigration people are already running random (usually racially profiled) spot checks at public transport hubs like Stratford to look for illegal immigrants. Even when members of the racially profiled group do carry their ID, they are often detained for lengthy periods of time whilst they are “cross checked”, which causes them problems with their employers for being late to work etc. We also have, due to the “war on terror”, “Get tough on knife crime” and a whole host of other initiatives, various LEAs setting up random walk-through metal detectors and stop-and-search initiatives.

Think how easy it would be to add an extra detector in the “walk through” for your national ID card; if it’s not detected you get pulled to one side and seriously inconvenienced to show it, or be fined a substantial amount…

As such, RFID-proof wallets would slow such a cash-cow system down and raise costs. Initially you would be very severely hassled and humiliated (TSA-style) because “you obviously have something to hide”; then, if sufficient people did it, the wallets would become a prohibited item subject to yet another fine etc… Oh, and to ensure that it works the way it should, each person on the walk-through team will be on a bonus scheme based on how many people they catch, not just for ID card fines but all sorts of other finable offences as well…

Such is the society we are having forced on us by the Politico’s and their Big Business pay masters.

Vatos October 14, 2012 8:12 AM

Thanks for the reply Clive.

Am I correct in saying that if someone uses a scanner near my contactless bank card, they then have enough information to perform a fraudulent transaction?

Clive Robinson October 14, 2012 10:25 AM

@ Vatos,

Am I correct in saying that if someone uses a scanner near my contactless bank card, they then have enough information to perform a fraudulent transaction?

It depends on a number of things. Firstly, if it is an ordinary “radio scanner” as used by “hams”, it is only capable of picking up the field generated by the contactless card if the card is actively being used within the range of the scanner’s antenna system. Further, the data should be multi-layered, requiring knowledge of not just the protocols in use but any encryption as well.

Thus even if an attacker has access to the same hardware that is used in a shop EPOS terminal, it may well be incapable of initiating any kind of transaction beyond the very basic levels.

That said, failings in the systems may well make PK certs and shared secrets vulnerable in all sorts of ways. You only need to see the issues arising from Chip-and-PIN to realise just how difficult a process it is to secure financial systems. Further, some of the near field systems used for paying for public transport have had their secret keys and shared secrets found by researchers, which means it is very likely organised crime is also more than aware of these secrets.

The problem is that neither you nor the system designers and implementors can prove these systems are in any way secure (you cannot prove a negative anyway). So a wise person, to limit their risk profile, must assume they are not secure in any way, and that it is simply a numbers game that they cannot win. The best they can hope for is to draw before their number comes up and they lose, unless they take further action such as ‘not playing the game’ or ‘moving the goal posts’.

As I noted earlier, “society” will not let you be a member and not play the game, so you have to look at how to “move the goal posts” for your own protection.

One way to do this is to only ever use a debit card attached to an account that you keep virtually empty. That is, you plan ahead and only transfer money into the account shortly before you make a purchase. However you need to use a process that is in turn not liable to attack, so don’t use online banking. This can be done because some banks allow you to make transfers from an ATM machine, but only to pre-arranged accounts. Providing the account with the debit card is with one bank and the ATM transfer account is with a different bank, the level of work required of an attacker has vastly increased. You can further back-stop this in other ways; the important thing is not to have any credit or overdraft facilities on any of the back-stopped accounts involved. Thus the worst loss, if you do get attacked, is the small amount of money in the accounts at the time and any bank charges run up by the attackers, which in most jurisdictions you should be able to get back from the banks.

As an individual or organisation you really should not have to do such things. However, because many of the banks have poor security and protect themselves by externalising the risk onto their customers, in many jurisdictions you have to think creatively to protect yourself.

Nick P October 14, 2012 10:26 AM

@ Vatos

Maybe, maybe not. It’s one of those things that is easy in theory, but rarely happens in practice. Krebs on Security does pretty good coverage of what crooks are actually doing. Most of it is hacking databases, ACH fraud, or getting credit cards via ATM skimmers. Get a non-RFID card if you can. Otherwise, just don’t worry about it until attackers start focusing on it.

Snopes’ link on Electronic RFID Skimming

Peter A. October 15, 2012 6:55 AM

Re: contactless credit/debit cards.

One way to go, for some time now, has been to cut the antenna loop embedded in the card – you only need to know where exactly it runs. A friend of mine exploited the fact that his wife works at a hospital with a digital X-ray machine (so no expendables like film, chemicals etc.) and had her X-ray the card to see where the antenna is, then drilled small holes in two places along the path, just to make sure. The card works OK in ATMs and such, but not with the contactless terminals.

Now I am considering exploiting my own acquaintances in the X-ray rooms…

Of course this is not applicable to RFID-only fare cards etc.

Clive Robinson October 15, 2012 7:25 AM

@ Vatos,

Just one thing: when it comes to “scanning”, many EPOS systems have dinky little “mobile units” that are carried around the premises. These are not “near field” devices and often use the ISM band; some actually use WiFi.

Which, as people have been able to “line of sight” eavesdrop WiFi at several km, raises the notion of your local outlet being a major source of your plastic’s details being put into free space.

The question then becomes: what level of security is used on these ISM non-near-field radio systems?

It could be that the answer (being the implementor’s choice) ranges from none up to the better (but still breakable) levels of the WiFi protocols.

Then again, consider that some EPOS terminals have had a cellular radio / mobile phone and other gizmos added in the “supply chain” before they were installed at various retail outlets in the UK.

So it’s not just the near field link you need to consider when using your plastic; even if it does not have near field comms, your details may be vulnerable further down the comms chain, or, as @Nick P notes, where the current action is: getting at EPOS DBs etc.

Which, with the usual “low hanging fruit” principle, means the attackers are going to move along the comms chain from the DB towards the cards one weak link at a time, as each successive link gives less opportunity and, in most cases, increased risk of getting caught.

However, with the ISM band stuff, the risk of being found actively exploiting it is very, very small, as you don’t have to be anywhere close to the plastic or the place it’s being used at…

At the end of the day the real issue is the level of security that the plastic has on its comms channel; it really should be PK cert / SSL equivalent or better, but in most systems I’ve had some knowledge of, it’s been little more than plain text or simple obfuscation.

And one thing history teaches us: with commercial organisations, security at best plays second fiddle to many other business drivers such as cost reduction, and as long as it remains a “free market solution”, security will be a race for the bottom. And to be honest the Payment Card Industry (PCI), even though it mandates standards, has a history of security blunders and faults, not least being the audit process it uses, where the incentives are not best aligned with customer detail security at anything more than a rudimentary level (i.e. data at rest, and a limited scope for data in use/transit).

And as has been pointed out in the past, this lack of security has much to do with a desire by card issuers and merchants to have the “convenience of offline transactions”, be it to make sales, to process sales less expensively, or, more bluntly, to do both as cheaply as possible.

Nick P October 15, 2012 3:22 PM

CIA adds Skyhook system instructions to its museum

They used a variant of this in the movie The Dark Knight. My brother and I were watching it. He told me it would be awesome if we had something like that. I told him, per Richard Marcinko, we used something like that over a decade ago. It had a few kinks compared to Batman’s version. However, it was also real, making it more useful.

RobertT October 15, 2012 5:13 PM

I’ve never heard of anyone extracting the needed bank account information (to make fraudulent transactions) from an RFID card using only the RF interface.

I won’t go so far as to say it is impossible, but it definitely belongs in the VERY hard bin.

So while it might not be possible to directly extract a card’s account information details, it is possible to use a mobile reader to mount a MitM attack. Basically an RF link / NFC bridge device (maybe BT or simple FM) is held near the real contactless card reader; the mobile attack terminal relays the information between the real card and the real reader. A classic relay MitM attack.
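A rough sketch of why the relay works: the bridge never needs the key or any understanding of the protocol; it just forwards challenge and response, adding only latency. Everything below (the XOR “protocol”, the key value, the function names) is invented purely for illustration:

```python
import time

def relay(reader_challenge, card_respond, link_delay_s=0.05):
    """Classic relay MitM: the attacker's bridge forwards the reader's
    challenge to the far-away genuine card and returns the card's
    genuine answer. Nothing is decrypted or forged -- the crypto
    succeeds because the real card computed it."""
    time.sleep(link_delay_s)               # attacker's radio hop out
    response = card_respond(reader_challenge)
    time.sleep(link_delay_s)               # attacker's radio hop back
    return response

# Toy stand-in protocol: the "card" XOR-signs the challenge with its key.
KEY = 0x5A

def genuine_card(challenge):
    return challenge ^ KEY

def reader_accepts(challenge, response):
    return response == (challenge ^ KEY)

challenge = 0x3C
assert reader_accepts(challenge, relay(challenge, genuine_card))
```

The only thing the relay cannot hide is the extra round-trip time, which is why distance-bounding protocols (tight timing limits on the response) are the usual proposed countermeasure.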

Typically an attacker would still need the PIN before any transaction was possible, BUT this can often be obtained with a little shoulder surfing.

Clive Robinson October 16, 2012 2:16 AM

@ Nick P,

Yup, the Skyhook was real, and I for one would not have liked to be lifted by it, for various reasons. One being the idea of the mechanism that first clamps the cable and then cuts the balloon side of it; the second being what happens if the cable misses the “scissors” and ends up against the wing or an engine or its prop?… Then there was the very predictable flight path the plane had to follow, which could get it easily shot down; and of course the deployment of the balloon was not exactly covert and would be almost as good as firing flares into the air to attract the enemy to your ground position… It was, however, recommended for the US Coast Guard and even to NASA for astronaut recovery.

However, did you follow one of the links in the article to this little gem?

It provides some good tips on how to totally kybosh an office or other workplace’s efficiency and reduce morale to the point that even the worst of employees will make like rats on a sinking ship.

Just adding two ideas from Scott Adams would make it instantly recognisable as the “secret to cubicle life”. Those two rules being:

Catbert’s Management Rule number 1: “never be in the same room as a decision”.

Wally’s “annoying sounds screen saver”.

Oh and one other Rule that I identified independently and others have also done the same would make it the Senior managers guide to getting ahead,

Start a major “company saving” project and then jump ship about 1/3 of the way through.

This enables you to always claim it as a success on your C.V. and at job interviews. Because if the project succeeds, it was down to your skills in giving it a strong footing and giving the employees confidence in their own abilities etc. etc. If, however, it goes bad, you still claim the same initial success, but say that those brought in after you demoralised the workers, did not have the confidence, did not see the strategic vision etc. etc. Win-win: just what a hopeless sociopath needs to get to the top and do an Enron, or get into government service post-election…

Nick P October 16, 2012 1:32 PM

@ Clive Robinson

Yeah, I saw that one. I enjoyed your expansion of the idea too. My favorite:

“Start a major “company saving” project and then jump ship about 1/3 of the way through.

This enables you to always claim it as a success on your C.V. and at job interviews. Because if the project succeeds it was down to your skills in giving it the strong footing and giving the employees confidence in their own abilities etc etc, If however it goes bad you still claim the same initial success but say that those brought in after you demoralised the workers, did not have the confidence, did not see the strategic vision etc etc. Win-Win just what a hopless sociopath needs to get to the top and do an Enron, or get into Government Service post election…”

Might have to try that sometime. In the meantime, I have used techniques 2, 3, & 4 against major retailers trying to make us “efficient” (read: cheating us for profits). Now, sabotage is probably illegal in this country. So, you could say that I just changed the focus of how I was doing business during serious time constraints from making the company money to (arbitrary goal here). 😉

If labor got low, I’d ignore the phone & claim we lost business b/c they had me doing too much at once. Couldn’t get to it. I’d focus on quality rather than quantity & quality takes time. I’d try to innovate new ways of doing things, most of which were unfortunately slower than best practices. And so on.

Of course, I did excellent work for the companies not jerking us around. Quite a few different industries. I was dedicated & always improving how I did my job. But for those companies that like to play Dilbert’s boss, I always have a work strategy for them.

Vatos October 16, 2012 3:15 PM

Thanks for the responses, people. It sounds like the wallet is not a good investment at the moment. I will follow the Krebs site. If attacks by criminals using RFID become significant, I will consider what countermeasures to employ.

Nick P October 16, 2012 3:24 PM

OFF TOPIC (entropy generation)

Alright, years ago I told a bunch of people (here?) about a clever TRNG of last resort. Maybe first resort for people who are broke & running complex workloads. That was my excuse. So, what did the TRNG do?

Well, the designers noticed that what developers & admins see is an illusion. There isn’t a single clock, a single computer, a single anything: it’s many different things acting together with a bit of uncertainty all throughout. Processors also have plenty of hidden state that affects the timing of their activity. So, the algorithm just tried certain operations dependent on such timings & measured how off the timing was. This was used as an entropy source. I can’t recall if the original I learned of was TrueRand or 2003-4ish HAVEGE.
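A minimal TrueRand-style sketch of the idea: time a fixed piece of work repeatedly and keep only the jittery low bits of each measurement. The loop sizes and the SHA-256 whitening step here are my own choices, not anything from the original designs, and this is an illustration, not a vetted TRNG:

```python
import hashlib
import time

def jitter_samples(n=256, spin=1000):
    """Time a fixed busy-loop n times; hidden CPU state (caches,
    branch predictors, interrupts) perturbs each measurement."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter_ns()
        x = 0
        for i in range(spin):
            x ^= i                     # fixed work; only the timing varies
        samples.append(time.perf_counter_ns() - t0)
    return samples

def harvest(n=256):
    """Keep only the low-order byte of each timing delta (where the
    jitter lives) and whiten the pool through a hash."""
    pool = bytes(s & 0xFF for s in jitter_samples(n))
    return hashlib.sha256(pool).digest()

seed = harvest()   # 32 bytes of (hopefully) unpredictable material
```

How much *actual* entropy is in those low bytes is exactly the hard question raised below: the extraction is easy, estimating the entropy is not.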

Well, I was thinking of making an improved version of it that utilized a number of cheap COTS boards w/ different OS’s & architectures. I do that anyway in diversity setups, so why not? I serendipitously found the article below where Dan Kaminsky essentially redoes TrueRand into DakaRand.

Things got more interesting in the comments. Two other similar solutions popped up. Haveged had an interesting design that also messes with the branch predictors & its output passes the AIS-31 test suite. Another commenter mentioned Entropy Broker, which is for distributing random numbers from other devices. Links below.

Unlike Dan, I have little objection to using these techniques & how they work doesn’t seem so mysterious. It’s kind of the opposite of Orange Book-era timing channel elimination, just more black box. I’d add that it would be a good idea to try to figure out the entropy amounts of each part of the hardware skewing the clock or timing. That way, we could figure out which ones are the best to focus on for entropy & performance improvement. I see in each of the linked designs pieces that could be combined into a more effective one. Wonder who can guess it?

I was also thinking about the issue in general. The main danger, if the entropy extraction mechanism is good, is that there might be little actual entropy. The enemies might be able to model system behavior to measure the entropy & predict randomness. So, the system behavior itself might need to be erratic. How to ensure that?

The main objection might inspire the solution: servers. Modern networks do plenty of things that are user and application driven. A server-grade OS has many protocols & many potential services. Each of these is likely to have a different effect on CPU state, cache & other sources of timing. So, a simple example would be to randomly launch services on the machines. The services might even be simulated with a random looking dataset & done locally so networked attackers never glean information from sniffing the exchange.

Let’s look at the details. We can use a few basic protocols or services with EXTREMELY watered down & isolated implementations. TCP with odd settings, TFTP transferring random tiny file, gopher (!), telnet leading nowhere, etc. Let’s say 8 services available, meaning 3 bits. So, 3 bits of actual entropy chooses a service, it runs in background alongside actual work & dies. Practically speaking, we get an entropy magnification effect because that 3 bits affects the network stack, OS, caches, CPU state & more. So, anything harvesting them will get plenty of variance from baseline pre-service launch, more chaotic state during service launch, & maybe-modified baseline after service ends. Our key assumption is the attacker has an extremely hard time guessing this stuff anyway, so the entropy addition is unlikely to be shortcut around mathematically.
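The 3-bit selection step above can be sketched directly; the service names here are placeholders standing in for the watered-down implementations described (TFTP, gopher, etc.):

```python
import os

# 8 decoy services available, so selecting one consumes exactly 3 bits.
SERVICES = ["tcp_odd_settings", "tftp_random_file", "gopher", "telnet_null",
            "http_head", "ntp_query", "echo", "discard"]

def pick_service(entropy_bits):
    """Spend 3 bits of real entropy to choose which decoy service to
    launch. The launch itself then perturbs far more hidden state
    (network stack, scheduler, caches, CPU state) than the 3 bits cost,
    which is the 'entropy magnification' effect described above."""
    assert 0 <= entropy_bits < len(SERVICES)
    return SERVICES[entropy_bits]

three_bits = os.urandom(1)[0] & 0b111   # 3 bits from the OS entropy pool
service_to_launch = pick_service(three_bits)
```

The key assumption, as stated, is that the attacker cannot model the downstream perturbation, so the harvested timing variance is worth far more than the 3 bits spent.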

There’s many possibilities. My old tried and true concept is a dedicated system for certain security critical activities, including TRNG. It creates plenty of quality randomness that others can use upon system initialization. Remember that we only need a little because a CRNG is good enough if it changes seeds periodically & seeds are unique per machine. So, you can have a protected delivery over the network to the starting machine or generate a bunch of seed files for each one to use on load, never reusing one. Of course, the remote seed material should be cryptographically combined with local entropy or static information first to give attackers more headaches.
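The “cryptographically combined with local entropy or static information” step might look like the sketch below. The combiner label and the length-prefixing are my own choices (length-prefixing prevents two different input splits from hashing identically); any keyed or labeled hash construction would do:

```python
import hashlib
import os

def combine_seed(remote_seed: bytes, local_entropy: bytes,
                 machine_id: bytes) -> bytes:
    """Combine a network-delivered seed with local entropy and
    per-machine static data, so an attacker must know ALL inputs
    to predict the result. A fixed label domain-separates this
    combiner from other hash uses in the system."""
    h = hashlib.sha256()
    h.update(b"seed-combiner-v1")
    for part in (remote_seed, local_entropy, machine_id):
        h.update(len(part).to_bytes(4, "big"))   # length-prefix each input
        h.update(part)
    return h.digest()

# Example: fresh remote seed + local entropy + static machine identity.
seed = combine_seed(os.urandom(32), os.urandom(32), b"machine-01")
```

Each machine then feeds this 32-byte result into its CRNG as the per-boot seed, reseeding periodically as described.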

So, let’s assume we have an easy way to give each machine a bit of quality randomness to go on for a while. The DakaRand-type system should be running in the background this whole time, with events affecting its entropy pool. It might even use a bit of seed material itself. After a set period of system work, which hopefully has many effects on CPU/cache state, the component starts producing output that adds to the main entropy source.

So, how to launch the services? Well, it can be done periodically using data from onboard entropy sources. As stated in the 3-bit example, it’s not as if there’s a simple formula going from the input to the output. Closest thing to overunity I’ve ever designed. 😉 Alternatively, the system can have a port listening for service execution instructions (with limits on what & how often it will execute things). The port might accept truly random instructions from a central server or (decentralized) might accept them from other machines. Heck, might even have each one occasionally send a request to the whole network, with each machine interpreting it a different way.

The performance & security implications are straightforward enough. Feeding the initial entropy is fast & uses existing mechanisms/primitives. We have proven CRNGs for THEM to use to stretch it out a bit from there. Actual TRNGs and/or CRNGs on a central device can make the seed material (VIA PadLock is cheap…). Any DakaRand-style component should be designed to impose minimal execution impact during operation. Limits on service execution & careful design of the simple listener should prevent this from being a DoS tool. Simplification of the services themselves & limits on their resource utilization should cause them to have little impact. Custom services might also be written to mess with the timing of various components of the system long enough to impact measurement (not operations).

Thoughts, concerns, criticisms?

Note: I’m ignoring Clive’s active emanation attacks & assuming physical security. I mean, how many RNG issues are caused by that versus bad entropy sources or generation mechanisms? Let’s solve the easy one first, guys.

RobertT October 16, 2012 6:41 PM


The following is all related to standalone microcontroller systems rather than complex server-based CPUs; however, many of the same synchronization mechanisms exist within servers….

One of the things to be aware of is that most external events used as inputs to a computer are sampled by an I/O clock and buffered through a suitable FIFO so that a condition called metastability is avoided.

This has the effect of making randomly timed external events all appear synchronous to the I/O sample clock. So if the event has time jitter (uncertainty) it will appear as a beat pattern with the I/O sample clock.

A second issue is the way in which events are processed in a microcontroller. Generally speaking, task execution is driven by an interrupt (say a 1 millisecond timer), so external events, as seen from within the CPU, are synchronously sampled by the I/O clock and executed when the OS tasker allocates CPU cycles to this task; usually the interrupt task clock will be some divide of the I/O sample clock, so the two are effectively locked.

What this means is that CPU clock jitter, branch fails, cache misses etc will only increase entropy if they cause a process to overrun the minimum task interval.

The problem with all TRNG’s is convincing yourself (and others) that the underlying noise source is truly random, which for most professionals means the RNG source must be some form of amplified thermal noise.

Most of my RNGs use an LFSR (PRNG) with a unique loadable state which is XORed with the TRNG. This output drives a stream cipher. I always include a mechanism to directly measure the TRNG stream separate from the PRNG / CRNG streams. In the test I want to see at least 12 bits of “randomness” in the TRNG. Usually I find simple FFT analysis to be the best tool for determining if the TRNG is working properly. Most failure mechanisms involve some form of unexpected task synchronization which shows up as a frequency spur on the FFT. If you can pass the FFT test then the rest of the randomness tests tend to be easy to pass, especially if your sampler also has some form of algorithmic “noise”; if you fail the FFT you will definitely fail any larger randomness test suite.
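The spur check can be illustrated in software. This is a rough simulation, not RobertT’s lab procedure: the “injection-locked” source below is modeled as a plain square wave, and the pass criterion (peak-to-mean ratio of the spectrum) is my own stand-in for his dB thresholds.

```python
import cmath
import random

def dft_magnitudes(bits):
    """Naive DFT of a +/-1-mapped bitstream; fine for short test records."""
    n = len(bits)
    x = [2 * b - 1 for b in bits]           # map {0,1} -> {-1,+1} to remove DC
    mags = []
    for k in range(1, n // 2):              # skip bin 0 (DC)
        s = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        mags.append(abs(s))
    return mags

def spur_ratio(bits):
    """Peak-to-mean ratio of the spectrum. A large ratio flags a tone/spur,
    i.e. some periodic process has synchronized with the 'random' source."""
    mags = dft_magnitudes(bits)
    return max(mags) / (sum(mags) / len(mags))

random.seed(1)
good = [random.getrandbits(1) for _ in range(256)]
bad = [(i // 4) % 2 for i in range(256)]    # locked source: pure square wave

print("locked-source spur ratio:", round(spur_ratio(bad), 1))
print("random-source spur ratio:", round(spur_ratio(good), 1))
```

The locked source concentrates its energy in a few harmonic bins, so its peak-to-mean ratio dwarfs that of the genuinely random stream.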

Unfortunately if you only have access to the final RNG output you will pass every test every time BUT you will never really know what’s true randomness and what is simply complexity.

Clive Robinson October 16, 2012 10:47 PM

@ Nick P,

Back in the 1990’s I had to design an external RNG for an organisation to make cheaply but not in the numbers that would make any kind of custom silicon viable.

As @RobertT has mentioned, true RNGs are almost always based on some “Noise Source” that uses at its base some kind of thermal effect. The reasons for this are many, including not wanting radioactive materials to be easily accessible, or needing very expensive detector equipment or quite a bit of electrical power (think fluorescent tubes).

But thermal noise is actually a very, very rare beast and hides behind all sorts of other noise, most of which is not random but just looks random. Worse, it is usually very faint and needs considerable amplification before it’s usable, and the amplifiers have their own noise, which is again usually (but incorrectly) considered to be random.

Overly simply, all amplifiers act as receivers not just to the wanted very weak signal but to any stray EM field that happens to be around as well, be it radiated, conducted or creeping through the dielectric. The materials the active devices use also generate noise dependent on the current passing through them or the voltage applied across them. One of the joys (read nightmares) of mixed analog and digital design is stopping the digital edges, which have considerable energy, getting back into the analog sections and going through the amplification to create new digital edges that feed back to create more digital edges…. Oh, and then there are filters, which are in reality tuned circuits that add resonance or ringing when they get hit by a digital edge or a stray EM field from some other part of the circuit or an external source. And speaking of tuned circuits, your entire circuit layout has as many inductors as there are traces and component leads, and an almost unlimited source of capacitance between components, etc. etc. There is an old joke in RF engineering, “Oscillators don’t but amplifiers do…”, and as a historical note nearly all early radio receivers that had RF as opposed to AF gain did not actually use amplifiers but quenched oscillators, or were what we now would call “parametric amplifiers”.

The trick with analog design is to separate the wanted weak signal from all those unwanted strong signals, and one way to do this is to restrict the bandwidth. The problem with this is it “adds colour” to the noise signal, which in the case of RNGs is unwanted bias that has to be addressed. A reduced bandwidth also reduces the bit rate of the RNG.

So you need another way to do things, one of which is to use an oscillator as your noise amplifier. Basically you take your very weak noise source and use it to change the reactance of the tuned circuit of the oscillator. In effect what you end up with is a VFO driven by the noise from your thermal noise source. You then use this randomly changing frequency source to “sample” another frequency source. One simple way to do this is with a D-type latch. However, you have to be very, very careful, because the latch is effectively mixing the two frequencies together and reflecting them down to baseband, which means you have a strong set of frequency components to deal with which could end up “in band”.

In essence this is what you are trying to do with your design: take some kind of “unknown” signal that looks random and sample it. What you end up with is not quite what you might think, as RobertT indicated.

Anyway, my design used a reverse-biased semiconductor junction as the noise source. This was amplified in a differential amplifier that fed a low frequency oscillator, the output of which went into an interrupt pin of a PIC microcontroller, which in effect sampled the CPU clock signal. The CPU’s main task was to run a modified ARC4 stream generator as fast as it could go; the interrupt from the VFO simply copied the latest byte into a circular buffer. This buffer was in effect randomly sampled by the I/O request from the comms signal, which worked its way around the buffer for each request from the computer the TRNG was attached to.
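A software model of that sampling structure might look like the following sketch. The RC4 here is textbook RC4, not Clive’s modified ARC4, and the noise-timed “interrupt” is just a function argument rather than a real VFO; only the shape of the design (free-running generator, interrupt snapshot, independent host reads) is taken from the description above.

```python
class RC4:
    """Textbook RC4, used here only as a stand-in stream generator."""
    def __init__(self, key):
        s = list(range(256))
        j = 0
        for i in range(256):                       # key-scheduling shuffle
            j = (j + s[i] + key[i % len(key)]) & 0xFF
            s[i], s[j] = s[j], s[i]
        self.s, self.i, self.j = s, 0, 0

    def next_byte(self):
        self.i = (self.i + 1) & 0xFF
        self.j = (self.j + self.s[self.i]) & 0xFF
        s = self.s
        s[self.i], s[self.j] = s[self.j], s[self.i]
        return s[(s[self.i] + s[self.j]) & 0xFF]

class RingSampler:
    def __init__(self, key, size=64):
        self.gen = RC4(key)
        self.buf = [0] * size
        self.widx = 0                              # where the "interrupt" writes
        self.ridx = 0                              # where host I/O requests read

    def interrupt(self, free_run_steps):
        """Noise-timed interrupt: the generator has free-run an unknown number
        of steps; snapshot its latest byte into the circular buffer."""
        b = 0
        for _ in range(free_run_steps):
            b = self.gen.next_byte()
        self.buf[self.widx] = b
        self.widx = (self.widx + 1) % len(self.buf)

    def host_read(self):
        """Comms-driven read walks around the buffer independently."""
        b = self.buf[self.ridx]
        self.ridx = (self.ridx + 1) % len(self.buf)
        return b
```

The unpredictability comes from the write and read pointers walking the buffer on unrelated clocks, exactly the effect the hardware achieved with the VFO interrupt versus the comms I/O requests.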

It worked well and passed the required randomness tests.

However… I made a modified version that did not use a noise source at all; instead it used a BBS generator to modify the output process of the ARC4 stream cipher. The output of this generator also passed all the tests required of it.

And this raises the point that RobertT did, which is almost philosophical in nature: you have to ask the question, does a fully deterministic generator act as a random generator to an observer if they cannot reproduce the deterministic process or identify the next bit with anything better than a 50:50 chance?

And the simple answer is “yes” that is if it walks like a duck, quacks like a duck and looks like a duck, do you have any reason to suppose it’s a goose?…

In essence this is the idea behind many CS-PRNGs: what you are actually doing is taking away the hand-waving magic of making a TRNG and replacing it with a mathematical model that you can actually draw some valid conclusions about. Oh, and it can also run a darn sight faster and produce considerably more random bits with less hardware…

It only leaves the issue of the “startup test”… But that’s another issue altogether.

Nick P October 17, 2012 2:24 PM

@ RobertT

“Unfortunately if you only have access to the final RNG output you will pass every test every time BUT you will never really know what’s true randomness and what is simply complexity.” (RobertT)

“In essence this is what you are trying to do with your design: take some kind of “unknown” signal that looks random and sample it; what you end up with is not quite what you might think, as RobertT indicated.” (Clive)

That’s actually the point of these designs, I think. Ironically, it’s the mathematical techniques that keep failing because we can’t get enough true randomness in them or are wrong about the source. The internal state of the machine is probably not truly random, merely complex as all get out. The result to outside observers, though, is something that might be computationally equivalent to predicting a TRNG. “Might” is a key word, but do you know of any practical attack w/out full system compromise?

I don’t. And I know real-time programmers that would be interested in a modeling technique that precisely times arbitrarily complex hardware/software systems w/out the initial state supplied. 😉 It seems that’s what it would take unless there’s a shortcut I’m not seeing. And it’s not lost on me that there could be a shortcut: a key reason my design has a trusted start & CRNG.

Thanks for the alternatives. I’m sure they might come in handy in a future design attempt that’s closer to hardware. Thanks especially for the tip about FFT’s. I had never heard of that trick before.

“Most of my RNG’s use an LFSR (PRNG) with a unique loadable state which is XORed with the TRNG. This output drives a stream cipher. I always include a mechanism to directly measure the TRNG stream separate from the PRNG / CRNG streams.”

Why do you do it that way? I have ideas about why, but I’d rather hear your security argument for the construction.

@ Clive Robinson

“The reasons for this are many including not wanting radio active materials being easily accessable, using very expensive detector equipment or quite a bit of electrical power (think fluorescent tubes).”

Yeah, I asked for a pound of weapons-grade uranium for a TRNG project. They never got back to me on that. So, I’m exploring alternatives. How did you guess?

“Anyway, my design used a reverse-biased semiconductor junction as the noise source. This was amplified in a differential amplifier that fed a low frequency oscillator, the output of which went into an interrupt pin of a PIC microcontroller, which in effect sampled the CPU clock signal. The CPU’s main task was to run a modified ARC4 stream generator as fast as it could go; the interrupt from the VFO simply copied the latest byte into a circular buffer. This buffer was in effect randomly sampled by the I/O request from the comms signal, which worked its way around the buffer for each request from the computer the TRNG was attached to.”

Ah, more designs to work with. Maybe one of my hardware knowledgeable friends can build the thing. Or give me an idea of what it would cost.

“In essence this is the idea behind many CS-PRNGs: what you are actually doing is taking away the hand-waving magic of making a TRNG and replacing it with a mathematical model that you can actually draw some valid conclusions about. Oh, and it can also run a darn sight faster and produce considerably more random bits with less hardware…”

Yup. “Unpredictable to them” is more important than “unpredictable.”

“It only leaves the issue of the “startup test”… But that’s another issue altogether.”

I hate it when you do that. Care to elaborate?

Nick P October 17, 2012 3:08 PM

Links relevant to TRNG discussion for any crypto fans reading along.

Intel’s old design (passed Kocher’s review)

Intel’s new design (2012 analysis)

“Provably random” TRNG design

Stateless, small, FPGA-neutral design

Old favorite: lava lamps

(Note: This one is interesting in that it can be made subversion resistant & you don’t need much hardware knowledge to build it. Subversion resistance comes from the ease of building it using components from arbitrary hardware vendors. Semiconductor verification, on the other hand, isn’t happening for non-hardware experts.)

RobertT October 17, 2012 7:01 PM

@Nick P
“”Most of my RNG’s use an LFSR (PRNG) with a unique loadable state which is XORed with the TRNG. This output drives a stream cipher. I always include a mechanism to directly measure the TRNG stream separate from the PRNG / CRNG streams.”
Why do you do it that way? I have ideas about why, but I’d rather hear your security argument for the construction.”

I always have separate PRNG and TRNG because each has different pros / cons.

ALL real TRNGs are subject to signal injection lock attacks. These might be caused by very strong EM / RF fields or intentionally added power supply noise (whatever); the point is the analog circuit can be made to lock up (all 1’s or all 0’s), or simply produce an output which is a down-modulation of the sample frequency with the injection frequency. This signal can still be very complex, but it is no longer random.

By contrast, the LFSR will continue to work correctly even in the presence of VERY strong EM fields. If the PRNG sequence is long enough and changeable then it is difficult to predict the next state; the problem is simply too complex. I always seed the LFSR startup with a key which both sets the start vector and determines the order of the LFSR.

So even if the real TRNG is compromised the attacker still has a difficult problem to solve.

The final stage is an intermittently re-seeded stream cipher. This exists because it allows the system to generate lots of random output quickly, rather than requiring the accumulation of entropy from the TRNG.

WRT the FFT analysis of the TRNG, I want to see a minimum of 12 bits (72 dB) of randomness free-running; additionally, I want to see that this never decreases below 48dB even when power supply noise (a sine wave signal) is added that is sufficient to cause the device to reset (typically about 0.5 to 1 volt PS noise on a 1.2 to 1.5V process).

With the server-based system that you have proposed, I don’t think you can ever quantify the randomness of the sources, nor can you ever assess their susceptibility to intentional jamming / injection. So this means you have to assume the worst and add / XOR another pseudo random sequence. The result is complexity plus an unknowable start state. The resulting RNG output will definitely look random and, as Clive has pointed out, it will be good enough for almost all applications, especially if physical security can be guaranteed.

Having said all this, my last design was actually the opposite: it used a PRNG to seed a TRNG. The TRNG section was a 4th-order sigma-delta modulator (very similar to a standard audio ADC sigma-delta stage). The modulator was a self-oscillating type, so its output has both amplitude and timing uncertainty. A low frequency PRNG feeds the sigma-delta input, and the output is a 16-fold oversampled 4-bit output. The advantage of this approach is that it is very difficult to jam or frequency lock, and furthermore it creates strings with maximum sequence lengths for all 1’s / all 0’s guaranteed by the PRNG, which is important for certain applications. BTW it also lets us reuse analog hardware rather than designing it from scratch. Higher-order sigma-delta modulators, especially for audio, are well studied for TONE problems, because our hearing is very sensitive to tones. So this absence of tones means they always pass the FFT tests.

Clive Robinson October 18, 2012 3:08 AM

@ Nick P,

“I hate it when you do that. Care to elaborate?”

Yes it’s quite simple 😉

To do start-up tests you do a couple of different things. Firstly you take several of your RNGs, “cold start” them all together, and see how much they differ or not in their outputs. Secondly you take individual examples of your RNG, repeatedly cold start each one, and record and compare the outputs to see how they differ.

The hard part is “see how they differ”; it involves many, many tests over and above the usual test suite for randomness. You’d be surprised at just how many otherwise acceptable generators fail this test.

Also, if used during incremental design and build (such sensitive systems are rarely just built on paper and final tested), it can be a rather useful test tool or indicator that something hidden from plain sight is going wrong.

For instance, if you take five examples of your thermal noise source and run them all from the same power supply, and they show some kind of correlation, then you know you have an isolation issue that you need to address. Oddly perhaps, as a test tool it’s way, way more sensitive than the best of lab scopes and other off-the-shelf test tools.

Which reminds me about FFTs: watch out for window artifacts of the FFT, not the DUT. Even test tools have their failings. The important thing to remember with FFTs is that they are “bandpass filters” in their own right; that is, they don’t go down to DC and don’t go up to daylight. Similarly, another interesting tool, FWTs, don’t do DC to daylight either.

Oh and the “startup tests” are a different issue, due in the main to engineers seeing the trees not the wood.

If you think of your RNG as just a complex deterministic process, or even a simple LFSR, it has a “start state” dependent on the hardware and what the designer has done or not done to offset the hardware start-state issues.

Unfortunately the reality of the hardware start state is, contrary to many design engineers’ beliefs, that hardware at start-up has little or no entropy.

The reason for the incorrect belief is that you cannot predict the state a bunch of flip-flops (cross-coupled NAND gates) will come up in the first time in any one instance of hardware. However, if you look a little further, after ten or fifteen cold starts of an individual hardware instance you will see that most times individual flip-flops will start in the same state with much greater than 50:50 odds…

The consequence of this is you cannot leave the start-up to “chance” (as there is next to none); you need to have a hardware “seed” in a hardware RNG just as you do a software seed in a software RNG.

The need for this seeding of the hardware raises three issues,

1, How do you maintain a seed through various power cycles.
2, How do you select seed values the first time.
3, How do you ensure the seed value is different each time.

With regards to power cycles, it’s not just on and off cycles; it’s under- and over-volt conditions, spikes and other overlay waveforms (which brings back issues of stray EM and other energy sources).

Selecting seed values the first time is obviously a factory issue but you could easily make serious problems there if you just used the serial number or some other predictable number.

Ensuring that the seed value changes for each and every power cycle is two-fold: not only should it be unique for each power cycle for each RNG, but IMPORTANTLY it has to be unique to each RNG you make (and that’s what is politely known as a hard problem). The simplest way to do this is a fully deterministic solution, such as using a seeded counter, a block cipher, and a key unique to each RNG put in at the factory.

Which with other issues is a good reason to consider not using True Random “Noise” Generators.

However, there is also a philosophical issue as well. If you look at definitions of random, you find that from a practical view it is actually no different from that of an unknown (to the tester) complex deterministic generator….

Nick P October 18, 2012 11:09 AM

@ RobertT and Clive

I appreciate the additional insights. RobertT’s approach is looking to be the easiest, so far. Assuming I don’t use a COTS TRNG.

Clive Robinson October 19, 2012 5:06 AM

@ Nick P,

“RobertT’s approach is looking to be the easiest, so far. Assuming I don’t use a COTS TRNG.”

Maybe maybe not 😉

As I indicated, I really would not use a COTS TRNG (thermal noise or otherwise) these days, except perhaps a diceware solution to generate the three or four initial secret values: counter Increment (I), Initialisation Vector (V), whitening (W) and Key (K) for AES in a modified CTR mode, which you use as your “factory generator”. This provides the “unique secret values” for your production, fully deterministic CS-PRNGs, and this way you avoid all of the problems both RobertT and I have outlined.

Effectively, run the factory generator to generate values for your production RNGs,

//Factory Generator.

  • create data-out file;
  • load(I,V,W,K) from secrets file;
  • AESKey(K);
  • CTR = V;

TL1 :

  • PTXT = CTR xor W;
  • ROUT = aes(PTXT);
  • append(ROUT) to data-out file;
  • CTR = CTR + I;
  • if (not done) goto TL1;


  • V = (CTR + I);
  • write(I,V,W,K) to secrets file;
  • exit(end_ok);
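The loop above can be rendered as runnable code, with one loudly flagged substitution: the Python standard library has no AES, so a SHA-256-based block function stands in for AESKey(K)/aes(PTXT) here. In the design as described it would be AES-256 keyed with K; everything else (whitened counter, secret increment, persisted next start vector) follows the pseudocode.

```python
import hashlib

def block(key, ptxt):
    """Stand-in for the AES block: SHA-256 over key || plaintext."""
    return hashlib.sha256(key + ptxt).digest()

def factory_generator(I, V, W, K, blocks):
    """Modified counter mode: whiten the counter with W before 'encryption',
    and step the counter by a secret increment I rather than by 1."""
    mask = (1 << 128) - 1
    out = []
    ctr = V
    for _ in range(blocks):
        ptxt = (ctr ^ W) & mask                 # PTXT = CTR xor W
        out.append(block(K, ptxt.to_bytes(16, "big")))
        ctr = (ctr + I) & mask                  # CTR = CTR + I
    new_V = (ctr + I) & mask                    # V = (CTR + I), persisted to secrets file
    return b"".join(out), new_V
```

Run once per batch: the returned bytes become the per-device secrets, and new_V is written back so the next batch resumes from a fresh point in the counter sequence.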

As for your production RNGs, consider them software only. These days I would use a modified “card shuffling” stream generator as the “entropy pool” and “entropy stirrer” and mix it with another stream generator that uses an “orthogonal” method to “card shuffling”. Oh, and remember RAM and Flash ROM are dirt cheap these days, so go for big state arrays in the stream generators and fast update/generation.

That is take RC4 it has a State array which is in effect a pack of cards, it’s state is the entropy pool which is stirred by the Card Shuffling algorithm.

In the case of RC4 the shuffling algorithm is also the output algorithm and the key scheduling algorithm, and uses two pointers: one that increments, Pi, and one that jumps, Pj, based on its current value, the value in the state array pointed to by Pi, and a key value,

//Update pointers and get cards to shuffle

  • Pi = (Pi + 1) mod X;
  • T1 = Sarray[Pi];
  • Pj = (Pj + T1 + K) mod X;
  • T2 = Sarray[Pj];

//Shuffle Sarray

  • Sarray[Pi] = T2;
  • Sarray[Pj] = T1;

//Return “random value”


return((T1 + T2 + Off) mod Y)

In normal RC4, K is set to zero during stream (byte) output and is only used during a key-load Sarray shuffle, and the output offset Off is not used. Likewise the mod values X and Y are the same and are the size of the Sarray.
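The generalized form, with X, Y, K and Off as parameters, might be sketched like this. The state-array initialization below is illustrative (the pseudocode above doesn’t specify it), and the per-step sources of K and Off are left to the caller; in the design described here they come from a BBS generator and a Mitchell-Moore generator respectively. Plain RC4 falls out as X = Y = 256 with K and Off zero outside key loading.

```python
class CardShuffleGen:
    """Generalized RC4-style 'card shuffling' pool: X cells, outputs mod Y."""

    def __init__(self, X=1024, Y=256):
        self.X, self.Y = X, Y
        self.S = [i % Y for i in range(X)]   # illustrative initial fill
        self.Pi = self.Pj = 0

    def step(self, K=0, Off=0):
        # Update pointers and get cards to shuffle
        self.Pi = (self.Pi + 1) % self.X
        T1 = self.S[self.Pi]
        self.Pj = (self.Pj + T1 + K) % self.X
        T2 = self.S[self.Pj]
        # Shuffle Sarray
        self.S[self.Pi], self.S[self.Pj] = T2, T1
        # Return "random value"
        return (T1 + T2 + Off) % self.Y
```

The state array acts as the entropy pool; K perturbs the jump pointer and Off whitens the output, so external generators can stir and mask the pool independently.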

Now consider a modified form of RC4 I call BOARC1024. I thought it up back at the turn of the century whilst designing a TRNG for a customer, but I needed to be able to repeatedly test how it behaved, so I replaced the true random thermal noise source with a BBS generator. It uses an augmented RC4 stream cipher as an “entropy stretcher” and adds extra confusion by “whitening” the output with values from a maximal-length PRBSR (a modified multiplying Mitchell-Moore generator).

BOARC1024 outputs bytes just as RC4 does, so Y=256; however, the Sarray is 1024 bytes, so X=1024. As the ARC1024 is used as the equivalent of an entropy-stretching entropy pool, it takes two inputs from other stream generators: originally a thermal-noise-source TRNG (later replaced with a BBS RNG), plus modified multiplying Mitchell-Moore generators to add “whitening” or “confusion” at the output.

The lower ten bits of each BBS output provide a value for K once every complete cycle of the incrementing pointer Pi, and the Mitchell-Moore generator provides a new value for the output offset Off on every output of the BOARC1024 generator.

Thus each BOARC1024 production generator needs secret values from the AES256-MCTR factory generator for,

1) Primes for the BBS, which needs to be at least 1024 bits in size.
2) The BBS startup updating seed value.
3) The BOARC1024 key value.
4) The Mitchel-Moore generator startup updating seed value.

Which maxes out around 9-11Kbits, or 44 AES256 outputs from the factory generator…. Oh, and obviously an equivalent amount of Flash PROM in the device to store it across power cycles. In practice the bulk of this is actually the max size of the RC key, and this could quite easily be made much smaller if required. Importantly, you can get PIC or other microcontrollers with this amount of Flash or EEPROM and a USB interface for less than 5USD these days in quite small quantities. As they also have “pins to burn” you could add all sorts of fancy indicators, displays, keys and other inputs to produce an entire range of CS-PRNGs to customer requirements…

Now if you look at RobertT’s design as described, it is actually about the same (he goes for 12 bits of TRNG, I use 10 bits of BBS, and we both use stream generators as “entropy stretching” “entropy pools” along with semi-linear stream generators). The main difference is RobertT uses his linear stream generator at the input to the entropy pool and I use mine at the output. I did consider doing exactly this originally, for exactly the same reasons RobertT outlines; however, I was being overly cautious about my card-shuffling entropy pool (this was due to some ARC4 analysis showing potential issues with cycle shortening if the basic operation was changed slightly).

Since I did the original design I have thought about whether the computationally very expensive extra security of BBS is actually needed, and have since played around with the idea of using the Modified Multiplying Mitchell-Moore Generator (MMMMG), or its Additive (MAMMG) or Subtractive (MSMMG) forms, with the output sent through a couple of filters to tailor it, as a replacement for the BBS. And, as a consequence, finding another orthogonal stream generator to replace the “confusion” generator (thankfully eStream has provided contenders for this 😉).

The problem now however is actually getting the required computational resources to run a full set of the latest tests…

Nick P October 19, 2012 11:52 AM

@ Clive Robinson

The main takeaway from both of you is combining several generators in a way that leverages their strengths over potential weaknesses. Additionally, as I suspected, some secret initial state is the best option for avoiding many pitfalls. That’s actually getting easier to do these days as there are quite a few methods for “burning in” chip values. I do think your design could use an update.

So, I looked at some tests on RNGs. Mitchell & Moore seems to be a good choice. It has a clustering problem early on, so I’d iterate it a few hundred times upon initialization. Salsa20 is a quality and very fast stream cipher. AES acceleration is common, so your first design is a better idea. I’ve also considered using one of the fast SHA-3 hash functions in a CRNG design; not sure of the security implications, though. I doubt there is a justification for BBS and its computational complexity. A correctly designed RNG system is virtually never cracked in practice.
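For reference, the Mitchell-Moore generator is the additive lagged-Fibonacci recurrence x[n] = (x[n-24] + x[n-55]) mod 2^e. A minimal sketch with the warm-up iterations suggested above; the warm-up count and the 32-bit word size are illustrative choices, not prescribed anywhere in this thread.

```python
from collections import deque

class MitchellMoore:
    """Additive Mitchell-Moore lagged-Fibonacci generator:
    x[n] = (x[n-24] + x[n-55]) mod 2^32."""

    def __init__(self, seed_words, warmup=500):
        if len(seed_words) < 55:
            raise ValueError("needs 55 seed words")
        self.state = deque(seed_words[:55], maxlen=55)
        for _ in range(warmup):        # burn early outputs to wash out
            self.next_word()           # clustering in the seed material

    def next_word(self):
        w = (self.state[-24] + self.state[-55]) & 0xFFFFFFFF
        self.state.append(w)           # oldest word falls off the left end
        return w
```

The deque holds exactly the last 55 outputs, so the two lags are just fixed negative indexes and the append discards the word that has aged out of the recurrence.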

I really caught myself focusing on your shuffling algorithm. It’s certainly interesting. The point seems to be making sure there’s a bunch of state driving the process that complicates things for the attacker. An improvement dawned on me: scrypt. It’s designed to take a long time to iterate, in both hardware and software, in part due to its memory usage. As I’m not a cryptographer, I must keep things simple to avoid pitfalls. So, here’s a stab at it.

I’m calling it a memory-heavy counter. On startup, the system generates a huge array (preferably many MB) of statistically random numbers. There are two counters: one that increments & one that is an index into the array. Both are used in the input process during each round. This means a prediction will need to break more than the algorithms: it will be memory intensive at each step. As in PBKDFs, we might do a bunch of iterations upon initialization to further cause problems for the attacker.

This scheme might even work on a PIC, albeit with less memory usage. I’ve essentially forced them into a serial process. They’re not parallelizing it. The process itself is also kind of slow. It’s simple for analysis purposes. It feeds a stream cipher, so the RNG itself will be very fast. For safety, could optionally use two ciphers (with different keys): AES in CTR mode & Salsa20. These outputs would be xored together. If simple hardware, then just Salsa20 or reduced-round Salsa.
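A minimal sketch of the structure, with the assumptions labeled: the array is filled from os.urandom, SHA-256 stands in for the stream-cipher step (the design above names AES-CTR and/or Salsa20, neither of which is in the Python standard library), and the data-dependent index walk is an illustrative choice for forcing the serial, memory-bound behavior.

```python
import hashlib
import os

class MemoryHeavyCounter:
    """Memory-heavy counter sketch: every output step requires a lookup into
    a large secret array, so predicting the stream means replaying a serial,
    memory-bound walk rather than a cheap parallel computation."""

    def __init__(self, array_mb=1):
        n = (array_mb * 1024 * 1024) // 8
        raw = os.urandom(n * 8)                   # big array of random 64-bit words
        self.arr = [int.from_bytes(raw[i * 8:(i + 1) * 8], "big") for i in range(n)]
        self.ctr = 0                              # plain incrementing counter
        self.idx = 0                              # index counter into the array

    def next_block(self):
        word = self.arr[self.idx]                 # memory-dependent step
        seed = self.ctr.to_bytes(16, "big") + word.to_bytes(8, "big")
        out = hashlib.sha256(seed).digest()       # stand-in stream-cipher step
        self.ctr += 1
        # Data-dependent walk: the next index depends on the word just read,
        # so the steps cannot be precomputed independently.
        self.idx = (self.idx + 1 + (word & 0xFF)) % len(self.arr)
        return out
```

Because each index depends on the value fetched at the previous index, an attacker cannot skip ahead without holding the whole array and stepping through it in order.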

The question would be… what’s the ideal input process? Got to be careful about mixing things so that your implementation performs well & doesn’t invalidate a security property. I want a counter, a value pulled from the huge state array, an extra step like one of your PRNG’s, & then somehow shoved into the stream cipher. The obvious answer is to tweak one of you guys’ designs to use my huge array, but asking in case I’m missing something obvious.

(Uh oh, Fortuna just popped into my head. I sense a Big Array slash Fortuna slash Salsa20 hybrid monster forming.)

RobertT October 19, 2012 6:55 PM

@Nick P, @Clive Robinson

“As I indicated I realy would not use a TRNG COTS (thermal noise or otherwise) these days except …..”

I agree the TRNG is a problem circuit that I would rather do without; however, there is only one problem more difficult than making a good TRNG, and that is making a near-perfect RESET circuit.

The only real purpose of the TRNG is to generate some entropy regardless of the start-up state. Since the circuit always runs, it always generates entropy.
Now if you have physical control of the system and can ensure a known good start-up sequence is followed for ALL start-ups, then the start-up problem does not really exist, and therefore previously accumulated and stored entropy (or stream cipher output) can be used to seed the system re-start.

Unfortunately, for products that I work on the hacker will have physical access to the device, so they can easily fiddle with the power supply to create nonstandard start-up sequences which do not properly trigger the reset circuit. Typical tricks involve using non-monotonic supply ramps, narrow supply glitches and VERY slow ramping of the power supply (slower than the reset time constant). Sometimes combinations of these conditions are used to try to achieve a partial reset (something like the uC Program Counter (PC) resets to 0000H without generating a system-wide power-on reset). In other words, software restarts but hardware does not restart.

It is also worth mentioning that non-volatile memory (to store the previous entropy state) is easy to bypass by simple tricks like disabling the charge pump used to generate the high voltages needed for EEPROM and Flash operation.

So the real purpose of the TRNG is to remove the need for nonvolatile entropy storage between power-off and power-on AND to remove the need for a perfect POR (power-on-reset) circuit.

For the record, with my latest Sigma-Delta RNG I actually get about 110 dB to 120 dB, so this is 18 to 20 bits of entropy (under non-attack conditions), and about 70 dB (close to 12 bits) under signal-injection attack conditions. Interestingly, most of the injection-signal feed-through comes via the bandgap reference circuit. So there is still room for improvement.
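The dB-to-bits conversion here follows the standard effective-number-of-bits rule of thumb for a quantiser; a one-liner shows the figures above are consistent with it:

```python
def enob(snr_db):
    # SNR(dB) = 6.02 * N + 1.76  =>  N = (SNR - 1.76) / 6.02
    return (snr_db - 1.76) / 6.02

# 110 dB and 120 dB come out at about 18 and just under 20 bits,
# matching the 18-to-20-bit figure quoted above.
print(enob(110), enob(120))
```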

One other thing that I forgot to mention is removing signal bias from the TRNG. I do this with a combination of digital tricks and a feedback offset-cancellation circuit that basically ensures the average 1s count and average 0s count are equal; this is done with a low-frequency digital IIR filter on the output. For most uncorrected analog circuits a DC bias exists at about the 8-bit to 10-bit level. For most digital-type circuits the systematic bias is much higher, 4 bits to 6 bits. This is something that you need to test for…
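RobertT's offset-cancellation loop is an analog circuit, but the best-known purely digital debiasing trick works in the same direction and is easy to show in software. This is not his circuit, just the classic von Neumann extractor; the 70/30 input bias is chosen for illustration:

```python
import random

def von_neumann(bits):
    # Take bits in pairs: emit the first bit of an unequal pair (0,1)->0,
    # (1,0)->1, and discard equal pairs.  Any fixed bias cancels exactly,
    # at the cost of throughput (at best 1 output bit per 4 input bits).
    out = []
    for b0, b1 in zip(bits[::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

biased = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
fixed = von_neumann(biased)
mean = sum(fixed) / len(fixed)   # close to 0.5 despite the 70/30 input bias
```

Note the extractor only removes bias, not correlation between successive bits, which is why hardware designs pair it with filtering and testing.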

Clive Robinson October 19, 2012 11:51 PM

@ RobertT, Nick P,

… the hacker will have physical access to the device so they can easily fiddle with the Power Supply to create nonstandard start-up sequences which do not properly trigger the Reset circuit. Typically tricks involve using non-monotonic supply ramps…

And before the hacker was the low cost battery charger…

I kid you not: I used to be involved in FMCE design for the likes of cordless and mobile phones, and looked into issues with electronic wallets and pocket gambling machines going back into the 1980s.

One persistent problem was lockup or erratic behaviour caused by the rechargeable (NiCad) batteries and the 10-cent-or-less charger circuits. To save cost, the hand-held device had no power switch, that is, the battery was always connected to the electronics…

Essentially what happens (and it's difficult to reproduce reliably) is that the charge circuit is effectively current limited, and the very nonlinear load of the discharged (but not completely) battery and the CPU trying to reset pulls the charger into current limiting, and "weird Sh1t" (Trade mark claimed for 😉 starts happening.

What is happening is approximately as follows,

The battery is at some low voltage Y that is below the threshold Z at which the CPU will work reliably, but not below the CPU off voltage X. Although the difference between X and Z is small, at some point the battery will be in the gap. CPU manufacturers like Motorola put circuits into their CPU chips to try to detect the condition and prevent it; they also try to arrange that the reset cascade happens in a certain sequence to stop excessive current being drawn. What the chip makers cannot do is use "time" for the sequencing, as there will always be a slower rise time, so they often use voltage, which is also problematic. What their designers did not expect was for the slowly rising voltage to dip again, and this happens due to the current-limit effect and the increase in load as the CPU cycles up…

The obvious solution is to put the physical switch back, but this has its own problems, which is why it was removed in the first place. So the next best solution is an "electronic switch", but… they have their own problems as well when you are designing them with discrete transistors and resistors etc. Not the least of these is that all the designs people had come up with had very poor and unreliable switching characteristics, due to getting the series pass element to switch. In effect they were all linear high-gain amps using multiple transistors (expensive), which also sucked almost as much current from the battery as the CPU in standby mode, cutting the standby time by nearly 70%.

The solution I came up with back then used just two transistors: the series pass "switching" element (PNP) and the detector/driver (NPN). The trick was to make the detector/driver a current-biased circuit using two resistors, one from before the series pass element and one from after it, thus using the start of the turn-on of the series pass element to significantly increase the current bias to the base of the driver transistor. The detector was in effect the base bias turn-on voltage of the transistor (approx 0.7V), which although temperature sensitive was sufficiently reliable. The two currents also meant, importantly, that unlike the other linear-amp designs the circuit had hysteresis, so the turn-on voltage could be set well above CPU voltage Z and the turn-off just above the CPU turn-off voltage X, so the CPU did not see a reversing voltage in the gap. Further, it also allowed the turn-on voltage to be set such that the battery was sufficiently charged to handle the current rush of the CPU turn-on without its voltage dipping back below Z. Although I urged the company to patent the idea, they decided not to… Anyway, the UK arm of the company is now long gone, because it never went out to get patents to protect its IP, thinking the process too slow and too costly…

All that aside, as you note there is a choice between a fully deterministic CS-PRNG and one which is augmented by a TRNG, as a TRNG on its own is way too far from being sufficient in its own right even for low-security applications. At the end of the day it is very much down to how you choose to do things based on the application involved and the "real estate" costs in the design. In the likes of electronic wallets and pocket gambling machines you have no choice but to use real entropy somewhere in the system, and importantly it has to be somewhere fundamental and inherently not "get-at-able" without being at least tamper evident, if not preferably destroying the device entirely.

Where you are not dealing with attackers with direct physical access and you are not working at the chip level, you have to consider "supply chain poisoning" etc., where you cannot trust the components to be what they say they are in the data sheet and on the package. And this is probably not with the intent, by covert foreign government operatives, of owning a downstream system at some point in the future. No, it's much more likely to be "Chinese knock-offs", or stolen "out of spec" parts, or even low-spec parts relabelled as high-spec parts, carried out by organised crime.

The result of these faux parts is that in most cases the internal function is broadly the same, just not to spec in ways you can easily measure from the device pins. Unfortunately, TRNGs for the most part work at the edges, where "out of spec" most often happens. Which is why, from a COTS system developer's point of view, avoiding TRNGs is advisable, as it's less costly than mitigating them.

I like your idea of using an ADC as a seeded TRNG, however I will need time to mull it over. One thing I would note however is that your seeding LFSR should really have a nonlinear output that drives the ADC, to make the attacks harder. As a quick "off the top of my head" thought, you should be able to get a realistic nonlinear circuit using an AND and OR gate array with little difficulty and not much real estate. And you could always gate paths through the array, or more simply add a little "whitening" at the array input with XOR gates and "PUFs" to make it more unique to each actual chip.
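A toy software model of the suggestion, assuming nothing about the real circuit: a maximal-length 16-bit LFSR whose output is taken through a small AND/OR/XOR filter rather than tapped linearly, so the output bit is no longer a linear function of the state. Taps and filter bits here are illustrative choices, not a recommendation:

```python
def lfsr16_step(state):
    # 16-bit Fibonacci LFSR, taps 16,14,13,11 -- a common maximal-length choice
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def nonlinear_out(state):
    # Toy filter in the spirit of the AND/OR gate array suggestion:
    # AND/OR a few state bits, then XOR in one more for "whitening".
    a = (state >> 1) & 1
    b = (state >> 6) & 1
    c = (state >> 9) & 1
    d = (state >> 13) & 1
    return ((a & b) | (c & d)) ^ ((state >> 4) & 1)

state = 0xACE1          # any nonzero seed
bits = []
for _ in range(32):
    state = lfsr16_step(state)
    bits.append(nonlinear_out(state))
```

The point of the filter is exactly the one made above: an attacker observing the drive signal can no longer recover the LFSR state by solving linear equations.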

The joy of COTS system design, as opposed to chip design, is that you are always "piggy in the middle", and avoidance or mitigation of your suspect component parts is all you can do. The former is generally reliable in outcome; the latter is definitely a "cross your fingers and hope" strategy.

However, it's getting to the point now with SoC that avoidance is just not cost effective in an ever-increasing set of cases, and mitigation unavoidable even though it's costly and unreliable unless done with extreme care.

As I've indicated in the past, "Prisons-v-Castles" is a mitigation strategy for parts of unknown provenance that could be suspect in many ways. Unfortunately, TRNGs, by the very definition of their function, are very difficult to mitigate, if not impossible, when it's possible to hide in the grass of the statistical tests…

RobertT October 20, 2012 8:42 PM

@Clive Robinson
“I like your idea of using a ADC as a seeded TRNG however I will need time to mull it over. One thing I would note however is your seeding LFSR should realy have a nonlinear output that drives the ADC to make the attacks harder…”

I’m not a believer in “full disclosure” of security related circuits / systems, so I think you might find that some critical information is missing from my descriptions.

I also like to use an assortment of non-linear circuits to mix together digital and analog signals, so that complexity plus unknowable signals are combined. Having said this, you need to be very careful when combining Gaussian signal distributions with non-linear systems, because the resultant output data PDF can easily be non-Gaussian. (If you have any marine friends working in ASW, then ask them about mixing random data with nonlinear effects.)

Clive Robinson October 21, 2012 1:48 AM

@ RobertT,

I’m not a believer in “full disclosure” of security related circuits / systems

It is another one of those philosophical questions 🙂

Back in the days of code books and mechanical cipher machines and simple ciphers that could be memorised and carried out with pencil and paper, a Professor of Languages at a select Paris academic institution thought the problem of obscurity in widely deployed systems used by the Diplomatic and Military services was insoluble. So Auguste Kerckhoffs came up with a series of principles that still carry his name, which were published in 1883 [1]. A lifetime later, a mathematically inclined individual who would become one of the founders of information theory, Claude Shannon, re-expressed one of Kerckhoffs' principles as "The enemy knows the system".

Which gives rise to the question: have we advanced sufficiently in our understanding and science to make obscurity / obfuscation sufficient such that the enemy does not know the system?

Hmm two deeply philosophical security questions in one thread… I guess @ Nick P will be bookmarking this page 😉

Your comment suggests that you think that in certain areas security by obscurity is worthwhile, if not valid, in mass-produced items. And my personal guess is it's again an open question in certain respects.

My guess is based on the following thought chain,

1, We know all security is a tool.
2, We know all tools are agnostic and can be used both in defence and offence.
3, We further know that in the main defence against offence relies on Information or Mitigation.
4, We also know that broad mitigation can work against attacks we have no knowledge of, thus we don't need to know all information [2].

Therefore it's sensible to mitigate by obscuring specific details, as even though it may not stop an attack it will raise the thresholds of ability and time against potential attackers. It also helps reduce the opportunity for others to turn the tool on you, which is a viewpoint taken by most major Governments and their security organisations, and in certain respects used as a double bluff in the likes of the Hagelin coin-counting mechanism used in some US field cipher machines.

But in recent times it would appear that the US and other military organisations realise that the boot can be firmly on the other foot, with obscurity being most definitely out of sight. And thus recent behaviour by the US military in funding the detection of supply chain poisoning appears to confirm we are not alone in the thought that in certain places and certain ways obscurity can be good security.

[1] See David Kahn's book The Codebreakers.

[2] We saw this point in practice in the 9/11 attacks, where a fire/safety officer in one company had made sure that the employees all knew the fire evacuation procedure and had often practiced it. The result was that a much greater percentage of that organisation's staff survived compared to other less well prepared organisations in similar positions. There is little or no chance he would have considered a plane flying into the building as part of the evacuation training [3].

[3] The 9/11 attacks were not the first time NY had had a plane fly into the side of a skyscraper. Back in WWII, at 9:49 on Saturday 28th July 1945, a ten-ton B-25 bomber piloted by Lt. Col. Bill Smith, in low-visibility conditions, missed several skyscrapers before it flew into the side of the Empire State Building. But compared to 9/11 it caused relatively minor damage: an 18-by-20-foot hole on the 79th floor, killing the three crewmen and 11 office workers and injuring 26 others, with repair costs estimated at around 1 million USD.

Figureitout October 21, 2012 11:34 AM

I’m not a believer in “full disclosure”..
–Me neither, hence some of my questions may ask for too much. Regarding the other philosophical question, my dad has said simply “There’s no such thing as random”.


First, thanks for publicizing your convo. Could you possibly post some of your random output, right next to non/pseudo random output? (That may be pointless but I’m curious, sorry)

I was going to ask if you could send out circuit schematics, and how you could do that in an ultra secure manner (hand-drawn and delivered maybe), but that may be too much.

Do folks in the military get access to special suppliers or are you getting your parts truly COTS-style?

Have any of you ever thought of either starting a business or buying one, with like-minded people, where you can be a supplier and sell parts that aren’t poisoned?–Ofc, money, time, expertise will be barriers; and my generation is known for its lack of initiative (which is why I love working w/ engineers and military-folk). Basically was wondering if there’s a better solution besides crossing my fingers and hoping for best…

Nick P October 21, 2012 2:06 PM

@ figureitout

The military and govt have switched to COTS for many of their supply needs. The primary defense contractors are also known to subcontract this way. The COTS products proved superior to GOTS products in most ways, so they dominated. Both DOD & academia have projects in place to develop technology that accomplishes things like…

  1. Prove an arbitrary component behaves according to a specification. The specification is considered trusted & is somewhat abstract, the component is a concrete instance from untrusted source.
  2. Detect the presence of malicious circuits in foreign chips. (Yeah, I’m sure THAT will work out…)
  3. Develop constraints for untrusted builders that make it provably impossible to cause certain security violations or make it easier to detect them.
  4. Develop easier methods of producing provably correct systems/software/hardware/components along with evidence/proof of what it does.

Both high assurance & security communities have been researching this stuff for a while anyway. Fresh infusions of funding were given to good teams recently via NSF, DARPA, DOD, European, & Australian efforts. China is doing less in Correct by Construction, I think, but they have similar concerns leading to development of their own processors, firmware & OS stack.

So, no, there’s no shortcut. As for trusted suppliers, assuming that’s even possible…

The focus of the demand side on more releases, more features, more integration with untrusted legacy stuff, long shelf life & cheap cost means any kind of assurance efforts take the back seat. Safety- and security-critical markets are the exception, but they have dominant players. (You really going to beat likes of Boeing or Raytheon on big contracts?)

This means we have few strategies. I’ve been working on them (and throwing them away) for years. Here’s some right off the top of my head.

  1. Ignore the problem. Focus almost exclusively on recovery abilities, insuring against damages, & employ only standard security practices. Lawyer up, too. This is the COTS standard.
  2. Use high assurance methods to develop critical, long-lasting components of our IT infrastructure. Focus on interface level & make almost any part of infrastructure easy to swap out. We WILL have to.

  3. Systems with a single function can be created with limited TCB & robust software stacks. Diversity happening here can reduce risk of compromise, as well. There are quite a few processor, firmware, RTOS, middleware, etc. vendors to choose from.

  4. For complex systems, we can analyze them for the most critical points & put assurance efforts there. If it’s secrets, that part can be extracted into a secure component or device. And so on.

  5. Code development. All best practices in software development, esp quality centric, should be implemented. Various methodologies have proven low-defect track record. Certain tools help reduce development problems. Certain languages & platforms either make working code easier to create or are provably immune to key problems. We need to use what we have better.

  6. External components. All components should include a behavioral specification describing inputs, outputs, & ESPECIALLY error states/behaviors. Components should be tested/reviewed to some extent to ensure they behave as specified. They might also be wrapped or externalized in such a way as to control how information flows through them & contain issues. Monitoring is a good idea for coarse-grained complex components.

  7. If hardware subversion is a concern, the system should be made a bit hardware agnostic if possible. The entity acquiring the hardware can do it through a third party with NDA, disguise the usage case within legal allowance (EULA’s & stuff can be an issue), & randomly select from a range of usable providers. Some systems can be designed with mixed hardware/OS platforms running the same software, minimizing risks a bit.

  8. An old strategy of mine to deal with governments is to make sure you buy the hardware or host the systems in their opponents’ jurisdiction. If it’s data or a system, it might go through several jurisdictions during operation that are all unlikely to cooperate. It also helps if those organizations benefit the local government in some way, but I’ll leave that to your imagination.

There’s more strategies. I’m keeping a few secret until I’m sure there’s nothing I can do with them. Particularly, a business model for producing high assurance components sustainably in unsustainable markets. The one’s above will help solve many problems. Think of them as a mental framework for dealing with issues during requirements gathering, design & implementation.

RobertT October 21, 2012 6:41 PM

@Clive Robinson
“Your comment suggests that you think in certain areas, security by obscurity is worth while ….”

I absolutely believe obscurity is worthwhile, I’d even go so far as to say that it is mathematically inclined (Academic) Crypto types that have bad mouthed obscurity just to sell their own brand of snake oil.

Having said this, the problem with obscurity is maintaining absolute secrecy in the techniques used to obscure AND changing these techniques too often for the attacker to learn all your tricks.

I use a lot of special circuits that I believe the attacker has never seen before. Many times this means reaching back into my past to combine very old styles of logic in modern systems/processes. For example recently I used I2L logic (Integrated Injection Logic) in an advanced CMOS process, this logic was used to hide some critical steps. In another section I used NP domino logic, both are very old design styles that would confound someone new to the game and defeat any automatic reverse engineering system.

You might say that these steps will not stop a well-funded state actor, but I'm not sure that this is true. Even the best-funded organizations are prisoners of their own institutional bias, especially intellectual bias. Think about the report from a young engineer at some TLA: his boss does not want to hear "it's tricky", his boss just wants a report on top-level architecture followed up by analysis of each individual block as needed. Generally the first report back on the "tricky circuit" is that it is TRIVIAL and not worth further study. It's nothing but circuit tricks and obfuscation, hardly the kind of thing a prestigious organization should waste its precious budget on. But you know what, that level of study suits me just fine!

Sometimes I'll even use logic that changes its function when a micro-probe is added at a certain obvious test point. This way the individual block-level analysis never matches the extracted "top level" functionality.

By contrast think about the fragility of algorithms like AES when analysed with the help of DPA techniques. Frankly I find it intellectually dishonest that so many in the field of security were prepared to simply bow reverently in the direction of the crypto algorithm guys. They even had/have the audacity to lecture me on the properties that make brute force the best attack against their algo. (Side channel anyone?)

So Yes I do believe that obscurity is a very valuable security technique.

Nick P October 21, 2012 8:13 PM

@ RobertT

“I absolutely believe obscurity is worthwhile, I’d even go so far as to say that it is mathematically inclined (Academic) Crypto types that have bad mouthed obscurity just to sell their own brand of snake oil.”

I’ve been slammed on this blog & BinRev for saying the same thing. However, many successful security tricks in the old days of UNIX administration were essentially obfuscation. And they worked. A perfect example is not using the default port for common services. This stops automated worms & attacks that just aim for that port on tons of IP’s. It’s obscurity, but it works for the threat.

Another example of me using this principle is recommending businesses use Linux on POWER, SPARC or MIPS instead of x86. (Critical services, at least.) I tell them to hide evidence of what OS it is wherever possible (anti-OS fingerprinting). Just to make sure that the attacker isn't told it's "Red Hat for POWER" or something. All their system-level code injections will fail & most will never figure out why. With hardware security, it's even harder to look inside at the details. Hence, obfuscation has even more benefits by significantly raising the bar for attackers. We're seeing that to a degree with iOS hardware encryption. Feds aren't cracking it left and right, so it's working. 😉

“Frankly I find it intellectually dishonest that so many in the field of security were prepared to simply bow reverently in the direction of the crypto algorithm guys. They even had/have the audacity to lecture me on the properties that make brute force the best attack against their algo. (Side channel anyone?)”

Yes. Most of their stuff failed. That’s why I push for methods to still make beating COMSEC a hard COMPUSEC problem in addition. There is a benefit to well-vetted algorithms. However, crack one standard & everyone is toast. I prefer to make a solution that applies something well-vetted in a way where an attack on it won’t crack the whole thing.

“So Yes I do believe that obscurity is a very valuable security technique.”

It’s literally saved people’s lives and assets. Evidence is on our side on that one. I would also bet that NSA’s Type 1 crypto, good as it probably is, would get cracked quickly if they didn’t keep its design secret & the equipment accounted for.

Wael October 21, 2012 10:21 PM

@ Nick P, Clive Robinson, RobertT

I absolutely believe obscurity is worthwhile…

I agree as well. Obfuscation does have its place.

Clive Robinson October 22, 2012 7:34 AM

On obfuscation & randomness.

For a moment flip your thinking the other way up: instead of thinking that "information" is an artifact of our "physical universe", ask yourself what the implication is of our "physical universe" being an artifact of an "information universe".

It is very easy to realise that there is more information in the universe than there is physical matter; that is, information can be encoded not just in the state of the most fundamental particles, it can also be encoded in the relationship of particles to each other, and in other encodings as well (what I call data shadows).

The problem with information encoded by relationships is twofold: firstly, like a one-time pad, you need to know the key to reveal the information; secondly, you need to be able to store and communicate both the encoded information and the key in order to store, communicate and thus usably recover the information…

Thus we know from this that it is not possible to know all the information there is to be known…

Therefore we know we will not always be able to determine information, because even our physical universe obfuscates it from us.

Now if we assume our physical universe is a subset of an information universe (which is what it appears it might be), we need to ask questions about information, but can we actually get meaningful answers?

Possibly not. Look at it this way: unstable atoms break down into other atoms that may or may not be stable, until you get down to the bottom of the curve where the atoms are stable. The thing is, the process is apparently random in both which individual atom breaks down and when, but seen overall it is very predictable in terms of a half-life, even in quite small quantities. Ask yourself how you reconcile this in reality (not statistically); that is, how does an individual atom know it is time for it to break down, so as to maintain the half-life curve?
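The half-life point is easy to see in a toy simulation (parameters are illustrative): each "atom" decays independently with a fixed per-step probability, knowing nothing about the others, yet the surviving population tracks the predictable exponential curve.

```python
import math
import random

def simulate(n_atoms, p, steps):
    """Per-step: each surviving atom independently decays with probability p."""
    alive = n_atoms
    counts = [alive]
    for _ in range(steps):
        decayed = sum(1 for _ in range(alive) if random.random() < p)
        alive -= decayed
        counts.append(alive)
    return counts

counts = simulate(n_atoms=20_000, p=0.01, steps=100)
half_life = math.log(2) / 0.01   # about 69 steps predicted for p = 0.01
# counts[69] lands close to 10_000: individually random, en masse predictable
```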

Then look at all our other "random processes", which in reality, below a certain scale, are not really independent of each other, because en masse their behaviour is predictable statistically.

You could argue that on a higher level than the physical universe we perceive there is actually a deterministic process going on that we cannot see, while at the levels we can see, for an individual minimum quantity of matter or energy, there is a process that looks random but en masse is deterministic.

Our definitions of random are manifold, but at the end of the day they come down to "unpredictability", which as far as we can see does not mean "nondeterministic", only "we cannot see the deterministic process"; thus it is hidden or obfuscated from us.

So as for the "mathematically inclined (Academic) Crypto types that have bad mouthed obscurity just to sell their own brand of snake oil", they know that (One Time Pad) obscurity is actually the benchmark against which their deterministic models are measured.

But obscurity, just like the one-time pad, has to be bounded in its use, otherwise determinism shows through. In the case of the OTP, use the same key twice and you can cancel the random noise out, to leave the addition of two deterministic but unknown sequences (plaintexts), which can usually be separated from each other.
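The key-reuse failure is easy to demonstrate in a few lines (the plaintexts here are chosen purely for illustration): XOR the two ciphertexts and the key cancels exactly, leaving the XOR of the two plaintexts.

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!!"
p2 = b"retreat at dusk!"
key = os.urandom(16)            # a perfect random key... used twice (the mistake)

c1, c2 = xor(p1, key), xor(p2, key)
leaked = xor(c1, c2)            # equals xor(p1, p2): the key has vanished
```

What remains is deterministic structure (two natural-language texts added mod 2), which classical "crib-dragging" techniques pull apart.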

For instance, if you take several plaintexts and add them together (mod 2 etc.), at some point their deterministic characteristics will no longer be determinable from an unknown or random sequence (at one point it was believed that a book code using four books was, from a practical viewpoint, unbreakable).

And it is this "add independent deterministic effects" to get "random effects" which is where obscurity does work. That is, you can raise the bar to the point where our current mathematical tools cannot tell the difference. In this respect it is a form of "defence in depth".

But… tools are methods, and we know that at the end of the day methods, like wine, generally get better with time.

This is something we see with block crypto in the safety margin of extra Feistel rounds.

So I don't think at heart cryptographers are trying to sell snake oil, but ONLY "mathematically valid methods". It's the next step, to practical implementation, where it all goes wrong and "side channels", especially timing side channels, open up, because they were not considered as part of the idealised / clean mathematical model.

Now I know one reason why this happens, and it's the perennial war of "security -v- efficiency". Whilst it is possible to have special-purpose dedicated systems that are both secure and efficient, they are of no use for other things. General-purpose systems are neither inherently efficient nor secure, but are of use for many things.

It's squaring this circle between specialised and general purpose that is extremely difficult. An unoptimised CPU design is actually easier to write secure code for than an optimised CPU design; it is the optimisation to improve efficiency in certain operations that opens up the side channels. If and only if you are aware of exactly how these optimisations work can you mitigate their effects and have a secure design.

I very firmly point the finger at the NSA for the problems we have seen with timing channels in practical AES implementations. Why? Because they would have known without doubt about "security -v- efficiency", and as NIST's domain experts they should have included this in the AES competition specs. The fact they did not can only be seen as a fairly deliberate "oversight". When you add in the deliberate requirement for speed/efficiency in the competition, with the requirement that this "speed freak" software be made available to all, it was inevitable that nearly all initial AES implementations would be riddled with timing-channel issues.

The question is, why did the entrants not raise this with NIST? It is clear that some of the other entrants had higher immunity to side channels than the novel "bricklayer design" of Rijndael.

It is in effect a higher than “specification level” subversion attack and as such you have to wonder who thought it up.

Although we are not stuck with AES, NIST never put into consideration a mitigation policy for what to do when (not if) AES deficiencies come up. For some time now I've advocated flexible and extensible framework standards that allow for the changing of not just the primitives, such as AES, but the modes they can be used in as well, and further what you might call the meta-modes, which are in effect the practical realisations of the way we implement crypto in applications. For instance, a PRBS to generate a nonce is defined by the size of the output and the seed values; the fact it could be done with either a stream or block cipher is fairly irrelevant to its function. Likewise, if it was a block cipher, its mode, be it CTR or a block feedback mode, with or without IV/whitening, is again irrelevant to the overall function; likewise whether the block cipher is AES/Blowfish/etc.

One of the things about the latest competition, for hashes, is that again the "novelty act" appears to have been selected, giving rise to the thought of what practical implementation issues, such as timing side channels, we are going to see with it down the line…

Clive Robinson October 22, 2012 8:54 AM

@ figureitout,

“I was going to ask if you could send out circuit schematics, and how you could do that in an ultra secure manner (hand-drawn and delivered maybe), but that may be too much”

It's not really necessary; there are many agnostic designs already out there. The important thing to remember is that the less complexity there is in any individual circuit element, the harder it is to "backdoor" it in some way at the factory it is made in.

You can make a simple low-level noise source by using a diode junction and a resistor from a clean power source like a battery. Some diodes are inherently noisier than others, and lowish-voltage Zener diodes have been used for some time to produce usable noise up into the RF bands.

You can improve its performance by actually using two resistors, one from supply to Zener and one from Zener to ground, and capacitively coupling both ends of the Zener into a differential amplifier like a low-noise op-amp. You can then take the output of the op-amp and feed it into another op-amp to provide either more gain or hysteresis like a Schmitt trigger. Providing you run the second op-amp off the same rails as the CPU chip, you can connect it into a CPU pin via a resistor.

There is a trick whereby you don’t need to build the Schmitt trigger. What you do is AC (capacitively) couple the high-level noise signal into a CPU input pin via a resistor, and put a small-value capacitor from the input pin to ground; you also connect another resistor to the input, but DC (i.e. directly) coupled to a CPU output pin. When you take the output high, the input will at some later point also go high; the time interval depends on the sum of the two input voltages integrating on the small-value cap. You then take the output low, and some time later the input will go low. If you average the go-high and go-low times over several readings, the difference between this average and a single reading time can be used as a crude but effective A-to-D converter. It’s often called a ramp-compare or dual-ramp-compare ADC. Similarly, with a little extra effort you can make it an integrating or dual-slope ADC, which is often used in digital volt meters for its high-linearity conversion. Either of these can be used with oversampling to make a simple sigma-delta converter, which trades conversion time for accuracy using digital filtering techniques.
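The averaging trick can be seen in a rough software model (not firmware — the constants `T0` and `K` are made-up stand-ins for the RC time constant and the noise sensitivity): the long-run average of the ramp time gives a baseline, and the deviation of a single reading from that baseline recovers a scaled noise sample.

```python
import random

T0 = 1000.0  # nominal charge time (arbitrary units), set by R and C
K = 50.0     # assumed sensitivity: how much noise voltage skews the ramp time

def ramp_time(v_noise: float) -> float:
    """Toy linear model: the noise voltage shifts the threshold-crossing time."""
    return T0 - K * v_noise

def sample(v_noise: float, baseline: float) -> float:
    """One crude conversion: baseline average minus a single ramp reading."""
    return baseline - ramp_time(v_noise)  # equals K * v_noise in this model

# Establish the baseline by averaging many readings of zero-mean noise.
readings = [ramp_time(random.gauss(0.0, 0.1)) for _ in range(10_000)]
baseline = sum(readings) / len(readings)
```

With zero-mean noise the baseline converges to `T0`, so each single reading’s deviation from it is the digitised noise sample.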

However, wherever there is analogue circuitry there are not just noise issues but EM and other energy (mechanical/sound etc.) fields that will cause a low-level desired signal to be swamped and disappear below the noise floor. There are many books and application notes on EMC and good circuit-layout techniques for analogue circuits; there are even those specific to “the care and feeding of ADCs”.

As RobertT indicated, FFTs are a good way to spot when such fields are having an effect (FWTs do a similar thing for “sequency” rather than “frequency” space). Although FFTs are somewhat complicated to code in software, you can use a DFT built around lookup tables to get a less resource-hungry solution.
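A table-driven DFT really is only a few lines — here is a minimal sketch (my own, with an arbitrary 64-point size) that computes the magnitude of a single bin using nothing but precomputed tables, multiplies and adds, which is why it suits small embedded targets:

```python
import math

N = 64
# Precomputed lookup tables: one cosine/sine cycle sampled at N points.
COS = [math.cos(2 * math.pi * i / N) for i in range(N)]
SIN = [math.sin(2 * math.pi * i / N) for i in range(N)]

def dft_bin(samples, k):
    """Magnitude of DFT bin k via table lookups; indices wrap modulo N."""
    re = sum(samples[n] * COS[(k * n) % N] for n in range(N))
    im = sum(samples[n] * SIN[(k * n) % N] for n in range(N))
    return math.hypot(re, im)
```

Sweeping `k` over a block of ADC samples and looking for an unexpected peak is the check for an injected interfering field; a clean noise source should show no dominant bin.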

That kind of takes care of a low-end TRNG “noise source”. However, as indicated by RobertT, you can partially protect yourself from external attack by adding the output of a stream generator into your ADC. The effects it has are a little complicated, to put it mildly, but you can see the interference being re-modulated by the stream cipher and in effect pushed down by the coding gain, to a point beyond which the noise source ceases to have any significant input and the RNG becomes in effect a CS-PRNG. There are various ways you can use the stream cipher, but one is as the driver for the slope or ramp generator; done right, this also has the effect of adding “jitter”, which can actually improve an ADC.
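The graceful-degradation property is easiest to see in the simplest mixing arrangement: XOR each raw noise byte with the stream generator’s output. The sketch below is a toy (hash-in-counter-mode as a stand-in stream cipher; names are mine), but it shows that even a completely swamped noise source leaves you with the stream generator’s output rather than the attacker’s constant:

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Toy stream generator: hash in counter mode, standing in for a real
    stream cipher. Yields one byte at a time, indefinitely."""
    for i in count():
        for b in hashlib.sha256(key + i.to_bytes(8, "big")).digest():
            yield b

def mixed_rng(noise_bytes, key: bytes):
    """XOR each raw noise byte with the stream generator's output."""
    ks = keystream(key)
    for nb in noise_bytes:
        yield nb ^ next(ks)

# Even if an attacker swamps the noise source down to a constant, the
# mixed output degrades to the stream generator, not to that constant.
stuck = [0x00] * 16
out = list(mixed_rng(stuck, b"key"))
```

With a healthy noise source the XOR adds its entropy on top; with a dead one you are left with, in effect, a CS-PRNG, which is the failure mode you want.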

RobertT indicated a Linear Feedback Shift Register or LFSR; many stream ciphers are based on “lagged generators” of one form or another. However, they all have the defect of being “linear” and thus amenable to mathematical analysis. So although the lagged generator is the heart, the output is actually produced by nonlinearly combining the various latch outputs. The simplest way to do this is with a combination of AND and OR gates. If you draw out the truth tables of a two-input AND gate and a two-input OR gate, you will see that in either case the four output states are unbalanced: the OR gate gives zero out only for the zero-zero input and one for the other three combinations, while the AND gate gives zero out except for the one-one input state. If, however, you feed an OR gate with the outputs of two AND gates which are attached to four of the latches, you get a more balanced output across the sixteen states. The problem for an analyst is that they cannot go backwards from the output being a zero or a one to find the actual input state on the four inputs. Various combinations and levels of such gates make the mathematics not impossible but generally a lot harder than trying to brute force every state… You can also add elements like the “stop and go” generator technique, but be careful you don’t unbalance the output too much.
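The AND/OR combiner over an LFSR can be sketched in a few lines. This is a minimal illustration, not a secure design: the taps are the well-known maximal-length 16-bit polynomial (x^16 + x^14 + x^13 + x^11 + 1), and my choice of which four latches feed the gates is arbitrary.

```python
def step(state: int) -> int:
    """One clock of a 16-bit Fibonacci LFSR (taps 16,14,13,11; period 2**16 - 1)."""
    fb = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return ((state >> 1) | (fb << 15)) & 0xFFFF

def output_bit(state: int) -> int:
    """Nonlinear combiner: OR of two ANDs over four latch outputs.
    7 of the 16 input patterns give a 1, so the output is close to balanced,
    and one output bit does not reveal the four input bits."""
    a = (state >> 2) & 1
    b = (state >> 5) & 1
    c = (state >> 9) & 1
    d = (state >> 13) & 1
    return (a & b) | (c & d)

def bits(seed: int, n: int) -> list:
    """Generate n output bits from a nonzero 16-bit seed."""
    s = (seed & 0xFFFF) or 1
    out = []
    for _ in range(n):
        out.append(output_bit(s))
        s = step(s)
    return out
```

Note the 7/16 ones-density the truth table predicts: close to balanced, but exactly the kind of residual bias the “don’t unbalance the output too much” warning is about, which is why real designs stack further combining levels.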

Any way have fun 🙂


Sidebar photo of Bruce Schneier by Joe MacInnis.