Brian Sniffen September 7, 2013 8:54 AM

If you have an OTR fingerprint, your PGP key probably isn’t worthwhile. Why?

Almost every OTR client uses libpurple: Adium, Gaim, etc. It’s an endless source of remotely exploitable bugs. When they’re exploited, that machine and its logs are gone instantly. Its passphrases are gone shortly thereafter.

Without a description of operations, an OTR key makes a claim of PGP-based security less plausible, not more.

Sami Lehtinen September 7, 2013 9:16 AM

Interesting. I guess you know that you can set preferences for your public key. I just wonder why your highest-priority cipher is AES256 and not TWOFISH. Do you think that TWOFISH256 is worse than AES256? Or don’t you trust Twofish at all, since it isn’t listed?

tobi September 7, 2013 9:25 AM

Isn’t it funny how the NSA scandal changes people’s behavior? I used to try to scare a friend into using OTR with me. He never wanted to.

Foxtrot September 7, 2013 9:35 AM

I don’t understand Public Key encryption well enough to know if my use of it is secure. For example, the discussion above is over my head. So I think it might not be worth my time and effort.

Bruce Schneier September 7, 2013 9:45 AM

“Interesting. I guess you know that you can set preferences for your public key. I just wonder why your highest-priority cipher is AES256 and not TWOFISH. Do you think that TWOFISH256 is worse than AES256? Or don’t you trust Twofish at all, since it isn’t listed?”

I accepted the defaults in all except key length.

NobodySpecial September 7, 2013 9:47 AM

But Mr Bruce (allegedly) Schneier – the only way I know this is really you is the SSL cert for the site. Which is issued by Symantec, who have been fully cooperating with the NSA.

I suggest we meet in the usual place to exchange certificates – wear a pink carnation.

dafs September 7, 2013 10:03 AM

Where do you store your OpenPGP private key? Just on your (online) computer, or on some kind of dedicated device like a smartcard? Do you think storing private keys on a smartcard has significant advantages?

Perseids September 7, 2013 10:54 AM

As long as you have done nothing to deserve special attention from the NSA, they will not bother with a man-in-the-middle attack of this sophistication just for you. If you plan to deserve their attention in the future, grab his (and everyone else’s) pubkey now.

R September 7, 2013 11:03 AM

@dafs Bruce S is now working with Snowden docs. He’s already revealed a lot about his new workflow. But telling you where he stores his private keys probably isn’t in the cards.

I am curious how long it will be before the FBI gets warrants to seize the devices of non-journalists who are working with Snowden docs. Schneier’s will be the most secure of the bunch. But what is his exposure for simply having the docs in his possession, which he admits, even though they are not accessible? The same protections afforded journalists will not be extended to him, though he may start positioning himself as a journalist to cut them off at the pass.

I am actually surprised the publishers holding control of these docs don’t insist on new-eyes-only access and review on their premises.

Forewarned is forearmed.

(Bruce, feel free to not approve this comment and/or delete it, if you wish.)

Impossibly Stupid September 7, 2013 11:51 AM

“But Mr Bruce (allegedly) Schneier – the only way I know this is really you is the SSL cert for the site. Which is issued by Symantec, who have been fully cooperating with the NSA.”

You should assume that Bruce himself is fully cooperating with the NSA. He has been unwilling or unable to say that he has not received an NSL. There is no reason to think he can be trusted, and the same is true of anyone else who can’t speak freely.

SteveJ September 7, 2013 12:10 PM

Shortly after a public announcement that Bruce is working with the leaked documents, a new set of keys makes its way onto his website.

What assurances do we have that it is even Bruce updating the website? How do we know he hasn’t been whisked off to some secret facility and was forced to create new keys?

I’d suggest a video of Bruce reading the fingerprints, but that wouldn’t really prove anything either.


Anon Y. Mouse September 7, 2013 12:14 PM

It appears that you are forcing everyone to access your blog via HTTPS. I am unable to read it now with my old system/browser, as they do not share any keys (it’s that old). However, I don’t care if anyone knows I’m reading your blog, and the content is publicly available anyway. In this case, shouldn’t you let the user choose whether they want to use HTTPS or unencrypted HTTP?

Chuck September 7, 2013 12:23 PM

So, what should an up-to-date gpg.conf look like?

I pruned mine to drop the shorter symmetric keys:

default-key XYZ987
hidden-encrypt-to 123ABC
s2k-digest-algo SHA512
s2k-cipher-algo TWOFISH
personal-cipher-preferences TWOFISH AES256 CAMELLIA256
personal-digest-preferences SHA512 SHA384 SHA256
personal-compress-preferences BZIP2 ZLIB ZIP
#cipher-algo TWOFISH AES256 CAMELLIA256 #force longer keys?
default-preference-list TWOFISH AES256 CAMELLIA256 SHA512 SHA384 SHA256 BZIP2 ZLIB ZIP

Curious September 7, 2013 12:46 PM

Bruce, now that you have had a good look at certain documents and are replacing your keys shortly afterwards, I wonder if you could comment on some of the specifics of creating your new key. GnuPG on Windows: what version, and binary or source-compiled? Did you do anything different in how you chose algorithms, key lengths, random numbers, the OS used, and how you will store your key?

Just curious 🙂

NobodSpecial September 7, 2013 1:38 PM

@Perseids – I think it’s rather Mr Schneier who is a “person of interest”.
It seems likely that people with interesting stories about surveillance would contact him – presumably using encryption.

Chuck September 7, 2013 2:22 PM

If you are using
* Firefox: open about:config, search for RC4, and disable the booleans
* Opera: go to settings -> security -> protocols and uncheck RC4
* ??

…then you get AES256/DHE,1024,RSA,SHA

Mr. Stone September 7, 2013 3:25 PM

Given that your access is to a public blog with public posts, I’m not sure why you’d worry too much about the cipherspec.

abcdefg September 7, 2013 3:50 PM

But Mr Bruce (allegedly) Schneier – the only way I know this is really you is the SSL cert for the site. Which is issued by Symantec, who have been fully cooperating with the NSA.

1) What you need is certificate pinning (or public key pinning).

Without pinning, HTTPS falls to MITM attacks that replace the cert with another one also accepted by the browser.

2) As far as I know, a CA issuing a cert only knows its public key, not the private key. The latter is known to the website only.

Additional hint to point 1: If you use certificate pinning, you can disable the default browser certificate validation via CRL and especially OCSP, to prevent CAs from tracking which websites you visit.

Also, there are many other browser configuration tweaks that can be used to harden security and privacy. And they are not the responsibility of the website owner, but of the user.
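To make point 1 concrete, here is a minimal Python sketch of certificate pinning: hash the server’s DER-encoded certificate and compare it against a fingerprint obtained out of band. The PINNED_SHA256 value is a hypothetical placeholder, not a real fingerprint.

```python
import hashlib
import socket
import ssl

# Hypothetical fingerprint, obtained out of band (e.g. exchanged in person).
PINNED_SHA256 = "0" * 64

def cert_fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def connection_matches_pin(host, port=443, pinned=PINNED_SHA256):
    """Connect, fetch the peer certificate, and compare fingerprints."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return cert_fingerprint(der) == pinned
```

A MITM certificate issued by any CA, Symantec included, would pass the browser’s default validation but still fail the fingerprint comparison.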

Dirk Praet September 7, 2013 4:52 PM

@ Impossibly Stupid

You should assume that Bruce himself is fully cooperating with the NSA. He has been unwilling or unable to say that he has not received a NSL

You’re probably just trolling to live up to your handle, but Bruce did answer this question a couple of threads ago.

@ Curious

I wonder if you could comment on some of the specifics you used for creating your new key

But he did, in reply to a similar question from @Sami Lehtinen at the top of this thread.

If you guys can’t be bothered to read up on the whole thread and those related, please go somewhere else. You’re wasting Bruce’s time and that of many other people who actually care about what is going on.

NobodySpecial September 7, 2013 7:06 PM

@abcdefg – I wasn’t sure about point 2. I couldn’t remember if the CA could only certify a site cert, or whether they could completely fake the original site cert for a new site.

Curious September 7, 2013 7:26 PM

I suppose this other ‘Curious’ guy/girl above might be fishing for some kind of admission from Bruce of having handled the documents as such. Or I am slightly paranoid here in thinking that such a concern would matter at all. Having said this, I do not have an overview of what everyone writes here, so I feel a bit dumb expressing my concern (because it might have been irrelevant in the first place).

newbie question September 7, 2013 8:48 PM

Can someone explain the difference between the ASCII strings between the —BEGIN…— and —END…— blocks in the armored public key linked to in the current post by Bruce and the one under EDACEA67 on the keyserver? The first few lines are the same; after that, are the differences due to version? Or are they different keys, and the similar initial material is possibly email info?

Private September 7, 2013 9:20 PM


If you doubt the validity of his new RSA public key, you can ask him to sign it with his previous RSA private key.

Actually, he should have done that of his own accord.

SteveJ September 7, 2013 9:22 PM


I had thought of that as well, and was a bit surprised that he did not. Then again, the $5 wrench solution negates that too.

Amber September 7, 2013 9:37 PM


Whilst Bruce didn’t state which version of Windows he uses, he does state that he currently uses it.

As such, it is questionable as to how secure his system really is.

Now I am wondering whether Windows can even be configured so that if the right Bluetooth device is not connectable at bootup, the system automatically goes into “failure mode”, the way one can configure Android devices and Linux boxes to “fail” if the appropriate Bluetooth device does not connect.

LossOfTrust September 8, 2013 7:41 AM

I can’t trust any commercial encryption product, as the NSA may have weakened the code. And I can’t trust TrueCrypt, which might be an NSA project. Open Source encryption software might similarly have been weakened. And the NSA could have decryption capabilities beyond what experts examining Open Source software know. So basically, there is no reliable source of security software.

kevinm September 8, 2013 12:24 PM

@newbie question
Good question: the difference is due to the key being signed. When you import the key from the MIT keyserver you see some text that begins with
“gpg: key EDACEA67: “schneier” 10 new signatures”

Jacob September 8, 2013 1:14 PM

Bruce, when you participated in the SHA-3 competition, was there any hint of any gov agency’s attempt, influence or desire to weaken the Skein or the winning Keccak algorithm?

ihazaquestion September 8, 2013 4:33 PM

Why have you not signed your new key with your old key? This is basic stuff.

Recently Cory Doctorow claimed that 1025-bit asymmetric encryption was twice as hard to crack as 1024-bit asymmetric encryption.

I’m starting to get sick of this kind of thing.

Aardvark Soup September 8, 2013 4:51 PM


And even if you could somehow verify the security of an open-source project with 100% accuracy by looking at its code, you still don’t know whether you can trust the compiler, or the compiler’s compiler, or the OS, or the CPU, or…

Obviously, perfect security does not exist, so you’ll just have to make a decision which product you deem most likely to be trustworthy. That is probably more effective than complete paranoia.

Anon10 September 8, 2013 9:04 PM


  1. Do any of the Snowden documents change your views on two factor authentication?
  2. Related to that, what is your view of the value of soft pki certificates v. smart cards with embedded private keys?

Larry September 9, 2013 1:27 AM


I sense a need for an update to Practical Cryptography/Cryptography Engineering as soon as the dust settles.

ATN September 9, 2013 5:19 AM

Not being a specialist, a simple question: is there a way to recover the private key from the public key if you have the last month’s worth of random data generated by the PC?

Same question: is there a way to recover the intermediate keys (after uncrackable authentication has been done) to read the content of messages, knowing the last few days of random data?

Clive Robinson September 9, 2013 8:28 AM

@ ATN,

… is there a way to recover the private key from the public key if you have the last month’s worth of random data generated by the PC?

Your question is lacking in what you mean by random data.

However, the answer to “is there a way to recover the private key from the public key” is yes in theory, but currently impractical in practice. What we do know is that there is no proof it cannot be done extremely quickly, just that nobody has published either a proof or a method, so the game is currently open either way (as far as we know publicly).

What we also know, with keys whose base is the product of two primes P and Q, is that if you know one prime you know the other, and that there are very fast tests that show if a public key shares a prime with another public key. So if your pubkey is tested against all the other pubkeys on the internet, it can quickly be found whether your key has a prime in common. This would not really matter if we had good random number generators, but invariably we don’t; with embedded systems that generate pubkeys on first start-up, the amount of entropy is often less than desired by a very large margin. Tests have shown that there are a lot of pubkeys out there that do share common primes, so much so that it defies probability unless the random generator is crocked. Further investigation has indicated that the source of these improbable pubkeys tends to be common to the same software used…

Whilst pubkeys with common primes are bad enough on their own, they raise another issue, which is “limited search space”. The probability of the same prime being selected by two different bits of kit is related to the range of random numbers produced. The smaller the range, the greater the probability of it occurring, and thus the greater the odds of tailoring a simple search to match other pubkeys.
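The shared-prime attack described above is just a GCD computation. A toy illustration with tiny primes (real RSA moduli are hundreds of digits, but the arithmetic is identical):

```python
from math import gcd

# Two toy RSA moduli whose generators happened to pick the same prime p.
p, q1, q2 = 101, 103, 107      # tiny illustrative primes
n1 = p * q1                    # first device's public modulus
n2 = p * q2                    # second device's public modulus

shared = gcd(n1, n2)           # one cheap gcd recovers the common factor
print(shared)                  # 101

# Both moduli now factor instantly, exposing both private keys.
assert shared == p
assert n1 // shared == q1 and n2 // shared == q2
```

No factoring breakthrough is needed: pairwise GCDs across millions of harvested public keys are entirely feasible, which is exactly why weak embedded RNGs are so damaging.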

It’s an odds-on bet that the NSA knows of every weak random number generator there is in commercial equipment, has characterised their limitations, and in some cases has built systems to exploit them.

Worse is the fact that the systems most likely to have crocked random number generators are embedded devices such as firewalls, switches and routers, which is exactly the sort of gear the NSA is believed to target…

But there is another issue, which is kleptography [1]; it is especially bad when generating public keys, due to the very high level of redundancy possible. Basically, you can write a program to generate public keys that, using a hidden public key in the software about a third of the length of the public key being generated, can hide the starting point of the search for the first prime used in the user’s generated public key. The problem is that even if the software prints out the two primes and the generated public and private keys, there is no way the user can analyse them to find the use of the hidden public key. However, as the software writer, you (may) have access to the private key corresponding to the hidden public key, and you can use this to decipher the search start point. You then feed this into the rest of your prime-search algorithm, and out pops the first of the primes used to generate the public key pair in a matter of seconds or minutes; you then use this to find the second prime from the user’s public key, and thus have their private key…

[1] The idea originated with Adam Young and his academic supervisor Moti Yung.

CallMeLateForSupper September 9, 2013 8:36 AM

@ Foxtrot
“I don’t understand Public Key encryption well enough to …”

You might want to dig up a copy of “PGP – Pretty Good Privacy” by Simson Garfinkel. Very good read. I don’t know if it is still in print. My copy, 1st edition, is dated 1995.

h4x September 9, 2013 8:39 AM


I still use Skein/Threefish implementation because they work well on Android.

As for PGP keys, generate a gigantic passphrase, given the ease with which they can break Bitcoin keys that were generated using brain wallets. People with 20-character passwords are finding their coins stolen from the block chain. Also read the tobtu/hashcat forums, where they have been slicing through LastPass and 1Password.

ATN September 9, 2013 10:18 AM

@Clive Robinson

Your question is lacking in what you mean by random data.

I mean the random data that software has used up to now.
In Linux, I mean what has been read from /dev/random.

There does not seem to be a lot of software which uses random numbers – most programs want to be consistent and produce the same output from the same input. So it does not look like an impossible task for a “virus” to leak all the data that has been produced by the random generator and send it to another computer.

Now, if you have all the random data used by “gpg –gen-key” and the public key, can you deduce (later on, on a big computer) the private key?
Same question: if you are using an automatic password generator and know every number read from the random number generator, can you deduce the password which has been generated?
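For the password-generator case, any deterministic generator replayed with the same inputs yields the same output. A hypothetical sketch, using Python’s (non-cryptographic) seeded PRNG to stand in for the leaked RNG stream:

```python
import random
import string

def gen_password(rng, length=12):
    """Toy password generator driven entirely by an external RNG stream."""
    alphabet = string.ascii_letters + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

leaked_seed = 0xC0FFEE                       # what the malware exfiltrated
victim = gen_password(random.Random(leaked_seed))
attacker = gen_password(random.Random(leaked_seed))
assert victim == attacker                    # same inputs, same "random" password
```

The same principle applies to key generation: the private key is a pure function of the random bytes consumed, so leaking those bytes is as good as leaking the key.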

Clive Robinson September 9, 2013 12:18 PM

@ ATN,

Right, by random data you are really talking about the TRNG output.

The answer is yes, if you can work the one-way functions backwards, or, if the TRNG is badly designed, run them forwards.

But if we are talking kernel-level malware, the best thing to leak would be the input to the one-way functions, which is sometimes an “entropy pool” or, apparently in the current Linux setup, the output of the Intel chip’s (supposed) TRNG. In either case the entropy pool should be stirred at different rates, from milliseconds to weeks.

If I was going to take output from /dev/random, I would absolutely not use it “raw”. I would take a number of readings and use the time / process ID / user keypress timing to shuffle parts of the readings around, as well as flipping bits, and use the result as the key and seed input to AES-256-CTR. If done right this will help break any deterministic link between RNG data and actual usage as KeyMat etc.
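A rough sketch of that idea. Since the Python standard library has no AES, a SHA-256 counter-mode keystream stands in for AES-256-CTR here; the structure (mix several readings plus local timing, then expand through a keyed function) is the point, not the particular primitive.

```python
import hashlib
import os
import struct
import time

def whitened_bytes(nbytes, readings=4):
    """Mix several raw RNG readings with timing jitter, then expand the
    digest through a hash-in-counter-mode keystream (standing in here
    for AES-256-CTR)."""
    raw = b"".join(os.urandom(32) for _ in range(readings))   # "raw" reads
    jitter = struct.pack("<Q", time.perf_counter_ns())        # timing tweak
    key = hashlib.sha256(jitter + raw).digest()               # 256-bit key/seed

    out = bytearray()
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(key + struct.pack("<Q", counter)).digest()
        counter += 1
    return bytes(out[:nbytes])
```

An observer who saw only the raw readings still has to invert the mixing step to predict the keystream, which breaks the direct link between leaked RNG output and the key material actually used.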

Randell Jesup September 9, 2013 4:15 PM

@ Clive Robinson:
When I was building a netrek client for the Amiga in the days before this new-fangled “http” thing, people had decided they were annoyed enough with other people compiling clients that auto-targeted, etc, that they started releasing clients with RSA-derived certs (with permission of RSA). I went to get my Amiga client signed, and before publishing the public key I logged into a netrek server to make sure it kicked me out and didn’t crash the client. To my surprise, it let me in and said “jesup (HPUX xxxx)” (or something like that). Apparently the random number generator used in the keygen on the Sun 3/50 I ran it on wasn’t so hot….

GIGO (or in this case, non-GI, non-GO)

Dirk Praet September 9, 2013 5:27 PM

@ Clive

But if we are talking kernel-level malware, the best thing to leak would be the input to the one-way functions, which is sometimes an “entropy pool” or, apparently in the current Linux setup, the output of the Intel chip’s (supposed) TRNG.

I suppose you are aware of an ongoing discussion at Reddit about this.

@ everybody else: has anyone bothered to sign Bruce’s new public key yet ?

Clive Robinson September 9, 2013 6:49 PM

@ Randell Jesup,

Yup, many RNGs were quite bad (and many still are). I’ve been designing TRNGs and CS-PRNGs off and on for over a third of a century, mainly on the hardware side, using individual components for the noise sources and limited-functionality ICs (analog op-amps and 74xx TTL chips) to provide a sufficiently usable source for a microcontroller to do the software bits.

I’ve learnt a lot in that time, mainly just how badly other people do it, and how they don’t take the time to lift real entropy out of the faux entropy…

But it’s somewhat surprising to reflect that most programmers, even today, don’t know the advantages and disadvantages of various RNGs, which one is most appropriate for their application, and why. Even some quite famous programmers…

@ Dirk Praet,

I was vaguely aware that Linus had thrown the toys out of the pram over the RNG in the Linux kernel.

I only had to hear the words “Intel onboard…” to get a shudder down the spine and a queasy feeling.

To put it simply, I’ve said repeatedly for something like fifteen years that the Intel “on chip” RNG went about things all wrong (search this blog for my name and “magic pixie dust” to see some of my comments), and thus no confidence should be held in the quality of its output.

RobertT September 9, 2013 7:39 PM

@Clive Robinson
I’d be careful not to attribute Intel’s RNG design incompetence to anything other than incompetence.
Intel has never had a competent analog, RF or mixed-signal design group, and without this DNA in the group’s background you’ll find that even first-order effects like PSRR (Power Supply Rejection Ratio) on the RNG get lost in the pseudonoise stage and are never directly measured, tested, or even testable on a fully working chip.

Part of the problem is that most real-life random noise exhibits a bathtub-type curve: at low frequencies (where most of the noise is) it is dominated by 1/f noise (a typical 1/f corner today is about 100 kHz). Above this frequency the noise will typically be between 1 nV/√Hz and 10 nV/√Hz. At very high frequencies we find the noise level hooking up again, BUT this is a VERY difficult area to work in, because sample accuracy/inaccuracy folds back into amplitude noise; in other words, phase noise (jitter) becomes amplitude noise.

Unfortunately, in the most useful region the low magnitude of the noise means that second-order effects like PSRR and substrate noise tend to dominate the overall results. This causes many people to resort to directly using “noisy” self-oscillating LFSRs; these cells primarily fold clock jitter into pseudonoise. It’s really complex, BUT it’s NOT white noise.

AC2 September 10, 2013 2:19 PM


Linus was happy to include Intel’s rdrand, and responded to a recent petition to get it removed with the following:

“Where do I start a petition to raise the IQ and kernel knowledge of people? Guys, go read drivers/char/random.c. Then, learn about cryptography. Finally, come back here and admit to the world that you were wrong. Short answer: we actually know what we are doing. You don’t. Long answer: we use rdrand as one of many inputs into the random pool, and we use it as a way to improve that random pool. So even if rdrand were to be back-doored by the NSA, our use of rdrand actually improves the quality of the random numbers you get from /dev/random. Really short answer: you’re ignorant.”

ddr September 10, 2013 7:23 PM

RdRand is used exclusively for memory address randomization in the Linux kernel, so technically an adversary could defeat ASLR and know exactly what the memory layout is.

/*
 * Get a random word for internal kernel use only. Similar to urandom but
 * with the goal of minimal entropy pool depletion. As a result, the random
 * value is not cryptographically secure but for several uses the cost of
 * depleting entropy is too high
 */
static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash);
unsigned int get_random_int(void)
{
    __u32 *hash;
    unsigned int ret;

    if (arch_get_random_int(&ret))
        return ret;
    /* ... */
}


On OpenBSD, they have used /dev/random for basically everything, which is good. The PRNG pool management has a set of nested data recursions that mix newly collected randomness from interrupts and other sources with the timing of extractions. It’s probably the best PRNG out there. In light of this clusterstorm of NSA spying and shoddy Linux security practices over the years, I’m probably going to switch everything I can over to OpenBSD, and the Linux development machines I run offline will have to start doing deterministic builds, because now we can’t even trust the compiler, since it’s feasible that all signatures and keys for repositories and source downloads have been altered. I love turning 5-hour builds into 10+-hour builds. Thanks, NSA.

Clive Robinson September 10, 2013 7:26 PM

@ RobertT,

I’ve not attributed much to Intel’s RNG design team since I realised that accepted custom and practice was not on their objectives list; thus it appears that common sense was likewise not on their list.

As you know entropy sources tend to be “delicate flowers” and need much “care and attention” if they are to “give of their best”.

Accepted custom and practice back last century was to provide direct access to the entropy-source buffer output, so you could “test” it whilst in operation to see if it was failing or coming under undue influence etc., as well as being able to apply your own de-biasing and filtering etc.

For whatever reason, Intel’s team decided to only let you get at the output of a hash function, so only limited tests at best can be carried out. As I’ve said a few times before, a hash function does not magically create entropy; it’s “magic pixie dust” thinking which at best only obfuscates poor design.

Others, however, have pointed out that it could easily hide a fully deterministic generator. And they have a point: even a simple counter XORed with a constant will look random on the other side of a hash function.

If they don’t have a good analog team, then I dread to think what is actually coming off their “thermal noise” source, and how they get it up to logic levels without it suffering “undue influence”.

As you say, “complex but it’s not white noise” [1], and as Hanlon’s razor has it, “Never attribute to malice that which is adequately explained by stupidity”.

@ AC2

As I said, I’d only heard that Linus had thrown the toys out of the pram and decided to “only use” the Intel RNG. The piece you quote alludes to other entropy sources, which is thus not “only use” but “also use”, which is a whole different can of worms.

However, his statement makes me pause for thought on his knowledge. True RNGs are not CS-PRNGs, so cryptography has little to do with their design and use. Further, from an entropy point of view, a fully deterministic generator effectively has zero entropy; it can’t be used to improve the entropy in the pool, just stir what’s there, which can actually be a problem that reduces the entropy in the final output.

To see why, you have to consider how the safeguards on the entropy pool work. Obviously, each time you read from the pool you leak a little bit of information about its state, and part of this leak is some of the entropy in the pool. Thus each time you read from the pool you decrease the entropy there; no matter how much you stir the pool, the entropy does not go up. What makes the entropy go up is the real entropy hidden in the faux entropy of the entropy sources entering the pool. Now, obviously, you need to somehow rate-limit the number of reads so that the loss of entropy is less than or equal to the gain of real entropy from the sources. But how do you do this? Well, there are three basic ways:

1, Cap the number of reads per unit of time.
2, Cap the number of reads to a fraction of the input bandwidth.
3, Estimate the entropy in the pool by some measure of the pool or sources.

The first is very easy and the second only marginally harder; however, neither is actually connected to the actual change in entropy in the pool, so both have to be set conservatively to avoid draining the pool of entropy. The third method is difficult at best, because it requires separation and measurement of real entropy from the faux entropy that just stirs the pool. One reason it is difficult is that there is no reliable measurement that can tell real from faux entropy; there are just measurements that can show one aspect of faux entropy being present to a certain probability. So you end up with a whole battery of tests that almost invariably cannot give real-time results.

So most safeguards will be set to the first method, with a few using the second method. Often they will be set far too optimistically, due to not being able to measure the real entropy even remotely reliably.

So having a deterministic input which has no real entropy, but produces faux entropy that passes tests, will have the side effect of setting the safeguards way too optimistically.

Does this matter? Ordinarily it would be considered an almost philosophical question; however, when it comes to security it’s very real, as lives have been lost to poor security. Thus we take a leaf out of the crypto playbook to get some assurance: we don’t use the raw output from the pool, we instead put it through one or more crypto primitives such as a block cipher or hash and use their output. This gives us, as an assurance, the perceived strength of the crypto primitive we use.

Thus what we really end up with is a tweakable CS-PRNG, where the tweaking comes from a mixture of real and faux entropy. In most, but not all, cases this is acceptable.

[1] For those that don’t know what “white noise” is, welcome to the world of engineers, where there are more types of noise than you can shake a stick at. Noise is generally considered an undesirable side effect of basic physics which interferes with a wanted signal. In general, engineers classify noise by its statistical properties or by the physical effect that causes it. For use in TRNGs, the most desired property of white noise is “independence”, and it is this that distinguishes real from faux entropy. Faux entropy always lacks independence in some way; the question is whether that can be detected or eliminated.

RobertT September 10, 2013 10:02 PM

@Clive Robinson
I agree; the first task of any real TRNG is to maintain the integrity of the actual entropy stream and enable suitable measurements (FFTs, DFTs, etc.) to be made on the real entropy pool.

Without real entropy, all you end up with is a circular proof of the “whiteness” of the SHA algorithm output; unfortunately this “whiteness” is maintained without any real input entropy (even a linear counter feeding a SHA algorithm will look good after the algorithm). Effectively this makes all entropy tests that occur after the first SHA stage nothing but tests on the known and well-proven whitening effect of seeded block/stream ciphers and hashes.

It might pass all the tests, but it’s still not random.

For TRNGs I personally try to achieve a worst-case source randomness of about 12 bits; that is, one part in 4096: not great by crypto standards, but nevertheless not actually an easy task. For a system with 100 mV of permissible supply ripple (other signal), I need about 60 dB of power supply rejection ratio. This is possible with good amplifier design, but it is not trivial.

BTW, in most CMOS inverter-based jittery ring-oscillator TRNG designs, the PSRR of each element is typically less than 20 dB (about 1 part in 4); beyond that, all you have is complexity that obscures the weakness of your source.
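The numbers above are quick to check, assuming dB here means the voltage ratio 20·log10(V1/V2): 60 dB is a voltage factor of 1000, so 100 mV of supply ripple leaks through at about 100 µV, and a 12-bit source is one part in 2^12 = 4096.

```python
import math

ripple_v = 0.100                          # 100 mV permissible supply ripple
psrr_db = 60                              # claimed power supply rejection
ratio = 10 ** (psrr_db / 20)              # 60 dB as a voltage ratio = 1000x
residual_v = ripple_v / ratio             # ripple reaching the noise source

assert math.isclose(residual_v, 100e-6)   # ~100 microvolts of leak-through
assert 2 ** 12 == 4096                    # "12 bits" = one part in 4096
```

Since raw thermal noise sits in the nV/√Hz range, even that residual 100 µV of ripple can be orders of magnitude larger than the entropy source itself, which is RobertT's point about PSRR dominating the design.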

me September 13, 2013 6:08 PM

If you have a recent Intel chip with RDRAND (it will be listed in /proc/cpuinfo), you can disable its use by the Linux kernel and the OpenSSL library with the following two steps:

1) Disable rdrand in the linux kernel

Add “nordrand” to the kernel command line (GRUB_CMDLINE_LINUX) in /etc/default/grub, then regenerate the grub config.

2) Disable rdrand in openssl

Add the following to /etc/environment

After a reboot, you can verify removal of rdrand by checking /proc/cpuinfo and “openssl engine -t”.

A cheap solution to the entropy problem is a $14 USB DVB-T dongle plus software that turns it into an entropy source.

Someone should take a look at the OpenSSL RNG, which under Linux is seeded by /dev/urandom and then does its own crazy thing. A quick patch to bypass it would be welcome.

Natanael L September 14, 2013 10:56 AM

Somebody mentioned Bitmessage above. I would like to point out that it isn’t actually anonymous against ISPs, WiFi hotspot owners or the NSA, since public key requests and replies can be tracked (it doesn’t route traffic at all).

I2P with Bote mail is, however, interesting. Bote mail is DHT-based serverless mail, with optional random-delay mail relays.

me September 15, 2013 11:12 AM

As a follow on comment regarding removing rdrand from both Linux and openssl, and using a DVB-T hardware dongle as a source of entropy, here is a quick patch to bypass openssl’s crazy RNG with a direct call to /dev/random. Adjust as needed, improvements welcome.

This is for openssl-1.0.1 on a debian based system.

1) crypto/rand/md_rand.c – rename or delete the existing ssleay_rand_bytes() function and add:

#include <fcntl.h>
#include <unistd.h>
static int ssleay_rand_bytes(unsigned char *buf, int num, int pseudo)
{
    int r, fd;
    if (num <= 0)
        return 1;
    if ((fd = open("/dev/random", O_RDONLY)) >= 0)
    {
        r = read(fd, (unsigned char *)buf, num);
        close(fd);
        if (r == num)
            return 1;
    }
    return 0;
}

2) build openssl to /usr/local

./config shared --prefix=/usr/local --openssldir=/usr/local/ssl -DOPENSSL_NO_RDRAND

3) install patched openssl (sudo make install)

4) modify /etc/ (if needed) – I added /usr/local/lib64 as first line then ran ldconfig

5) verify modified crypto library is used
ldd /usr/sbin/openssl => /usr/local/lib64/

Hope this helps,

me September 16, 2013 1:11 AM

Improved patch to replace openssl prng with bytes directly from /dev/random

#include <fcntl.h>
#include <unistd.h>
static int ssleay_rand_bytes(unsigned char *buf, int num, int pseudo)
{
    int r, fd;
    int n = 0;
    if (num <= 0)
        return 1;
    if ((fd = open("/dev/random", O_RDONLY)) >= 0)
    {
        do {
            r = read(fd, (unsigned char *)buf + n, num - n);
            if (r > 0)
                n += r;
        } while (n < num);
        close(fd);
        return 1;
    }
    return 0;
}

me September 20, 2013 1:54 PM

For those of us who are looking to eradicate all traces of RDRAND from our systems: it turns out Intel’s RDRAND support was added to GCC’s libstdc++ on 9 September 2012. Your system may be affected (or is it infected?). For removal, a recompile of the library might be in order.

2012-09-09  Ulrich Drepper  <>
        Dominique d'Humieres  <>
        Jack Howarth  <>
    PR bootstrap/54419
    * acinclude.m4: Define GLIBCXX_CHECK_X86_RDRAND.
    * Use GLIBCXX_CHECK_X86_RDRAND to test for rdrand
    support in assembler.
    * src/c++11/ (__x86_rdrand): Depend on _GLIBCXX_X86_RDRAND.
    (random_device::_M_init): Likewise.
    (random_device::_M_getval): Likewise.
    * configure: Regenerated.
    * Regenerated.

Also, check out the following post from the liberationtech (1) list. It points out that the recently uncovered Android PRNG “error” was the work of an Intel employee (Yuri Kropachev) (2) back in 2006.

  1. liberationtech 011604

  2. harmony 872
