Schneier on Security
A blog covering security and security technology.
March 22, 2012
Can the NSA Break AES?
In an excellent article in Wired, James Bamford talks about the NSA's codebreaking capability.
According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: "Everybody's a target; everybody with communication is a target."
Bamford has been writing about the NSA for decades, and people tell him all sorts of confidential things. Reading the above, the obvious question to ask is: can the NSA break AES?
My guess is that they can't. That is, they don't have a cryptanalytic attack against the AES algorithm that allows them to recover a key from known or chosen ciphertext with a reasonable time and memory complexity. I believe that what the "top official" was referring to is attacks that focus on the implementation and bypass the encryption algorithm: side-channel attacks, attacks against the key generation systems (either exploiting bad random number generators or sloppy password creation habits), attacks that target the endpoints of the communication system and not the wire, attacks that exploit key leakage, attacks against buggy implementations of the algorithm, and so on. These attacks are likely to be much more effective against computer encryption.
EDITED TO ADD (3/22): Another option is that the NSA has built dedicated hardware capable of factoring 1024-bit numbers. There's quite a lot of RSA-1024 out there, so that would be a fruitful project. So, maybe.
EDITED TO ADD (4/13): The NSA denies everything.
Posted on March 22, 2012 at 7:17 AM
Sloppy password creation habits.
Always a winner.
Why would anyone ever attack the hardest part of any security mechanism?
Simple way to tell - see if the NSA's opposition starts to switch from AES.
The real guarantee we have that any of these systems are secure is that the CIA wouldn't use them if the NSA could read them, the Secret Service wouldn't use them if the CIA could read them... and so on...
Bamford mentioned something similar in his 2008 "Shadow Factory". Crypto geeks were introduced to something which made them come away dumbfounded. The implication seemed to be some sort of very effective attack against computer encryption (my initial impression from the description there was that the NSA could do something similar to breaking AES, but you are the expert in this area). Either way, it's not the first time that Bamford has brought something like this up.
How about against certificates? It could be a factoring shortcut, brute force against short keys, an attack on RC4, or something as simple as getting Verisign to sign a fake key.
Or maybe it's a practical Tempest attack against anyone via their home wiring?
Maybe quantum computers? That would be relevant to ciphers based on the discrete log and factoring problems.
Sounds to me like asymmetric encryption is broken. The NSA may have some way to factor very large semiprimes. This would break PGP, RSA, etc. The list of attacks that Schneier suggests is not an enormous breakthrough.
Suppose 1) the NSA could devise a cypher with a hidden back door such that only they could perform a cryptanalytic attack, and moreover that nobody else could detect its presence. Also, suppose 2) they would choose to promote this cypher as the standard if it existed.
Item 1) is a tall order, requiring mathematical knowledge beyond what's publicly known. Still, the NSA has some smart people working with them in secret. I'd estimate there's a 10% chance of 1) and maybe a 50% chance of 2), for a total of a 5% chance that the NSA has been keeping quiet about a secret AES back door.
These probabilities can't be based on hard data - we don't have any. You are welcome to tweak the 10% and 50% figures as you please. I assert that while it's probably true that AES doesn't have a secret back door cryptanalysis, it would be foolish to assume the product of the chances of 1) and 2) is vanishingly small. Discuss.
I think the main point of the new Utah facility is to crack the past, not the present. The NSA has been hoovering up encrypted comms for decades, and it may be that the combination of a petaflop computer plus terabytes of data might be enough to crack crypto weaker than 128-bit (and especially 64-bit).
But as far as cracking 256-bit AES as commonly used now, I think Bruce has it right. The vulnerability is poor implementation and side channels, not direct brute force.
There is a "follow up" article in which the NSA denies the story; the NSA representatives cannot even get the author's name correct (even though he is the leading journalist investigating the NSA, so he must be well known to them).
The chance of 1 and 2 happening is hard to predict, but the chance of them happening with regard to AES is basically zero since the NSA didn't design AES and didn't select it as the standard.
What should raise suspicion or inspire confidence is when there is a difference between US encryption and the rest of the world's.
Either XXX is a good algorithm or it is not, according to the NSA's knowledge. If it is not, the NSA takes a big chance by not disclosing that to US citizens, effectively betting that nobody but the NSA will ever find the loophole.
If there is a difference, it might be found; again, a little annoying for befriended countries.
I believe that what the "top official" was referring to is attacks that focus on the implementation and bypass the encryption algorithm: side-channel attacks, attacks against the key generation systems (either exploiting bad random number generators or sloppy password creation habits), attacks that target the endpoints of the communication system and not the wire, attacks that exploit key leakage, attacks against buggy implementations of the algorithm, and so on.
All the more reason why the public needs to be able to verify the correctness of the implementation, not just the soundness of the algorithm itself. Open-source software is the key.
Also, maybe it's just me, but based on those I know who have discussed this, I would think this has more to do with the analytic capabilities that come with hoovering up so much data, such that you can do meaningful large-scale comparisons of different plaintexts and ciphertexts. The entire article is about a super-massive data storage center, so it isn't an unreasonable conclusion.
The USA government promotes AES. It uses it itself. It is used in commerce in the USA.
If there were a back door, and that back door were discovered, commerce in the USA would be broken. It would be foolish for the government to take that risk.
It seems unlikely that AES is broken. Instead, this facility is probably processing the bounty provided by Google, Facebook, iCloud, Rackspace, Amazon EC2/S3, and the Narus boxes spliced off the Internet hubs.
"EDITED TO ADD (3/22): Another option is that the RSA has built dedicated hardware capable of factoring 1024-bit numbers. There's quite a lot of RSA-1024 out there, so that would be a fruitful project. So, maybe."
You mean the "NSA has built", right?
"It would be foolish for the government to take that risk."
Yeah, they've got a proven record of not doing stupid stuff, right?
Traffic analysis, starting with the dialing records for a *lot* of phone calls.....
Typo at the end; NSA, not RSA.
Thank you Bruce for finally commenting on this bit of Bamford's story!
The biggest breakthrough would be if NSA could build a quantum computer that could implement Shor's algorithm using the Quantum Fourier Transform (QFT). This breaks RSA and Diffie-Hellman. The quantum research program at NSA used to be headed up by Mark Heligman who is now at IARPA:
A big question is whether the NSA would knowingly allow the US Govt to use an encryption algorithm that has been broken: does their mandate to break other govts' communications trump their mandate to protect their own govt's communications?
The conjecture is easy enough to test: someone send an AES-256-encrypted e-mail from an internet cafe in a problematic country through a throwaway gmail account to another throwaway account. The text should allude to an armed nuclear device in a case of bananas in a ship bound for Los Angeles. If the port gets shut down, or the city is evacuated, or Seal Team 6 starts conducting impromptu customs inspections on inbound ships, we'll know for sure.
So this Bamford has been known to take information from government sources and speak about it to the press. If I were the government, I would consider it an effective strategy to "leak" that they could do very advanced cryptanalysis even if they couldn't. You don't actually need to be able to do it to promote the fear that you can. It sounds like a typical government deterrent to me.
Has the NSA found a weakness in AES? Well, we know there are practical attacks against "online" implementations of AES; this has been known since it was just a candidate.
We also know that the NSA has approved AES for use at "code word" level, BUT the last time I looked only for "data at rest".
The NSA without doubt knew about timing side-channel attacks due to CPU cache etc. long before the AES competition. But as NIST's crypto advisors they chose (jointly or on their own) not to include this known problem in the competition.
It has been noted that one of the criteria for the competition was "speed of operation" and that the example code became public. The result was that the code up on the site for download was effectively optimised to make the issue of timing attacks the worst it could be without deliberate compromise, and this bad code became part of just about every AES implementation or code library.
So there is some justification for believing that the NSA may well have done this deliberately (although arguing from effect to cause is not a good way to carry out science).
Now to chuck a little more fuel on the fire: the NSA, like GCHQ and a number of other government cipher security establishments, is known to be ultra-conservative in its outlook, and their designs almost always have a sufficient security margin but absolutely no more. Further, they are reputed not to use a system unless they either know how to break it or know the only avenues by which it can be broken, and thus have a very clear idea of just how long information will remain secure under it.
This is because we know that the NSA has deliberately issued "field cipher" systems with keyspaces containing weak and strong keys; in some cases 20% or fewer were strong keys. The NSA also persuaded a non-governmental cipher company to do likewise.
It is believed that this was in case anybody copied the cipher design. In all probability, the nation or entity copying the design is not going to know that there are weak keys, so a high percentage of their communications would be open to the NSA, and knowledge of the standard plaintexts used would, in older systems, aid in the breaking of the few messages enciphered under a strong key.
This view tends to be further supported by the fact that the NSA used to be the KeyMat controller and issued the key schedules etc., so could easily use only the strong keys.
Now it is known that the NSA has been putting "network taps" in not at the borders but at the backbone, where timing resolution is effectively at its sharpest due to throughput and minimum distance to one end of the online encryption system. Thus they are well placed with such a system to best exploit any timing attacks against AES.
Currently, the timing attacks against AES in the public domain are active, not passive; that is, you need to actively send data or an application to the target system to open the cache and similar timing-attack channels. However, as the likes of the NSA and GCHQ are without doubt many years ahead of the public domain on EmSec, it is reasonable to think that they may well have one or more passive timing attacks against AES.
Then there is the old issue of "known plaintext". There is an assumption that modern ciphers are effectively immune to known-plaintext attacks, but again the NSA and GCHQ are considerably more practiced at this than the academic community. And the dominant presence of MS, with its very large quantity of known plaintext at known offsets in all of its Office and other file formats, would certainly make a known-plaintext attack considerably easier.
But are there other system implementation issues that would make the NSA, GCHQ et al.'s lives easier?
Yes: cipher modes. It is not that well known amongst application programmers that you should avoid certain modes because they make some attacks almost trivial. Even in more secure modes there are issues over the likes of IVs and nonces, and even how you should pad data to fill a block. And it is even less well known that some modes, whilst secure for some purposes, are very weak for others (e.g. stream-cipher-style modes used for communications versus disk storage).
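A minimal sketch of why mode choice matters. The "block cipher" below is just a keyed hash stand-in (not AES, and not even invertible), but it is enough to show how ECB lets identical plaintext blocks show through while a counter-style mode hides them:

```python
import hmac, hashlib

def toy_block_encrypt(key, block):
    # Toy stand-in for a 16-byte block cipher: a keyed PRF.
    # NOT real AES, and not invertible -- illustration only.
    return hmac.new(key, block, hashlib.sha256).digest()[:16]

def ecb_encrypt(key, plaintext):
    # ECB: each block is handled independently, so equal plaintext
    # blocks produce equal ciphertext blocks -- a visible leak.
    blocks = [plaintext[i:i + 16] for i in range(0, len(plaintext), 16)]
    return b"".join(toy_block_encrypt(key, b) for b in blocks)

def ctr_encrypt(key, nonce, plaintext):
    # CTR-style: the keystream depends on a counter, so repeated
    # plaintext blocks do not show through.
    out = bytearray()
    for i in range(0, len(plaintext), 16):
        ks = toy_block_encrypt(key, nonce + i.to_bytes(8, "big"))
        out += bytes(a ^ b for a, b in zip(plaintext[i:i + 16], ks))
    return bytes(out)

key = b"0123456789abcdef"
pt = b"SECRET BLOCK 16B" * 2            # two identical 16-byte blocks
ecb = ecb_encrypt(key, pt)
ctr = ctr_encrypt(key, b"fixed-nonce!", pt)
print(ecb[:16] == ecb[16:])             # True: ECB leaks the repetition
print(ctr[:16] == ctr[16:])             # False: CTR hides it
```

The same structural leak is what makes ECB-encrypted bitmaps famously recognisable: the mode, not the cipher, gives the game away.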
As regards whether the NSA has "backdoored" AES to leak key bits: this is (as far as we know) very difficult with block ciphers due to their lack of redundancy (however, RSA and other systems using a pair of primes have far more redundancy in them, making a backdoored implementation almost trivial).
Then the question arises about the use of systems: the user errors that can be made are almost more numerous than the users... One fruitful area that keeps coming up time and time again is "poor entropy", or predictable random bit stream generation. Again, we know there are ways to avoid many of the pitfalls, but these are often unknown to application developers.
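A minimal sketch of how poor entropy sinks an otherwise sound cipher. `weak_keygen` here is a hypothetical flawed generator invented for illustration, not any real library's code; the point is that a key drawn from a small seed space can be recovered by enumerating the seeds, no cryptanalysis required:

```python
import random, secrets

def weak_keygen(seed):
    # Hypothetical flawed key generator: it seeds a non-cryptographic
    # PRNG with a low-entropy value (say, a coarse timestamp).
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

# The attacker sees one "random" key and knows the seed space is tiny.
secret_seed = 73_512                     # pretend: time() % 100_000
leaked_key = weak_keygen(secret_seed)

recovered = next(s for s in range(100_000) if weak_keygen(s) == leaked_key)
print(recovered == secret_seed)          # True: 100k guesses, key recovered

# The fix costs one line: draw key material from the OS entropy pool.
strong_key = secrets.token_bytes(16)
```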
One significant tell against AES has always been its novel "brick layer" design; because it was novel, little was really known about its strengths and weaknesses at the competition selection stage. Does it have a significant weakness? None has been published, but that does not mean none exists.
This gives rise to the question of whether the NSA would say anything if they knew of a significant weakness.
Well, probably not directly; but if the weakness were not avoidable, they would be unlikely to allow AES's use beyond "confidential".
But they might well do so if there is some way they could keep it under control such as weak keys in the keyspace, where they issue the keys for use...
As Bruce notes, there are a whole number of system design, system implementation and system use issues over and above AES, and in many respects these are much more likely to be a fruitful place for the NSA. Why? Simply because there is little or no research being carried out in many of these areas. As I've noted, EmSec is such an area; only in the past few years have any papers on EmSec been produced by the academic community. As for active EmSec, there has been some work on smart cards but little or nothing in other areas. Over a quarter of a century ago I investigated "active EmSec" attacks using EM carriers modulated by various waveforms. The only paper I can remember seeing on active EmSec attacks is by a couple of researchers at Cambridge Computer Labs, where they subjected a 32-bit true random number generator to an unmodulated carrier and reduced its output from 32 bits of entropy down to less than 8 bits...
So it can be seen that there are very large areas of the crypto landscape that have not been walked upon by academics or other non-governmental researchers, giving the NSA, GCHQ et al. plenty of room to find some choice golden nuggets.
Not that I haven't said all this before on various pages of this blog going back several years.
That about sums up my thoughts too. It all depends on what one means by "crack AES". In its most technical sense, I doubt highly that the NSA can crack AES. But that doesn't mean one's data is safe merely because it's encrypted with some version of AES in some way.
Couldn't they factor all 1024 bit numbers and create a table? This could easily be done on commodity hardware. The resulting table would not even be that large, especially compared to a rainbow table.
Check your math ...
2**1024 ~= 10**306
So there are many more 1024 bit numbers than there are elementary particles in the universe (~ 10**82).
The use any such breakthrough sees would be quite limited, because the existence of such an intelligence resource is much more sensitive than e.g. information (other than control codes) concerning spy satellites. My guess is that intelligence from such a source would generally only be used by dedicated personnel to independently confirm intelligence from other sources or to guide less sensitive resources, unless the intelligence is VERY valuable and credible (and even then they would work very hard to conceal the actual source and to create credible alternatives -- see e.g. Zimmermann telegram, where there was an attempt to conceal both monitoring of a telegram cable and the capability to decrypt the German cipher).
It's also hard to believe that they couldn't control the small group entrusted with the knowledge enough to prevent rumors like this from spreading -- although people are always the weakest link, this kind of classified information generally comes with enough surveillance of the personnel trusted with the information that they know better than to start rumors.
My guess is that this is just a plain old rumor. If it somehow originates from the government, it might be in the hope that potential targets switch from "untrusted" strong encryption (AES) to potentially weaker systems (this isn't unheard of).
@Clive: "[NSA] would be unlikely to allow AES's use beyond "confidential"."
This is one of the main arguments pro AES, IMHO.
The NSA cannot likely have a practical break against AES and at the same time be convinced that AES has a high enough security margin to protect top secret data against foreign governments. Or would they be so arrogant?
@ Random Walk,
2**1024 ~= 10**306
Not all the numbers between 2^1023 and (2^1024)-1 are valid as RSA keys (just exactly how many are and are not is, on the other hand, quite a hard problem, but we know it's very definitely less than half ;)
Oh, for those wishing to get a reasonable approximation of 2^n as 10^m in their heads, or on a calculator which does not handle large values of n:
2^(n mod 10) x 10^(3 x int(n/10))
which for n = 1024
gives 2^4 x 10^306 ~= 16 x 10^306 = 1.6 x 10^307
To get it a bit more accurately on a calculator:
1.024^(n/10) x 2^(n mod 10) x 10^(3 x int(n/10))
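The rule above can be checked with a few lines (a sketch; the function name is invented here for illustration). The corrected version lands within about 1% of the true value of 2^1024:

```python
import math

def approx_pow2_as_pow10(n):
    # The mental-arithmetic rule from the comment: 2^10 ~ 10^3, plus a
    # 1.024^(n/10) correction for the accumulated rounding error.
    mantissa = 1.024 ** (n / 10) * 2 ** (n % 10)
    exponent = 3 * (n // 10)
    shift = int(math.log10(mantissa))    # normalise mantissa into [1, 10)
    return f"{mantissa / 10 ** shift:.3f}e{exponent + shift}"

print(approx_pow2_as_pow10(1024))        # 1.815e308, about 1% high
exact = 1024 * math.log10(2)             # log10 of the true value
print(f"{10 ** (exact % 1):.3f}e{int(exact)}")   # 1.798e308
```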
Proof positive that if a headline ends in a question mark, the author always answers that question with a 'no'.
I find it interesting that while debating dancing-angel level details of encryption, the notion that "the government" acts as one mind is un-challenged.
Have we forgotten the Thane of Glamis, loyal general to Duncan? I suspect it is quite rare in the history of the U.S. that the intelligence services, the armed forces, the executive branch, the courts, and the legislature all acted fully in accord.
Sean, you may wish to review John Rentoul's famous Internet series, "Questions To Which The Answer Is No", or QTWTAIN.
Maybe they quietly proved P=NP. :)
Unless the NSA is headed up by the people who do the UFO cover-ups (those guys know what they are doing, btw), the NSA probably only has dedicated hardware to give it some "advantage". I always thought it was funny that the US Gov't approves your crypto for FIPS, as long as you pay for it. I wonder if they reject anyone that has the coin to apply in the first place. We saw how Kingston, BlackBox, and SanDisk drives were FIPS 140-2 certified and yet had a simple mistake: http://www.schneier.com/blog/archives/2010/01/...
Perhaps the NSA just has piles and piles of these kinds of flaws waiting to be used. How hard would it be for them to ask Verisign for THE root cert and its passphrase? They probably have it and many others already. Did/do the NSA have access to Carrier IQ and its ilk? Did that USB drive get OK'd out of laziness, was it just a one-time thing, a hiccup in the FIPS process...
I bet AES is fine if done correctly, perhaps side-channels and other methods are better than brute force.
Yeah, sure they can break AES. Just like they can't break Skype. ;)
@C U Anon:
Since the number of primes below 2^512 is roughly
2^512 / ln(2^512) ~ 3.78x10^151
and around half of these are smaller than 2^511, that's roughly 1.89x10^151. Now pick any two of these, so the number of combinations is roughly
(1.89x10^151)^2 / 2 ~ 1.78x10^302
A few orders of magnitude less than 1.77x10^308, but certainly still more than the size of the universe :)
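The arithmetic above can be reproduced in log space with a short sketch (variable names are made up here), which avoids ever materialising the huge numbers:

```python
import math

bits = 512
# Prime Number Theorem: pi(x) ~ x / ln(x), worked in log10 to avoid
# huge intermediate values. These reproduce the figures quoted above.
log10_primes = bits * math.log10(2) - math.log10(bits * math.log(2))
print(f"primes below 2^512: ~10^{log10_primes:.1f}")    # ~10^151.6

# About half of those are below 2^511; unordered pairs of two of them:
log10_small = log10_primes - math.log10(2)
log10_pairs = 2 * log10_small - math.log10(2)
print(f"possible prime pairs: ~10^{log10_pairs:.1f}")   # ~10^302.3
```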
NSA published a short document called "The Case for Elliptic Curve Cryptography" in 2009 just saying factoring big numbers was getting easier and easier over time. So, there's that.
Former NSA folks have also said a quantum computer seems like just a big engineering problem. That seems years away even with NSA's billions, plus if we had them there would be tons of important non-cryptographic impact, but what do I know? ("But the harder the NSA studied, the more it came to the conclusion that quantum computing was simply a massively difficult engineering problem that may eventually be solved," from a December 21, 2006 article.)
Even if they can break AES, in most cases they wouldn't be able to use the intel gathered in any meaningful way, since this would reveal they've broken AES. Much like the breaking of Enigma, really.
Are book ciphers still in use? And how would they withstand attack today? [shades of 'The American Black Chamber...Yardley?]
1024 bit RSA is doomed, that's obvious. The real question is "does NSA have impressive advantage in quantum computing"?
@ Ping-Che Chen
A few orders of magnitude less than 1.77x10^308, but certainly still more than the size of the universe
Undoubtedly so, and also considerably closer to the exact answer than Random Walk's original value of 10^306; but getting the exact answer is still going to be a hard problem...
However, that's not the real fun of RSA pairs. Have you any idea just how much redundancy there is in there that you can exploit to hide a secret that tells you either what one of the prime factors is, or gives you a starting point reasonably close to one of the prime factors?
The fun is that you can quickly demonstrate that, provided the RSA key generator software is a black-box oracle (as in Delphi), it is not possible to prove whether its output is backdoored or not...
Which means you cannot trust "turnkey" code to generate your RSA key; you have to "roll your own". Similar reasoning for other asymmetric key systems means we potentially have a serious problem.
Even when you have the source to look at, few people would recognise the "backdoor" code, as it can quite easily be disguised to look like a valid part of the random prime selection process.
Few people can look at code and distinguish real "entropy" from a "deterministic process that looks random" which then gets hashed up etc. as a starting point for the prime search.
Is it possible that NSA is setting themselves in the internet backbone to effect massive MITM attacks every time an SSL session is initiated?
From what I have seen, a lot of crypto-implementations suck badly, both in the actual crypto and in the randomness-gathering. Many people designing and implementing this stuff do not really understand what they are doing.
The other thing that typically sucks is the passwords/passphrases users select. For example, I would imagine the NSA has rainbow tables that make Google's data-storage capabilities look small. Nothing proper salting cannot defeat, but, again, many implementations do not salt properly or at all.
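A minimal sketch of the salting point (toy code; a real system should use a deliberately slow KDF such as `hashlib.scrypt` rather than bare SHA-256, which is used here only to keep the illustration short):

```python
import hashlib, secrets

def hash_password_unsalted(pw):
    # Unsalted: identical passwords hash identically, so one
    # precomputed ("rainbow") table cracks every database at once.
    return hashlib.sha256(pw.encode()).hexdigest()

def hash_password_salted(pw, salt=None):
    # Per-user random salt: a precomputed table is useless, because
    # every candidate password must be re-hashed for each salt.
    if salt is None:
        salt = secrets.token_bytes(16)
    return salt, hashlib.sha256(salt + pw.encode()).hexdigest()

a = hash_password_unsalted("hunter2")
b = hash_password_unsalted("hunter2")
print(a == b)        # True: ripe for a table lookup

salt_c, c = hash_password_salted("hunter2")
salt_d, d = hash_password_salted("hunter2")
print(c == d)        # False: same password, different digests
```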
As to AES, I am not worried. The astronomical damage a real-world AES break would do to the world (and US) economy would prevent them from using that capability in almost all cases. And they would have a massive interest in it not being there in the first place for the same reason. After all, if there is such a vulnerability, somebody else could find it.
Side note for the paranoid: you can always combine different ciphers to make the problem much, much harder. TrueCrypt offers that out of the box, for example. If you combine, say, 3 AES finalists with independent keys in OFB mode, the result is at least as secure as the strongest of them.
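This is not TrueCrypt's actual construction; as a toy sketch of the cascade idea, hash-based keystreams below stand in for the real ciphers. With independent keys, an attacker who strips one layer still faces the others:

```python
import hashlib

def keystream(key, label, n):
    # Toy hash-counter keystream standing in for one stream cipher.
    # A real cascade would use AES/Serpent/Twofish in a proper mode.
    out = bytearray()
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + label + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(out[:n])

def cascade(keys, data):
    # XOR the keystreams of several independently keyed layers onto
    # the data. The cascade is at least as strong as its best layer.
    for i, k in enumerate(keys):
        ks = keystream(k, b"layer-%d" % i, len(data))
        data = bytes(a ^ b for a, b in zip(data, ks))
    return data

keys = [b"key-one", b"key-two", b"key-three"]    # must be independent
msg = b"attack at dawn"
ct = cascade(keys, msg)
print(ct != msg)                   # True: ciphertext differs
print(cascade(keys, ct) == msg)    # True: XOR cascade is its own inverse
```

The "independent keys" caveat matters: with related keys, one layer could partially cancel another, which is why cascade constructions always derive each layer's key separately.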
"For example, I would imagine the NSA has rainbow tables that make Google's data-storage capabilities look small"
I wouldn't dare compare: Google and Amazon have massive capacity that only increases over time. NSA has this big datacenter. The other two have datacenters. Processing power probably shifts in NSA's direction. Anyway, nice guess at the rainbow tables. Having rainbow tables for the most complete dictionary for most common crypto schemes would be a tremendous advantage.
"Side note for the paranoid: you can always combine different ciphers to make the problem much, much harder. TrueCrypt offers that out of the box, for example. If you combine, say, 3 AES finalists with independent keys in OFB mode, the result is at least as secure as the strongest of them."
Excellent advice. I've given similar advice in the past. Triple encryption of a TrueCrypt volume on a midrange laptop from 2009 gave me 10-20 MB/s disk throughput, so performance is definitely OK. I'd add that TrueCrypt, like my recommendations, hides which algorithm(s) were used. So an adversary who gets the key still has to try a whole bunch of them on a sample of data to figure out which one. And if the cipher choice is random, an adversary has to attack all the combinations.
How much is all this NSA interception going to slow down the internet? Where is the bandwidth coming from to collect all that data or traffic? Is it even feasible?
I suspect I am naive here but wouldn't the factors of 1024 bit numbers be just a table lookup (though the size of that table is probably beyond my imagination).
"For the adversary, if the cipher(s) are random, they have to try to attack all combinations. "
No, all they have to do is one or more of the following:
- Install pinhole wireless camera(s) aimed at the workstation/keyboard and/or at something which reflects towards the target area, example: a tiny camera aimed at your tv screen or other surface which is positioned in such a way where it can, by reflection, capture your keystrokes and other work at your workstation
- Hardware keylogger or special blackop device inserted around or in a targeted hw device, recent announcements of special devices which go directly into the wall socket/power strip which are made to look like something innocent like an air freshener, etc.
- Laser microphone for keyboard clicks
- Stand outside or b.s. their way inside for sniffing of the keyboard
- TEMPEST attacks
- whitelisted or unable to be found malware slipped in through any number of online vectors
- poisoned router and/or modem firmware with commercially supplied backdoor(s)
- Powerline analysis (smartmeters may ease the lubrication)
- Attacks on wifi from within or outside
- Attacks on wired connections from within or outside / spliced connections
- Force a kernel/memory dump (usually easier on Windows systems) which is logged to disk unless you disable it and possibly sent in part or in whole to Microsoft unless you disable this
- socially engineered or otherwise manipulated individuals, friendly and allowed inside said workstation area, or invited in under the disguise of a religious group or other spoofed organization, to plant devices or otherwise access the workstation
- discovering your TC password written somewhere in your environment or on your person (missing time, medical procedures where you're "under", otherwise out of the workstation area during a sneaky entry and exit)
- new electronic or other bugged/poisoned gifts sent to you which you place within your workstation environment; simple envelopes or magazines, books, with a miniature bug(s)
- that slut or stud you brought home with you for a one nighter
- more and more and more
Given WikiLeaks' exposure of some of the vast companies, foreign and domestic, willing to sell spyware software and hardware: if THEY want your TC password (you meaning anyone, and THEY meaning any determined enemy), they already have it or will obtain it, and it won't be through brute force.
You visit a store to buy something,
an evil person arrives before you,
slips the cashier through a handoff or fake prearranged (or real) transaction,
one or more "special" coins, containing a video and/or audio "bug", maybe with a nice wire which wraps around inside of the modified coinage several times,
convinces or through some other motivating factor(s) (or by chance) the cashier to give you said coin(s) for change during your transaction,
evil person exits the store and the fun begins.
How often do YOU check YOUR coins to make sure they are really the real thing or do you just dump them in the same room as your computer(s)?
@Reed-Solomon at March 22, 2012 7:52 PM
> No, all they have to do is one or more of the following:
Who are they in the context of your post?
@kashmarek: Factoring 1024-bit numbers via table lookup basically requires tables of all prime products up to 1024 bits.
As a lower boundary, you could estimate the number of primes smaller than 2^1024. (After all one of them has to be in any prime product and there should always be another prime to make the product roughly 1024 bits. Of course there are far more than one prime for each fixed given prime so that the product is about 1024 bits, hence this is a _lower_ estimate.)
By the Prime Number Theorem, the number of primes smaller than n is roughly n/ln(n).
ln(2^1024) = 1024 x ln(2) ~ 710
710 ~ 2^9.5
2^1024 / 2^9.5 = 2^1014.5
Hence there are about 2^1014.5 primes smaller than 2^1024, which is about 10^305. So you would have to store more than that many entries for all prime products less than 2^1024. Given that the universe only has something like 10^80 particles in it, this seems infeasible by a huge margin ;-)
So, no, for 1024 bit primes, tables will not cut it, you have to either factor or know something that greatly reduces the number of candidates (bad RNG, for example).
That should read:
"So no, for 1024 bit products of two primes...."
Thanks. That was quite a useless counter aside from enumerating threats for people who haven't heard of them (or don't need to care about most). We're discussing the security of ciphers against cryptanalytic attacks on the ciphertext. He's thinking about total security of one's data in use and at rest against... all the major intelligence agencies combined?
Perhaps people like him should actually read the topic and comments before posting.
Read the final lines in the post.
It is explained for you.
Thanks for quoting the time of the post, too, it makes the intent and origin of the reply and user more obvious.
"Thanks. That was quite a useless counter aside from enumerating threats for people who haven't heard of them"
You might want to add more coal to the furnace, you'll attract more trolls that way. Baiting - I love it!
"We're discussing the security of ciphers against cryptanalytic attacks on the cipher text."
Did you read the OP? It included mention of, "side-channel attacks", read it:
Also: "attacks that exploit key leakage" and so on.
"He's thinking about total security of one's data in use and at rest against... all the major intelligence agencies combined?"
"THEY" or "THEM" were explained in the post, no alphabet soup agency was mentioned.
"Perhaps people like him should actually read the topic and comments before posting. "
I did, I responded on-topic.
The baiting continues.
I enjoyed the baiting by both parties in order to gain a response.
I'm such a fool, a fool in love with... oh that's a touching song, isn't it?
"people like him"
Have you ever considered what it would be like to be kind and humble? This is the second thread I've read where several people have circled their e-wagons around a poster who isn't a regular.
Pass along my reply to your "friends" with the timestamp and any logs, I'm always interested in making new "friends".
There appears to be an issue with people talking about rainbow tables for factoring primes and passwords etc.
Firstly it's not the size of the table that's important but how long it takes to use it in any given situation.
There are two major time issues with tables, the first is producing them, the second is how long it takes to do a lookup.
Yes, there are more efficient ways of finding primes than checking each number in turn. But work out how fast a simple incrementing counter would have to run to count to 2^1023 in, say, a year (which is a little less than 2^25 seconds), then compare that with the current clock speed of GPUs and the like (currently less than 2^32):

2^(1023-25-32) = 2^966 ≈ 6.24 × 10^290

Even with special optimization techniques and massively parallel hardware, you are looking at an exceedingly long snooze while your table builds. I'll leave the energy requirements for others to work out, but let's put it this way: the last time I did a napkin calculation, if the table builds within your working lifetime (~2^30 seconds), you are looking at being quite warm at a distance from your "computer" approximately equal to that of the outermost planets of the solar system from the sun...

But let's assume I had the "NSA magic wand" and could actually magic up a table preloaded onto modern hard drives. How long would a single query take? And how would I make a million or so queries simultaneously?

Just a few years ago I did the math for a "home RAID box" table for 40-bit DES, and each query was going to take ~100 ms (that is, half the longest access time for the head to move from the inside to the outside of the platter, plus one full revolution), so only 10 lookups/second against a 2^40 table. With that as your basic building block, performance is not going to be very good unless you develop a serious optimization technique or three (i.e., sorting the queries into order and firing them off at individual blocks, etc.).
So a physical table stored on COTS hardware would not be practical.
Are there other techniques that could be used? Well, yes: you can make your factoring a lot more efficient by finding how many unknown RSA keys share a common factor. Such a technique was recently discussed on this blog, when researchers published results on just how bad the "shared factor" problem was with SSL certs and concluded there were a lot of very bad random number generators out there.

Thus the cost of actually factoring a single RSA key (CPU-cycle expensive) can be spread across many, many RSA keys once you have found that they share a common factor (CPU-cycle cheap in comparison).
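The comparison is easy to see with toy numbers. A sketch (real attacks use batch-GCD over millions of harvested moduli, and 512-bit primes rather than these stand-ins):

```python
import math

# p is the prime a bad RNG handed out twice; q1 and q2 differ
p, q1, q2 = 101, 103, 107
n1, n2 = p * q1, p * q2      # two "RSA moduli" sharing a factor

g = math.gcd(n1, n2)         # one gcd: cheap compared to factoring
assert g == p
# Both moduli are now fully factored by simple division:
assert n1 // g == q1 and n2 // g == q2
```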
It's this sort of thing that the likes of the NSA, GCHQ, et al., with all their "caged mathematicians", do as "bread and butter" work.
@Clive Robinson: Normally you are right. But if space is infeasible, you do not even need to look at time and not only because of the large numbers involved.
If space is feasible, then you can look at building the table and at lookup. No need for that here.
I am wondering about that 40-bit DES lookup, though. You would need an entry for each possible ciphertext paired with each possible plaintext as the lookup key, and look up the cipher key (unless you assume a chosen-plaintext or chosen-ciphertext attack). That would be a 128-bit lookup key and 56-bit elements, i.e., 2^128 × 7 bytes of storage, which is about 2.4 × 10^39 bytes and quite infeasible.
Or is this for a lookup-table implementation for DES? There you would get around 10ms access time for raw disk access and a storage size of 8* 2^(40+64) Bytes, which still should be pretty much infeasible. Unless you use a fixed key. Then you have 8*2^64 storage size which is feasible, but not without a rather large building for the 150 million 1TB disks.
Moral of the day: don't let anyone talk to you about this blog and your posting whilst you are enjoying your first pint (20 fl oz for those in the US) of badly needed restorative Brownian-motion generator (i.e., a nice hot cup of tea).
It has been pointed out (thanks Dave) that I should have finished my previous post rather than let it hang... And made it obvious for "code cutters"... So,
Firstly if you have not seen it already go look at a page on this blog,
And follow the links to get more in-depth information.
As I said before, factoring RSA keys into their P and Q primes is CPU-cycle hungry. I don't know what the current public factoring record is (@Bruce or others, do you know?), but based on previous attempts the NSA should be up somewhere around factoring one 1024-bit key a week without overly stretching their resources.

Which does not sound like a lot. However, if you have followed the links, you should now know that it is comparatively easy to find whole sets of keys that share one prime. Factoring one key from such a set gives you a fast way into the rest: you simply divide by one or other of the recovered prime factors to discover the unknown prime factor in the other keys of the set. This uses trivial amounts of CPU cycles compared to the actual factoring. Thus relatively large numbers of RSA keys can be broken with just one factoring.
Now, a little bit of crypto history. Encrypted messages in large signal nets usually consist of two parts: the key discriminator, and the message encrypted under the key.

The key discriminator is effectively a way to tell the receiving operator which key has been used to encipher the message. Before WWI (1914), and for some time into it, the relatively few signals meant a code book could be used to translate the key discriminator back into the key. However, it was quickly found that this does not scale well and has all sorts of key-management issues. Primarily, it meant it was easy to find out which messages had been encrypted under the same key. This often allowed an attack in depth, the classic example being an OTP used twice: by simply adding the two messages together in the right way, you cancel out the key stream that gives the OTP its security and end up with the two plaintexts added together. There are then fairly simple but tedious techniques to recover the two plaintexts up to the length of the shorter message (and maybe a few characters more).

Prior to WWII, the Germans realised that warfare was going to be heavily reliant on signals, and they adopted the Enigma machine. However, they were aware of the two-messages-under-one-key problem and came up with a system of key discriminators that allowed individual messages to be encrypted under randomly selected keys. Unfortunately, they made a couple of mistakes: firstly, they double-enciphered the key discriminator, which allowed the rotor wiring to be recovered; secondly, humans are, for many reasons, very bad at picking random sequences, so many messages were enciphered under the same key. This allowed three Polish cryptographers to use the key discriminators to determine which signals were encrypted under the same keys, and to attack not only the key discriminator in depth but also the enciphered messages. As we now know, this work moved forward via the French to the British, and so the Ultra secret was born, which went on to develop all sorts of new and interesting methods of attacking machine ciphers, some of which remain "officially classified" to this day.
Now spring forward to current times: how do we send enciphered messages and establish secure communications? Well, some people with small signal nets, or with very high security requirements (the NSA, GCHQ, et al.), still effectively use "issued keys" from the equivalent of a book. The rest of us, because our potential signal net is worldwide, mainly use asymmetric cryptography based around RSA or other public-key cryptography.

If you look at an email or other message sent under public-key cryptography, what you actually find is the plaintext enciphered under a standard symmetric cipher such as AES, plus an encrypted key that acts as the key discriminator. This key is encrypted by the message originator using the RSA (or equivalent) public key of the recipient, so that the receiving party can decrypt the symmetric key and use it to decipher the message body and recover the plaintext.

Thus being able to factor somebody's public key allows you to rebuild their private key, which gives you access to all the symmetric keys, and thus the plaintext, of every message they have ever received under that key.
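The chain can be demonstrated with textbook RSA on toy numbers (tiny primes, no padding, and an integer standing in for the AES session key -- nothing here resembles deployed parameters):

```python
# Recipient's toy RSA key pair
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+)

session_key = 42                     # stand-in for an AES key
wrapped = pow(session_key, e, n)     # the encrypted "key discriminator"

# Legitimate recipient unwraps the session key:
assert pow(wrapped, d, n) == session_key

# An attacker who factors n rebuilds d and unwraps every session key
# (and hence every message) ever sent under this public key:
d_attacker = pow(e, -1, (p - 1) * (q - 1))
assert pow(wrapped, d_attacker, n) == session_key
```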
By using the fast algorithm to build sets of all public keys that have one prime factor in common, you are effectively mounting "an in-depth attack on the key-discriminator mechanism", which has significant potential.

As we know (you have read the linked articles ;), there are one heck of a lot of very poor random number generators out there, so some of these sets that share a prime factor can be very, very large. Thus factoring just one of the keys in a set allows you to quickly recover the private keys for all the other keys in the set...

Now, what the NSA may well be doing is hoovering up all the public keys they can and building them into sets to recover keys.

However, there is a further wrinkle they can use... Because it's the same poor random number generator generating both primes, this gives "likely factors" into other unknown keys generated by the same design of software or hardware. It is well worth building these into a small rainbow table to try speculatively when factoring one or more keys in other sets. Such a small rainbow table could be kept in high-speed RAM on the CPU "cluster", and a bit of clever software coding could keep much of the information in CPU cache memory or registers of an appropriate CPU design (some GPUs are almost made for the task).
Now back to what is and is not random and how much of it you need.
There are over 7 billion people in this world, and with all the bank cards, mobile phones, PCs, and "roles" people have, it is easy to see we could need something well over 100 billion asymmetric keys. That is 1 × 10^11 keys, or ~2^37 keys, each consisting of two randomly selected primes.

Thus, as the primes should be independent, you would need two lots of 37 bits, or 74 bits, of independent randomness. However, there is a catch known as the "birthday paradox", which means that to avoid random collisions you need twice that number of bits again. So you need an absolute minimum of 148 bits of fully independent, true randomness before you even consider generating the two primes, and you really should consider generating as many truly random bits as the total key length (i.e., 1024 bits for a 1024-bit RSA key).
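A rough sketch of where numbers of that order come from (the exact safety margin is a judgment call; the helper below just keeps the expected number of birthday collisions under one):

```python
import math

def collision_free_bits(draws):
    """Bits of randomness per draw so the expected number of
    birthday collisions among `draws` random values stays below
    one: need 2^bits > draws^2 / 2."""
    return math.ceil(2 * math.log2(draws) - 1)

draws = 2 * 2 ** 37                          # ~2^37 keys, two primes each
bits_per_start = collision_free_bits(draws)  # 75 bits per start point
```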
As indicated, we know there are an awful lot of poor random number generators out there, and worse, on examining the code, many are very, very predictable due to not having true randomness or entropy as a source.

Also, too many random number generators written by application programmers appear to rest on the belief that they can generate entropy by using hash functions on a very, very limited amount of true randomness derived from a physical source and properly debiased. All a hash function does is spread your tiny amount of entropy across all the bits of the hash output: the output bits of the hash are deterministically related to each other and not fully independent.
We may not be able to roll a hash backwards, but the NSA certainly has the computing resources to work out the limited amount of entropy various devices and software generate, and to build a small(ish) rainbow table based on that entropy.
As Bruce and others have indicated in various books and papers gathering real entropy and generating good randomness from it is actually quite a hard problem.
And please don't rely on "code libraries", even those (supposedly) written by experts, because they know nothing about the reliability and characteristics of the physical sources you choose to use as input to their generators. As has been observed before, you cannot magic stuff from thin air no matter how much "magic pixie dust" you use, and the GIGO principle applies with absolute certainty.

[For those who may not have heard of GIGO, it stands for "Garbage In, Garbage Out" and indicates that the output quality of a deterministic process is 100% dependent on the input quality: if the input is bad, you get bad output.]

Anyway, I now feel the need for my third 20 fl oz of strong Brownian-motion generator ;)
Provoking posting and great comments! I read all the way through and found some of the comments to be as consuming as the original post. That said, agree with Bruce in that the gov't has probably not been successful, but also believe that it is possible with their resources. Especially, given the fact (as one poster pointed out too) that many of the opportunities to crack 1024 bit would be by a method other than factoring.
Clive Robinson: Normally you are right. But if space is infeasible, you do not even need to look at time and not only because of the large numbers involved
Well a couple of reasons to mention it,
Firstly, people will talk about time/memory trade-offs, which, to be honest, I've never found to pan out except in very special (read "very, very expensive") circumstances.

Secondly, a rainbow table does not have to be even remotely close to a fraction of one percent of the total space to be useful, and people will argue it's still a valid approach (which it can be, as my post following it points out). A simple, well-known example is MD5 hashes of simple one- or two-word passwords.

As for the 40-bit DES home RAID box, it came about due to cheap hard drives and somebody making the two arguments above: that there are time/memory trade-offs to be made, and that sometimes you don't need the whole space.

If you have (as some protocols unfortunately do) a known plaintext at a known position that exceeds the cipher block size, then you end up with a one-to-one mapping onto a subset of the total possible ciphertext output in that block.

That is, 2^40 keys will only give 2^40 ciphertexts of 8 bytes each. The table produced is thus only 2^43 bytes, or 8 terabytes, in size (you need a bit more, to allow for the HD manufacturers' fiddle of 1 TB = 1.0 × 10^12 bytes rather than 2^40 bytes ≈ 1.1 × 10^12).

However, the table is fairly useless unless you come up with a way to sort it on the ciphertexts. This can be done, as it's a sparse data set (only 2^40 out of 2^64, or 1 in 16,777,216), and lookup can be sped up (and the table marginally compressed) with a first-byte or first-word pre-index once it's sorted.

The point of doing the math was to show that although it could be done, and there was a time benefit, it really did not pay off, due to the maximum access time just to read your first byte, let alone the time to build the initial table and then sort it.
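The sort-then-binary-search idea can be sketched in a few lines; an 8-bit toy cipher of my own invention stands in for 40-bit DES with a fixed known-plaintext block:

```python
import bisect

KNOWN_PLAINTEXT = 0x5A

def toy_encrypt(key, pt=KNOWN_PLAINTEXT):
    # Hypothetical 8-bit toy cipher, purely for illustration
    return (pt * 167 + key * 91) % 256

# Precompute (ciphertext, key) pairs and sort on ciphertext
table = sorted((toy_encrypt(k), k) for k in range(256))
cts = [ct for ct, _ in table]

def lookup_keys(ciphertext):
    """Binary-search the sorted table for candidate keys."""
    i = bisect.bisect_left(cts, ciphertext)
    while i < len(table) and table[i][0] == ciphertext:
        yield table[i][1]
        i += 1

secret_key = 123
assert secret_key in set(lookup_keys(toy_encrypt(secret_key)))
```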
Perhaps the question should be, what can the Chinese and Russians do?
So basically, the NSA sent a phishing email with a subject of "2011 Recruitment Plan" with an attached Excel spreadsheet containing an embedded flash object to an RSA engineer...
@ Bob T
"So basically, the NSA sent a phishing email with a subject of "2011 Recruitment Plan" with an attached Excel spreadsheet containing an embedded flash object to an RSA engineer..."
Yeah, basically. Plus a few dozen other companies. Used Chinese relays. Et cetera.
So a question or two...
If a person gets a list of known primes, would current technology allow nearly anyone to do prime factorization of the crypto keys sufficiently fast?
And how large a list of known primes should a person try to get hold of at a bare minimum?
And another two questions...
If the US government has found a way to do quick prime factorization (considering the shortcuts Clive and others mention, I guess this is not farfetched) to decrypt "enemy side" keys, then the cat would be out of the bag, and they would just have to deal with the fact that the technology also gives them the ability to decrypt domestic communication as well.
For this reason I wanted to ask:
1. does anyone know if any crypto tech exists that does not rely on prime numbers for keys?
2. would it not be possible that some top-secret communication channels use some cryptography (or other form of info hiding/channels/etc.) that is not susceptible to prime factorization?
Re: "Site note for the paranoid: You can allways combine different ciphers to make the problem much, much harder. [...] at least as secure as the strongest of them. "
I just wanted to point out that this isn't necessarily true. It is entirely possible that combining algorithms makes the resulting ciphertext easier to decipher. Combining algorithms basically creates a new algorithm that needs to be analyzed independently.
I'm a devoted adherent of Occam's razor: while it's impossible to rule out that NSA found a crack in AES, or figured out how to make useful quantum computers (etc. etc.) ... suppose that NSA has simply applied what is already known and developed.
As Bruce observed, there's still a lot of 1024-bit RSA around, even though the keylength guidance long ago recommended that they not be used past 2010. The holders of the public RSA factoring record (a 768-bit product of randomly chosen primes of about the same size) estimated that factoring a 1024 bit RSA modulus would be about 1000 times as complex.
I did some looking up and number crunching (I won't bore you with the details here), and estimated what it would take to factor a 1024-bit RSA modulus.
First, I expect that a core cost is energy (electricity), and estimate the cost to be roughly $200,000 to $400,000 per modulus factored. If I assume the energy efficiency of the computing resources to be on the good side, or about $250,000 per modulus, then NSA could factor one modulus per day at a cost of about 2.5% of their estimated budget (officially, the NSA budget has never been disclosed).
I also made a very crude estimate that hardware sufficient to factor a 1024-bit modulus in 24 hours (using the most economical clustering techniques) might cost about $2 billion, or perhaps even less. If this is correct, then the capital financing cost would be somewhat less than the energy cost (assuming continuous operation).
I have no idea whether NSA has done such a thing. Certainly, it is feasible for NSA to have done so, and it makes sense for them to try. Because the cost of discrete logarithms is supposed to be similar to that of factoring, the same hardware might be used to break 1024-bit Diffie-Hellman, which might be more useful in practice because of its application to online communication.
Note that is far as is publicly known, even the whole of the US security budget still would not suffice to break 2048-bit asymmetric encryption in a reasonable time frame.
Note that is far as is publicly known, even the whole of the US security budget still would not suffice to break 2048-bit asymmetric encryption in a reasonable time frame.
yes but isn't that true ONLY if their process actually took the brute force factorization route?
I mean if they had some sort of tables with the prime numbers or some function that would generate the next prime number would it not be much faster?
For example would they not be able to utilize the RSA algorithm itself to generate keys that they can match with those that they want to break?
@clive insightful as always.
@Reed. You hit it exactly right on the real questions/methods in my humble opinion.
@Bruce you summed it up nicely as what is probably really meant.
I would tend to go with Occam's Razor (joke) on this.
As much as the geek in me wants to believe that the super secret geeks/mathematicians/magicians have a really neat way of cracking AES, I have to realistically assess it as other, easier ways of cracking the encryption.
As usual the answer given can reveal as much about the questioner as the question itself.
I have to keep the little kid in me restrained in these kinds of discussions. E.g., someone can crack the keycode to get into a building with a cracker, or they can see which numbers are clean/smudged, or just take a chainsaw to the side of the building. Why would the NSA build such a huge device if not needed? Use only what is needed, and spend the money on more research... Just saying... BTW, I do think they are sucking up every bit of information they can for later analysis; NSA testimony really parsed that carefully... :(
Nobody factors large RSA moduli by "brute force" (i.e., trial division). It is practically impossible. The few examples that have been factored, were factored using the best publicly known algorithm, the General Number Field Sieve (GNFS).
It is on the basis of GNFS that I claim that as of today, 2048-bit moduli can't be factored even with government-scale resources.
As to the use of tables, for attacking 1024-bit moduli (which are no longer recommended, being considered too small) ...
... such moduli are usually the product of two 512-bit primes. There are approximately 20,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 such primes.
@JimFive: Generally, you are right. But for
a) independent keys and
b) OFB mode
combining ciphers must be as secure or more secure than the most secure of them due to basic properties of XOR. Of course, OFB mode has problems as well, for example that one changed bit of plaintext produces one changed bit of ciphertext.
@Clive Robinson: As long as there is a space/time tradeoff, sure. But if memory is infeasible by a very large margin, then there is no such tradeoff and time becomes irrelevant. Also note that what I describe is a very generously selected lower bound.
This is admittedly a special case. AFAIK there still is no efficient way to produce (count) all primes. That would indicate that you do not gain anything with partial rainbow-tables for RSA. Block-ciphers are fundamentally different.
If I were NSA, I would have bought largest keyboard manufacturers years ago just to make sure they are non-TEMPEST.
Biggest problem being after that to keep their market share large enough.
Almost certainly, not necessary! Probably every home or office keyboard you have ever seen leaks information badly.
Though this is not useful without receivers in range of the right keyboards at the right times. Inadvertent EM emissions tend to be at modest power levels, and are not reliably readable without a receiver located nearby (perhaps 10 to 1000 meters).
@ Gweihir, jack, MarkH,
That would indicate that you do not gain anything with partial rainbow-tables for RSA
Not in the "ideal RSA case", but you do get a real step up with relatively small rainbow tables for RSA keys whose primes were found from a lousy RNG, of which there are many, as numerous studies have shown.
And at the end of the day, ignoring "traffic analysis" for the minute, "getting at the plaintext" from the ciphertext of the message is all the NSA is interested in.

So if exploiting weaknesses in poor random number generation gets them the prime factors of the RSA keys, and thereby the RSA-protected AES key, it's good for them...
Now, if you remember, the watchword in journalism and spying alike is "protect your sources", so the NSA (I assume) is highly compartmentalised. Thus, if you read the chain of events the other way around, from plaintext backwards, applying Occam's razor each time:

As a highly compartmentalised NSA employee, you get to see vast quantities of these AES-protected messages being recovered; you, and those you tell, might logically assume the NSA has broken AES.

From a step back up the chain, another highly compartmentalised employee, seeing all these AES keys recovered from PKI-protected messages, might likewise assume that "the NSA has broken RSA etc.".

However, at the next step back up the chain, an NSA employee starts to see the real picture: weak RNGs in comms software used in applications and network devices, with low-entropy starting states or poor entropy production, being exploited.

Now, at this point I will put my hand up and admit that I appear to be committing the cardinal sin of arguing from effect back to cause (which I repeatedly tell people not to do). But... I'm actually starting from the known position of:
1, We know from recent research that there are a lot of PKI certificates that have public keys with shared primes.
2, The research further indicates that certain "net devices" and "applications" made these public keys (the researchers say they are in contact with the designers/manufacturers to resolve the issues).
3, We know that the usual way large primes are found for key generation is by generating a random start point and walking forwards or backwards from it until a suitable prime is found.
4, We know that the random start point is almost always produced by a deterministic software process labelled "random number generator" that few if any device/application developers have any understanding of, or interest in understanding.
5, We know that almost invariably developers will use someone's "pet code", a "book example", or some "software library", because that is the path of least resistance.
6, We know that the testing of such code is "it looks OK", not "it has passed the following formal tests", because the formal tests require knowledgeable people to run them correctly, and there just aren't that many such people out there.
7, We know that nearly all RNG software has nondeterministic input from "entropy sources", but next to nobody knows how much entropy any given "unknown source" has, if any; it is almost always "assumed" to be enough. [Other RNGs, such as BBS, still need random data, either for primes or for a seed.]
8, We know that all entropy sources have bias, but again, next to nobody knows how to measure it properly, let alone remove it.
9, We know there is a long and inglorious history of "real world" failed crypto due to poor random number generators, from causes 4-8.
10, We know that a very big chunk of the people who can do steps 4-8 correctly work for the NSA, GCHQ, et al., and the other major chunk is in academia. Few of the latter, and generally none of the former, will assist in ensuring that a product has both suitable crypto-RNG code driven by a suitable entropy source, and the protection mechanisms to ensure sufficient real entropy has entered the RNG before it makes an output.
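Point 3's generate-and-walk method is easy to sketch. The point of the sketch is that everything downstream of the start point is deterministic, so a predictable start point means a predictable prime (Miller-Rabin below is the standard probabilistic primality test; the numbers are kept small for illustration):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13):
        if n % sp == 0:
            return n == sp
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def prime_from_start(start):
    """Walk forward from a start point until a prime is found."""
    candidate = start | 1            # force odd
    while not is_probable_prime(candidate):
        candidate += 2
    return candidate

# A predictable start point yields a predictable prime:
assert prime_from_start(10 ** 6) == prime_from_start(10 ** 6)
```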
So if I use "poor RNG" as my well-known "cause", I can easily argue in the correct direction to the "effect" that looks like "NSA has broken AES", and show why NSA employees and journalists might believe that.

As for the use of rainbow tables in "cracking RSA primes": I'm only talking about small tables of "good guess primes" to use as "an optimistic filter" that simplistically does a divide with each table entry in turn to see if the result is another prime. If not, it's no big loss; you just move on to the next key. The aim is not to crack "a" key but to crack "any" easy keys. In this respect it's a bit like a car thief wandering around a car park trying door handles to see if a car is unlocked, not a safebreaker with a specific target in mind.
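The "optimistic filter" is nothing more than trial division against a small table. A toy sketch (the table entries are hypothetical stand-ins for harvested weak-RNG primes; a real table would hold 512-bit values):

```python
# Hypothetical "good guess" primes harvested from a weak generator
guess_primes = [101, 103, 107, 109]

def try_filter(modulus, table):
    """Trial-divide by each table entry; a hit factors the key,
    a miss costs almost nothing and you move to the next key."""
    for p in table:
        if modulus % p == 0:
            return p, modulus // p
    return None

assert try_filter(101 * 9973, guess_primes) == (101, 9973)  # easy key
assert try_filter(9967 * 9973, guess_primes) is None        # move on
```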
Why might this work? Well, we also know that the real-world failed RNG cases we have heard about existed for some considerable time before they became obvious. And in most cases the failure was due to:

1, A highly predictable input to the RNG that could be externally predicted (such as a process ID, time of day, serial number, network numbers, etc.).
2, Incorrect implementation of the RNG in software.
3, Little or no entropy in the RNG input prior to use of the output.
And in many cases the actual entropy at the RNG output was less than 2^20.
We can safely assume that the NSA, GCHQ, et al. have people devoted to analysing any new devices and software that might be used by those they have a "mandate to listen upon", and that the RNG would probably be a starting point, so any failings would be well known to them. Especially as there is actually so little crypto-RNG software out there.

Therefore it is safe to assume that the NSA has both the technical skills and the actual product knowledge to build and correctly populate such rainbow tables.

We also know that a rainbow table with 2^20 entries (i.e., one million) will sit quite comfortably in a PC's RAM, even with 512-bit entries (2^20 × 64 bytes = 64 MB).
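A sketch of why such a small table defeats a weak generator: if the whole seed space is enumerable, you precompute every possible output once and recover seeds by lookup. (Scaled down to 2^16 seeds for speed; SHA-256 is a stand-in for whatever the weak generator does to "stretch" its seed.)

```python
import hashlib

SEED_SPACE = 2 ** 16    # scaled down from the 2^20 discussed above

def weak_rng_output(seed):
    """Hypothetical weak generator: hashing a tiny seed spreads it
    over 256 bits but creates no new entropy."""
    return hashlib.sha256(seed.to_bytes(4, "big")).hexdigest()

# One-off precomputation: fits trivially in RAM
table = {weak_rng_output(s): s for s in range(SEED_SPACE)}

victim_output = weak_rng_output(31337)   # observed "random" value
assert table[victim_output] == 31337     # seed recovered by lookup
```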
Such a rainbow-table system would be quite happy as an undergraduate research project using the PCs in a university lab; the other parts would be quite happy as postgrad-to-doctoral thesis projects. It's also the sort of thing I would expect to see five or six publishable papers written about, covering the various aspects (and arguably some of the existing papers would just need a few sentences changed, to give a different viewpoint, to be relevant).

For instance, the work that Bruce's blog page from last month points to could easily have one or more second/derived papers written that show how, once the "sets" of keys with common primes are identified, the result of factoring just one key could give up all the other primes involved very quickly. These could then be analysed either to show further weaknesses in the underlying RNGs, or for suitability for putting into a rainbow-table system used as a "quick guess" pre-filter, making full key factoring unnecessary for a large number of public keys in use on the Internet.

If such papers did appear, I'm reasonably certain they would attract a lot of attention, and as such make a few people's names sufficiently well known that they could capitalize on the publicity career-wise (and if you are reading this and think "yes, I'll do that", then buy me a beer ;-)
@Clive Robinson: With insufficient entropy in the key-generation process, all bets are off. And I agree that this is a common problem encountered in practice and that cleverly crafted rainbow-tables may help here.
"1, A highly predictable input to the RNG that could be externaly predicted (such as a process ID, time of day, serial number, network numbers etc).
2, Incorrect implementation of RNG in software."
I agree completely. I would only add that designing good RNGs is hard, but testing and compliance-checking RNGs is a hellishly difficult task.

Most engineers select a good basic design for an RNG but screw up the implementation, especially with respect to susceptibility to locking onto external signals or power-supply noise. Engineers then fail to ever properly test the resulting RNG.

Testing RNGs raises the fundamental problem of "what is random?" If I test a single RNG and it never gives me the same number twice, is that random?
Is it random only if there is no determinable count pattern?
Is the number only random when the "entropy" is evenly distributed across the entire length of the generated random number? (think about what is implied by the concept of a maximum "1's , 0's" string distance)
Depending on your intended RNG application, you can find at least half a dozen definitions of "random". So compliance testing is a doubly difficult job.
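For a flavour of what a formal test looks like, here is the simplest one, the monobit (frequency) test along the lines of NIST SP 800-22. Note that a trivially predictable sequence can still pass it, which is exactly why "it looks OK" proves nothing:

```python
import math

def monobit_pvalue(bits):
    """Monobit frequency test: p-value for the hypothesis that
    ones and zeros are equally likely."""
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

# All-ones fails decisively...
assert monobit_pvalue([1] * 10000) < 0.01
# ...but a completely predictable 0101... pattern sails through:
assert monobit_pvalue([0, 1] * 5000) > 0.01
```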
BTW, every manager's a** will pucker up tighter than a snare drum when you tell them the minimum sample size and necessary test time required to check for randomness.
BTW every managers a** wil pucker up tighter than a snare drum, when you tell them the minimum sample size and necessary test time required to check for randomness
The last time I quoted it to an Engineering Director, he looked like his 455 had suffered from "the healing hand of Jesus", as he fairly rapidly became full of it...

I don't like using the word "random" any longer when talking to engineers and the like; I prefer "nondeterministic", because the word "random" appears to engage "magic thinking" in many heads. "Random" also gives engineers a mono-focus on the output, not the process, and many people fall prey to this, not least the various levels of software person (code cutters upwards, including some computer scientists).
And this different outlook is vitally important. One way of looking at a nondeterministic source is as a deterministic source with bias and noise, where the deterministic source is either a chaotic oscillator or, more correctly, an amplifier of high gain.
Thus, as you note, any signal at its input is going to be reflected in the output. Now if this signal is sufficiently periodic then it "might" be picked up during the design and testing processes, but otherwise not.
But what if the signal is actually "truly random" in its own right? Yup, zero chance of it being picked up in the design and test process.
Now if you say that to many engineers you get down to the "random is random" argument...
And it is not until you ask them what would happen if you had two nondeterministic sources adjacent to each other, both subject to the same truly random input signal, that some of them get it. For others you have to say, "And what if I put a truly random signal onto an RF carrier and pointed it at your nondeterministic process?" before they get it. However, a sizable number have to have it pointed out to them that, as the attacker, I now fairly well know the output of their nondeterministic process before the penny drops. Oh, and then there are those destined for walnut corridor who say "Yeah, and what does that have to do with anything!"....
[For those reading who might not be "old school engineers": several hundred years ago it was discovered that pendulums that were close enough together influenced each other and would fall into "lock step", where they would synchronize their movements. In modern electronics it's called "injection locking", and the chances are you have an example in your home with the "colour burst" oscillator in your TV.]
I was mucking about doing active fault-injection attacks with modulated RF carriers back in the 1980s, fully independently of others. What has amazed me is that it was only very recently that a couple of students wrote a paper showing how they had turned what was supposed to be a 32-bit random number generator into one of less than 8 bits just by pointing an unmodulated RF carrier at it....
And as we know the Russians amongst others have pointed both modulated and unmodulated carriers at various diplomatic missions and secure sites for years....
Thus I would say it's actually impossible to certify a nondeterministic source unless you fully control its environment. And with 70dB above thermal noise easily achievable in an active attack, most engineers are not going to get even close, and worse, the users of their products never will either, because the output will pass all the tests they throw at it...
And people ask me why I still use dice...
My scheme involves two good ciphers, AES or eStream grade, layered on top of each other with separate keys. I'll reconsider it if you can show one case where a compromise occurred b/c two good ciphers were used in this way. So far, it seems to be a theoretical thing that never happens in practice. At least, it hasn't come to my news wire.
I'm not an expert on cryptography, though I've worked with AES a bit. My level of understanding of AES is not much more than what is found on Wikipedia at this time. That being said, it seems to me that it should be possible to exploit a header of the encrypted text that is known or predictable, such as knowing that the first five bytes of an XML file are most certainly "<?xml"...
@ Nick P,
So far, it seems to be a theoretical thing that never happens in practice
Depending on your viewpoint, it's "does not happen now"...
Actually it probably does in a limited way but it should be of near zero significance these days.
That is: if you have a known block of plaintext, encrypt it under one key, and then decrypt the resulting ciphertext under a different key, will the resulting plaintext be the same as the original plaintext?
Well you need to qualify the question a bit further by saying for "one, some or all the possible blocks of plaintext under the two keys".
If you think about it, because most of our block ciphers are based on a modified Feistel round, which in effect only XORs the plaintext with the output of a (supposedly) one-way function, all that needs to happen is that, across the number of rounds, the outputs of all the one-way functions effectively sum to the same value under the two different keys. Or, to put it in a looser context: on a bit-by-bit basis across the block width, all the one-way function output bits corresponding to an individual plaintext bit need only have the same parity...
Thus it can be seen that the design of the one-way functions is quite challenging to prevent such an event occurring (and thus why many amateur block cipher designs are weak).
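A toy illustration of how a badly designed one-way function collapses: the sketch below (Python, a deliberately weak cipher of my own making, not any real design) uses a Feistel structure whose round function is pure XOR. Because everything is linear, ciphertext differences are completely independent of the key, a distinguisher the attacker gets for free:

```python
# Toy Feistel cipher with a LINEAR round function (F = half XOR key).
# Deliberately weak: because every operation is XOR, the whole cipher is
# linear, so ciphertext differences leak plaintext differences no matter
# what the key is -- the kind of flaw found in amateur designs.

def feistel_encrypt(block, round_keys):
    """block is a (left, right) pair of 8-bit ints."""
    left, right = block
    for k in round_keys:
        # Linear round function: F(right, k) = right ^ k  (no S-box!)
        left, right = right, left ^ (right ^ k)
    return (left, right)

def xor_pair(a, b):
    return (a[0] ^ b[0], a[1] ^ b[1])

p1, p2 = (0x12, 0x34), (0x56, 0x78)
keys_a = [0xAA, 0xBB, 0xCC, 0xDD]
keys_b = [0x01, 0x02, 0x03, 0x04]

# With a linear F, the ciphertext difference depends only on the
# plaintext difference -- the keys cancel out completely:
diff_a = xor_pair(feistel_encrypt(p1, keys_a), feistel_encrypt(p2, keys_a))
diff_b = xor_pair(feistel_encrypt(p1, keys_b), feistel_encrypt(p2, keys_b))
assert diff_a == diff_b  # key-independent: a free distinguisher
```

Real designs avoid this by making F highly nonlinear (S-boxes), which is exactly the "challenging" part.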
So to prevent the use of two chained ciphers looking like one cipher, in part or in whole, you would need to look at both the round structure and the one-way functions and check that they are orthogonal to each other.
However most ciphers that made it to the final round of the AES are sufficiently well designed that it should not be an issue.
Most probably what the NSA (and its associates) have is a special quantum computer ("Quantum Computation Cognitive Footprint") dedicated to finding the private parts of RSA/DSA/EC asymmetric keys.
Probably they can break each one (from 0 bits up to "infinite" bits of RSA/DSA/EC) in less than one second.
So in practice, because there isn't normally anything else protecting AES-encrypted messages on the Internet, they can intercept and decode every single connection in real time, or later.
Even if they can't break, for example, EC easily (which they are recommending for protecting confidential information)... has anyone else noticed that both NIST and ECRYPT II state that for an elliptic curve to have the same level of security as a 256-bit symmetric algorithm it needs AT LEAST 512 bits, and a hash algorithm of AT LEAST 512 bits... but the NSA says that for those same 256 bits of security only 384-bit curves and a 384-bit hash are needed... which, according to NIST, is only enough to protect up to 192 bits of security?
Sure, you can encrypt the messages using just a passphrase of at least 40 ASCII characters... but that is only useful for a few people... using e-mail... and with any disclosure (malware, video surveillance, and others) the message gets opened. And... where is that software? Does it do that securely? And how do you make sure it really comes from the claimed source?
A better option is to use the 256-bit Serpent algorithm... considered the most secure AES finalist of all. But still... the implementation must be totally secure.
Also don't forget... DES was also recommended by the NSA, and at that time they reduced the strength in bits of the cipher in order to break it more easily... AES (Rijndael) was considered the least secure of all the finalists (smallest security margin)... so probably the first to be fully broken (if it isn't already, by the NSA and its associates).
Why would they put their own confidential stuff at risk? They have already done that in the past (DES); they are pretty sure no one else can break it easily... or have access to the information in the first place (or so they think)... and for some reason they have Suite A... with secret algorithms to protect the real ultra-top-secret information. So the rest of the world using AES is pretty good for them, because they can intercept everything and decode it in real time, to see if there is anything interesting inside.
Paranoia aside, even if they can break all RSA/DSA/EC keys and see everything in the clear, they still have to analyse it, separate the good from the bad stuff, and store all of it... storing sure isn't a problem for them, but the rest still seems to be, because even analysing the clear traffic is already a mess with so much junk out there; and encrypted traffic is in most cases not that important, even for them.
@ Clive Robinson on Joao's post
You care to take a look at it? A lot of unfamiliar terms for me. I'm not educated on the topic enough to confirm my suspicion it's pseudoscience or just false.
@ Clive Robinson on cipher issue
"However most ciphers that made it to the final round of the AES are sufficiently well designed that it should not be an issue. "
That's what I was thinking, albeit with less expertise on the subject. ;) I figure you can also benefit similarly from one of the eSTREAM stream ciphers, such as Bernstein's Salsa20. I like Salsa20 specifically b/c it's extremely fast & can be made faster by subtracting rounds, which shouldn't hurt security in practice b/c it's overdone. Two stream ciphers, two block ciphers, or stream+block (gotta be careful there).
We could also use older ciphers like Blowfish or IDEA that haven't been cracked. Then, randomly picking a combination should give the adversaries looking at the ciphertext a hard time.
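The cascade idea above is easy to sketch (Python; the toy SHA-256 counter-mode keystream below merely stands in for real ciphers like AES-CTR or Salsa20 -- an illustration of the layering, not a recommendation to roll your own):

```python
# Sketch of the two-layer cascade with independent keys. The toy
# keystream (SHA-256 in counter mode) stands in for real stream ciphers.
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cascade(msg: bytes, key1: bytes, key2: bytes) -> bytes:
    # Layer one cipher over the other, each with its own key.
    inner = xor_bytes(msg, keystream(key1, len(msg)))
    return xor_bytes(inner, keystream(key2, len(msg)))

ct = cascade(b"attack at dawn", b"key-one", b"key-two")
# XOR streams are their own inverse, so decryption is the same operation:
assert cascade(ct, b"key-one", b"key-two") == b"attack at dawn"
```

The point of the design is that an attacker must now defeat both layers (or find an interaction between them), which matches the rationale for randomly picking combinations.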
Water follows the path of least resistance. If there is a wall in the way it will go under, around or depending on the material and if it is porous, through.
@Joao the NSA uses Suite A for a reason...:-)
@ Nick P
"You care to take a look at it? A lot of unfamiliar terms for me. I'm not educated on the topic enough to confirm my suspicion it's pseudoscience or just false."
Most terms seem familiar to me, and based on my own previous investigations over the years, this does seem real, or at least close to the reality.
There was a TV episode (on the History Channel or something like that) where someone who worked at the NSA said that in 2004 (or earlier) they were about 50 years ahead in the mentality and technology used there. Many things out there were invented by them and years later sold to private companies to make a profit (they own the patents), when they already had something much better... mostly electronics-related stuff.
So their having a quantum computer dedicated to breaking at least asymmetric algorithms like RSA/DSA/EC is almost certain! Don't forget that for years they (the NSA) were doing everything to stop PGP... and from one day to the next they stopped worrying... why would that be? It couldn't be just some weakness in implementation, because those would be caught over time, as many have been. Having the computing power to break RSA/DSA/EC is much easier to believe... since that would be the only thing giving them some assurance that they could still surveil everybody else, as it's their job to do. A weakness would not be enough, because many government/military/commercial clients use their own products, designed by really smart people who really understand what they are doing... and if they made them compatible with PGP/OpenPGP... that would be the end of the game.
You are vastly overestimating them. The early Bamford book that discussed their specific technology compared to COTS at the time showed they were often about 5 years ahead & mostly on incremental type stuff. They might have a quantum computer. I've been promoting quantum resistant algorithms for some time now. The best is (NTRU?) right now. I'd advocate a dual signature approach, one being RSA and the other quantum resistant.
As for PGP, I think they stopped being concerned about it when they realized everyone was using computers that are easy to hack and backdoor. It's almost trivial to break into modern home and corporate network if you have their level of sophistication. Just leave USB drives with stealthy rootkits & covert channels in the parking lot, then BAM! You're in!
The most likely case is that public key crypto is still secure enough against most opponents, maybe the NSA too. I think they're more likely to try other means, and any quantum computers would probably be doing shared batch jobs. This means they would only be able to handle so much, leading to prioritizing. So, even if they have QC, you're still unlikely to be affected by it unless you're really important.
@Joao et al.
The cryptome link posted by Joao is simply a copy-and-paste from ... a comment on Bruce's blog! The comment is signed "energyscholar".
If I understand what it is saying, then it is flat wrong. The core claim seems to be
"The QC approach that actually works, in a production-ready scale-able way, is to run a virtual Turing machine atop a winner-take-all-style teleportation/entanglement-based recurrent topological quantum neural network (QNN)."
Even if that volcanic eruption of Tom Swift jargon means something, a "virtual Turing machine" would have the same limitations as any Turing machine (in other words, any conventional computer). Quantum computers can "run" Shor's algorithm, precisely because they are not Turing machines.
So even if the way to make a practical "quantum computer" were to "run a virtual Turing machine" on some kind of quantum construction, such achievement would not be an advance in the factoring of giant numbers, or the computation of difficult discrete logarithms.
@ Nick P,
... you can also benefit similarly from one of the eSTREAM stream ciphers ...
I need to sound a note of caution about stream ciphers.
As I noted earlier, designing good block ciphers is quite difficult; well, it's more than doubly so with stream ciphers. Further, stream ciphers don't attract as much analytic interest from either the cryptanalysis or engineering fraternity as they deserve.
Which is odd, because stream ciphers are, from a high-level viewpoint, easier to attack: the stream generation process is effectively independent of the text input, unlike block ciphers.
At a slightly lower level, all stream cipher generators can be seen as a state array that is updated in some fashion, with the value of the state array "mapped down to the output bit(s)". That is, you could envisage the state array providing the address inputs to a very large ROM and the data output bits from the ROM being the stream cipher output.
Thus, in use, the weak points of the stream generator are the contents of the ROM and the state array update mechanism, which is often linear in process. In fact, without other input it can easily be shown (by simple induction) that all stream update processes are simply equivalent to incrementing the state array in your chosen counting method (binary/decimal increment with carry, Chinese clock counting, Gray code, etc, etc). So the reality is that the only weakness is the "mapping down" process of the ROM contents and how it protects the actual value of the state array.
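That model — a counter for the state update, with all the cryptographic burden on the "map down" — can be sketched directly (Python; SHA-256 here is an illustrative stand-in for the ROM, not a real stream cipher design):

```python
# Sketch of the state-array model above: the state update is literally a
# counter, and the entire cryptographic burden falls on the "ROM" map-down,
# here played by SHA-256.
import hashlib

def stream_generator(key: bytes, nblocks: int):
    for counter in range(nblocks):                 # trivial state update
        state = key + counter.to_bytes(8, "big")   # the "state array"
        yield hashlib.sha256(state).digest()       # the "map down" ROM

blocks = list(stream_generator(b"secret key", 3))
assert len(blocks) == 3 and all(len(b) == 32 for b in blocks)
# Different keys must map down to unrelated outputs:
assert blocks != list(stream_generator(b"other key", 3))
```

This is essentially how counter (CTR) mode works, which is why CTR mode's security rests entirely on the strength of the block cipher doing the mapping.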
So as a rough rule of thumb when chaining different cipher types, only use stream ciphers on plaintext before a block cipher or after a block cipher that has flattened the statistics of the plain text to an acceptable level.
However, I urge you to take a step back from block ciphers and actually view them not as a single process but as the combination of two separate processes.
The first process is the key expansion process, which takes in the "key" and expands it to the "round keys"; in effect it is exactly the same as a stream generation process but without the continuous update.
The second process is the rounds process, which usually takes the form of a stack of Feistel rounds acting alternately on half the input block. Each round takes as the inputs to its one-way functions both text from the previous round and a round key.
Thus you can see that, in effect, the Feistel round is at its heart a stream cipher; the only difference is that it updates the state array not by a fully deterministic process but from the plaintext input.
With a little thought in that area you can see that you could replace the traditional XOR mixing function of a stream cipher with a stack of Feistel rounds, which would offer a significant advantage for minimal effort. Likewise, you could replace a block cipher's key expansion process with a stream cipher by keeping it continuously updating.
You will, after a little further thought, realise that the "one-way function" is just another "mapping down" process, and that a hash function is likewise a "mapping down" process.
Thus, if you wished to, you could use a hash function as a stream cipher or as part of a Feistel round. You can also reverse the logic to realise that stream ciphers, block ciphers and hash functions can all stand in for each other with a little care, and that you can leverage the strengths of each in a single "combined cipher".
You could take Salsa and clock its output into an array that provides the round keys as half the input to a hash function, which stands in as the one-way function in a Feistel round; the other half of the input to the hash is the half of the plaintext from the previous round that also gets passed to the next round unmodified.
Now, if you think about it, you don't actually need to clock the stream cipher into an array for all the rounds in one go, but can do it on the fly with each round. You could also, if you wished to be a little more tricky, actually swap the order in which the stream output fills the individual round keys. So with, say, sixteen half rounds, the round keys might start out in the same order from the stream cipher for the first block encryption. But for the next encryption the order might be offset around by, say, the value of the last four bits of the plaintext from the previous block...
Providing the underlying hash function is secure, you are adding extra confusion to the process in a dynamic way that generally has only advantages.
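The "hash as the one-way function in a Feistel round" idea can be pictured with this minimal sketch (Python, a toy construction of my own, not a vetted cipher): SHA-256 plays F, and because the Feistel structure never needs F inverted, decryption is just the rounds run backwards:

```python
# Toy Feistel cipher whose one-way function F is SHA-256 -- a sketch of
# the "hash standing in for the round function" idea, not a vetted design.
import hashlib

def F(round_key: bytes, half: bytes) -> bytes:
    # The one-way function: hash the round key with the right half.
    return hashlib.sha256(round_key + half).digest()[:len(half)]

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(left: bytes, right: bytes, round_keys):
    for k in round_keys:
        left, right = right, xor_bytes(left, F(k, right))
    return left, right

def decrypt(left: bytes, right: bytes, round_keys):
    # F is never inverted: just run the rounds backwards, keys reversed.
    for k in reversed(round_keys):
        left, right = xor_bytes(right, F(k, left)), left
    return left, right

keys = [b"rk1", b"rk2", b"rk3", b"rk4"]
ct = encrypt(b"12345678", b"abcdefgh", keys)
assert decrypt(*ct, keys) == (b"12345678", b"abcdefgh")
```

Feeding those round keys from a stream generator instead of a fixed list gives the "combined cipher" described above.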
@ Nick P,
I'm not educated on the topic enough to confirm my suspicion it's pseudoscience or just false
I suspect you are, but have become distracted by trying to work out why on earth anyone would want a quantum neural net, or use it for something as mundane as building a recognition filter driven by trying to determine the cognitive signature of the entity providing its input.
Others have mentioned the limitations of Turing's universal computing engine.
However, have a look at the bit that talks about reversible logic and XOR gates. One form of reversible logic is the Fredkin gate; the idea is that you have one or more inputs to the gate and an output, and providing you can push the output back into the output, it will come back out of the input. Thus no information is destroyed and thus no energy used.
Now consider the XOR gate. It can be viewed as a "controlled inverter", and as we know the inversion function is reversible without issue, so no information is lost. Provided you know the output and the state of the "control" input on the XOR gate, no information is lost, as the other input state can be fully determined (it's this ability that makes it so useful for cryptography ;)
Now, it's fairly well known that the XOR gate can be made of four NAND gates, or their logical replacement (via De Morgan) with OR gates and inverters (which can easily be made with a NAND or NOR gate).
However, consider the problem with both the OR and AND gates: even with knowledge of one of the inputs and the output, can the state of the other input be correctly determined all the time?
Well, the answer is no. In the case of the OR gate it cannot be done when the known input is high; likewise, with the AND gate it cannot be done when the known input is low.
Finally, have a think about how you would arrange XOR gates to make an AND, OR, NAND or NOR gate, and what implication this might have for a Turing universal computing engine (quantum or otherwise).
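The reversibility asymmetry described above is small enough to check exhaustively (a Python sketch of my own): XOR always lets you recover the unknown input from the output plus the known input; AND and OR do not.

```python
# Check the reversibility claim exhaustively: given the output and one
# known input, is the other input uniquely determined?  For XOR, always;
# for AND and OR, only for one value of the known input -- the other
# value destroys information.
def recoverable(gate):
    """True iff the unknown input is always determined by (known, output)."""
    for known in (0, 1):
        outputs = {gate(known, other) for other in (0, 1)}
        if len(outputs) < 2:        # both candidates give the same output
            return False
    return True

assert recoverable(lambda a, b: a ^ b)          # XOR: fully reversible
assert not recoverable(lambda a, b: a & b)      # AND: ambiguous when known=0
assert not recoverable(lambda a, b: a | b)      # OR:  ambiguous when known=1
```

This is exactly why XOR is the workhorse mixing operation in ciphers: it combines key and data without losing a single bit of either.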
Oh speaking of Turing, thißsssssssss[truncated]
@ Clive Robinson
Assuming you hadn't already lost me, my mind definitely went blank on this one:
"Oh speaking of Turing, thißssssssssss[truncated]"
Presumably, this is a Turing machine tape, with an alphabet of 5 symbols, but without knowing the machine's programme, how can we interpret it?
@ Nick P,
Oh speaking of Turing thißssssssssssssssss
The Android phone I (still) use had taken a walk in the park again... and it took so long to recharge and reboot that I'd forgotten to check my post and give the usual apology.
The final sentence should have said,
Oh speaking of Turing this year will be the hundredth anniversary of his Birth in North West London.
As for the rest of my post, it has been said "you can paint a picture with a thousand words" (or some such ;) but there are times when a few simple pictures or sketches would make it clear with just a few strokes of the pen...
Oh my god, I will need to clean up and bring back to life my "OTP" one-time pad encryption software!!! Hell, can't I enjoy some time feeling that my data is secure against all the bad guys (NSA included)? Please... ignorance is a better bed to sleep in.......
Now I understand why they launched the diabolical SOPA/PIPA program!!!! They want to limit the quantity of data to store in the wallet!!!! Yeah!!!!
If your "one time pad encryption software" is some kind of stream cipher (almost every crypto-noob who claims to use OTP, has re-invented stream ciphers for the n-thousandth time) ... then it is probably a lot less secure than AES.
Well, MarkH, thanks for your advice, but I'm aware of the basics, of the minimal requisites for the pads to be considered secure; 20 years of experience in the matter...... I suggest you use the same kind of program if you want your privacy with you, always..... The best for you. And happy Easter.
If -- and only if -- the "OTP software" (A) requires you to securely retain a key at least as large as the sum total of all encrypted data, and (B) uses a key, every bit of which is as random as the result of tossing a fair coin (for example, from a GOOD hardware true random generator), then it is really OTP.
If the key is smaller than the data, then it is not OTP. It is a stream cipher. There's nothing inherently wrong with stream ciphers, some have been quite successful. But anyone selling such software as "OTP" is either ignorant, or committing fraud.
But OTP can never be broken by cryptanalysis -- it is absolutely impossible for anyone to decrypt the ciphertext without the key, or conversely to reconstruct any part of the key without already knowing the corresponding plaintext. Further, knowing part of the key (say, 1000000000 bits of it) does not in any way help in guessing any other bit of the key. But these properties only hold for true OTP.
A stream cipher can be cryptanalyzed like any other cipher, and may be broken despite once having been thought to be secure -- many of them have.
Though the principle involved here is simple, thousands upon thousands of people without proper cryptographic education repeat this same mistake. I hope that Jose already knows this, but so many fall into this trap I like to "set the record straight."
When you said,
But OTP can never be broken by cryptanalysis
You forgot to add the very important rider of,
when used properly
Because if the keytext gets used twice, it's trivially simple to find and recover both plaintexts.
Which is another reason OTP is usually considered impractical for ordinary use (though it is still used for certain specialised applications).
There is a partial solution to the "accidental reuse" problem by using a more complex mode for mixing the plaintext and keytext to get the ciphertext, thus increasing the difficulty of finding the reuse and recovering the texts. However this breaks the rule of "simplicity" for hand ciphers.
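The keytext-reuse failure above takes only a few lines to demonstrate (Python sketch): used once the pad is unbreakable, but on reuse the pad cancels out of the XOR of the two ciphertexts, handing the attacker the XOR of the plaintexts with no key knowledge at all.

```python
# Demonstration of the "two-time pad" failure: reusing the keytext lets
# anyone cancel it out of a pair of ciphertexts.
import os

def otp(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must be at least as long as the data"
    return bytes(d ^ p for d, p in zip(data, pad))

pad = os.urandom(14)
p1, p2 = b"attack at dawn", b"retreat at ten"
c1, c2 = otp(p1, pad), otp(p2, pad)     # same pad used twice: fatal

# The pad cancels: c1 XOR c2 == p1 XOR p2, recovered with no key at all.
leaked = bytes(a ^ b for a, b in zip(c1, c2))
assert leaked == bytes(a ^ b for a, b in zip(p1, p2))
```

From the XOR of two natural-language plaintexts, classic "crib dragging" then recovers both messages, which is what makes the reuse trivially exploitable.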
Homo sapiens being the doltish species that we are, it is indeed necessary to explicate that "one time pad" does not mean "n time pad" such that n > 1.
I still marvel that people capable of missing such concepts fancy themselves crypto inventors. I have to assume that "sapiens" was applied with a dash of irony.
@ Mark H,
Only a "dash" :-)
Maybe the NSA can break AES. But if they can, it is a big secret. And that constrains them.
What happens if they read the secrets in your AES encrypted e-mail? They will tread carefully, maybe try to "discover" the same thing again, but by a less sensitive technique. They won't want to reveal how they found out. For example, they won't give the plaintext to your local police department, but if the discovery is big maybe they tip off police or FBI with something vague.
Think of it as a partial, and self-imposed, exclusionary rule.
Frankly, I hate math, and I'm probably on the wrong board. But I use encryption because I sometimes travel to countries where organized crime and government snooping are a big problem. The inferred government keeps building bigger and bigger supercomputers; we seem to be in a race to build a bigger one. Human nature being what it is, the key is in the unencrypted data somewhere that someone wrote down (stored digitally), as most of us can't remember our keys. Hence, these large computers are just a more powerful method of making "sideways" attacks looking for random human slip-ups.
The NSA is secure. They really do keep their mouth shut. It's the other government that worries me. What if they talk, because money talks where they live? Then someone in the criminal network has access to some of the world's most powerful computers to search for all of those slip-ups that people make. Well, for now, they haven't. They are still using what you all would consider primitive methods, such as spoofing an email address so that some low-level clerk reads it and lets them into the network.
But what if they get access to those government supercomputers to run quantum-level attacks? No, I never ventured anywhere near the physics labs; I meant gargantuan spoofing attacks. And what if the inferred government "sublets" its government hacking out to organized crime to cover its tracks? And what if black becomes gray and the "black contractors" decide to make a little money on the side, using those supercomputers for crime while they are hacking government computers? It hasn't happened, YET. But sooner or later the inferred government's computers will be used for a giant bank heist on a level never seen before (again, using all of that computing power to find some sloppy bank CEO who doesn't like math either), leaving them very red-faced. Could that happen? I may not know much math. But I can ask questions...
I've programmed AES several times, like many others, and everybody knows it's impossible to break. Math rules are invariable. You can still make your own app to map bits onto a matrix and move rows or columns randomly, while you are the only one who knows the right moves. There's no algorithm to rearrange a randomly mixed matrix. However, there's still the round key, which makes it much more impossible ;)
If this whole talk of quantum computing actually starts to work in the next 10 years, it seems all these hash algorithms would be rendered useless.
So I guess all this means is that if I send an encrypted message to a friend asking if he wants the latest Playboy centerfold I have scanned or he wants a free copy of my ringtone, it will not be worth the G's time, effort, and money to try and decipher my message.
What about this?
"TWIRL is still a hypothetical device - it has not yet been built. However, its designers, Adi Shamir and Eran Tromer, estimate that if TWIRL were built, it would be able to factor 1024-bit numbers in one year at the cost of "a few dozen million US dollars". TWIRL could therefore have enormous repercussions in cryptography and computer security - many high-security systems still use 1024-bit RSA keys, which TWIRL would be able to break in a reasonable amount of time and for reasonable costs"
Re the problem of 'true' random number generation.
Why not: "please hold your watch close to the radioactive-decay counter for at least 1 second" ?
We could have 2 levels of security: 1. a password, and 2. a token + fingerprint.
The user needs the password to be verified, and the token + fingerprint to continue access. That should be a hard problem for an attacker.
Follow NSA's money and what do you see?
Highly restricted communication.
Lots of smoke.
Events prior to the event (slicing into 20+ telecoms, etc).
It means they have already discovered something really significant in their research lab and this is the roll out. Otherwise, money for a new mega facility would not have been justified.
This new development is not a fishing expedition to figure out how to do something. Labs are good enough to demonstrate capabilities and they already have labs.
In this case, they have likely hooked the big one and they intend to land it, or likely already have. The Jaguar was sitting there and they were playing around with it and they hit on something big.
They most likely have a proven way to access data hiding behind AES-256 and everything beneath that. Likely not a brute-force attack -- another way -- perhaps they found a weakness in AES, or not a weakness but a back door designed into AES from the beginning by its designers.
Due to AES's scrutiny, it is more likely a back door than a weakness. It is also more likely the NSA found it out by low-tech means: spying on the AES developers who, one careless day, used the backdoor they made, and the NSA followed them through it (by one of the myriad ways already mentioned above).
"Bingo! We are in!"
Not hard to imagine or do in either case. Otherwise, why so secret, what justifies such a facility and such secrecy? Why report anything to a select few in Congress unless you actually have something worth reporting? And why approach Congress in the first place unless you need additional undisclosed funding for capitalizing on the new development or breakthrough?
What would it sound like?
"Dear Chairman, AES is compromised. We caught the developers using an undisclosed backdoor and now we also have unlimited access to every AES encrypted file we have ever collected and will ever collect. So we will monitor/persuade those two to make sure they keep it to themselves. In the meantime we need $XXX billion to make an intel factory to process and analyze all this data. This is the turning point in history for US Intel. We will continue promoting AES until we discover that the backdoor exploit has proliferated beyond the developers. So far, so good. To play it safe we will advise the most critical govt data that is using AES be shifted off of AES without public notice, or a modified version/application of AES sans the backdoor (at least, THAT backdoor) be implemented selectively by us for approved govt depts only. Oh, and this is actually how we caught Osama, but please don't tell anyone or we will have to, ahem."
Life basically goes on. They got their money, they built their factory and now everyone who thinks their data is safe, is basically living in the illusion of AES security - just the way the NSA likes them to live.
The very existence of the giant facility means it is a done deal, not a speculative venture to explore a hunch or theory. That's my speculative hunch of a theory, at least.
On another note, I caught TrueCrypt taking a screenshot just after I entered my password at the time of volume creation, after clicking to create the volume. I had a really good anti-spyware tool running that blocks and alerts you to all such things (key loggers, screenshots, etc). Then I saw that the State Dept is a MAJOR donor to TrueCrypt, amongst other govt depts. The aha moment.
Seems clear that Truecrypt takes a picture of the password/key/etc at the screen shot and either stores it out of sight as a type of forensics fingerprint that can be retrieved by the State Dept or any govt agency that wants to access your "protected" data when required?
Also, when you use three methods to encrypt your data (AES, Twofish, Serpent, etc.), it would seem that this would make your data stand out from the pack of other AES data and perhaps expose vulnerabilities that can be directly traced to your TrueCrypt implementation. If I know immediately that your email comes from x.ip.999.99, then I Back Orifice your system into your TrueCrypt application and download the 10kb gif files stored there to learn every password and volume you ever created on each date via TrueCrypt. Here is an old Japanese truism that always seems to hold true:
"The free gift is the most expensive."
It seems that unless we have the capacity to "roll our own" in terms of creating our own implementations of encryption using our own ciphers (something like Google did with the Android App Inventor, which allows even kids to make an app that works), we will never have truly safe data, least of all by depending on publicly available implementations that the govt approves of and supports financially. Looks like everything out there comes with a trojan back door.
What is the most cost-effective means of empowering the public to implement a truly safe "home made" cipher? It seems any tool you supply them will have the inherent risk of trojans, back doors, weak RNG, etc. Is there a fool-proof way an average Joe can remain an anonymous Jon without studying the craft?
The upcoming issue now seems to be the security of our brain. If they can reproduce a film you just watched by tapping your brain (and they have done it apparently), then they can tap your brain to get your password a lot easier than a brute force attack on AES. How do we secure our brains and thoughts from the thought police?
I know, that's another discussion board.
Your post is embarrassing. This applies to everything you say about the security of AES. But let's instead look at this quote, which anyone who ever wrote even one line of code can understand:
"On another note, I caught TrueCrypt taking a screen shot just after entering my password at the time of volume creation after clicking to create the volume. [...] Then I see that the State Dept is a MAJOR donor, amongst other govt depts, of Truecrypt. The Aha moment."
So TrueCrypt stores a screenshot of its own user interface to do what exactly? If it wanted to store the password it could do so.
The Russians use elliptic-curve encryption for the asymmetric key exchange.
I won't waste words, but: Shor's algorithm AND the Riemann hypothesis. Examine the second to see how a shortcut is possible, and know that billions are being poured into the former.
If you want total security, meet & share a one-time pad. Even side-channel attacks have been successful at breaking "quantum cryptography"!
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.