Schneier on Security
A blog covering security and security technology.
October 9, 2008
"New Attack" Against Encrypted Images
In a blatant attempt to get some PR:
In a new paper, Bernd Roellgen of Munich-based encryption outfit PMC Ciphers, explains how it is possible to compare an encrypted backup image file made with almost any commercial encryption program or algorithm to an original that has subsequently changed so that small but telling quantities of data 'leaks'.
Here's the paper. Turns out that if you use a block cipher in Electronic Codebook Mode, identical plaintexts encrypt to identical ciphertexts.
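That property is easy to see in a few lines. The sketch below uses a keyed hash as a stand-in for a real block cipher (it is deterministic and keyed, which is all the demonstration needs; it is not invertible and it is not AES):

```python
# Demonstrate the ECB property: identical plaintext blocks produce
# identical ciphertext blocks, so the structure of the data leaks.
import hashlib

def toy_encrypt_block(key: bytes, block: bytes) -> bytes:
    # Keyed deterministic stand-in for a 128-bit block cipher.
    return hashlib.sha256(key + block).digest()[:16]

def ecb_encrypt(key: bytes, plaintext: bytes) -> bytes:
    assert len(plaintext) % 16 == 0
    return b"".join(toy_encrypt_block(key, plaintext[i:i + 16])
                    for i in range(0, len(plaintext), 16))

key = b"sixteen byte key"
# An "image" with large runs of identical pixels: blocks 0 and 2 match.
plaintext = b"\x00" * 16 + b"\xff" * 16 + b"\x00" * 16
ct = ecb_encrypt(key, plaintext)
print(ct[0:16] == ct[32:48])  # True: identical blocks leak through
print(ct[0:16] == ct[16:32])  # False: differing blocks look unrelated
```

This is exactly why the famous ECB-encrypted Tux penguin is still recognizable.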
Yeah, we already knew that.
And -- ahem -- what is it with that photograph in the paper? Couldn't the researchers have found something a little less adolescent?
For the record, I doghoused PMC Ciphers back in 2003:
PMC Ciphers. The theory description is so filled with pseudo-cryptography that it's funny to read. Hypotheses are presented as conclusions. Current research is misstated or ignored. The first link is a technical paper with four references, three of them written before 1975. Who needs thirty years of cryptographic research when you have polymorphic cipher theory?
EDITED TO ADD (10/9): I didn't realize it, but last year PMC Ciphers responded to my doghousing them. Funny stuff.
EDITED TO ADD (10/10): Three new commenters using dialups at the same German ISP have showed up here to defend the paper. What are the odds?
Posted on October 9, 2008 at 6:44 AM
She's not even that cute. If they were going to be juvenile, at least they could have been good at it.
Interesting way of visualizing the problem, unless it's been done before. But wouldn't this only work on uncompressed bitmap images? JPGs would just show garbage as the number of bits per pixel fluctuates throughout the file and I don't think you can recover that info without decrypting it because only a single bit sometimes tells you what to do next.
I wonder if this would work on the RAW images that a lot of digital photographers use nowadays to get the best quality from their cameras. I don't know anything about the format of those, but I believe they are uncompressed.
Have you noticed that the PMC Ciphers website ( http://www.ciphers.de ) quotes you in their break-our-cipher-challenge?
"Try to do it better than Bruce Schneier. Break Polymorphic Encryption" ... in the right column.
You have to admit it sounds better than "Bruce Schneier did not give a damn about trying to break our Polymorphic Encryption".
"In a blatant attempt to get some PR"
...which will, of course, work beautifully. *sigh*
Only Ben Goldacre can save us now!
Another blog I read (I believe it was Good Math, Bad Math) is doing a series on basic cryptography. It used encrypting an image (one of Tux, not the one used in this paper) to show how bad ECB is for that sort of application. Not nearly as fear-mongering, though.
There were two parts to the paper; the first part was all "OMG! All the major block ciphers are broken if you use them in ECB mode. (Ours are slightly better because of our large block size, but they are still broken)". It was laughably bad.
The second part claimed, or implied, that most of the hard disk encryption tools out there use ECB, at least on the block level. Therefore (it is claimed), they are at risk for this sort of attack.
How true is the latter claim?
What they seem to be saying is that if you use a block cipher in any known mode, including modes with counters and tweaks, the ciphertext itself no longer gives away the image (see p. 6, right side; they don't number their figures), but a differential of the ciphertext against that of an "all zeros" image in the same blocks does.
Of course, this assumes you have access to the ciphertext both when the blocks were all zeros and later, when the same blocks contain some image (hence the backup file thing).
Basically, if you've just created your encrypted volume, then back it up, and then start writing bitmaps to it, someone might be able to deduce a lo-fi version of the image if they have access to both cipher texts.
Realistic attack? eh, you decide.
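The observation above takes only a few lines to sketch. The snapshots here are hypothetical byte strings standing in for two ciphertext images of the same volume, and the 512-byte sector size is an assumption:

```python
# With sector-level encryption that is deterministic per sector, an
# attacker holding two snapshots of the same encrypted volume can see
# exactly which sectors changed, without decrypting anything.
SECTOR = 512

def changed_sectors(snapshot_a: bytes, snapshot_b: bytes):
    assert len(snapshot_a) == len(snapshot_b)
    return [i // SECTOR
            for i in range(0, len(snapshot_a), SECTOR)
            if snapshot_a[i:i + SECTOR] != snapshot_b[i:i + SECTOR]]

# A 4-sector volume; only sector 2 was rewritten between backups,
# so only its ciphertext differs between the two images.
img1 = bytes(4 * SECTOR)
img2 = bytearray(img1)
img2[2 * SECTOR] = 0xAB
print(changed_sectors(img1, bytes(img2)))  # [2]
```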
I wonder if there is any use of this technique beyond recovering graphics.
If I have someone snapshotting my Truecrypt volumes during the night when I'm editing, e.g., source code or other documents during the days, how much could they learn?
"I wonder if this would work on the RAW images that a lot of digital photographers use nowadays to get the best quality from their cameras. I don't know anything about the format of those, but I believe they are uncompressed."
There are still a few makers who don't use compression, but most- including Canon and Nikon, who are the largest part of the market- do. It's mostly a practical matter. The compressed images are obviously better for storage space, and it turns out that's helpful for speed, also. Writing out images is slow enough that even with an underpowered embedded processor it's actually faster to compress the image and then save it than to save it uncompressed. There's little reason not to use lossless compression.
"Couldn't the researchers have found something a little less adolescent?"
Bruce Schneier was never born, he decrypted himself out of the ether. :P
Lena at least is a GOOD, tasteful image. And downright traditional at this point.
The attack will work in LRW, XEX and XTS mode, too. Isn't the man right?
However, the text is in the source code so you can still read it that way or edit the HTML and chop out the junk.
Google indexes the page content but you'll have the same problem with the cached page as the real thing.
Who responds to a 4-year-old 'attack' the way they did? Not that I can actually figure out what they're defending and what arguments they're using to do it...
@ Blaise Pascal,
"The second part claimed, or implied, that most of the hard disk encryption tools out there use ECB, at least on the block level. Therefore (it is claimed), they are at risk for this sort of attack.
How true is the latter claim?"
It depends on if those writing the encryption system know what they are doing or not.
Most hard disk partitions and backups, although in effect a single "file", are never intended to be used as such.
That is, access to them is supposed to be (virtually) random.
However, if you encrypt the "file" in the usual fashion, it is treated as a sequential file, for which random access is not intended.
ECB is known to be very weak and was only included in most specifications by "tradition". The solution is cipher chaining of one form or another.
The problem with cipher chaining is that you have to start at the beginning of the chain ("file") every time you wish to access it.
A simple solution to the problem of random access and encryption is to use ECB BUT either with a key that changes for every position in the "file" or pre-encrypt (whitening) the data with a value that is different for every position in the "file".
Both are fairly secure, provided your mapping from "file" position to key / whitening value is not (easily) predictable and does not repeat within the size of the file.
One method is to use a block cipher to encrypt the file position counter and use that, although there is a question about its security, as it's effectively known plaintext.
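A rough sketch of that whitening idea, with a keyed hash standing in for the block cipher that encrypts the position counter (all names here are illustrative):

```python
# Derive a per-sector whitening value from the sector number and XOR it
# into the data before the main encryption. Identical plaintext sectors
# then present different inputs to the cipher at different positions.
import hashlib

def whitening_value(key: bytes, sector: int, length: int = 16) -> bytes:
    # Stand-in for "encrypt the position counter with a block cipher".
    return hashlib.sha256(key + sector.to_bytes(8, "big")).digest()[:length]

def whiten(key: bytes, sector: int, block: bytes) -> bytes:
    w = whitening_value(key, sector, len(block))
    return bytes(a ^ b for a, b in zip(block, w))

key = b"whitening key"
same = b"\x00" * 16
# The same all-zero block, whitened at two positions, no longer matches:
print(whiten(key, 0, same) != whiten(key, 1, same))  # True
# Whitening is its own inverse (XOR), so decryption can undo it:
print(whiten(key, 0, whiten(key, 0, same)) == same)  # True
```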
Dorothy Denning covered a lot of the issues back in 1980 in a book so it's a fairly well known subject.
However, that said, most software "code cutters" have not read anything other than the compiler manual, which is one of Bruce's favourite bugbears with security-related systems (he even wrote a book about it with Niels Ferguson ;)
The authors of the paper explain an attack on exactly those enhanced cipher modes you talk about.
I'm starting to wonder if people actually read the paper, or just drew a conclusion after hearing 'attack on ECB'.
Then again, maybe I misunderstood the paper altogether.
Either way I'm still not convinced this is a very practical attack, though.
I've got the feeling that Bruce Schneier has doghoused the wrong people.
$5 on Bruce. Giving 3-1.
Should really have just been a paper on new filters for GIMP.
I have a strange feeling I will soon be seeing ECB-mode art in the halls of corporate offices.
In their response to Bruce's doghousing, PMC Ciphers said, "By the way: We offer our disk encryption product “TurboCrypt” on http://www.turbocrypt.com in two versions: AES and PMC. Both have the same price.
Guess how many AES versions and how many PMC versions are downloaded?
80% PMC vs. 20% AES."
Well, since PMC is so much more popular than AES, it much be so much better. And, since there is a woman with a lot of cleavage showing on www.turbocrypt.com, their product must be better than others.
I said, "... it much be so much better."
I meant to say, "... it must be so much better."
Incidentally, to try and win their $10,000 prize one must download the PMC version of TurboCrypt, and it is the only version linked from their challenge page.
Their response is amusing. If I understand it correctly, they are suggesting that their algorithm (which appears to be at least as insecure as the weakest of the "worker ciphers" they use) is somehow made to be magically better than the best of the "worker ciphers" that they use.
I don't think that they have the skills at decrypting messages that I had as a pre-teen, never mind those that I have now (and unlike them, I don't bill myself as a cryptographic expert).
That looks pretty easy to get around, unless they somehow prevent access to video memory - which, according to the news article, they don't. It seems like a tiny variant of a character recognition problem, where they give more than enough information to solve it.
As I understood it, the argument they were using is that even if one of the worker ciphers was broken, the other three would be unbroken and thus the integrity of your message was assured. The part that I found confusing was that part of his proposed polymorphic cipher system simply used a normal 256 bit key with 2 bits added to choose an encryption algorithm. So if, by some crypto breakthrough, any one of the "worker ciphers" became vulnerable to a key recovery attack, it would be trivial to reuse said key to unlock all blocks encrypted by the other "worker ciphers" by brute-forcing 2 bits for each block...
Also, while painful I suggest reading http://www.ciphers.de/eng/content/Backround-Info/... . It makes me want to buy their product right away. Apparently the Polymorphic Cipher engine is so complicated that it requires a whole microprocessor and lots of memory! Key setup takes 100,000 times longer than AES! Sign me up!
They're also privy to some sort of time travel, allowing their encryption tool to be "Unchallenged since almost 10 years " while their challenge to break it started in 2004. I'm telling you, it's the wave of the future.
They seem to have missed one of the few basic rules of cryptography.
A cipher that isn't broken can only claim a bit of security if somebody actually tried to break it. For every attempt, it becomes a tiny bit more secure.
That's why AES is secure: more people tried to break it than any other cipher.
Whether Bruce Schneier tries to attack a cipher, is not the point; he's just one of these people :-)
Pretty courageous to publish this cipher of ciphers. I didn't know this company. Very very absorbing.
They provide a recipe for creating customized ciphers. Start reading those papers.
The URL that you are attempting to access is a potential security risk. Trend Micro OfficeScan has blocked this URL in keeping with network security policy.
Solution: Report the URL to your OfficeScan administrator if you think it is safe to access.
Now that's funny.
Times like these make me wish I had administrative access to the blog log, so that I could see the source IP of some of the posters (cough cough).
never heard of "Proxy Avoidance"?
IP netk00ks are still out there I guess
@FU could you at least _try_ not to be so transparent in your role as pseudonymous defender? You might as well have posted under the name "ciphers.de".
troll fail. (perhaps aggravated by posting in a non-native tongue, but fail nonetheless.)
Sure, but failure to take basic "precautions" is astonishingly common.
Case in point, I wrote a blog post a while back about a cooling solution for a server room... and an engineer for a different company (competing with the solution I was blogging about) wrote a long comment about this other solution I should look at (that was developed by his company, which he failed to disclose). He posted the comment with a source IP that came from the company, using his real name.
People can be remarkably silly, sometimes.
@Paul: Their claim is that it's been unbroken since 1999, "at which time it was immediately seized by the German Government and classified as a State Secret." (Source: http://www.ciphers.de/eng/content/Company/... )
@FU: Perhaps you are actually Bernd Roellgen?
This is retarded.
AES encrypts in blocks: 128-bit blocks (16 bytes). With any good encryption algorithm, the key and internal state determine the effective transformation; AES runs 10 to 14 rounds depending on key size, and in each round it changes its internal state. Each round mixes the bits around too.
So let's say I have a pattern:
10110110 01111000 11110000 00001111
Now let's say I have another pattern:
10110110 01111010 11110000 00001111
One bit difference. The output might look something like 2ADF1920 and DE1A2A08 for the same key. If the ciphertext for two similar blocks of plaintext looked similar itself it wouldn't be very hard to break the encryption now would it?!
If the encrypted text reveals anything about the key or the underlying plaintext, you're doing it wrong.
The paper even points out that volumes can use the block number during the encryption; this can be used to generate a new key, or alter the data somehow, or both, as stated. This means, yes, your encryption process can take smooth, uniform data and produce garbage. What's even better, a working encryption algorithm will take a swath of 0x00 and 0x01 streams, modified in some way by the block index (say, XOR'd), and produce two streams of garbage that even together don't reveal any underlying pattern (for example, they won't reveal that each is a stream of the same 8 bits repeated).
Their paper demonstrably shows their attack "works," with source code. Has anyone reviewed the source, tried to carry out the same examples, and reproduced their results? If it does work, why does it work?
The only thing that the paper really shows is that you should not use a one-time pad (keystream) twice. As a layman, I would implement disk encryption this way: use AES. Divide the disk into blocks of at most 2^8 * 4 bytes. Give each block a unique number. Encrypt each block in counter mode, using as the starting counter the block number (e.g., a 32-bit number), some salt (e.g., 88 bits), and the remaining 8 bits as the counter. Change the salt randomly each time you write new data to the disk. That should be enough for 20×10^12 write cycles.
But I am just a layman and not a cryptographer, so I could be wrong about everything.
Attempting to sow paranoia about AES by demonstrating that they do not understand the domain-specific usage of "Sensitive But Unclassified" does not serve their cause well.
They also do themselves a disservice by ignoring that 256-bit AES is considered strong enough to be used for TOP SECRET documents (of course, with the caveat that the implementation be NSA approved). However, reading over their copy it's obvious they are not trying to sell to people who are going to evaluate the product on cryptographic merits. It's a fear sell.
The "attack" described in the paper works against *all* modes, even the most secure ones: IEEE P1619 XTS, LRW, wide-block encryption CMC and EME, BitLocker's AES-CBC + Elephant diffuser, etc. Not just against ECB.
Nobody seems to have read the paper in its entirety. But there is no need to: it is pointless and stupid.
All it explains is that an attacker who has access to 2 different copies of the same encrypted disk image (such as 2 backup copies that were made at 2 different points in time) can see which data blocks changed between the 2. Of course this fact is obvious and no crypto mode can prevent such an "attack" (now you, reader, understand why I use double quotes).
The conclusion then explains that the way they "fixed the flaw" was to add a feature in their software to force the data to be reencrypted using a new key when making a copy of an encrypted image. Of course this doesn't prevent an attacker from, say imaging an encrypted disk at time t0, and re-image it later to see, again, which blocks changed.
Also, the example attack used in the paper that allows an attacker to figure out the outline of bitmaps with uniform colors is VERY misleading because in practice a data block is a sector; a single plaintext bit that is flipped causes an entire ciphertext sector to change (this is true for XTS, LRW, Bitlocker, etc). Therefore carrying out this "attack" would only allow the attacker to figure out the outline of bitmaps with a resolution of 512 bytes per pixel.
So, basically, they have shown that using a strong cypher in a braindead way on unrealistic data is a bad idea?
@mrb: any cypher will be vulnerable to the multiple-snapshot attack to determine which parts have changed, unless the whole file or volume is re-encrypted whenever something has changed. Of course, that attacker wouldn't know what the changed part contains or contained in the past.
This problem could be overcome, I think, by re-encrypting the file using a randomly generated salt, appended to each block, which can be stored in plaintext. Because of the random salt, every single block will change, and the value of the salt is worthless to the attacker.
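That per-block salt idea might look like the sketch below. The XOR-with-keyed-hash "cipher" is only a stand-in for a real cipher; the point is solely the role of the salt:

```python
# Store a fresh random salt alongside each block and mix it into the
# encryption, so re-encrypting even unchanged data changes every
# ciphertext block, and a snapshot diff reveals nothing.
import hashlib
import os

def encrypt_block(key: bytes, salt: bytes, block: bytes) -> bytes:
    # XOR with a keyed, salted pad; XOR again with the same pad decrypts.
    pad = hashlib.sha256(key + salt).digest()[:len(block)]
    return bytes(a ^ b for a, b in zip(block, pad))

key = b"volume key"
block = b"unchanged data!!"
salt1, salt2 = os.urandom(16), os.urandom(16)
c1 = salt1 + encrypt_block(key, salt1, block)   # salt stored in plaintext
c2 = salt2 + encrypt_block(key, salt2, block)
# Same plaintext, two encryptions: the ciphertexts differ, so an attacker
# comparing two backups cannot tell whether this block changed.
print(c1 != c2)  # True (with overwhelming probability)
```

The cost, of course, is extra storage per block and a full rewrite of the volume on every backup.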
I forgot to mention, that this is unlikely to be a problem for real photographs, because there is enough noise in the pixel data to prevent large identical blocks.
Commonly used disk encryption software (e.g. Truecrypt in XTS mode) encrypts files in 16 byte blocks, so you can pinpoint where change has occurred at a fairly fine level of granularity. Using a tweaked block cipher or PRP construction that supports a large block size will leak less information.
None of this is relevant if you are just looking to protect yourself against data compromises when your laptop gets stolen :-)
Really pathetic. Schneier is Chief Security Technology Officer of BT. He should know better.
He doesn't understand how commonly used disk encryption software works. Right?
@Sparky: I don't think you understood my post :) You validate 100% of what I explained by paraphrasing me !
"Times like these make me wish I had administrative access to the blog log, so that I could see the source IP of some of the posters (cough cough)."
Do you really need it? I bet you can guess which three commenters on this thread come from dialups on the same ISP, without being told.
For making such raving proclamations about security... they should really check for SQL injections on their login page, oh and everywhere else.
When you can infiltrate a server of this type on your very first attempt, chances are they aren't selling anything but snake oil.
It's not just ECB mode, this affects CTR as well. The simplest way to think about it is that encrypting the same sector twice amounts to reusing an initialization vector.
Which was already a well-known problem. And one that doesn't apply to the most common threat, namely that of someone stealing your laptop.
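The IV-reuse point is pure algebra and easy to demonstrate; the keystream below is an arbitrary stand-in for whatever CTR would generate for that sector:

```python
# Encrypting two different plaintexts for the same sector with the same
# CTR keystream lets an attacker XOR the two ciphertexts and obtain the
# XOR of the plaintexts -- no key needed. The keystream cancels out.
import hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = hashlib.sha256(b"key || sector IV").digest()[:16]  # reused!
p1 = b"old sector data."
p2 = b"new sector data."
c1, c2 = xor(p1, keystream), xor(p2, keystream)
print(xor(c1, c2) == xor(p1, p2))  # True: plaintext relationship leaks
```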
@ mrb, Sparky,
mrb :- "...any cypher will be vulnerable to the multiple-snapshot attack to determine which parts have changed, unless the whole file or volume is re-encrypted whenever something has changed."
Which is correct only if by "file" you are referring to the whole backup, not individual user files, and, importantly, only if a different key is used each time.
However you go on to say,
mrb :- "Of course, that attacker wouldn't know what the changed part contains or contained in the past."
You are making a very dangerous assumption there which is that the change has no context to other events.
First off how about the fact that a very large number of organisations use a "third party" offsite storage facility for their backup tapes...
The tape "owner" assumption is that as the backup is encrypted then it does not matter what the "third party" does with the tape provided it's available if and when required by the owner organisation.
They do not think, as a lot of backup encryption system designers also appear not to, about either "attacks in depth" or about how "Traffic analysis" methods might be applied to the "deltas" of the backups when mapped onto a time line of known events about the target "owner".
Secondly think about SAN and NAS systems with their "snap-shot" modes. They provide a very fine grained set of deltas so even if the volume is encrypted you will be able to roll backwards and forwards easily almost on a file by file basis. And due to efficiency only the key used for the individual file blocks can be changed.
A lot of "full" backup tapes start with almost "time invariant" system files (which are likely to be known quite accurately to an attacker), so a simple examination would reveal if the same encryption key had been used from one backup to another (very bad, but it happens).
The backups then tend to follow exactly the same file system walk each time (so the same files tend to be in the same place each time).
Worse, the information about which individual file is on which backup tape, and what it relates to, is often recorded in plaintext as part of the tape header etc. to aid recovery...
Even without useful plaintext, knowing such things as when the company starts its financial year end might easily reveal which part of the tape(s) covers the finance dept server, etc.
A sudden change in the areas known to be used by marketing would be an early indicator of a new campaign and possibly a new product. Finding this area would be a simple case of rolling backwards from a previous campaign.
With each delta more and more information about what is where on the backups etc will leak from simple "traffic analysis".
Once the various sections are mapped out, just monitoring the size of changes is going to provide useful intel.
So much so that it might reduce the information required for a direct attack to little more than knowing the text of a letter and which person typed it up; having a delta or snapshot from the night before and the night after might be all you need to start a known-plaintext attack (often possible because, for speed, the encryption actually used is effectively a stream system in older solutions).
I could go on, but the simple fact is that a lot of the systems I have looked at are vulnerable in one way or another to "traffic analysis" on their backups, and most definitely on SAN/NAS snapshots.
Security of information on storage systems is a very very hard problem when you start looking at it from a slightly higher level than just a one off image of data.
Things like metadata and invariant scripts aid the attacker, and applying traffic analysis methods elicits quite a lot of information.
Doing daft things like using the same key to back up internal-only servers along with others that hold public data (mail servers etc.) aids attackers by enabling chosen-plaintext attacks.
Oh, and if you examine your backup tapes and discover that an attack in depth is possible due to key reuse, or that the system effectively uses stream encryption, then go and find another solution pronto...
I think using a filesystem that supports transparent compression would solve this problem.
I'm sorry, but by Bernd Roellgen's own admission, his very own cipher is insecure by his arguments against AES. He mentions the German govt was going to classify it a state secret, but obviously they have changed their minds and figured his cipher was not worth trying to hide. Therefore, according to his logic, it is insecure.
Mr Roellgen's idea of using a set of ciphers selected by the key is basically sound, if only he did not reuse the key for different ciphers. That is actually pretty stupid, and the interaction of all those ciphers may compromise their security.
If one uses multiple ciphers, each must have an independent Initialization Vector.
Let me throw Mr Roellgen a bone: why not use AES as the S-box of DES? That would probably be in line with that funny cipher-mixing :-)
...that would of course no longer be DES, strictly speaking.
Errata: "If one uses multiple ciphers, each must have an independent Sub-Key and IV"
One of the systems I work with has encryption needs that are exactly what ECB was intended for.
But does it use ECB? No. It uses Block Chaining "because that is more secure" -- and re-initializes the chain on every call because ECB is really what the application needs....
"Being able to choose from 2^512 ciphers for a 512 bit encryption algorithm has the advantage that it renders all known attacks that require a static system inapplicable."
If a cipher algorithm is chosen in secret from a set of 2^512 (approximately 13 * 10^153) algorithms, and if it is estimated that there exist 2^6 = 64 encryption algorithms of sufficient strength to resist decryption by an attacker for the foreseeable future, then the probability that a sufficiently strong algorithm will be used is 1 / 2^506. Hell is likely to freeze over before a strong algorithm is chosen, so attackers will most likely not have to deal with anything more complex than Caesar or Atbash ciphers.
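The arithmetic above checks out exactly:

```python
# If 2^6 of the 2^512 selectable algorithms are strong, the chance of
# landing on a strong one is 2^6 / 2^512 = 1 / 2^506.
from fractions import Fraction

p_strong = Fraction(2**6, 2**512)
print(p_strong == Fraction(1, 2**506))  # True
```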
When PMC talked only of using four ciphers rather than 2^512, it was more impressive. The four that they chose are all reasonably strong. But it only guarantees that the product will be at least as strong as the weakest cipher of the four. A much better approach would be to design the system in a way that makes it provably at least as strong as the strongest of the four.
I wonder why everyone seems to be trying to discredit PMC while the Truecrypt developers are hiding completely from the public.
They never respond and they never comment about anything that happens on their forum site. They plead for money, but refuse to comment even if someone offers money in exchange for some personal contact. Seriously guys?
A little research about truecrypt and their servers should provide you with some very interesting questions about why AES is of course the best encryption for anyone living in a country where the government wants to read and supervise everything that anyone sends over the internet.
Oh, I forgot. Of course AES is still absolutely safe in 2013. What else :-) No proven attack has ever happened, and the latest supercomputers are probably only a myth. And the sky is pink. And
Mr. Schneier seems to defend his AES like a lion.
And hopefully for him, no one asks the wrong... or should I say the right questions...
@ Michael C.,
Mr. Schneier seems to defend his AES like a lion.
That statement is factually incorrect, and easily verifiable as such, which puts any other comment you've made into considerable doubt.
Ah, but of course it does that. Did not expect anything else.
Then let me put it this way:
A guy who develops new stuff is trying to communicate with Mr. Schneier. He's trying to prove or even disprove his own claims and ideas. Maybe an enthusiast through and through. Maybe he even makes mistakes, and has to improve what he created. But he's trying and making contact.
But instead of having lots of fun experimenting, analyzing, verifying and attacking the new software, while communicating happily with the developer, Mr. Schneier is simply telling us the old snake oil story, doing nothing to check out what it's all about.
No open discussion, no experimenting, nothing. Very professional. He did not even consider looking at the source code.
And of course someone will find my own comments again very unprofessional.
Just a simple question to think about: Who's to gain and who's to lose if someone comes up with a new and really strong cipher? Who would not in the least be amused?
Interesting question, isn't it?
@ Michael C.,
Interesting question, isn't it?
Actually, no, it's not.
Mr Schneier's time is his own to do with as he wishes, and presumably spend some of it pursuing an income, and some in personal interests and home life.
If this "guy who develops new stuff" wanted to be taken seriously he would do so in a different way. One of which is the "Open Standards Route" where he could submit his design as a candidate for a standards process, as the designers of what became AES did.
History has shown time and time again two basic facts:
1, Closed systems have weaknesses not just in the theoretical side but the practical implementation side as well.
2, There is no money to be made directly from Crypto algorithm design via closed systems and patents.
However, those who freely enter such "crypto contests" do, if their ideas are any good, develop a reputation, and from this they can to a small extent capitalize on their abilities. The problem, of course, is if their ideas lack merit or have obvious flaws. After all, the competition rules are that you have to be able to defend your ideas against all comers and thereby convince a majority of people that your ideas are sound.
If you go and look at the AES candidates, you will see that the one that made it through the process to claim the title was considered an outsider in the first round; however, it survived the attacks where other candidates did not (Mr Schneier's among them).
The fact that you are trying to spin a conspiracy theory out of mere wisps of innuendo about "who's to gain and who's to lose if someone comes up with a new and really strong cipher" does neither you nor the "guy who develops new stuff" any credit whatsoever.
In fact, it calls into question your independence from the "guy who develops new stuff".
My suggestion is that you and the "guy who develops new stuff" actually spend a little time finding out just how the crypto algorithm side of both business and academia actually works.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.