UK Police and Encryption

From The Guardian:

Police last night told Tony Blair that they need sweeping new powers to counter the terrorist threat, including the right to detain a suspect for up to three months without charge instead of the current 14 days….

They also want to make it a criminal offence for suspects to refuse to cooperate in giving the police full access to computer files by refusing to disclose their encryption keys.

On Channel 4 News today, Sir Ian Blair was asked why the police wanted to extend the time they could hold someone without charges from 14 days to 3 months. Part of his answer was that they sometimes needed to access encrypted computer files and 14 days was not enough time for them to break the encryption.

There’s something fishy going on here.

It’s certainly possible that password-guessing programs are more successful with three months to guess. But the Regulation of Investigatory Powers (RIP) Act, which went into effect in 2000, already allows the police to jail people who don’t surrender encryption keys:

If intercepted communications are encrypted (encoded and made secret), the act will force the individual to surrender the keys (pin numbers which allow users to decipher encoded data), on pain of jail sentences of up to two years.

Posted on July 27, 2005 at 3:00 PM • 56 Comments

Comments

Chris July 27, 2005 3:25 PM

on pain of jail sentences of up to two years.

Well, that’s an easy call for someone planning a bombing — you take the two years (of which you’re unlikely to serve more than one) and be happy that you didn’t get the life sentence you would have if you’d given the encryption key.

  • C.

Q July 27, 2005 3:43 PM

But they will probably break the encryption within two years and charge the person after he leaves jail.

Nick July 27, 2005 3:44 PM

It is also possible that law enforcement would prefer to have 3 months to crack an encryption scheme, rather than put the question to the suspect and have him bound over for trial … for the very reasons Chris mentioned above.

Juergen Brendel July 27, 2005 3:50 PM

This and the recent discussion about encrypted VoIP is interesting. As Phil Zimmermann pointed out, it is not necessary to exchange keys ahead of time for encrypted VoIP. My guess is that this is because VoIP data is transient. This means that something like a DH (Diffie-Hellman) key exchange can be used to establish a temporary key for the session, without the need for any pre-existing keys, or to store that key anywhere. It does not provide authentication, but it does provide encryption.

Now even if the police intercepted such an encrypted VoIP call, they could not go to either of the parties involved and demand surrender of the key, since no key is left anywhere after the call is finished. Even a complete packet capture of the entire conversation will not allow them to reconstruct the key (with reasonable effort). And the VoIP software has no need to store the transient keys established for the various sessions. So there is nothing to find on the hard drives, or even in the memory of the computers that were involved in the call.
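The ephemeral exchange described here can be sketched in a few lines of Python. This is a toy illustration only: it assumes the 1024-bit MODP group from RFC 2409 (Oakley group 2) as its parameter set, and a real VoIP stack would add authentication and a proper key-derivation step.

```python
import hashlib
import secrets

# 1024-bit MODP prime from RFC 2409 (Oakley group 2); generator 2.
# (Assumed parameter set for illustration; modern systems use larger groups.)
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE65381"
    "FFFFFFFFFFFFFFFF", 16)
G = 2

def ephemeral_keypair():
    # Random exponent held only in memory for the duration of the call.
    priv = secrets.randbelow(P - 2) + 1
    pub = pow(G, priv, P)
    return priv, pub

# Alice and Bob each generate a throwaway keypair for this one session.
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# Each side combines its own private exponent with the other's public value.
a_secret = pow(b_pub, a_priv, P)
b_secret = pow(a_pub, b_priv, P)
assert a_secret == b_secret  # both sides now share the same secret

# Derive a symmetric session key, then forget every exponent and secret.
session_key = hashlib.sha256(
    a_secret.to_bytes((a_secret.bit_length() + 7) // 8, "big")).digest()
del a_priv, b_priv, a_secret, b_secret
```

Once the exponents are discarded, neither party holds anything that could be surrendered to reconstruct `session_key`.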

I wonder what the police would say to this?

Juergen Brendel
CTO
Esphion Ltd.

Concerned July 27, 2005 3:58 PM

It seems to me that the UK and the US, these so-called “free” countries, are becoming more and more of a regulated police state. With cameras everywhere and civil liberties being trampled on more and more in the name of preventing terrorism, what’s next? To quote a Megadeth song: “Next thing you know, they’ll take my thoughts away.” It’s getting scary out there, and the worst part is that the majority of the general public will gladly lay aside their freedoms one by one in the name of “security” – which is exactly why governments can get away with this.

Bruce Schneier July 27, 2005 4:02 PM

Likely, the encrypted VoIP phone will have perfect forward secrecy. It won’t be possible to give the police the keys after the phone call is over.

Tommy Pirbos July 27, 2005 4:07 PM

“the majority of the general public will gladly lay aside their freedoms one by one in the name of “security””

Yes, they will and do, and that’s unfortunate. The question is what kind of society we get in time if we continue walking down this road…

John Kelsey July 27, 2005 4:09 PM

Perhaps with three months to question you, they more often manage to get your password out of you. This probably depends a lot on what questioning tactics they use in practice for terrorism suspects–which are probably different from what gets used for, say, armed robbery suspects.

–John

Pat Cahalan July 27, 2005 4:53 PM

@ Bruce

It won’t be possible to give the police the keys after the phone call is over.

Combine that with a tech-unsavvy judge and a wily prosecutor and you have life in jail without a trial (or at least until umpteen years of appeals have passed.) Excellent.

Mark El-Wakil July 27, 2005 5:34 PM

@ Juergen Brendel

Sure, but if VoIP is using something like Diffie-Hellman to negotiate secret keys, that’s susceptible to man-in-the-middle attacks.

I imagine one of the things that’s going to happen if this becomes popular is wiretap devices that will intercept key-negotiation traffic and compromise the connection.

Mormegil July 27, 2005 5:50 PM

It won’t be possible to give the police the keys after the phone call is over.

I would call this an optimist’s view. A simple law would be enacted, stating that you must store the keys used for the encrypted communication.

You know that — when privacy is criminal, …

Jacob Appelbaum July 27, 2005 7:10 PM

@Bruce

I’ve been thinking about how to subvert laws like this since I first heard about the RIP act. It seems to me that having an access device that allows you to get data decrypted (and that auto-expires) is better than having the keys yourself.

Example:
Imagine a system with Alice and her encrypted hard drive.

Alice has an encrypted file that, when properly decrypted, contains the key to her encrypted hard drive.

The hard drive is encrypted with the contents of the encrypted file. The encrypted file is encrypted to a secret key that’s kept offsite.

She turns on her laptop and is prompted for a password.

The password decrypts something like an SSH private key that then allows her computer to log into the remote system holding the encrypted file’s corresponding secret key. (This step could easily be a password or an SSH key.)

She never sees the actual decryption key on the remote system. She does not know it. She can only send the encrypted file and get the result.

Using the result she can decrypt her encrypted hard drive.

This is similar to something like FileVault in OS X with the exception of a much better key abstraction.

You can do this with loop-aes in linux and a bit of shell scripting.

In any case, the remote system is what makes this interesting from a legal standpoint. It’s an auto-destructing system.

If you do not log in to the system within a given time frame, you’re locked out. Your secret key is automatically destroyed by the system after that time frame. Alice cannot unlock her drive anymore. It’s impossible.

This doesn’t prevent Alice from caching the key herself. The idea is that she cannot be forced to give up the key after a given threshold.

She can however comply with the law. She can give up the file containing the key to decrypt the drive and she can give up the access key. She can even give them her password.

As long as she invokes her right to remain silent, she’s pretty much in the clear.

She might face a contempt-of-court or obstruction-of-justice charge.

If she doesn’t have the right to remain silent (i.e., a non-free country), she can spill the beans as soon as they actually figure it out. If they ask, they’ll get the info they need; she just needs a really low threshold to ensure her remote server deletes it.

Split the key amongst multiple servers in multiple countries (with redundant splits to account for downtime) and you’d have something really interesting. Another benefit is that the normally weak password (as in the case of FileVault, AFAIK) isn’t even tied into the process except for decrypting the SSH key. So no cluster is going to be breaking it anytime soon. Hopefully, right 😉 ?
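A minimal model of the scheme described above can be sketched as follows. This is a single-process toy: the class and function names are hypothetical, one process stands in for both the laptop and the remote key server, and XOR stands in for real key wrapping purely to keep the sketch short. A real build would use SSH or TLS transport and a proper key-wrap algorithm.

```python
import os
import time

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # Stand-in for real key wrapping (e.g. AES key wrap); toy only.
    return bytes(x ^ y for x, y in zip(a, b))

class ExpiringKeyServer:
    """Holds the key-unwrapping secret; destroys it after a deadline."""
    def __init__(self, ttl_seconds: float):
        self.secret = os.urandom(32)
        self.expires = time.time() + ttl_seconds

    def unwrap(self, wrapped_blob: bytes) -> bytes:
        if self.secret is None or time.time() > self.expires:
            self.secret = None  # destroyed; the drive key is now unrecoverable
            raise PermissionError("key expired and destroyed")
        return xor_bytes(wrapped_blob, self.secret)

# Setup: the disk key is wrapped with the server's secret; only the
# wrapped blob is ever stored on the laptop, never the secret itself.
server = ExpiringKeyServer(ttl_seconds=24 * 3600)
disk_key = os.urandom(32)
wrapped = xor_bytes(disk_key, server.secret)

# Normal login: send the wrapped blob, get the disk key back.
# Alice never sees the server-side secret, only the unwrapped result.
assert server.unwrap(wrapped) == disk_key
```

After the deadline, `unwrap` fails permanently: even a bit-for-bit image of the laptop's drive is useless, because the only copy of the unwrapping secret no longer exists.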

I’m actually working on this project right now if anyone is actually interested in it.

I asked a lawyer at a certain electronic legal center, and they seemed to think it was on solid American legal ground. They were unsure about the UK, but they thought it wasn’t so outlandish. The main issue they saw was that someone would have to be crazy to use it, because losing all their data would be terribly easy in this setup.

I agreed with them. It’s a trade off, right?

Juergen Brendel July 27, 2005 7:10 PM

@ Mormegil

I would call this an optimist’s view. A simple law would
be enacted, stating that you must store the keys used
for the encrypted communication.

But this is not under the user’s control if the software does this on its own. You then basically have to outlaw software that does not store the keys.

We then get into that whole issue with outlawing certain encryption algorithms, which is just as effective as trying to outlaw basic maths.

Juergen
CTO
Esphion Ltd.
Blog: http://esphion.blogs.com

Juergen Brendel July 27, 2005 7:22 PM

@ Mark El-Wakil

Sure, but if VoIP is using something like Diffie Hellman
to negociate secret keys, that’s suseptible to
man-in-the-middle attacks.

Yes, I acknowledged that in my original posting, where I mentioned that this would not provide authentication. However, if you want to, you should be able to add authentication to the system, which might then require distribution of public/private keys. You could sign the packets, or the connection establishment, or some such thing.

But you can still use DH-exchange to establish the encryption key. So, all you could give up to law-enforcement, if needed, would be the authentication/signing key. You still can’t give up the encryption key, since you don’t have it.

I imagine one of the things that’s going to happen
if this becomes popular are wiretap devices that
will intercept key negociation traffic, and
compromise the connection.

That will be relatively tricky, since you need to go inline. Wire-tapping (as in a ‘fiber tap’) would not allow you to do this, since you can only passively listen. You need to go inline, because you need to establish your own keys. When such a law-enforcement inline device needs to be installed, the ISPs or carriers will complain even more, although it might not help them much.

To be inline, you then either have to be on all the possible links on which you could see the traffic, or you need to tell some router to redirect traffic through your device. None of which is pretty. Still doable, but it gets ugly.

Juergen
CTO
Esphion Ltd.
Blog: http://esphion.blogs.com

peachpuff July 27, 2005 8:08 PM

If they think someone’s a terrorist and want to keep them in jail, they can just charge the person with plotting terrorism. The three months without charges seems aimed at keeping the imprisonment a secret, rather than prolonging it.

TimH July 27, 2005 8:36 PM

@Jacob Appelbaum
You forget that the first police (or whoever) action is to remove and duplicate the hard drive. You also need a mechanism to destroy the HDD contents before this happens.

jammit July 27, 2005 9:21 PM

How about re-flashing the drive firmware with my own program? It doesn’t ask for a password when it initializes, but you “stop” it before it starts to access the first track and type in a password. If the password is incorrect or not entered, it “boots” anyway, but runs the built in program to start deleting data. How big of a program can you put into the firmware of a HD?

TimH July 27, 2005 10:42 PM

@jammit
Again, doesn’t prevent the scenario of the HDD being imaged on a different machine. I suspect that no investigator will simply boot up a suspect’s machine, but will remove the drives and analyse them as data.

Jacob Appelbaum July 27, 2005 10:49 PM

@TimH

“You forget that the first police (or whoever) action is to remove and duplicate the hard drive. You also need a mechanism to destroy the HDD contents before this happens.”

No, I didn’t. Re-read my statement. The idea is that even if they copy the drive, the user’s data-decryption key is destroyed after a length of time. Encrypted data is as good as destroyed data as long as it’s in an unrecoverable state. It’s not like erasing the data is actually going to provably remove it anyway. Read the Peter Gutmann paper on the subject of secure erasure. I’d actually go as far as to say that you should encrypt the data before it touches the platters and then erase it.

However, even if you can’t erase it, as long as your implementation isn’t flawed, you should be good to go.

My setup also takes the user out of password-related weaknesses, so the key can be a very long and random string.

The people making the attack can copy the disk all they like; first they need to crack the authentication token, and second they need to use it within a given time frame.

It’s also possible to set up something of a distress signal that, if not typed, instantly destroys the key.

You can’t give the key up and you haven’t ever seen it.

Using something like Tor, unless your machine is already owned, they’ll have trouble figuring out what you’re doing on the network.

How would an image of the disk help the police after a destruction threshold of 24 hours had passed?

They can’t login to the remote server to decrypt the local key file.

They’re better off attacking the actual disk crypto implementation. But you can’t help them there as you’ve never known any of those keys. You’ve never even seen them.

Davi Ottenheimer July 27, 2005 11:00 PM

I noted that Sir Ian Blair also said in the interview that “police and the security services do not stop terrorists, communities do”, which seems completely at odds with everything else he says including the comment about “intelligence gaps” leading to the bombings and the fact that the police have foiled numerous other attempts.

15:25 is where you can find the bit about the 3 months’ detention. I’ve tried to transcribe the relevant part below, since I think it deserves proper context. The end might be the most controversial:

“Let me explain that…I’ve never spoken about this before, um, but there’s a simple reason here. Fourteen days does not allow for some of the things that we have to do in these kind of investigations. You’ve got encrypted computers. It takes a long time. The kind of material that we are recovering, and explosives. I mean it took us seven days to get into the house in Leeds ’cause it was so volatile. We have to travel abroad. We have to also give, you know, everybody rest. We have to have lots of time for prayer. We have all sorts of things in that way. Fourteen days is a very short period of time to run one of these kind of investigations. I know because the fourteen day limit was in place for one of the, you know, people, one of the sets of people, who currently awaiting trial. And, yeah, it was very close and we were going to have to release them, we didn’t charge them. And it’s a very difficult situation. I’m interested in a debate. Certainly I am not saying go from fourteen days to three months. Let’s go fourteen days, and then another fourteen days, and then another fourteen days. That’s the kind of approach…[interviewer interrupts to clarify whether he will “so maybe settle for 28 at this stage”]…No, no, no, I’m not saying that. Maybe there’s a maximum period of three months but each fourteen days is itself, you know, complete. And then we ask for another fourteen days with judicial oversight. That’s the way we need to do it. We’ve just got to be Twenty-First Century about all this. That’s why we want to make refusing to reveal an encryption key a significant offense carrying a very significant amount of penalty.”

Davi Ottenheimer July 27, 2005 11:32 PM

After re-playing the above speech about a hundred times to try and get the words right, some things indeed stand out as odd:

  • The man in charge of the Met says that extra time is needed to detain suspects in an investigation in order to “give everybody rest” and have “lots of time for prayer”.

  • Seven days to enter a house filled with explosives? How could that not be sufficient cause to place charges in and of itself?

  • Time to travel abroad? I just watched the whole interview from a laptop over wi-fi. I wonder what kind of travel abroad he is referring to that could not be done via some other method. I mean, how could travel be a reasonable way to discover something supposedly more important than the fact that a suspect is detained without charge? I say bring the Concorde back.

  • Isn’t there judicial oversight now? Why not start with a rolling fourteen that requires oversight, perhaps even with cooperation of human rights groups?

  • It sounds like most of this is related to the Met being able to build a case, especially where encrypted information has been the key (pun not intended). At first I was completely taken aback by statements about rest and prayer, the need for travel time, etc., but when Blair closed with his blanket statement about “significant” penalties for refusing to reveal keys…that’s just plain scary. It indicates that they cannot solve these terror crimes without the decrypted data, which raises the question about police v. community solutions again and the fading right of privacy.

Arik July 28, 2005 12:44 AM

There is already a working solution to this problem: a steganographic filesystem, such as StegFS (http://stegfs.sourceforge.net), lets you have n layers of plausible deniability.

Say you have a StegFS drive with files. You can supply a key which will reveal some but not all of the files. The unrevealed files are invisible, and there is no way to prove that there are more files than you have already revealed. You can have as many levels as you want; each is hidden unless you know the key to the level above.

Jacob Appelbaum July 28, 2005 2:05 AM

StegFS is an interesting idea for a project (according to sf.net, it’s hosting nothing and is in the planning stage). I’ve actually run across it a number of times. The only code I’ve seen is for Linux 2.0. It’s what I would call ‘out of date’, and it’s not a very portable solution as far as I can tell.

StegFS solves a different problem and I think it’s a really hard problem to solve. It’s trivial to find that there is something on the drive. Often, that’s enough to get you into trouble.

The basic idea is that crypto is legal to use (even in the UK) and you have the right to remain silent (here in the USA, that’s supposedly the case). It doesn’t matter if they can detect that you’re using crypto. By the time they’ve figured it out, your safe time has passed and you can give them everything.

You’re in compliance with the law but the law can’t access your data.

Jacob Appelbaum July 28, 2005 2:09 AM

@timH && jammit “Again, doesn’t prevent the scenario of the HDD being imaged on a different machine. I suspect that no investigator will simply boot up a suspect’s machine, but will remove the drives and analyse them as data”

Well, booting the machine and supplying power to a drive are two different things. I suspect that if the drive is powered up with modified firmware, you could perhaps sneak some sort of damage into the drive before someone caught it. Good luck erasing that data, though.

If you’ve got something important on a disk, encrypt it below the file system level if possible. Then if you erase it and it’s still possible to somehow recover the data, they’re going to have a hard time recovering the random data.

Just imagine their frustration cracking data that’s not actually correctly recovered. Ouch. 😉

Davi Ottenheimer July 28, 2005 2:42 AM

Here’s an insightful review that indicates police lack information they feel they need relative to the threat:

http://www.guardian.co.uk/attackonlondon/story/0,16132,1537385,00.html?gusrc=rss

“The proposal reflects the lack of intelligence the security services have about Islamist extremism and potential terrorists. They have not yet been able to infiltrate disparate groups with the success they achieved against the IRA.”

I am certain that encryption just compounds the feeling of being left in the dark. Investigators hate dead ends, especially ones that offer a sliver of hope (e.g., a decryption key). Thus the (renewed) emphasis on harsh measures to gather information after detention, even if the person detained is several degrees removed from the primary suspect…

Andrew July 28, 2005 3:04 AM

What if you keep your data on a networked filesystem which is encrypted? I keep my home machine clean and unencrypted, but I have a machine on the network which I can access.

e.g.
I live in the UK and I can access a server in China via SSH. I keep my private files, which are PGP-encrypted, on the server in China. I ensure that my SSH known_hosts file is not writeable (or just remove entries from this file) and that I don’t store entries in my shell history, etc. Since you can use something like SHFS to mount the SSH connection as a filesystem, you can access your files as though they were local.

Even if someone can detect that I am accessing a server in China from the UK they will need to do some serious work to gain physical access to the remote machine.

Or am I being naive?

Davi Ottenheimer July 28, 2005 4:05 AM

@Andrew

“will need to do some serious work to gain physical access to the remote machine”

Law enforcement often has strong extra-national or global connections that make it easy to move beyond borders. But even beyond that, the point of the new measures appears to be directed more at detaining you, or people who know (of) you, in the UK until someone is convinced to “give up” the keys, regardless of where the data is located. Physical access to the server is not always necessary unless you are doing something like what Jacob Appelbaum is suggesting and actually wiping data…in fact, based on the little information you provide, you could actually be more at risk of exposure by having your data on a remote system, unless it is auditable.

Mormegil July 28, 2005 5:20 AM

@ Juergen Brendel: Sure, I understand that, but that’s only a question of how far you are willing to go with those “anti-terrorism laws”. You would simply require the software to store the keys, or state that its use is illegal. It’s similar to, e.g., car-mounted radar detectors that are illegal to use in some jurisdictions. (Not that Maxwell’s equations are illegal; you just must not use them in a specific way.)

Robert July 28, 2005 8:36 AM

This is off-topic, but… I’m glad to see conversations like this taking place on Bruce’s blog, as opposed to the acrimonious political debates that lead nowhere, like in the last few days. Let’s have more of this, please. There’s a certain pleasure I take in sitting in on conversations where everybody is smarter than I am. 🙂

Roy Owens July 28, 2005 9:30 AM

The terror cells could completely thwart the British authorities by using the cheapest available encryption, poorly implemented, with the weakest possible passwords, and hide everything in plain sight with dirt simple steganography.

Their apparent ineptitude with encryption would argue for their innocence.

Consider how much noise there is in a crappy digital photograph: all that noise can carry a wide steganographic channel and provide one or more masks to further obscure the channel.

mark July 28, 2005 9:43 AM

Surely, I cannot be the only person who has managed to forget or lose a key or password. If presidents can ‘forget’ major covert operations, what of us mere mortals who encrypt private things that are in no way subversive – but which are not important enough that we are assured of remembering our own passphrases?

David July 28, 2005 11:51 AM

It would be easy to create a “panic password” that would work something like home alarm systems. If you press a certain combination, it alerts the system that this is a HACK decryption, not legitimate. When using the panic password, different contents could be rendered so that the original secret remains a secret.

If the police bother you, just give the panic password and the decryption will reveal whatever you want to be revealed and not the actual secret content.
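A duress scheme along these lines can be sketched as a toy in a few lines (all names, passwords, and contents here are hypothetical; a serious hidden-volume design must also make the decoy and real contents indistinguishable on disk, which this naive version does not attempt):

```python
import hashlib

def pw_hash(password: str) -> bytes:
    # Real systems would use a slow, salted KDF (scrypt/Argon2),
    # not a single bare SHA-256; this is illustration only.
    return hashlib.sha256(password.encode()).digest()

# Two volumes, each keyed by a password hash: one real, one decoy.
# (Hypothetical passwords and contents.)
volumes = {
    pw_hash("real-passphrase"): b"the actual secret contents",
    pw_hash("panic-passphrase"): b"boring decoy contents",
}

def unlock(password: str):
    """One prompt, two valid passwords: returns whichever volume the
    supplied password opens, or None on a wrong password."""
    return volumes.get(pw_hash(password))
```

Under coercion, surrendering the panic passphrase yields only the decoy volume; nothing at the prompt itself distinguishes it from the real one.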

Furthermore, if the encrypted data is stored on the web, on a server outside of your country (Hushmail?), then such police tactics won’t work, because their laws cannot compel authorities in other countries.

Bruce Schneier July 28, 2005 11:58 AM

“I’m glad to see conversations like this taking place on Bruce’s blog as opposed to acrimonious political debates that lead nowhere, like in the last few days.”

I try to strike a balance. Unfortunately, some of the most interesting security questions have a political component. I’d rather the conversations stay closer to the security issues than the political issues, but I’m simply not interested in policing things that closely.

Stefan July 28, 2005 1:44 PM

@David: But the “fake” data has to be stored alongside the “real” data, both encrypted. So there would be a visible difference in size when you examine the “original” encrypted data and the given plaintext. Or did I get you wrong?

Stefan July 28, 2005 1:50 PM

There is another interesting aspect to StegFS and similar solutions I once read about in docs about the FreeBSD gbde project: with StegFS, there is no way to prove that there is still hidden data “left”. But on the other hand, you also can’t prove that there isn’t any left, that you revealed all. Imagine a totalitarian regime willing to torture you until they find the “right” data…

Perhaps such a system should include features to prove that there isn’t anything left to see, e.g., encryption keys overwritten with zeros.

Jacob Appelbaum July 28, 2005 2:08 PM

@ davi

“I am certain that encryption just compounds the feeling of being left in the dark. Investigators hate dead-ends, especially ones that have a sliver of hope (e.g. a decryption key). Thus the (re-newed) emphasis on harsh measures to gather information after detention, even if the person detained is several degrees removed from the primary suspect…”

I think it’s bullshit. You have the right to remain silent. It should apply to your passphrases and secret keys.

But back here in reality, I know that it’s not happening that way. You’ll have your arm twisted into talking and giving up your key. That’s why I had that idea of the system I described above. The main advantage it has over StegFS or other things like it is that it’s a simple idea built on other systems. If someone built it to rely on https for the transport, it wouldn’t be trivial to figure it out from monitoring alone.

@ Andrew

“I live in the UK and I can access a server in China via SSH. I keep my private files, which are PGP-encrypted, on the server in China. I ensure that my SSH known_hosts file is not writeable (or just remove entries from this file) and that I don’t store entries in my shell history, etc. Since you can use something like SHFS to mount the SSH connection as a filesystem, you can access your files as though they were local.”

Unless I’m mistaken, this law directly affects you. Under the writ of law, you have to disclose that data if the court demands it. In addition, if you were to destroy the data, it would be obstruction of justice (in the USA, anyway). To make matters worse, it’s possible that the UK could get China to seize the server (doubtful, but not impossible).

That doesn’t even mention the fact that your remote server could be compromised and you’d be in some trouble.

You should contact me and we can work on a system you would actually use.

@ Roy Owens
“The terror cells could completely thwart the British authorities by using the cheapest available encryption, poorly implemented, with the weakest possible passwords, and hide everything in plain sight with dirt simple steganography.

Their apparent ineptitude with encryption would argue for their innocence.

Consider how much noise there is in a crappy digital photograph: all that noise can carry a wide steganographic channel and provide one or more masks to further obscure the channel.”

I think you should read this pdf regarding stego:
http://www.eecs.harvard.edu/~greenie/defcon-slides.pdf

I talked to Rachel at Defcon, and LSB stego isn’t the way to go. Digital photographs don’t actually have random data in the LSB normally. Guess what happens when you see some? Whoops! Interesting idea in the lab, not so great with real-world data sets…

Oh, and there is the simple fact that the terrorists have already thwarted the British authorities. At least as far as the people who lost loved ones are concerned.

Cracking the crypto isn’t worthwhile. Watching them, tracking their money, being smart, and still retaining citizens’ rights: that’s a more logical choice for the police.

@ mark
“Surely, I cannot be the only person who has managed to forget or lose a key or password. If presidents can ‘forget’ major covert operations, what of us mere mortals who encrypt private things that are in no way subversive – but which are not important enough that we are assured of remembering our own passphrases.”

Sure. That’s the idea. The key is automatically destroyed and you comply with the law in full. You never knew the key, and as such you can’t really be used as an attack vector against yourself. Invoke your right to remain silent until trial, and your keys are automatically destroyed. Seems simple enough. The devil is in the details, mostly legal in this case.

@ David
“It would be easy to create a “panic password” that would work something like home alarm systems. If you press a certain combination, it alerts the system that this is a HACK decryption, not legitimate. When using the panic password, different contents could be rendered so that the original secret remains a secret.

If the police bother you, just give the panic password and the decryption will reveal whatever you want to be revealed and not the actual secret content.

Furthermore, if the encrypted data is stored on the web, on a server outside of your country (hushmail?), then such police tactics won’t work because their laws cannot compel the law in other countries.”

The original idea of the different passphrases is similar to what StegFS does, but they go a bit further. I think StegFS has some other issues, though. I’m pretty sure a new drive doesn’t have random data in all of its unwritten, unallocated blocks. It would set off some big alarms if I found a drive like that while doing forensic analysis. As soon as someone is at that point, you’d be trying to prove that it was your real key that you gave up. I bet they would try to hold you in contempt of court until you gave them something worth putting you away with. Never mind that you might not have anything.

The police will try to get at whatever data they can. I am sure the remote server operators will help the police if they’re anything like eBay or Amazon, etc.

@Bruce
“I try to strike a balance. Unfortunately, some of the most interesting security questions have a political component. I’d rather the conversations stay closer to the security issues than the political issues, but I’m simply not interested in policing things that closely.”

When someone you vote for proposes something, it becomes very hard not to confuse the issues with the people. Or at least that’s how it’s been playing out lately. People hear the party that proposed something, and that alone is enough to make up their mind on a certain issue. It’s really sad.

Ari Heikkinen July 28, 2005 3:22 PM

Three months is already equivalent to a jail sentence. Keeping someone jailed for three months without any charges is simply against basic human rights.

Ari Heikkinen July 28, 2005 3:35 PM

A related crypto question (to Bruce): is there any secure way to encrypt two sets of data with two different keys into one stream of ciphertext so that decrypting with the first key would give the first set of data and the second key would give the second set of data, but so that it would be impossible to tell (by analyzing the ciphertext) if other keys (thus other sets of data) exist?
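One toy construction along the lines Ari asks about: pack each plaintext into a fixed-size slot, encrypted and MACed under its own key, mix the real slots with purely random dummy slots, and shuffle. A key holder finds their slot because its MAC verifies; without a key, every slot looks like noise, so the ciphertext doesn't reveal which slots (if any) hold other plaintexts. Everything here is illustrative: the SHA-256 XOR keystream stands in for a real cipher such as AES, and a serious design would also hide or randomize the total slot count.

```python
# Toy multi-key deniable container: one blob, several keys, each key
# yielding one plaintext with no indication which other slots are real.
import hashlib, hmac, os, random

SLOT = 256  # fixed slot size so message lengths leak nothing

def _stream(key, n):
    # Illustrative keystream (NOT a real cipher): SHA-256 in counter mode.
    out, ctr = bytearray(), 0
    while len(out) < n:
        out.extend(hashlib.sha256(key + ctr.to_bytes(8, "big")).digest())
        ctr += 1
    return bytes(out[:n])

def _seal(key, msg):
    # Encrypt a padded message and append a MAC so the owner can find it.
    padded = msg.ljust(SLOT - 32, b"\0")
    ct = bytes(a ^ b for a, b in zip(padded, _stream(key, SLOT - 32)))
    return ct + hmac.new(key, ct, hashlib.sha256).digest()

def make_blob(messages, total_slots=8):
    slots = [_seal(k, m) for k, m in messages]
    slots += [os.urandom(SLOT) for _ in range(total_slots - len(slots))]
    random.shuffle(slots)  # real and dummy slots are indistinguishable
    return b"".join(slots)

def read_blob(blob, key):
    for i in range(0, len(blob), SLOT):
        ct, tag = blob[i:i + SLOT - 32], blob[i + SLOT - 32:i + SLOT]
        if hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
            return bytes(a ^ b for a, b in zip(ct, _stream(key, SLOT - 32))).rstrip(b"\0")
    return None  # key matches no slot

blob = make_blob([(b"key-one", b"innocuous diary"), (b"key-two", b"the real files")])
assert read_blob(blob, b"key-one") == b"innocuous diary"
assert read_blob(blob, b"key-two") == b"the real files"
```

The rubber-hose problem remains, of course: the blob's fixed size is an upper bound on how many secrets it could hold, which is exactly the limitation discussed below.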

Davi Ottenheimer July 28, 2005 3:36 PM

@Jacob Appelbaum

“You have the right to remain silent. It should apply to your passphrases and secret keys.”

Yes, but I was trying to highlight that the trade-off that the police are suggesting would change your phrase to

“Due to our lack of intelligence information sufficient to find a credible suspect and press charges within fourteen days, your right to remain silent means you will be detained (without counsel?) for up to three months.”

Personally, I think there should be intense pressure upon the security forces to do a better job of intelligence rather than just let them strip away civil liberties and negatively impact innocents. If they get away with the latter, then what incentive do they have to ever achieve the former?

Jacob Appelbaum July 28, 2005 3:37 PM

@ Ari Heikkinen
“Three months is already equivalent to a jail sentence. Keeping someone jailed for three months without any charges is simply against basic human rights.”

Agreed. It’s easy to say that it’s not long, but it’s long enough to destroy innocent people’s lives. I imagine they’ll get fired, lose their home, and if they’re really broke or alone, much much worse.

It’s a blatant power grab against the whole of their society to capture only a few. It’s not a good trade-off at all.

Dan July 28, 2005 7:23 PM

@stefan
I think you’re referring to a variation of Deniable Encryption.
http://en.wikipedia.org/wiki/Deniable_encryption
There used to be a package called Rubber Hose (can’t find it now though) that implemented this.

The premise was based on an idea in game theory. If one large blob of random data contains several independent sets of confidential data, then you could withhold all the passwords. After being beaten with a rubber hose, you’d give up one password. Your attacker would try it, find that there was more data, and keep beating you.

The game theory part was that if everyone was 100% rational, then you would realize that there was no way for the attacker to ever be sure that he had gotten every last password from you; thus, he would keep beating you indefinitely, and so there is no advantage to giving out any passwords.

The attacker would realize the same, and so there wouldn’t be any point in beating you.

Works in theory. In practice, I’d rather be the attacker than the defender 😉

stefan July 29, 2005 1:24 AM

@Dan Yes, you’re right. But there are some limitations: random data of a limited size can’t hold encrypted data chunks ad infinitum, so you could estimate the maximum number of data blocks that could be in there. And if the data was in daily use on a hard disk, you could perhaps find the sectors that were changed multiple times (-> changed data), and so try to get the key that deciphers that particular area. Plus, try to remember all those strong passphrases 🙂 And the game theory only works if there are a great many keys – perhaps the attackers will try this for the first 15-20 times… (Additionally, I doubt that angry people would act fully rationally.)
The solution could be something like a smartcard with protected memory (generally a trusted, non-copyable, non-examinable device) that would decrypt your data with passphrase1 and wipe its memory with passphrase2 and signal this (e.g. by smoking? 😉 ), so that there is absolutely no way to get the keys back.

phizm July 29, 2005 5:01 AM

@Jacob Appelbaum – Your security application sounds great. When are you kicking it off?

I use DriveCrypt – you can create virtual drives that hold two sets of data: with one password you get your ‘normal’ data, with the other you get another set, but accessing that set makes the other unreadable, giving the accessed set all the available space.
It’s apparently impossible to tell how many sets of data there are, or which is the ‘real’ one.
Do you think using this you’d still end up with the ‘Rubber Hose’ scenario? “Why is there only 5mb of data on a 2gb virtual drive?!”

Anyone have any experience with DriveCrypt?

phizm July 29, 2005 5:45 AM

OK, my mistake, it doesn’t do it quite the way I remember.
It should do it that way though 😉

It has the potential to make the second set of data unreadable, but only if you write to the fake set.

Jacob Appelbaum July 29, 2005 2:47 PM

@phizm

I’m going to create it if there is interest. I’m going to use already created software and tie it together to make it work. So that it’s not totally obvious (for the client) that you’re using it (say ssh, loop-aes, linux, gpg, etc).

The server would be a custom C program or a perl CGI. It really depends on how I want to hide things.

Sage July 31, 2005 5:21 AM

@Juergen Brendel

Such an implementation already exists, except it is for GSM: the CryptoPhone http://www.CryptoPhone.de/ . They are a bit pricey, but depending on the sensitivity & classification of the information it might be worth the cash. Keep in mind that this is an end-to-end solution, so the individual on the other end must also own a CryptoPhone.

I’m not too sure about how it handles MITM attacks (it’s been a while since I had a peek at the code, which is available for download on their site for review), though I did see heavy use of Diffie-Hellman for key exchange.

DarkFire July 31, 2005 6:07 PM

Let’s stop to think for a moment exactly what we mean by a “free” country. The basis for UK law is, and always has been, that unless something is specifically banned, an individual is allowed to carry out that action.

When people comment on our freedoms being taken away, let’s think for a moment what life was like under the Taliban, or is currently like under the Wahhabi-controlled Saudi government:

1) You are forced to dress a certain way.
2) Your beard must be a certain length.
3) If you are unfortunate enough to be female you can’t drive or even be educated.

The list goes on. Being scanned by whatever technology when taking a tube ride is a small price to pay for stopping the fanatics who would seek to rule every possible aspect of our lives. In the final analysis we aren’t losing that much “freedom”…

Bruce Schneier July 31, 2005 7:45 PM

@ DarkFire

I categorically reject the “at least we’re better than X” system of morals, whether X be the Taliban, Saddam’s Iraq, or the terrorists. It is not enough to be better than X, and a moral system that justifies itself by comparing itself to X is not sufficient.

The question is whether we are a better society because of these things. I believe the answer is “certainly not.”

DarkFire August 1, 2005 10:38 AM

@ Bruce:

While I respect your point of view, I think that, with all due respect, adopting an idealistic or, dare I say it, utopian moral view is unrealistic in this case.

Unfortunately what it boils down to is that the terrorists have quite successfully turned the concept of asymmetric conflict against us. I’ve posted on this before: conventional military forces with their enforced reliance on the Geneva Conventions etc. are all very well, but at the end of the day they are ineffective at countering this threat.

However, an operative at a bus stop somewhere in Kandahar or Karachi who assassinates a terrorist with a silenced pistol is incredibly effective.

Sometimes to effectively combat the terrorists we have to at least think like them, if not play somewhere near their level.

To be sure this is unpleasant and regrettable, but it’s easy for us to make these judgements. Not so difficult in the field.

I think the important difference is that whilst we do carry out some regrettably immoral but necessary acts, we recognise that they are unpleasant and only carry them out in the most needy circumstances. On the other hand the terrorists have no such moral compunctions whatsoever.

Clive Robinson August 1, 2005 10:47 AM

@Ewan Mac Mahon
@Bruce Schneier

In RIPA originally, they were only allowed access to some keys (encryption), not all (signing), if they were different. This was supposedly to stop impersonation of the suspect to other “unsuspecting” parties (which the UK police have been known to do with a suspect’s mobile phone SMS) and also the possibility of forging evidence.

I mentioned RIPA and the 2 year prison sentence a little while ago in one of Bruce’s blogs,
http://www.schneier.com/blog/archives/2005/05/encryption_as_e.html

and,

http://www.schneier.com/blog/archives/2005/06/risks_of_cell_p.html

And somebody (ray) posted a URL indicating that that part of RIPA was due to be removed on the 25th of May,

http://www.linuxsecurity.com/content/view/119193/

So maybe Ian Blair (no relation to Tony Blair) wants it back.

DarkFire August 1, 2005 2:55 PM

“They also want to make it a criminal offence for suspects to refuse to cooperate in giving the police full access to computer files by refusing to disclose their encryption keys.”

I would imagine that the real point of this is that for some of the very best commercially available encryption software, it might conceivably take the ladies & gents at Cheltenham a couple of days to break it. This is an unacceptable time frame when you have an active terrorist cell wandering round your largest city hell bent on annihilating as many of the population as possible as soon as possible.

These investigations happen FAST, and the chronological availability of certain transcripts, analysis and file content from seized hardware MUST keep up.

Andy August 2, 2005 2:03 AM

I am curious about how the UK law applies when the method described in Rivest’s 1984 paper “Chaffing and Winnowing: Confidentiality without Encryption” is used.

Is it a crime not to hand over the authentication key necessary to remove the chaff and reveal the confidential content ?

Does the law define encryption in the same way as cryptographers ?
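For readers who haven't seen Rivest's scheme, here is a minimal sketch of chaffing and winnowing: each packet carries a serial number, one plaintext bit, and a MAC; for every genuine packet ("wheat") the sender also emits the opposite bit with a random bogus tag ("chaff"). Nothing is ever encrypted, yet only someone holding the MAC key can separate wheat from chaff. (The one-bit-per-packet granularity and the key below are illustrative choices; Rivest's paper also describes coarser-grained variants.)

```python
# Toy chaffing-and-winnowing: authentication only, no encryption,
# yet an eavesdropper without the MAC key can't tell wheat from chaff.
import hmac, hashlib, os, random

KEY = b"shared authentication key"  # illustrative shared secret

def mac(serial: int, bit: int) -> bytes:
    return hmac.new(KEY, f"{serial}:{bit}".encode(), hashlib.sha256).digest()

def chaff(message_bits):
    packets = []
    for serial, bit in enumerate(message_bits):
        packets.append((serial, bit, mac(serial, bit)))    # wheat: valid MAC
        packets.append((serial, 1 - bit, os.urandom(32)))  # chaff: bogus MAC
    random.shuffle(packets)  # transmission order carries no information
    return packets

def winnow(packets):
    bits = {}
    for serial, bit, tag in packets:
        if hmac.compare_digest(tag, mac(serial, bit)):  # keep only valid MACs
            bits[serial] = bit
    return [bits[s] for s in sorted(bits)]

msg = [1, 0, 1, 1, 0]
assert winnow(chaff(msg)) == msg
```

Which is exactly why the legal question bites: the sender performs no encryption at all, and the "key" the police would demand is an authentication key, not a decryption key.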

Andy August 2, 2005 2:14 AM

Of course I meant to say his “1998 paper” and not his “1984 paper” (although the law in question is very Orwellian).

Ian January 13, 2007 5:15 AM

Hello…

I see written…”I would imagine that the real point of this is that for some of the very best commercially available encryption software, it might conceivably take the ladies & gents at Cheltenham a couple of days to break it.”

“…a couple of days…” ???

If current encryption is so pathetic as to be breakable inside a couple of days, then what foundation is there to the many claims, by encryption software (PGP, etc.,) authors, of numbers of years, decades, centuries and millennia, etc., involved in the job of breaking through their software encryption?

Should all of them now suddenly be looked upon as liars or tactically commercial exaggerators because some people at Cheltenham can already walk right through everything we do to make private our files?

Is encryption now to be seen as an illusion?…a cruel joke about our files’ security, played on us, by us, and encouraged by encryption software authors and corporations?

How did the cracking time drop from millennia, to a couple of days?

The main question is, about current encryption (PGP being a pretty important core of the question, indeed)…is it or is it not a matter of centuries and millennia before our encrypted files are rendered readable by the Cheltenhamites, and their similars?

Are (or are not) those huge periods of time trustworthy when/where we read them?

Ian.

