Subconscious Keys

I missed this paper when it was first published in 2012:

“Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks”

Abstract: Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.
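The scheme trains the user on a secret key-press sequence through a Guitar Hero-style game; authentication then checks whether the user plays the trained sequence measurably better than untrained control sequences. A toy sketch of that decision rule (an illustration only, not the paper’s code; the margin is made up):

```python
# Toy model of implicit-learning authentication: a planted secret shows
# up as a performance advantage on the trained sequence, even though the
# user cannot consciously recall it.

def hit_rate(events):
    """Fraction of game events the player intercepted correctly."""
    return sum(events) / len(events)

def authenticate(trained_events, control_events, margin=0.10):
    """Accept if performance on the trained sequence beats the untrained
    controls by more than `margin` (a made-up threshold)."""
    return hit_rate(trained_events) - hit_rate(control_events) > margin

# 1 = correct interception, 0 = miss
trained = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]  # 80% on the planted sequence
control = [1, 0, 0, 1, 0, 1, 0, 1, 0, 0]  # 40% on random sequences
print(authenticate(trained, control))      # True
```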

Posted on January 28, 2015 at 6:39 AM

Comments

readerrrr January 28, 2015 7:00 AM

I don’t see how this is an obstacle for law enforcement. In this case, all they need do is “force” the suspect to log in as they normally would.

Don’t suspects already, when asked to unlock the data, have the option of either logging in or providing the password?

blah January 28, 2015 7:18 AM

It would be interesting to see how “logging in as you normally do” works when people are in abnormal situations.

I can imagine people might react differently when they are nervous or stressed.

Clive Robinson January 28, 2015 7:53 AM

Is this not a variation on the “guitar hero” style login, where micro-movements and rhythm act as the password?

Bag Tlog January 28, 2015 8:19 AM

A much more efficient solution is to require a form of double authentication, with each half of the key hosted in a different legal jurisdiction.

uh, Mike January 28, 2015 9:14 AM

For the “I can’t say it” legal defense, you just need a challenge-response system and the words “I forgot.”

For the “Don’t beat me” physical security, you have to stay out of custody. Often, those situations are driven by external factors.

paul January 28, 2015 9:17 AM

This is essentially password steganography.

If it’s done right, the triggering stimuli for authentication simply won’t be available when the subject is in custody unless the custodians have engaged in enough careful surveillance of the subject logging in under normal conditions.

Even simple versions of this would slow down a rubber-hose attack, perhaps substantially, depending on how smart the interrogators are. If you ask me for my phone unlock code, it would take me a while to dredge it up, even voluntarily. But if you presented me with a phone it would take only a second.

Of course, depending on what’s being protected, a subject may or may not want rubber-hose cryptanalysis to be difficult.

Bob S. January 28, 2015 9:23 AM

In England, and elsewhere, the rubber hose is wielded by the lawful government: “Give me your password or go to jail forever”. And they make it stick.

Virtually every government in the world has declared war on electronic privacy and security, regardless of the will of the people.

It’s very disappointing that our best and brightest in technology have either gone over to the other side or simply lain down and let it happen.

CallMeLateForSupper January 28, 2015 9:33 AM

“… people might react differently when they are nervous or stressed”

Naw. That could never happen. ;-)

Actual court testimony[1]:
Q: What is your brother-in-law’s name?
A: Borofkin.
Q: What is his first name?
A: I can’t remember.
Q: He’s been your brother-in-law for 45 years, and you can’t remember his first name?
A: No. I tell you I’m too excited. (rising from the witness chair and pointing to Borofkin) Nathan, for God’s sake, tell them your first name!

(Apologies to anyone this post might offend. My only defense is that I became unhinged as a result of reading a transcript of Sarah Palin’s … er…. talk at the Freedom Summit last weekend.)

[1] Richard Lederer, “Anguished English,” 1987.

Adrian Ratnapala January 28, 2015 9:51 AM

This is interesting, but I feel it is less a practical technique than a foray into a science that will become practical in the age of Buck Rogers.

Imagine a community of cyborgs or robots needing crypto keys to control access to their assets/bodies. One layer of defense is to keep the secret key in the same security domain as the bot (call him R. Daneel) that needs to use it.

But then R. Daneel loses everything if that domain gets pwned. Daneel has some chance of fighting back if he can inject himself into the key in such a way that, even if bad people blow open the security domain, the process of “calculating the secret key” is also somehow the same as “R. Daneel thinking freely”.

Bauke Jan Douma January 28, 2015 10:35 AM

The problem is not with the chainee. It’s with the chainer. As long as he thinks there might be info at the far end of a rubber hose, that’s his tool (in other words: how do you convince him, for a start, that all I might ‘know’ or ‘have’ is subconscious anyway?).

In reality, as with all these ‘clever designs’ (‘clever’ from any perspective), the problem is the accountability of those in power, and how on occasion to hang the lot that aren’t accountable. Note there’s another, preceding problem here: define what makes a ‘criminal’ (hint: those in power, the ones reaching for said rubber hose, almost never are).

Andrew January 28, 2015 10:58 AM

This is the main problem with fingerprints, eye scanning, or whatever system they may invent based on “external” physical recognition, outside memory and the brain.

Anybody can force you in a matter of seconds to unlock your iPhone with your finger, but it’s a bit more difficult to make you tell them the password. It may require torture, not that this is a problem for some governments.

albert January 28, 2015 11:00 AM

All you need is a dual-password system. The first is your regular password. The second is your rubber hose password. The first works normally, but the second deletes your private data and then the code that handles it. Since your private data was encrypted, it might be recoverable, but not useful. (An SSD might be useful here.) The RHPW would allow access to your ‘public’ private data, the stuff you don’t care if someone sees, so everything looks normal, but the sensitive stuff is gone.
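A minimal sketch of the idea, assuming the check runs on hardware the attacker does not control (see Jeremy’s caveat below); the password values, salt, and wipe step are illustrative, and a real system would need secure erasure, since deleted files are recoverable:

```python
import hashlib
import hmac
import shutil

# Hypothetical salted hashes of the two passwords, set up in advance.
SALT = b"example-salt"
REAL_HASH = hashlib.pbkdf2_hmac("sha256", b"real-password", SALT, 100_000)
DURESS_HASH = hashlib.pbkdf2_hmac("sha256", b"duress-password", SALT, 100_000)

def login(password: str, private_dir="private", public_dir="public"):
    h = hashlib.pbkdf2_hmac("sha256", password.encode(), SALT, 100_000)
    if hmac.compare_digest(h, REAL_HASH):
        return private_dir   # normal unlock
    if hmac.compare_digest(h, DURESS_HASH):
        # Destroy the sensitive data, leaving only the innocuous-looking
        # "public" private data. Note: rmtree does not securely erase.
        shutil.rmtree(private_dir, ignore_errors=True)
        return public_dir
    raise PermissionError("bad password")
```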
.
I gotta go…

paul January 28, 2015 11:28 AM

Bauke Jan Douma:

In a particularly unpleasant dystopia, subconscious keys would be what you use to access corporate servers when you’re at your desk. Then the owner of the information is protected no matter what happens to the employees.

David Scrimshaw January 28, 2015 11:50 AM

I think if someone was threatening me with a rubber hose or something worse, I would like to have the ability to tell them what they want to know.

But perhaps my employer or someone else would prefer that I not have that ability.

Dirk Praet January 28, 2015 12:26 PM

Interesting concept, but if my understanding is correct, the only goal being achieved here is that an attacker cannot reproduce the required sequence himself unless he’s been eavesdropping, which this method doesn’t defend against. Neither does it prevent the attacker from still using the $5 wrench to beat the sequence out of the subject.

@ Clive

Is this not a variation on the “guitar hero” style login, where micro-movements and rhythm act as the password?

Yes, indeed. The presentation slides of the paper actually refer to Guitar Hero.

@ Bag Tlog

A much more efficient solution is to require a form of double authentication, with each half of the key hosted in a different legal jurisdiction.

With a combination of Tomb & SSSS, you can split a key between as many parties as you like. I’ve mentioned it a couple of times before on this blog. It’s probably not very practical as an authentication mechanism, though.
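For readers who haven’t met SSSS: it implements Shamir’s secret sharing, in which a secret is split into n shares such that any t of them reconstruct it and fewer reveal nothing. A toy sketch of the underlying math (illustrative only; use the real ssss tool or a vetted library for anything serious):

```python
import random

P = 2**127 - 1  # a prime field large enough for a 16-byte secret

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it."""
    rng = random.SystemRandom()
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return secret

shares = split(123456789, n=5, t=3)
assert combine(shares[:3]) == combine(shares[2:]) == 123456789
```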

@ albert

All you need is a dual-password system. The first is your regular password. The second is your rubber hose password. The first works normally, but the second deletes your private data and then the code that handles it.

Most agencies these days are smart enough to first take copies of your hard disk and other media if they really want your data and even just remotely suspect you to be computer savvy. A useful tip when travelling is to set up the named account on your laptop as a guest user and store nothing but bogus data there, including mail accounts and social media profiles. Hide everything of value behind a different user that at first glance looks like a guest user.

Jeremy January 28, 2015 2:18 PM

@uh, Mike, @albert:

A “duress” code that deletes your data generally only works if the system that hosts the data is NOT under the attacker’s control. If you’re connecting to a remote web server, or calling up agency HQ, it’s probably a good plan. If you’re decrypting data on the laptop you were carrying when the guy with the rubber hose grabbed you, it’s probably not going to help (because he can make backup copies, or disable your software and run his own).

@David Scrimshaw: “I think if someone was threatening me with a rubber hose or something worse, I would like to have the ability to tell them what they want to know.”

But if the person threatening you believes that you can’t tell them, then they won’t threaten you in the first place, which is an even better outcome.

Making it so that you can’t tell them isn’t the same as ensuring that they believe that you can’t tell them, but it’s a key part of most plausible plans for achieving that.

Ray Dillinger January 28, 2015 2:37 PM

The best implementations of “duress password” systems reveal your genuinely secret data given the real key, and some innocuous or less-incriminating data given the duress key. This is fine in theory. But in practice there is an important difference between theory and practice.

The problem is that cryptosystems which implement the use of duress keys are sufficiently peculiar that their use attracts attention, and that when you’re using them, even if you are innocent neither you nor your interrogator have any way to determine your innocence.

Let’s say I suspect that someone has stolen and is carrying the blueprints for my nation’s missile factory. On catching her and demanding to know what an encrypted file from her laptop is, she gives me a password that decrypts the file into an archive of steamy love letters. If, like most people who might use crypto to protect an archive of steamy love letters, she used GPG or something like that, then hey, it was a false alarm and she’s free to go. But if it’s some cryptosystem I’ve never heard of, and when I look it up I discover that it has the capability of revealing different “decryptions” given different keys? Well then, I have no reason to suspect that this archive of steamy love letters is not also the stolen blueprints for my nation’s missile factory, and she’s never going to get out of custody because neither of us will ever be able to prove that it isn’t.

So, if it really was just an archive of steamy love letters, she’s in jail, and she and her lover are tragically separated and sad. And if the love letters (and possibly the whole affair that they pertain to, just in case the authorities check to see if the affair really happened) were just “cover” for a file whose real significance is that it’s the stolen blueprints for a missile factory, then the spy is in jail and has failed to accomplish her mission.

The important point? If this was an innocent person using crypto to protect a file of embarrassing love letters, she is worse off for having used a system with the capability of duress passwords. If this was a spy attempting to exfiltrate the plans for a missile factory, then she is no better off for having used that system.

So, given that the crypto needed to implement duress passwords is identifiable, there is no motivation for anyone to use it.

albert January 28, 2015 2:48 PM

@Dirk
.
Yes, copying the HD would be a problem. I imagine an SSD would be copyable as well? Even if you used a USB drive (and removed it), they could see that other programs had been accessing it. It wouldn’t surprise me if the military/TLA folks have HDs that self-destruct if you try to remove them! (they already have manually initiated destruct systems) That would be cool. This could be done rather easily on an SSD.
.
I suppose there will always be a workaround. It’s a cat & mouse game, isn’t it. Encryption is like a lock, it keeps honest folks honest. The rule seems to be: “If you can access it, they can access it”.
.
I like the multiple accounts idea. I also think that not using encryption might be a better approach. If you can keep your sensitive data on a removable device, it could work. This assumes that a) both the computer and the drive can ‘hide their tracks’, and b) the other guys don’t get their hands on your drive 🙂
.
I gotta go…

David Leppik January 28, 2015 4:08 PM

Since the user has no conscious knowledge of the password, the user also cannot feign ignorance. Back in the old days, when Communist dictators were rounding up anyone with an education, they missed people who could pretend to be illiterate. I’ve always wondered how things might have been different if those dictators had known about the Stroop Test. It’s probably a good thing that anti-intellectuals don’t study psychology.

Thunderbird January 28, 2015 4:16 PM

Quoth Jeremy:

@David Scrimshaw: “I think if someone was threatening me with a rubber hose or something worse, I would like to have the ability to tell them what they want to know.”

But if the person threatening you *believes* that you can’t tell them, then they won’t threaten you in the first place, which is an even better outcome.

First, this paper isn’t describing a security system; it’s describing a proposed mechanism that might be used as part of a security system. It can have some value even if it isn’t sufficient in isolation. Clearly (as mentioned) a system would at a minimum have to prevent opponents from capturing valid login sessions.

Second, to address Jeremy’s comment: belief isn’t binary. They may be k percent certain that you don’t know the secret, but might decide the (100-k) percent chance you do is worth a try. If people are willing to employ torture, presumably risking the loss of their “nice guy” status isn’t much of a concern.

Dr. I. Needtob Athe January 28, 2015 4:32 PM

@albert

Encryption is like a lock, it keeps honest folks honest. The rule seems to be: “If you can access it, they can access it”.

I don’t believe that. I wonder if anyone else here does.

Dirk Praet January 28, 2015 6:22 PM

@ albert

It wouldn’t surprise me if the military/TLA folks have HDs that self-destruct if you try to remove them!

There are plenty of “self-destructing” USB keys commercially available (Fujitsu, Kingston, IronKey). Just DuckDuckGo for it. But it’s not destruction in the physical sense; that would probably involve thermite or magnesium. I’m sure @Clive can come up with a viable solution off the top of his head. I seem to remember some past thread where he touched on the subject.

The rule seems to be: “If you can access it, they can access it”.

As long as rubber hose cryptanalysis and RIPA-like legislation are around, the only data that are probably safe are those whose existence no one but you is aware of.

albert January 28, 2015 6:43 PM

@Dr. I. Needtob Athe
I don’t believe that either. :) I should have said ‘passwords/keys are like locks…’.
.
@Dirk
I read somewhere that the military were experimenting with acids for hard drive destruction. It’s safer than thermite (or C4 :), especially on aircraft! This was after the Chinese captured that US weather recon (i.e., spy) plane. Induction heating of the disks would be effective. I hope I haven’t stumbled into something classified here {:-o
.
If you don’t hear from me….

jc January 28, 2015 7:23 PM

Just an idea:

Perhaps a wristband with micro-storage containing a high-entropy password you couldn’t possibly remember, with the following features (a toy sketch follows the drawbacks list below):

  1. Micro-storage made instantly wipeable somehow (recessed button?).
  2. Access to micro-storage protected by a password you can supply.
  3. Micro-USB port for connection to a computer.
  4. Never leaves your body.
  • Encrypt your data with the stored password.
  • If accosted, destroy the stored password instantly.
  • Bypasses keyboard entry, keyloggers.
  • Rubber-hose tactics are as futile as any other kind.
  • If the wristband is lost or stolen, it’s password protected, and you can just destroy the encrypted data, since you can no longer access it anyway.

Drawbacks:

  • If the stored password is destroyed, all data encrypted by it is permanently lost. This can be a good or bad thing, depending on the situation.
  • If the wristband, the wearer, and the encrypted data are all seized intact, back to square one.
  • Possible accidental wipe.
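A toy sketch of the workflow (the third-party Python `cryptography` package and a local file stand in for the wristband’s storage and crypto; names are illustrative, and as the drawbacks note, wiping the key makes the ciphertext permanently unreadable):

```python
import os
import secrets
from cryptography.fernet import Fernet  # third-party 'cryptography' package

KEY_FILE = "wristband.key"  # stand-in for the wristband's micro-storage

def provision():
    """Generate a high-entropy key no one could possibly memorize."""
    with open(KEY_FILE, "wb") as f:
        f.write(Fernet.generate_key())

def encrypt(data: bytes) -> bytes:
    """Encrypt with the stored key, bypassing keyboard entry entirely."""
    with open(KEY_FILE, "rb") as f:
        return Fernet(f.read()).encrypt(data)

def panic_wipe():
    """The 'recessed button': overwrite, then delete, the stored key.
    (On real flash media, overwriting is not a guaranteed erase.)"""
    size = os.path.getsize(KEY_FILE)
    with open(KEY_FILE, "r+b") as f:
        f.write(secrets.token_bytes(size))
    os.remove(KEY_FILE)
```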

Joe January 28, 2015 7:53 PM

Torture (rubber hose techniques) was proven to yield bogus data during the years of the Inquisition, when people confessed to crimes that involved violating the laws of physics. So anyone considering the use of torture is irrational, and trying to guess whether they will or won’t torture based on any rational argument is useless.

milkshake January 28, 2015 8:28 PM

How about the simplest procedural memory-based authentication: handwriting?

The challenge would be a few random words, numbers, and a simple picture, like a CAPTCHA, and the response would be a handwritten copy of the text and picture in the challenge, made with a stylus on a touch-sensitive pad.

The system would analyze not just the resulting shape, but also record the speed and the applied pressure. It would be very fast, just like signing for a credit card purchase on a screen, and it might be difficult for another person or a device to replicate these personal characteristics.

(For extra security, the stylus might also record grip force and skin conductivity during the test.)
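A toy sketch of the matching step (my illustration; real handwriting-dynamics systems use far richer features, per-user thresholds, and proper training data):

```python
# Compare the dynamics of a handwritten response against an enrolled
# profile using crude summary features of speed and pen pressure.

def features(strokes):
    """strokes: list of (dx, dy, dt, pressure) samples from the pad."""
    speeds = [(dx**2 + dy**2) ** 0.5 / dt for dx, dy, dt, _ in strokes]
    pressures = [p for *_, p in strokes]
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(speeds), max(speeds), mean(pressures), max(pressures))

def matches(enrolled, response, tolerance=0.25):
    """Accept if every feature of the response is within a relative
    `tolerance` of the enrolled value (the tolerance is made up)."""
    return all(abs(a - b) <= tolerance * abs(a)
               for a, b in zip(features(enrolled), features(response)))
```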

Wael January 28, 2015 9:14 PM

@Dirk Praet,

Neither does it prevent the attacker from still using the 5$ wrench to beat the sequence out of the subject.

Yea! I liked @Nick P’s rant as well. Yours isn’t too bad either 😉 pretty clever!

eTrusted? January 28, 2015 10:02 PM

Fact is, if you are caught, you are as good as dead meat. Regardless of whether you can produce the keys or access codes, they can do anything to you.

If you self-destruct the secrets, they will still continue what they want to do.

If you hand over and play nice, they will still continue what they want to do.

If you refuse, they will still continue what they want to do.

Some form of noise introduction, obscurity, and confusion is the best way to go.

Make a secure device look seemingly insecure (like a normal flash drive).

Blend into the crowd and go along with it.

Always change identities and stick to the common pace.

Act only when there is a very high chance of success.

But when acting, use confusion and diffusion as your best friend.

Stubbornness goes nowhere … it leads only to death along its way…

Wael January 29, 2015 2:46 AM

@Bruce,

I missed this paper when it was first published in 2012:

You probably missed it because it was buried in the depth of your subconscious blog. Forget herbal tea, and take two spoons of this stuff on an empty stomach every other day. Can you remember that? 😉

Andrew_K January 29, 2015 4:14 AM

I do not know whether I want to authenticate in a way where I do not know the token.

So far, and to my knowledge, there haven’t been court decisions stating “we accept that you cannot hand over the password, so you’re free to go”. It will be hard to convince a judge (not to mention a jury) that the suspect is technically unable to reveal the password. The only way out might be arguing convincingly that the suspect is just a messenger who knows nothing about the package, but that probably won’t work for my personal notebook.

And just by the way: this is a perfect setup to use in torture aiming at the self-destruction of a person. One of the most malicious parts of destroying a person rests on making the person blame him- or herself for psychological or physical pain.
Imagine this: You could finally get some sleep if you just were able to write down this f*ckin password! But. You. Can’t. The guard will slap you awake every fifteen minutes until your brain manages to produce the password. But. You. Can’t. He is not the bad guy, it’s you. He just follows his orders; you have to say the password, and they will see that the encrypted drive is not what they were looking for. But! You! Can’t! It’s your own personal fault that you are now in a situation where you can’t solve anything.

Thus: I would never give up the option of telling a password unless it’s protecting something I am ready to endure sheer endless pain for. Hint: there are very few things on that list.

It ends where it starts: no one will believe that I don’t have the password. For any practical use, I’d rather point to steganography in scanned love letters.

@ Joe
Coercion is only useless as long as its outcome can’t be validated. A password can be validated easily… yes, double volumes as described may be a way out of it. But then again, you would probably reveal the unsuspicious contents before the coercion really starts.

Clive Robinson January 29, 2015 4:18 AM

@ Wael,

One of the downsides of being away from my cave is I have to be certain my brain remembers things right…

When I read the bits Bruce quoted in the header, my brain said we’d discussed this on the blog before, and I had mentally tagged it with “guitar hero”. Hence my comment above; however, as I was travelling through a patchy area, I did not do a site search to confirm it.

It is eminently possible the reason Bruce did not post this paper originally was that he thought it the same as, or too similar to, the guitar hero paper, and that subsequently, re-reading it a couple of years down the road, his viewpoint had changed (hey, some of my viewpoints move faster than fish in a barrel when somebody is shooting at them 😉

@ All,

If you try to think dispassionately about this sort of scheme a few points come to mind.

Firstly, the e-vile “Eve” or her hose-fiddling, torturing boyfiend “Trent” need three things prior to whipping up your enthusiasm to be cooperative:

1, You.
2, Access to the data repository.
3, Ability to illicitly copy the data repository.

The first of these is self-evident and the second almost so; the third is to stop booby traps from rendering the data permanently lost in any meaningful manner.

This suggests two ways to augment these systems:

1, Ideas to prevent Eve/Trent getting access to the data repository.

2, Ideas to stop the repository being illicitly copied even if Eve/Trent have access to it.

As discussed before on this blog, having the data repository somewhere you are not, and out of jurisdiction, is a start and almost a fundamental requirement. Limiting access can thus be done by using a suitably designed “lightweight secure terminal”, with data-manipulating apps running on a secure middleware terminal server and the data held in a secure backend repository. Access would be granted via an augmented version of the “two-key commit” systems used to launch ICBMs, where the other key-turners are likewise out of jurisdiction. Thus having you and the secure access terminal gives little or no advantage to Eve/Trent.

In essence, what you need is a “secure” laptop or pad that has only ROM and a CD drive, that can establish a secure communications channel, and that is in effect locked by the “non-copyable” biometric.

Whilst great for international companies and the like, it does make life harder, especially for those operating on their own, such as certain types of illegal-image criminals.

It might well make a fruitful research area.

Wael January 29, 2015 5:49 AM

@Clive Robinson,

I had mentally tagged it with “guitar hero”

And I had it mentally tagged with “Obelisk” 🙂

It is eminently possible the reason Bruce did not post this paper originally

I’m not sure I understand your observation! He posted it originally on July 24, 2012. The paper was linked in the first word. And there were no linked paper updates after July 24:

This is a really interesting research paper…

hey some of my viewpoints move faster than fish in a barrel when somebody is shooting at them

My viewpoints don’t move; they are rock solid. But sometimes they move faster than your viewpoints! They’ll move faster than fish in the gun barrel when you shoot them out 🙂 I said what I thought about the idea then, and I still think the same about it now.

I definitely don’t think these various “bio-passwords” using individuals’ various characteristics are any good, be they direct biometrics such as face or hand geometry, or “muscle memory” / “monkey brain” systems such as the “guitar hero” interface etc.

Say! Is one of your socks related to @Bong-smoking Primitive Monkey-brained Sockpuppet?

Andrew_K January 29, 2015 5:58 AM

@ Clive Robinson, All

I beg your pardon for getting a bit too emotional.

Dispassionately: there is a difference between getting data out of jurisdiction and getting it out of reach, especially if the data are to stay accessible with a minimum of performance. To me that seems less a research problem than a political one, and one which is hard to solve.

I’d like to add secure I/O to the list of fruitful research areas: if I have a keyboard and a screen that encrypt and decrypt, I can stop worrying about the operating system and all its underlying hardware (except for metadata). When thinking through such scenarios, I usually end up with serial lines and crypto circuits inserted in the TxD and RxD lines.

Wael January 29, 2015 6:07 AM

@Andrew_K,

I usually end with serial lines and circuits inserted in TxD and RxD-lines.

Even these aren’t immune to remote electromagnetic snooping (passive or active), unless you add proper shielding, noise injection, and other defensive mechanisms…

Michael. January 29, 2015 6:58 AM

@Ray Dillinger
Is TrueCrypt some system you’ve never heard of? Because it’s (one of) the systems I use. It has the advantages of being cross-platform (Windows, Mac & Linux) and easy to use. Of course, it’s no longer supported, but that’s not stopping me.

It only has one hidden ‘volume’, unlike the original Rubberhose system. But that’s enough for most people.

One way I use it is to put all my bank passwords, passwords for web hosting, and similar in one ‘volume’. For that file, I don’t actually use the hidden volume, but if I did, the fact that I’ve got all that sensitive information in the first volume would be sufficient to explain why I’m using the system in the first place. (And, as explained above, I use TrueCrypt so that I can access those passwords on other computers, presumably not running the same OS, if my main computer is unavailable (e.g. stolen).)

Andrew_K January 29, 2015 7:48 AM

@ Wael

Yes, EMSEC remains a problem, but I think any attached console/display poses a much greater risk than the crypto circuit itself.

Dirk Praet January 29, 2015 9:31 AM

@ Michael.

Is TrueCrypt some system you’ve never heard of?

Any particular reason you haven’t moved to VeraCrypt yet? Version 1.0f-1 supports Windows, Linux, and OS X. It can load TrueCrypt volumes and also offers the possibility of converting TrueCrypt containers and non-system partitions to VeraCrypt format. On top of that, it is more resistant to brute-force attacks because it uses significantly higher iteration counts, for system partitions as well as standard containers and other partitions.

Nick P January 29, 2015 10:57 AM

@ Andrew_K

You’re fine if you just have encrypted input and output but no trusted hardware, OS, or software? Huh? The Trusted Computing Base (TCB) of a system is the sum of things that must work properly to ensure security. On a given computer, that usually means everything from the hardware up to whatever is doing computation on your data. Markus’s TFC uses physical tricks to reduce the risk of a key leak while still having some attack vectors and a major risk of system destruction on two nodes. Acceptable for his app, but it shows the point.

So, for a given usage, you need to think of what it takes to make that work. Anything privileged with access to your secret (Confidentiality) or your code/data (Integrity) is trusted. So, it must be secure with monitoring and recovery. That’s how the game works.

Note: Then there’s EMSEC, interdiction, sneaky maid attacks, and so on.

@ Dirk

Thanks for the link to Veracrypt. Didn’t know about it.

vas pup January 29, 2015 12:40 PM

@Joe • January 28, 2015 7:53 PM:
“Torture (rubber hose techniques) was proven to yield bogus data during the years of the inquisition when people confessed to crimes that involved violating the laws of physics. So anyone considering the use of torture is irrational, and trying to guess why they will or will not implement torture based on any rational argument is useless.” Yes, but: recall latest fight between CIA and Senate Intel Committee. The point on media was that FBI did accept your point, because they have tools more effective than torture (Gitmo threat, threat of couple hundred years in club fed, plea bargain and plenty of time to work with you in the ‘good cop’ modus operandi, time/resources to verify provided information). CIA/DIA does not (at least up to this Senate report) have the same purpose (put somebody in jail), but rather get quick and reliable intel on actual threat conducting interrogation not in air conditioned federal building, but in the field with no such leverages as above. Conclusion: two types of interrogations are not the same based on tools, purposes, goals, circumstances. What I am agree in your post is that pain does not generate valid intel, but rather good intel generated by bypassing your will like the owner of the secret should not be considered cooperative or not because in any case you have no idea regrading validity of the data provided, but his/her will or ignorance/awareness should be bypassed by interrogation altogether. He/she is just ‘vault’, and you need what is inside, but not like old gangs try opening safe/vaults with welding tools and destroying content in the process. Moreover, memory could be altered by application of psychological torture generated by high stress level or psychical torture generated high level of pain. Neuroscience will provide very soon tools to extract all and valid information out of suspect without ancient tools used and listed above. Do I support this morally? No. Why? I posted many moons ago history of monk in Vatican in the Middle Ages who developed torture tool for Papa, but finally was object of the application his own tool.

name.withheld.for.obvious.reasons January 29, 2015 2:57 PM

@ Dirk

Thanks for the link to Veracrypt. Didn’t know about it.

Nick, you are such a link whore. 😉

name.withheld.for.obvious.reasons January 29, 2015 3:15 PM

About a decade ago I was working with someone from Cambridge University (UK) on a project not too dissimilar to this concept. I was asked by a scientist who came to me to design an experimental hardware platform based on some informal and formal problem-space description(s). It took me about a month to draft a project outline in response to the query. From the material available at the time, I understood the limitations and the erroneous track that previous research/researchers had followed. I came up with an effective strategy to develop a “testable” system and indicated to the scientist that the scope of the project was larger than they had suspected. It took three months for my colleagues to understand what I’d proposed, and by then my visitor visa had expired (time to return to the States).

What struck me as odd was both how we met and their lack of confirmation as to their involvement with MI-6. The repeatedly asked question “Do you work for/with MI-6?” was always met with no response. Also, the project had never really envisioned an application in cryptography; my project outline included a reference to the use of such tech as a method for creating or deriving cryptographic keys. Shortly thereafter, the existing research at Princeton was “pulled” from the public domain and the issue buried like a decaying and smelly carcass. To me it was obvious that this technology was on the IC community’s top-10 wish list.

Autolykos January 29, 2015 3:31 PM

@Jeremy

Making it so that you can’t tell them isn’t the same as ensuring that they *believe* that you can’t tell them

This should be printed in huge, red, flashing letters, as a warning to anyone trying to be too clever about it. As a very wise man once said, it can be very hard to get someone to understand something if his salary depends on not understanding it. This probably goes double if that someone is wielding a rubber hose.

Autolykos January 29, 2015 3:39 PM

@Joe:

Torture (rubber hose techniques) was proven to yield bogus data during the years of the Inquisition, when people confessed to crimes that involved violating the laws of physics. So anyone considering the use of torture is irrational, and trying to guess whether they will or won’t torture based on any rational argument is useless.

True. I’d like to add (paraphrasing Rick Falkvinge) that “Does torture yield useful information?” is pretty high on the list of questions that should never, ever need to be asked, right along with “Is slavery profitable?” and “Does genocide lead to lower rents?”

Nick P January 29, 2015 4:31 PM

@ name.withheld

You know it! 🙂 Btw, I would still love to hear your evaluation of the claims of Hamilton’s Universal Systems Language at htius.com. The testimonials are impressive, the tool could do almost a whole high-assurance process automatically (minus high-level design), and I’m not mathematical enough to really get it. I’m only asking again because I found this NASA report bragging that she achieved utter perfection for Apollo code using her methods. The description is similar to the tool’s claims on the web site. They might even give you an evaluation version for free or cheap.

If you check it out, just post results back in a squid thread. Clive originally posted it but without an assessment. Wael’s input would be nice.

Michael. January 29, 2015 6:08 PM

@Dirk Praet
For about three reasons.
1. TrueCrypt works, and worked fine from 2012 to 2014, when it was suddenly declared not safe. I still think it’s fine for my purposes (not protecting against the government, ’cause they can get all my bank details and coerce my hosting companies anyway).
2. I still have files around in old TrueCrypt formats that I probably forgot about (at least one I’ve forgotten the password for, but still hope to remember it in the future). Can VeraCrypt still access TrueCrypt 5.0 format volumes? No.
3. I’m still running Ubuntu 12.04. VeraCrypt is not, and probably never will be, in the repositories. I can’t be bothered adding yet another repository to my system, particularly when the people I’m worried about aren’t going to be able to brute-force what I’ve got.

A final, incidental reason: I don’t like how they are still on version 1.0 but keep making large changes between releases. 1.0e and 1.0f are quite different, but the version differs only in a minor letter.

Dirk Praet January 29, 2015 6:50 PM

@ Michael.

For about three reasons.

1 and 2 are perfectly good reasons. Personally, I’m not too keen on keeping stuff around that is no longer under development and that everybody is moving away from, especially when the authors themselves warn about “unfixed security issues”. If you’re worried about backward compatibility with the original Truecrypt format, you’re probably better off with Ciphershed than with Veracrypt, but they’re pretty much still in alpha stage. LUKS/Cryptsetup since version 1.6 supports opening TrueCrypt containers natively too, but that’s Linux only. TAILS and Kali, for example, have abandoned Truecrypt persistent storage volumes in favour of LUKS.

As for 3, I don’t see why you should have to add another repository. You can download binaries that will install in /usr, or roll your own from source and install wherever you want (e.g. /opt or /usr/local).

name.withheld.for.obvious.reasons January 29, 2015 10:05 PM

@Nick P

I still would love go hear your evaluation of the claims of Hamilton’s Universal Systems Language at htius.com.

For you Nick, anything. Give me over the weekend to take a crack at it. By the way, the reference to the NASA report intrigues me, as it was then that NASA learned about redundancy and fail-safe platforms. Two identical systems can fail, identically. And, as we have covered in the past, I am unimpressed by the DO-178 processes; it seems the lessons of the past are often lost.

Regards

name.withheld.for.obvious.reasons January 29, 2015 10:16 PM

@ Nick P

Spoiler alert, looked briefly at the “double oh one” system, two quick observations (a detailed analysis is forth coming–HLL joke):

1.) There appear to be “grandiose” claims of performance/capability…
2.) Prospectively interesting: traceability and integrated change/source control (Sun had something similar called Sabre); it looks much like Ada-type systems of this nature.

Again, I will be more thorough in my “forth” coming analysis. I will post it to a squid as you suggest.

Nick P January 29, 2015 10:39 PM

@ name.withheld

Very grateful for the help. Yeah, the claims are big, but so were the NASA results. Her site has a papers and testimonials section. Going through the papers to understand the method, then using your brain to decide whether it could produce the results in the testimonials, might shorten the task. If it can, then it’s worth further investigation. If not, then it’s an oversell to be dropped.

Again, I appreciate your time. 🙂

name.withheld.for.obvious.reasons January 29, 2015 11:59 PM

@ Nick P

Going through the papers to understand the method, then using your brain to decide if that could produce results in testimonials might shorten the task.

Good suggestion; it’s normally what I do in development: mapping requirements to implementation plans.

Ray Dillinger January 30, 2015 12:23 AM

As a study, I consider that there are very good reasons to distrust it.

The idea that someone can have (and use) a secret that they cannot disclose is catnip for the TLAs. It would transform their exposure to betrayal by their own agents, so they really, really want something like this, and they are probably ready to throw gobs of money at anyone who they believe can actually produce it.

I tend to distrust papers making claims that would revolutionize a potentially lucrative field, unless they are grounded in mathematics, physics, and engineering I personally understand, until they are five years old and continue to be supported by mainstream science.

Bluntly, most such “revolutionary” papers are either wishful thinking produced by someone who wants to believe because, if true, it would make them rich, or an outright scam by someone who knows perfectly well that it’s false. Exceptions exist, but they are rare.

Most such papers are bogus.

I expect that the thing which can actually be done will be not nearly as reliable as the thing which is claimed or, inevitably, sold. The failure will be counted both in terms of false negatives, where the person who “has the secret” cannot in fact log in, and in false positives, where people who do not have the secret can.

Can it work at a statistically significant level? Probably. I’m willing to believe that someone can “get the pattern right” 95% of the time while someone else can only manage 5%. That is a really neat party trick, but it is neither security nor reliable retrieval of data when it’s needed. Yet someone stands to make a lot of money while deep-pocketed powers hire them to exhaustively investigate whether and how it can be made more reliable, and that sounds to me like someone who wants to believe.

A paper like this is snake oil until proven otherwise.

Wael January 30, 2015 1:44 AM

@Nick P, @name.withheld.for.obvious.reasons, @Clive Robinson,

Wael’s input would be nice.

I read the NASA report regarding Margaret Hamilton. Also took a glance at htius.com and a few of the publications. I was only able to see the abstracts. From the little I saw, I think these are noteworthy:

Whereas the traditional software development approach is curative, testing for errors late into the life cycle, USL’s approach is preventive

Curative: Prison, where inmates get rehabilitated
Preventative: Castle, no ghosts get in.

How about that! By the way, I am still working on C-v-P when time allows. I need to bring it to closure.

the mathematical theory upon which it is based, Development Before the Fact (DBTF);

I could not get to this document. But I’m sure @name.withheld.for.obvious.reasons will do a stellar job. I don’t want to say more about an OT subject on this subconscious-password-or-key thread.

Andrew_K January 30, 2015 1:57 AM

@ Nick P

You’re fine if you just have encrypted input and output but no trusted hardware, OS, or software? Huh? The Trusted Computing Base (TCB) of a system is the sum of things that must work properly to ensure security. On a given computer, that usually means everything from its hardware up to whatever is doing computation on your data. Markus’s TFC uses physical tricks to reduce risk of a key leak while still having some attack vectors and major risk of system destruction on two nodes. Acceptable for his app, but shows the point. — Nick P

OK, you got me thinking, and you’re right. When I say “trusted I/O”, it is pretty much what you wrote: a whole system that needs to be trustworthy. The computer connected via serial line in this scenario is nothing more than an overambitious piece of untrusted network equipment, basically converting RS-232 to IP-based traffic. Everything on “my side”, including the crypto equipment in the serial line, needs to fulfill the requirements you sketched. I just vastly reduce the number of components to trust, at the price of less functionality and loss of compatibility with unaware programs, whose output will be pure bogus. In the end, I can open a (specialized) chat program, connect to IRC, and have a private conversation. I admit it is not very general-purpose.
But I can also let the connected computer become more intelligent and let it run some application. I can then take encrypted text from the serial line for further use (e.g. in an e-mail) and send encrypted contents to the serial port to be decrypted.
The limitation is clear: assuming the crypto is uncompromised, every application on the computer can still be compromised and do anything evil with everything stored or processed on the machine, except for what I entered or read using the secure I/O device.

Andrew_K January 30, 2015 2:15 AM

@ name.withheld.for.obvious.reasons

To me it was obvious that this technology was on the IC community’s top 10 wish list. — name.withheld.for.obvious.reasons

Of course. In this case, authentication is identification; in the best case, identification without the subject’s consent or even knowledge.

Think for a moment what this would mean for counterintelligence.

vas pup January 30, 2015 8:59 AM

Psychology and security related:
http://www.bbc.com/culture/story/20150128-were-the-kgb-the-good-guys:
“The Americans makes history personal – and maybe even turns these Russian spies into the good guys, as it becomes clear that they are captives of their home country’s government as much as they are its agents.” The last statement applies regardless of the country of origin and is true for the US, GB, China, etc., because it lies at the core.
When you put somebody in a foreign environment (not only Russians in the US or vice versa, but also, e.g., somebody in prison), the root personality gradually changes by adapting to the environment and accepting new (assumed, in particular) social roles. In Russia, penal psychologists call it ‘prisonization’: after five years of incarceration you start thinking of prison as home. I guess the same applies to the subjects at the link above. When a new land becomes your home, you (as a reasonable person) won’t be willing to bring harm to your home. I think if the US does not find a sound mechanism to bring Mr. Snowden back to the US (not in handcuffs), and he stays in Russia for five years, the US may lose him forever.

Nick P January 30, 2015 9:09 AM

@ Andrew_K

You’ve learned the lesson. Great. 🙂 The good news is Markus’s Tinfoil Chat already does what you need. All it needs is for the sender node to be reimplemented in a low-level language that can prevent key material from leaking and that overwrites plaintext memory. Add faster I/O and you could do voice and video too with the same architecture.

As Wael said, I’m not commenting further as we’re all way off topic.

Vinny G January 30, 2015 3:18 PM

A number of years ago, I was following a long-term project named m-o-o-t, led by a (claimed) UK mathematician named Peter Fairbrother. The premise was to apply steganography to the content of encrypted communications in such a way that there would be two keys: one key would produce the true content; the second key would produce an alternative, plausible, but completely innocuous text. The obvious point was to produce the key to the alternative text when one was demanded by the authoritae. Fairbrother claimed to have made good progress and to be holding back on deliverables pending enforcement of RIPA Part III. The project updates stopped shortly after Part III came into force, around 2007. I have no idea whether Fairbrother really had nothing but snake oil and disappeared in embarrassment, or whether he was somehow coerced, but it was an interesting concept. The site remains up to this day: m-o-o-t.org. Ever hear of this, Clive?
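The two-keys-one-ciphertext idea is easy to illustrate with a one-time-pad toy: choose the ciphertext at random, then derive one key per desired plaintext. (A sketch of the concept only, certainly not Fairbrother’s design, which would have needed practical key sizes.)

```python
import secrets

def make_container(real: bytes, decoy: bytes):
    """One ciphertext, two keys, two equally plausible decryptions."""
    assert len(real) == len(decoy)
    ciphertext = secrets.token_bytes(len(real))      # uniformly random
    real_key = bytes(c ^ p for c, p in zip(ciphertext, real))
    decoy_key = bytes(c ^ p for c, p in zip(ciphertext, decoy))
    return ciphertext, real_key, decoy_key

def open_container(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ct, k_real, k_decoy = make_container(b"meet at dawn", b"nothing here")
print(open_container(ct, k_real))   # b'meet at dawn' -- the true content
print(open_container(ct, k_decoy))  # b'nothing here' -- for the authoritae
```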

Clive Robinson January 31, 2015 2:02 AM

@ Vinny G,

Looking for current info on m-o-o-t has turned up little or nothing. Though in times past it was mentioned on this blog, and a journalist once asked Bruce for a comment on it.

There is a brief “technical intent” page at,

http://www.zenadsl6186.zen.co.uk/cryptographynotes.html

And it would appear that Peter ha(s/d) an email address there. There are other indicators that back in the first half of the 00s it was sufficiently serious to have a slot at PET-04 and other privacy conferences. The m-o-o-t site itself appears to have been last updated in the summer of ’07, but previous comments suggest that development had fizzled out some time before that.

With regard to the technical-intent page: it gives at best a 20,000-foot overview, and the tone of the page indicates that any specification produced from it would have been “fragile” and “set in time”.

With a few improvements to the technical intent, it could be both brought up to date and made future-proof at the algorithm and protocol levels (see my past comments about “NIST” needing to develop “frameworks”). However, I suspect that any implementation would not be secure because of other issues (this is true of all “software only” security products, which might account for why development fizzled). Another major area of concern to me is that the technical-intent and other pages suggested an “all in one” approach, which is just wrong within current “adequate security” thinking. You really should be able to run parts in segregated hardware segments that are very nearly “air gapped”, using guards / pumps / sluices / data diodes between the segments.

I’ll let others make their own comments on what they think, based on any further info they dig up.
