Authentication by "Cognitive Footprint"

DARPA is funding research into new forms of biometrics that authenticate people as they use their computer: things like keystroke patterns, eye movements, mouse behavior, reading speed, and surfing and e-mail response behavior. The idea—and I think this is a good one—is that the computer can continuously authenticate people, and not just authenticate them once when they first start using their computers.
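To make the mechanism concrete, here is a minimal sketch of the kind of check such a system might run continuously. The feature choice, baseline values, and threshold are all illustrative assumptions, not anything from the DARPA program.

```python
# Minimal sketch of continuous authentication: compare a rolling window of
# behavioral measurements (here, inter-keystroke intervals in seconds)
# against a baseline enrolled from the legitimate user.
from statistics import mean, stdev

def anomaly_score(window, mu, sigma):
    """Mean absolute z-score of the live window against the baseline."""
    return mean(abs(x - mu) / sigma for x in window)

enrolled = [0.21, 0.18, 0.25, 0.22, 0.19, 0.23, 0.20, 0.24]  # enrollment data
mu, sigma = mean(enrolled), stdev(enrolled)

live = [0.45, 0.51, 0.48, 0.50]           # a much slower typist has sat down
if anomaly_score(live, mu, sigma) > 3.0:  # arbitrary cut-off
    print("Behavior deviates from profile: challenge or lock the session.")
```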

I remember reading a science fiction story about a computer worm that searched for people this way: going from computer to computer, trying to identify a specific individual.

Posted on January 23, 2012 at 11:49 AM • 41 Comments

Comments

Fred P January 23, 2012 12:42 PM

I wonder how such a system could deal with illness, injury, exhaustion, transitioning between languages, etc.

However, I agree that research would be interesting; it’s quite possible they’ll find something with low enough failure rates to use.

Peter January 23, 2012 12:49 PM

Is it too cynical to suppose that the first problem they’ll have to overcome is automatically booting everyone when the boss walks into the room?

kashmarek January 23, 2012 12:57 PM

Yeah, yeah, yeah…just more FUD. If they repeat it often enough, it becomes usable without the force of law. Why do we continue to hear about these without validation?

Steven Hoober January 23, 2012 1:10 PM

I like the concept. We're already halfway there with a few kiosks, and there's no reason it can't be more widespread: once authenticated, cameras keep track of the user. If they leave, lock the system. If someone is looking over their shoulder, lock the system.

Also nice because it avoids stupid behaviors like simple timeouts for security. Why pick a time when you can just react to the root issue: the user has left the area?

I have also seen some work (good, but not production yet) on gesture detection, where e.g. mouse movements can be used to help determine which of a small set of users is currently engaged with the system. This should all work fine.
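A toy version of that gesture idea, assuming invented per-user feature values, might look like this nearest-centroid classifier:

```python
# Toy user classification from mouse behavior: per-user centroids over two
# invented features (mean speed in px/sec, mean path curvature), matched by
# nearest centroid. A real system would use many more features and
# normalize them so one scale doesn't dominate the distance.
import math

centroids = {
    "alice": (310.0, 0.12),
    "bob":   (540.0, 0.31),
}

def classify(features):
    """Return the known user whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda u: math.dist(centroids[u], features))

print(classify((520.0, 0.28)))  # -> bob
```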

Daniel January 23, 2012 1:26 PM

It's just a terrible idea, Bruce, because it presumes that human behavior is patterned behavior, which we know to be false. Indeed, such research is anti-scientific to its core, because the heart of evolutionary theory is that life-form patterns (species) are inherently characterized by randomness over time.

Fred P's remark about exhaustion etc. is nothing more than a "personal" mutation. The answer to that, I'm sure, will be some type of two-factor authentication, but then you just have the password-reset problem all over again.

WintermeW January 23, 2012 1:33 PM

This is a good idea, though it may be possible for a well-trained person (say, an actor) to fake the 'behavioral signature' of another, with close observation of the target… I guess it mainly depends on how sensitive the system would be.

Paul Renault January 23, 2012 1:45 PM

Back in the day when I used to hang around arcades pumping quarters into video games, I used to tell friends that you never signed your real initials when you got a high score.

“You see, all of these machines are connected to the military. If a war breaks out, they’ll ‘volunteer’ all of those with high scores to operate the machinery of war. I don’t want them to know who I am.”

So, when a fascist or totalitarian regime takes over a country, one of the first things they do is round up all of the smarter people and all of the free thinkers…and shoot them dead. Is this where this research could be headed?

Gweihir January 23, 2012 2:16 PM

The approach is good (it beats leaving your card in the card-reader and requiring a log-in after some inactivity…), but it suffers from the problem of all biometrics, i.e. once your profile is somewhere, it can be faked, and it cannot be changed on you.

Hence, as a product this is just another scam to extract money. As research it is nice, as long as they do not forget the shortcomings.

That_Security_Guy January 23, 2012 2:17 PM

An old NSA trick from the ditty-bop days (Morse code): this is the concept of the operator's "fist", the external parameters of the message.

Natanael L January 23, 2012 2:21 PM

All it takes is a hardware dongle. Plug in the mouse and keyboard through it, and let it sit there on the back of the computer for weeks. Then let the device calculate some statistics and such, and it can fake the person nearly perfectly.
Or they get a virus into your friend's computer.
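For illustration, the statistics step such a hypothetical dongle would need is not exotic; a sketch, with invented timings:

```python
# Harvest inter-key delays over weeks, then replay keystrokes with delays
# resampled from the observed distribution plus a little jitter.
# Purely illustrative; no real capture code here.
import random

observed_delays = [0.21, 0.19, 0.24, 0.22, 0.20, 0.23]  # harvested (sec)

def fake_delay():
    """Sample a plausible human-looking inter-key delay."""
    return random.choice(observed_delays) + random.gauss(0, 0.01)
```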

Also, prepare to get booted out instantly whenever you get a panicked call from friends or family who want you to come right away or to look something up. Just when you need access to the computer the most.

As I said when I first read about this: anything easily copyable or imitable MUST NOT be displayed, or even hinted at, in public or on a daily basis. Also, it must not be static and must be quickly changeable. This fails all those tests.

Bob B January 23, 2012 3:48 PM

Aspects of this are already in use in adaptive authentication solutions utilized throughout consumer financial services. These are, however, based on risk scoring models rather than black-or-white authentication based on a definitive “footprint” model.

Like most biometric solutions there are inherent scalability issues for large-scale consumer usage, combined with security issues surrounding third-party storage of biometric data, which is essentially a single symmetric key that may completely lose its utility if a data breach occurs.

How many parties will you trust to receive your personal behavioral data set? You only have one – do you trust their privacy policies and their security practices?

Clive Robinson January 23, 2012 4:10 PM

@ That_Security_Guy,

Old NSA trick from the ditty-bop days (Morse Code),

Err, no, it was a technique developed by the British in the early part of WWII, and if I remember correctly it got taken up by MI8 and pushed as an operational practice in the likes of SOE and their bitter rivals the SIS (what we would now call MI6 or "The Service").

The Y stations could and often did recognise the fist of German operators, and later the signature of individual keyed CW transmitters. It was this that allowed one or two "deception operations" by the Germans to be easily detected. Specifically, in one case it gave rise to the knowledge of where the V weapons had moved to.

It was just one of the techniques that gave rise to what we now call “traffic analysis”.

Oh, and the "operator's fist" was not that reliable an indicator, as it did not allow for the stress of "in the field" operation. But it was usually sufficient (along with spelling mistakes and word usage) to decide if a field wireless operator was actually the right person or was being impersonated by the Germans.

Apparently the concept came about because, pre-war, some people could fairly easily recognise on a gramophone recording (pre-vinyl 78s) the individual playing a violin etc.

And this is the point: when we learn a skill, initially it starts in the conscious mind at the front of the brain, but with practice it burns into the unconscious hind brain. Anyone who has learned to ride a bike or drive a car is subconsciously aware of this, and we joke about it: "once you've learnt to ride a bike you never forget".

Doctors and nurses know we have natural rhythms, such as blinking and breathing, which are subconscious; if we know somebody is watching, or we even think about it, they change.

Nick P January 23, 2012 4:52 PM

A while back someone claimed SRI had developed a computer capable of essentially mind-reading by correlating certain brain patterns with words or images. I figured it was BS. However, this technology sounds like a step in the direction of things like the above that could be used by totalitarian regimes. I don’t like it. We have reliable ways of authenticating and beating malware already. I’d rather use them.

Scott Treacy January 23, 2012 5:08 PM

This strikes me as a typical case of a good idea (continuous authentication) being screwed up by overly complex solutions and implementation!

As a result they are introducing plenty of options to exploit and also plenty of scenarios (e.g. illness, stress, etc.) where the system might not work for a genuine user.

So how about this as a solution to the idea of continuous authentication…

Put a fingerprint CCD in every key cap of a keyboard. Add a new user by storing their complete set of digits. In use, if one of those stored fingers doesn't press the key, the press is not registered. You could make it a bit smarter if you wish, because an individual user tends to use the same finger on the same key (maybe 2 or 3 fingers for some keys) and the system could learn this pattern. Maybe a bit of fuzzy logic to get around the odd finger misread but still allow the key press to register, because the authentication trend is generally good (e.g. >90% of keys pressed authenticate correctly).
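A sketch of that per-keycap scheme, with a hypothetical match_fingerprint() placeholder and the >90% rolling trend suggested above:

```python
# Each press carries a fingerprint read; a press registers if it matches,
# or if the rolling trend of recent reads is good enough (>90%) to forgive
# the odd misread. match_fingerprint() and the window size are placeholder
# assumptions.
from collections import deque

recent = deque(maxlen=50)  # rolling window of recent match results

def on_keypress(key, read, enrolled_prints, match_fingerprint):
    matched = any(match_fingerprint(read, p) for p in enrolled_prints)
    recent.append(matched)
    if matched or sum(recent) / len(recent) > 0.90:
        return key    # register the press
    return None       # drop it: authentication trend too weak
```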

Quirkz January 23, 2012 5:14 PM

Makes me wonder how tech support would ever be able to work on someone else’s computer, or how you would share your screen to demonstrate something or let someone else do something for you.

jbl January 23, 2012 5:28 PM

“…how tech support would ever be able to work on someone else’s computer, …”

Undoubtedly there would be some kind of override switch or button or password, not unlike the override switches that allow mechanics to work on automobiles without continually triggering the alarm. In other words, security that’s only as good as the protection on the backdoor.

Z. Lozinski January 23, 2012 5:29 PM

@Espen,
Something like this is implemented in Mercedes’ Attention Assist which continuously monitors steering response to detect the onset of tiredness in a driver. The current version trains at the start of the journey, but an extension that builds up a history is obvious.
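A toy version of that train-then-monitor pattern, with invented steering-correction values and thresholds:

```python
# Learn a baseline of steering corrections early in the journey, then flag
# samples that drift far outside it. All numbers are illustrative.
from statistics import mean, stdev

def train_baseline(corrections):
    return mean(corrections), stdev(corrections)

def drowsy(sample, mu, sigma, k=3.0):
    return abs(sample - mu) > k * sigma  # unusually large correction

mu, sigma = train_baseline([1.2, 0.9, 1.1, 1.0, 1.3, 0.8])
print(drowsy(4.5, mu, sigma))  # -> True: sudden big correction
```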

Z. Lozinski January 23, 2012 5:34 PM

Clive's description of the discovery of the operator's fist by the Y station intercept operators is included in "The Secret Wireless War: The Story of MI6 Communications" by Geoffrey Pidgeon (2007). Pidgeon worked there during WW2 and after.

(Shameless plug: The book is available in the Bletchley Park shop – support your local cryptologic history centre).

Rob January 23, 2012 7:14 PM

I remember a short story about a hacker whose forte was guessing people’s passwords after observing them and getting a look at the target’s personal space and office.

He was foiled by an automated learning security system that had noted that his target always typed his password incorrectly the first time it was entered. The hacker guessed the password correctly, but since his behavior did not match the observed patterns he was nailed as an impostor. I don't remember the author, but it was written at least 20 years ago.

Tamara January 23, 2012 7:33 PM

@Clive Robinson
Yes, hams have been able to recognize one another by their "fist" (the way they send on their key).
And people are recognizable by their walking gait, even in photos; I know that from experience.
It's scary how recognizable we all are. Luckily there are so darn many of us. Enough for plausible deniability…
I hope.

PiP January 23, 2012 10:48 PM

There's been software out for a while that detects when a cat is walking across the keyboard (it detects clusters of keys being mashed by kitty paws) and subsequently locks it until you type the word "human". This is just the next evolutionary step in that concept.
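The heuristic is easy to sketch; the key layout, time window, and cluster size below are invented:

```python
# A paw lands on several physically adjacent keys almost simultaneously,
# which human typing rarely does.
KEY_POS = {"a": (0, 1), "s": (1, 1), "d": (2, 1), "w": (1, 0)}  # partial map

def looks_like_cat(events, window=0.05, cluster=3):
    """events: list of (key, timestamp). True if >= cluster keys near each
    other land inside one short time window."""
    for i, (k0, t0) in enumerate(events):
        if k0 not in KEY_POS:
            continue
        burst = [k for k, t in events[i:] if t - t0 <= window and k in KEY_POS]
        near = [k for k in burst if abs(KEY_POS[k][0] - KEY_POS[k0][0]) <= 2]
        if len(near) >= cluster:
            return True
    return False

print(looks_like_cat([("a", 0.00), ("s", 0.01), ("d", 0.02)]))  # -> True
```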

andre January 24, 2012 1:11 AM

Check out this book: it is good!

N. Clarke, "Transparent User Authentication: Biometrics, RFID and Behavioural Profiling", 1st ed., Springer Publishing Company, 2011.

bob January 24, 2012 3:38 AM

@Clive Robinson

“Doctors and nurses…” always take your pulse for the full 60 seconds because the first 30 seconds are rendered useless by the experience of someone standing too close, squeezing your wrist. Even worse if they’re going for the carotid.

neil c January 24, 2012 4:22 AM

To answer the first comment, by Fred P:

Typically such a system is used as one layer in a multi-layered system.

The algorithms would be relatively forgiving of tiredness etc., but an injury could require a reset by an administrator, much like if a user loses an OTP or forgets a password. This would be an exceptional event.

infomercial warning…

BehavioSec (http://www.behaviosec.com) a Swedish company has developed such a solution.

Our customers' experience shows that it can be a valuable additional tool in adaptive authentication,
i.e. when the behavior changes, trigger a secondary step such as re-entering a password, or simply log the event for an audit.

http://www.behaviosec.com/products2/enterprise

/infomercial warning
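Stripped of product specifics, the adaptive policy described above reduces to something like this sketch. The thresholds and action names are assumptions, not any vendor's API:

```python
# The behavioral score selects a response rather than granting or denying
# access outright.
def respond(behavior_score):
    if behavior_score > 0.8:
        return "continue"                  # matches the learned profile
    if behavior_score > 0.5:
        return "log event for audit"       # mildly suspicious
    return "step up: re-enter password"    # clear deviation
```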

The Huffington Post had a feature on this kind of technology a couple of weeks ago:

http://www.huffingtonpost.com/2011/12/30/biometric-identification-_n_1177277.html

Nicholas Bohm January 24, 2012 6:18 AM

If this technology works well, the question of whether it is good or bad for the user depends on its application.

It might be useful for me if my computer can tell it isn’t me using it, and react in ways I have previously determined. (And of course these can include ways of correcting it if it makes a mistake.)

It might be much less satisfactory for me to be made responsible for what is done by my computer just because it tells third parties that it was me at the keyboard (especially if it makes a mistake).

mfeldt January 24, 2012 6:28 AM

Sounds a bit like recognizing the "hand" of Morse radio operators during WW2 to identify submarines… would probably still work today!

LinkTheValiant January 24, 2012 1:37 PM

A number of the comments miss the broader point here. This is nothing more than a new form of biometric identification. Granted, it’s “easier” to change than fingerprints or one’s optical structure, but it is “something one is”.

If such a thing is not implemented with extreme care, it will end up victim to the same issues as other biometrics. We know perfectly well that fingerprints are already close to useless. Optical scans can be defeated with some effort. What makes this different? And how does one prove one’s innocence when one’s “fist” is stolen? (Yes, it is theft, or at the least vandalism, for its compromise deprives its owner of the utility it once afforded.)

Like most biometric solutions there are inherent scalability issues for large-scale consumer usage, combined with security issues surrounding third-party storage of biometric data, which is essentially a single symmetric key that may completely lose its utility if a data breach occurs. How many parties will you trust to receive your personal behavioral data set? You only have one – do you trust their privacy policies and their security practices?

This. It really does come down to being the same thing as using the same password/passphrase/Epic-Brucedom-Mega-Super-Secret-Key-of-Awesome across multiple systems. Why is it suddenly a good idea when the biometric angle is introduced?

Maybe we should call this sort of thing “It’s made of BACON!” syndrome. You know, like making ice cream with bacon, or birthday cake with bacon, or. . .

Stephen Wilson January 24, 2012 3:51 PM

Setting aside all the above worthy discussion, the standout issue for me is that this style of authentication is focused on access control rather than "persistent" authorship. The cognitive footprint, like all biometrics, is almost useless for e-commerce. It frustrates me that so much authentication discussion is framed by the problem of logging on or opening doors while the more strategically important use cases of transaction and document signing go unnoticed.

See http://lockstep.com.au/blog/2012/01/14/authentication-family-free

Anyway, PKI and personal hardware key media are the only serious option for signing!

JP32 January 24, 2012 4:09 PM

This is a very old idea, dating back to at least the mid-late 1980s, when some group started working on a project to verify passwords not only by the string, but by the distinctive pattern of typing.

It never got to market, and I've seen a number of failures since.

There is no way that this can deal with even the simplest of changes; e.g., I might type my PW with only my left index finger because I’m just returning to my keyboard with a juicy apple in my right hand.

And, nevermind the more difficult changes of being ill, tired, injured, a bit drunk or stoned, etc., etc., etc.

This is an example of something that works under ideal conditions and not in the edge conditions.

Of course, if they are just using it for tracking people, it might have a reasonably high confidence if they use a LOT of tracking inputs… But then, they’ve already got the PW, so the point is what….?

It seems the point is somebody managed to slide this past some reviewers and score themselves a grant.
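The mid-1980s scheme JP32 opens with is what is now called keystroke dynamics: verify not just the password string but the timing pattern of typing it. A minimal sketch, with illustrative enrolled samples and tolerance:

```python
# Average the enrolled inter-key intervals into a template, then accept an
# attempt only if every interval falls within a tolerance.
from statistics import mean

enrolled_samples = [            # intervals (sec) from typing the password
    [0.12, 0.30, 0.18, 0.22],
    [0.14, 0.28, 0.17, 0.24],
    [0.11, 0.31, 0.19, 0.21],
]
template = [mean(col) for col in zip(*enrolled_samples)]

def timing_matches(attempt, tolerance=0.08):
    return all(abs(a - t) <= tolerance for a, t in zip(attempt, template))

print(timing_matches([0.13, 0.29, 0.18, 0.23]))  # genuine  -> True
print(timing_matches([0.40, 0.45, 0.39, 0.44]))  # impostor -> False
```

As JP32 notes, the hard part is not the matching but the edge cases: one-handed typing, fatigue, or injury will fail a tight tolerance.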

Clive Robinson January 25, 2012 3:51 AM

@ Stephen Wilson,

It frustrates me that so much authentication discussion is framed by the problem of logging on or opening doors while the more strategically important use cases of transaction and document signing go unnoticed.

It is something that has been annoying me since the mid-1990s with banking. And I've said as much many times, and I'm not the only one; if you look back through this blog amongst others you will find discussions about how the problem might be resolved between myself, Nick P, and several others, with Nick and others going into quite some detail on the security aspects of tokens of various forms (and yes, most of them have significant failings in practice).

At the end of the day it boils down to the fact that humans are insufficiently reliable on the "something you know" aspect to be sufficiently secure for the threat environment that any public network offers. This is true for Internet and phone banking, but also, when you think about it, for "over the counter" banking.

And unfortunately the issue does not go away with tokens either, as their security relies on "something you know" as well in any cost-effective token that is not shared with other uses (such as smart phones etc). But worse, the token also introduces additional security problems, because the same human mind that does badly on the "something you know" also does quite badly on the "something you own" issue as well. For instance, how often have you had to "find" keys, driver's licence, passport, bank book, birth certificate, spectacles, wallet, watch etc?

The simple fact is, the less often we humans use an object, the less likely we are to know where it is, and this has significant security implications. Worse, we don't treat tokens with the same care as the assets they are protecting. That is, if you had a $million in high-denomination notes you would not just leave it on a table in a cafe or bar while you nip off to the toilet, as I've seen people do with their wallets, keys, mobile phones, and even their SecurID tokens and door entry tokens (along with coats, laptop bags, and all the other junk of daily life).

The problem of authentication of transactions is a hard problem due in the main to human failings, no amount of tokens or other “factors” are going to solve this problem for ordinary people in their ordinary everyday lives.

kitt January 25, 2012 11:50 AM

Greetings Mr. Schneier. Sorry, but I didn't know how to start or renew a topic here. In a past blog post, I believe you were discussing CAPTCHAs.
Could you tell me whether CAPTCHAs are a privacy risk to the visitor? Whitelists and such. How much does the site record about YOUR visit when you have to use a CAPTCHA?
Thx

Ditt T. Bopper January 25, 2012 1:32 PM

@Clive & That_Security_Guy:

Fist recognition was easily countered with a ‘jitterbug’ — a device strapped to the wrist that delivers tiny shocks at a fairly rapid pace but at random intervals. You can still send, but it’s impossible to recognize a given op’s fist.

(Spelling and such is, of course, another matter. Then again, that’s what false flags are for, hein?)

Mind you, I have absolutely no way of knowing this, and must have channeled it. Yes, that’s it. That’s it EXACTLY! ^o^


Develop Fortress payphone system? Three billion bucks. Finishing nail? Three cents. Look on phone companies’ faces? Priceless.

There never was, and never will be, a system that somebody can't beat with simplicity.

Energyscholar March 13, 2012 2:44 PM

Slightly Weird claim: Cognitive Footprint Biometric Application has been around for years

A ‘cognitive footprint’ biometric analysis system based on keyboard and mouse movements, combined with software-use behavior, has been in production for years. I’ve known of it since 2004 with a high degree of confidence, but I’m generally wise enough not to discuss it. I tinker with AI and neural networks (NN) myself, and am an expert software engineer, so I can reliably tell you that it’s not particularly hard to build such a system at the toy/theoretical level. It’s probably quite hard to implement it well in the real world.

My browser-centric toy model of a cognitive footprint biometric application used JavaScript to track keyboard and mouse interaction, which then passed time-parsed data to a neural network for classification. With an ordinary (non-recurrent) neural network the above comments about error rates and edge cases are very accurate. However, with access to an advanced recurrent neural network I’m pretty sure that the error rate could be reduced to a level low enough for effective use in combination with other authentication methods.
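A rough analogue of that toy model's classification step, assuming invented feature values and using an ordinary feedforward classifier (which, as the comment concedes, has the error-rate problems discussed above):

```python
# Time-parsed keyboard/mouse features classified by a small feedforward
# network; scikit-learn is one convenient way to sketch it.
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Rows: [mean key dwell (s), mean inter-key delay (s), mean mouse speed]
X = [[0.09, 0.21, 310], [0.10, 0.19, 305], [0.08, 0.22, 320],   # the user
     [0.15, 0.41, 520], [0.16, 0.44, 540], [0.14, 0.39, 515]]   # others
y = [1, 1, 1, 0, 0, 0]

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,),
                                  max_iter=5000, random_state=0))
clf.fit(X, y)
print(clf.predict([[0.09, 0.20, 315]]))  # expect [1]: looks like the user
```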
___________

Thoroughly Outlandish Claim: Five Eyes got production QC power in 1995

A real-world functional cognitive footprint biometric application requires an advanced recurrent neural network. The recurrent neural network that now powers this app is (literally) related to or descended from a classified system built to crack public-private key cryptography in the 90s.

The Five Eyes (AUS CAN NZ UK US) have had access to practical, production quantum computer power since about 1995. Other groups may have had access since that era, but that’s a moot point. I strongly suspect that both China and Russia later developed operational QCs along similar principles.

The QC approach that actually works, in a production-ready, scalable way, is to run a virtual Turing machine atop a winner-take-all-style teleportation/entanglement-based recurrent topological quantum neural network (QNN). Even a basic neural network can implement the logic of a Turing machine, because a NN can perfectly emulate a NAND gate, and NAND gates are functionally complete, so they can be used to construct a Turing machine's control circuitry. A quantum neural network can emulate a quantum Turing machine.
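The gate-emulation step is easy to demonstrate: a single fixed-weight threshold neuron computes NAND, and NAND suffices to build any Boolean circuit. A sketch:

```python
# A single threshold neuron with fixed weights computing NAND.
def nand_neuron(x1, x2):
    return 1 if -x1 - x2 + 1.5 > 0 else 0   # weights -1, -1; bias 1.5

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand_neuron(a, b))  # prints the NAND truth table
```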

The underlying physical system for this type of QNN is interactions between non-abelian anyons in a two dimensional electron gas (2DEG). The primary math required is a branch of Knot Theory called Braid Theory. Obviously, the primary purpose of this system, from the Five Eyes/Echelon perspective, is to run Shor’s algorithm to crack public/private key cryptography. A perusal of current known quantum algorithms, combined with a survey of current advanced AI applications, may suggest other uses.
___________

Not especially Weird Claim: There’s a really nifty back story about how this new general technology was developed, and why it matters. It is worthy of a book by Neal Stephenson.

The subject of the 1985 Nobel Prize in Physics was the “quantum Hall effect”, which opened up new avenues of research into quantum effects, esp. in two dimensional electron gases. The process of creating a working quantum neural network involved generating lots of anyons (soliton-type standing waves treated as particles) in a two dimensional electron gas and then exploring and measuring the results.

The cleverest aspect of inventing this new technology was to take this "Anyon Soup" system to the edge of chaos, per the life work of Stuart Kauffman, and then exploit the emergent neural network to bootstrap itself into a more stable and usable system via evolutionary programming techniques. See Kauffman's publications for details on how and why this emergent neural network exists, and then consider its environment to see why it is a quantum neural network. This author believes Stuart Kauffman is overdue for a Nobel Prize.

The original work inventing this new technology was done between 1990 and 1995. It would be hard to do this work methodically without stumbling on the previously unknown fractional quantum Hall effect. The discoverers of this effect were awarded the 1998 Nobel Prize in Physics, and now lead various Quantum Computing research institutes.

Someone, somewhere, is due to be awarded the Grand Prize Turing Award, for solving Turing’s unfinished Morphogenesis problem, and then implementing Turing’s original machine on the resulting artificially intelligent ‘organism’. I’m inclined towards neither spiritualism nor whimsy, but were I so, then I might suspect that, after he died in 1954, Alan Turing reincarnated quickly, in 1965, in order to finish his incomplete life work. The classified nature of the work probably precludes any awards.

I’d really like it if this whole thing was declassified, but fear we’ll have to wait many additional decades for that. This QNN is an excellent candidate to pursue adiabatic (reversible) computing, might be helpful for certain approaches to advanced nanotechnology, and, were it declassified, might also be helpful to many other scientific ventures. Per the Ultra Secret, it’s undoubtedly still considered ‘national security’, even if it’s becoming an open secret within the Intelligence Community.

— Energyscholar
