Detecting Lies through Mouse Movements

Interesting research: “The detection of faked identity using unexpected questions and mouse dynamics,” by Merylin Monaro, Luciano Gamberini, and Giuseppe Sartori.

Abstract: The detection of faked identities is a major problem in security. Current memory-detection techniques cannot be used as they require prior knowledge of the respondent’s true identity. Here, we report a novel technique for detecting faked identities based on the use of unexpected questions that may be used to check the respondent identity without any prior autobiographical information. While truth-tellers respond automatically to unexpected questions, liars have to “build” and verify their responses. This lack of automaticity is reflected in the mouse movements used to record the responses as well as in the number of errors. Responses to unexpected questions are compared to responses to expected and control questions (i.e., questions to which a liar also must respond truthfully). Parameters that encode mouse movement were analyzed using machine learning classifiers and the results indicate that the mouse trajectories and errors on unexpected questions efficiently distinguish liars from truth-tellers. Furthermore, we showed that liars may be identified also when they are responding truthfully. Unexpected questions combined with the analysis of mouse movement may efficiently spot participants with faked identities without the need for any prior information on the examinee.
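The paper doesn’t include code, but the general recipe is easy to sketch: reduce each recorded mouse trajectory to a handful of scalar features, then hand labeled examples to an off-the-shelf classifier. Here is a minimal illustration in Python, assuming per-response (x, y) samples; the features (path length, deviation from the ideal straight line) are generic mouse-dynamics parameters, not the authors’ exact set.

```python
# Minimal sketch of the general approach (not the authors' actual pipeline):
# summarize each mouse trajectory into a few scalar features, then train an
# off-the-shelf classifier on labeled responses.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(points):
    """points: (n, 2) array of x, y mouse samples for one response."""
    pts = np.asarray(points, dtype=float)
    steps = np.diff(pts, axis=0)
    path_len = np.linalg.norm(steps, axis=1).sum()   # total distance traveled
    direct = np.linalg.norm(pts[-1] - pts[0])        # straight-line distance
    d = (pts[-1] - pts[0]) / (direct + 1e-9)         # unit direction, start to end
    rel = pts - pts[0]
    # maximum perpendicular deviation from the ideal straight-line path
    max_dev = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0]).max()
    return [path_len, direct, path_len / (direct + 1e-9), max_dev]

# X: one feature row per response; y: 1 = liar, 0 = truth-teller
# clf = RandomForestClassifier().fit(X, y)
```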

Boing Boing post.

Posted on May 25, 2018 at 6:25 AM • 21 Comments

Comments

asdsada May 25, 2018 7:48 AM

Oh great. More smoke and mirrors.

I’m sure that between all of: touch pads, left-handed people, bad UX, slow internet, fat fingers, tired users, mentally ill people, people who use English as a second language, old or impaired people with worse motor control, people with Parkinson’s, blind people with screen readers, privacy-savvy users, confused users, annoyed users, USB and Bluetooth mice, and so on, this will work JUST great when idiotic SV companies start using it as the sole mechanism for deciding whether you’re a good user or literally Osama Bin Laden the Second, threatening their multi-billion-dollar website with your casual use of it.

We really need more pseudoscience to join bite-mark analysis, lie detectors, people-reading, and so on. If you don’t wanna play along and do everything we say, you’re evil.

And clearly we need more crappy bots to add to PayPal, Google services, YouTube, Facebook, et al., to automatically judge people as “liars” and destroy their accounts with no ability to appeal unless they have 100k followers on social media who make a stink over it.

I’m not impaired and I’m a human (although the SV crowd loves to deny that with their shitty bots), and I often have problems with Google’s HORRIBLE image captcha on a certain website that loves to spam it every 5 minutes, shadow bans for ‘suspicious’ behavior (like linking to a better competitor website…), and such. Things such as putting a link in a comment or skipping bots are considered ‘suspicious’. But bots are everywhere on that website, posting child porn, viruses, and such, despite all this ‘advanced protection’.

It’s like the Futurama scene with more trains. Technology sucks, doesn’t stop the baddies, destroys normal users, burns resources needlessly – send in more shitty rushed technology.

de la Boetie May 25, 2018 8:12 AM

Does this mean I have to stop being born on 1st January?

Similar things have been achieved with reaction times on the Implicit Association Test.

Countermeasures to this are obvious: delay all responses, whether true or false, or make them very variable; or just bone up on your persona a bit. Interrogators will already have used this kind of test in the past for face-to-face interactions. Not that I’m advocating lying, of course; sometimes it’s justified to spread some chaff against the unethical market/corporations.
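(A sketch of the timing half of that countermeasure, purely illustrative: pad every answer to a fixed floor plus random jitter, so response latency stops discriminating between recalled and computed answers. The floor and jitter values are arbitrary assumptions.)

```python
# Illustrative countermeasure sketch: make every response take roughly the
# same (randomized) time, regardless of how long the answer took to produce.
import random
import time

def answer_with_padding(compute_answer, floor=3.0, jitter=1.0):
    start = time.monotonic()
    answer = compute_answer()          # however long this takes...
    elapsed = time.monotonic() - start
    # ...pad the remainder up to the floor, plus random jitter
    time.sleep(max(0.0, floor - elapsed) + random.uniform(0.0, jitter))
    return answer
```

Note this only flattens latency; the shape of the mouse trajectory itself would still have to be smoothed.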

As noted above, false positives are going to be such joy for the innocent.

Seth May 25, 2018 8:30 AM

@asdsada, the paper reports accuracy around 90%, so you’re probably right about the usability online. It seems like Google is especially bad at recognizing humans when the browser is blocking ads or some JavaScript.

The research was conducted with university students, and all right-handed at that, so there are some biases there. It also seems like it would be easy enough for a liar who knew they were being tested to mimic the “correct” mouse movements. Still, interesting research, it seems like a digital version of lie detector tests.
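(For perspective on that 90% figure, here is a quick base-rate calculation with purely hypothetical numbers: if only 1 user in 1,000 is actually faking an identity, almost every flag lands on an honest user.)

```python
# Base-rate arithmetic with assumed numbers: ~90% sensitivity/specificity
# and a prior of 1 actual liar per 1,000 users.
sensitivity = specificity = 0.90
prior = 1 / 1000

true_pos = sensitivity * prior                # liars correctly flagged
false_pos = (1 - specificity) * (1 - prior)   # honest users wrongly flagged
precision = true_pos / (true_pos + false_pos)
print(f"Fraction of flagged users who are actual liars: {precision:.1%}")
# -> roughly 0.9%: about 99 of every 100 flags hit an honest user.
```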

K.S. May 25, 2018 9:08 AM

“While truth-tellers respond automatically to unexpected questions”

Only foolish truth-tellers respond automatically to unexpected questions. The rest pause to consider whether such an unexpected question has to be answered at all, and the security implications of providing an answer.

Oliver May 25, 2018 9:37 AM

This is BULLSHIT of the highest grade.
Don’t believe a bit of it.

As already mentioned above, it’s all smoke and mirrors.

echo May 25, 2018 9:46 AM

@K.S.

Oh yes indeedy, especially if you are within a toxic corporate culture where standards have slipped.

Steve May 25, 2018 10:17 AM

Perhaps we should hold our presidential candidate debates using a computer interface.

This might be my name May 25, 2018 10:29 AM

And one should note that those tested were people told to pretend to lie. Whether they were actually lying or just acting is not resolved.

asdsada May 25, 2018 11:13 AM

@Seth, my harshness is not without good reason. The internet is already full of horror stories of ML handing out unappealable bans, and the tech community at large has no problem with it (“using $SOMEWEBSITE is not a human right, they can ban anyone they want on their website, you’re just entitled, free market, blablabla”). Add to that all the privacy violations, ads, dark patterns, abuses, common security holes, and the seeming impunity even for Equifax-scale fuck-ups, and it’s clear where I’m coming from.

vas pup May 25, 2018 11:28 AM

These articles address problems with AI and ML (the latter was used in this research) that confirm the skepticism of our respected bloggers:
Can AI be free of bias?
Studies have shown that artificial intelligence can reflect the very prejudices humans have tried to overcome. As AI becomes “truly ubiquitous,” rights groups have called on decision-makers to address algorithmic bias.
http://www.dw.com/en/can-ai-be-free-of-bias/a-43910804
“If we collectively decide to develop and use AI and machine learning technologies, we must acknowledge and address the risks of these technologies reinforcing or even institutionalizing inequalities and discrimination, if safeguards are not put in place,” Masse told DW.
NB: Kloumann noted that the technology could be used by “any engineer” to “evaluate their algorithms for bias” to ensure that they “don’t have a disparate impact on people” before they’re launched. But Kliegr told DW that other challenges remain, including better defining bias. “When some piece of knowledge is supported by data, but is considered ‘politically’ incorrect, is it a bias?”
This cyberwar just got real:
http://www.dw.com/en/this-cyberwar-just-got-real/a-43908697
David Petraeus, a retired US general and (some say disgraced) former intelligence chief, says the internet has created an entirely distinct domain of warfare, one which he calls “netwar.” And that’s the kind being waged by terrorists.
Then there’s another kind, and technically any hacker with enough computer skills can do it — whatever the motivation. This kind takes out electricity grids, or jams emergency phone lines with fake calls powered by artificial intelligence and machine learning technology.
As far as UK Attorney General Jeremy Wright is concerned, there should be no difference in the way those attacks are treated, compared to physical attacks, by international law.
AI or ML will have a deeper impact on our lives than cyberspace itself, certainly in the area of misinformation through fake content like photos, videos, and audio. These automated, self-training technologies may hold a lot of promise for medicine and other facets of research. But they also threaten to demolish all human trust, perhaps in the most unsuspected areas of life.

“AI can be used to falsify audio evidence and get people sent to jail, or have disagreements about contracts,” Avin told DW. “And another thing we don’t think about much is our ability to have a real-world denial of service.” Someone could create a 100 percent fake radio interview and have a state leader ‘say’ something to annoy or insult another leader, who wouldn’t know it was all a set-up. If that happened, it would be a mere hop to a Franz Ferdinand-style outbreak of war.

“Sentiments can be seen as a cause of war: Things like ‘you hurt my feelings’ or ‘you insult my president and I attack you,'” Klimburg said. “That is extremely worrying because at the end of the day, if that happens, there is no such thing as free speech, and there’s no such thing as democracy.”

And then there’s the thorny issue of knowing exactly who just attacked you. Even if you can recognize a cyber attack as an act of misinformation, and you’ve spotted the fake, you may not be able to attribute blame, because the lines between the actions of governments and those of individuals are starting to blur. In his Chatham House speech, Wright said this was one of the biggest challenges for international law: “Without clearly identifying who is responsible for hostile cyber activity, it is impossible to take responsible action in response.”

“You can get to the point where — pixel to pixel or waveform to waveform — there is just no difference, there is no statistical difference between the real world and the forgery,” says Avin, “even when you take context into account, and if that were the case, then an AI won’t help you because there’s no pattern for it to detect.” At that point, says Avin, you need to move to cryptographic measures for guaranteeing the authenticity and the source of photos, for example, “and that means we need different hardware out there and different international standards.”
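(A minimal sketch of the “cryptographic measures” Avin mentions: sign media at the source, so anyone holding the public key can verify it was not altered afterward. This uses the Python cryptography package; the in-camera key generation and the stand-in image bytes are assumptions for illustration.)

```python
# Sign a photo at the source; verification fails if a single byte changes.
# Key handling here is deliberately naive; real systems need secure hardware
# key storage, certificate chains, and agreed standards.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in reality: generated in camera hardware
public_key = private_key.public_key()

photo = b"<raw image bytes>"                # stand-in for actual file contents
signature = private_key.sign(photo)

# Raises cryptography.exceptions.InvalidSignature if the photo was altered
public_key.verify(signature, photo)
```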

nwildner May 25, 2018 12:55 PM

My dad makes circles with the mouse to “speed up” pages and loading. Should I be worried? Is he a fake? LOL

echo May 25, 2018 1:29 PM

@vas pup

The UK Attorney General is overlooking how psychological abuse and mental healthcare remain below parity with physical ailments.

I am fairly confident that a competent AI let loose on a database of public policy could be an effective corruption-detection tool, especially when coupled with a “full data capture system” recording every written and verbal communication between citizens and the state, and behind-closed-doors meetings by state sector workers concerning issues of public interest and the welfare of citizens. (Some precedent for this exists: NHS trust contract negotiations are audio-recorded and verbatim transcripts made, because the boss class of the state doesn’t even trust the boss class of the state to play fair. The legal framework which supports this use is the Police and Criminal Evidence Act, which, if you read the fine print, also authorises recording by citizens.)

A number of UK healthcare protocols are called “gold standard” on the surface, when in reality the protocols are a political fix to protect job titles, cut costs (a.k.a. cheapen the product even if it harms X percent of patients), and in some cases continue to obscure decades of scientific fraud (a.k.a. doctors making things up in closed meetings without minutes, or promoting known-bad research because they wrote it), patient abuse, and excuses to ruin the careers of doctors who threaten their rice bowl.

Alyer Babtu May 25, 2018 1:36 PM

Perhaps this new technology can solve the problem discussed in the Shoplifting thread, e.g. require customers to answer the question “Did we steal something today?” with a mouse-click.

Jeremy May 25, 2018 2:52 PM

From Abstract: “Furthermore, we showed that liars may be identified also when they are responding truthfully.”

This seems to illuminate some worrisome unstated assumptions. They identify “liars”. But what qualifies someone as a “liar”?

Does “liar” mean someone who is lying right now? Obviously not, because “liars may be identified also when they are responding truthfully,” which would be a contradictory case under such a definition.

Does “liar” mean someone who would ever lie about anything? Obviously not, because that would presumably include 100% of their subjects.

What precisely is being identified, then?

From Paper: “An example of an expected question would be one’s date of birth, and a corresponding unexpected question would be the zodiac corresponding to the date of birth. While truth-tellers easily verify questions involving the zodiac, liars do not have the zodiac immediately available, and they have to compute it for a correct verification.”
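(The computation the paper has in mind is trivial for a machine, of course. A throwaway lookup, using approximate Western sign boundaries, which vary slightly by source:)

```python
# Derive a zodiac sign from a date of birth: a lookup, not a memory.
# Boundary dates are approximate; exact cusp dates vary by source.
import datetime

SIGNS = [  # (month, last day of the sign within that month, sign name)
    (1, 19, "Capricorn"), (2, 18, "Aquarius"), (3, 20, "Pisces"),
    (4, 19, "Aries"), (5, 20, "Taurus"), (6, 20, "Gemini"),
    (7, 22, "Cancer"), (8, 22, "Leo"), (9, 22, "Virgo"),
    (10, 22, "Libra"), (11, 21, "Scorpio"), (12, 21, "Sagittarius"),
]

def zodiac(dob: datetime.date) -> str:
    _, last_day, sign = SIGNS[dob.month - 1]
    return sign if dob.day <= last_day else SIGNS[dob.month % 12][2]

print(zodiac(datetime.date(1970, 1, 1)))  # Capricorn
```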

So if I don’t know my frickin zodiac sign off the top of my head, that makes me a liar? Egad. The only reason I ever learned my sign at all is because it was relevant to one video game I played two decades ago.

I don’t even know my own age without computing it anymore; it keeps changing, and I don’t use it often enough to keep it fresh.

Then they describe how they assigned false identities to participants and required them to rehearse them, and “unexpected questions” were ones whose answers were not provided in their prep materials. So whatever their experimental accuracy rate is, that’s for the case where you know in advance exactly what questions were or were not rehearsed by the liars. I bet that’s a scenario that comes up frequently in real life!

albert May 25, 2018 3:30 PM

@Seth,
“…Still, interesting research, it seems like a digital version of lie detector tests….”

And like lie detector tests, it will remain forever in the ‘can’t be used in a court of law’ category.

What is ‘interesting’ is that this sort of foolishness continues. ‘Depressing’ may be more appropriate.

@Steve,
“…Perhaps we should hold our presidential candidate debates using a computer interface….”

Maybe we’ll get younger candidates:)

@Jeremy,
“…I bet that’s a scenario that comes up frequently in real life!…”

It does in TV interviews with movie stars and politicians. Okay, that’s not ‘real life’, but…

. .. . .. — ….

echo May 25, 2018 4:12 PM

@Jeremy

As they say “Take a theoretically perfect ball and roll it in a theoretically straight line.”

Coyne Tibbets May 26, 2018 12:02 PM

So, conceding in advance that my motions probably wouldn’t be that violent, does this mean I would be ranked as a liar because I have Parkinson’s and don’t move the mouse quickly and reliably to where I want it to go?

justinacolmena May 28, 2018 8:56 PM

by Merylin Monaro, Luciano Gamberini, and Giuseppe Sartori.

They say two’s company and three’s a crowd, but this crowd is getting to be quite a mob.

While truth-tellers respond automatically to unexpected questions, liars have to “build” and verify their responses.

No. Just no. Thinking before you speak is not an indication of lying. Slick Willie responded very adroitly off the cuff to unexpected questions, but he was not telling the truth.

casey May 29, 2018 12:52 PM

There is nothing wrong with conducting this experiment, and it looks like an earnest effort, but wooof. The premise is as thin as you can get. A bouncer asks a guy with a fake ID, “What year did you start high school?” If he answers straight away, you might reason that a person would only remember that if they had experienced it. If not, you cannot really be sure that this is evidence the ID is fake, because most people don’t comprehensively memorize odd bits of data.

Andre Amorim May 30, 2018 12:47 AM

Bruce,
Quick question: is the polygraph still widely used in the US? Because using electronic devices in criminal investigations reminds me of this scene in the TV series “Lie to Me”:

Lie To Me’s Take on the Hand-held Lie Detector
AntiPolygraph
Published on 8 Mar 2009
https://www.youtube.com/watch?v=oEZTt_Ciiws
