Entries Tagged "behavioral detection"


Detecting Phishing Emails

Research paper: Rick Wash, “How Experts Detect Phishing Scam Emails”:

Abstract: Phishing scam emails are emails that pretend to be something they are not in order to get the recipient of the email to undertake some action they normally would not. While technical protections against phishing reduce the number of phishing emails received, they are not perfect and phishing remains one of the largest sources of security risk in technology and communication systems. To better understand the cognitive process that end users can use to identify phishing messages, I interviewed 21 IT experts about instances where they successfully identified emails as phishing in their own inboxes. IT experts naturally follow a three-stage process for identifying phishing emails. In the first stage, the email recipient tries to make sense of the email, and understand how it relates to other things in their life. As they do this, they notice discrepancies: little things that are “off” about the email. As the recipient notices more discrepancies, they feel a need for an alternative explanation for the email. At some point, some feature of the email — usually, the presence of a link requesting an action — triggers them to recognize that phishing is a possible alternative explanation. At this point, they become suspicious (stage two) and investigate the email by looking for technical details that can conclusively identify the email as phishing. Once they find such information, then they move to stage three and deal with the email by deleting it or reporting it. I discuss ways this process can fail, and implications for improving training of end users about phishing.
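Wash's three stages (sense-making, suspicion, dealing) can be sketched as a toy state machine. The stage transitions follow the abstract; the discrepancy threshold and trigger logic are illustrative assumptions, not parameters from the paper.

```python
# Toy sketch of Wash's three-stage phishing-identification process.
# The threshold of 2 discrepancies is an illustrative assumption.

SENSEMAKING, SUSPICION, DEALING = "sensemaking", "suspicion", "dealing"

def next_stage(stage, discrepancies, has_action_link, confirmed_phish):
    """Advance the recipient's cognitive stage for one email."""
    if stage == SENSEMAKING:
        # Enough "off" details plus a link requesting an action triggers
        # phishing as an alternative explanation.
        if discrepancies >= 2 and has_action_link:
            return SUSPICION
    elif stage == SUSPICION:
        # Technical details (headers, link targets) conclusively
        # identify the email as phishing.
        if confirmed_phish:
            return DEALING
    return stage

stage = SENSEMAKING
stage = next_stage(stage, discrepancies=3, has_action_link=True, confirmed_phish=False)
assert stage == SUSPICION
stage = next_stage(stage, discrepancies=3, has_action_link=True, confirmed_phish=True)
assert stage == DEALING
```

The failure modes Wash discusses map onto the sketch directly: if the recipient never accumulates enough discrepancies, or nothing triggers the alternative explanation, they stay in stage one and act on the email.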

Posted on November 6, 2020 at 6:28 AM

Detecting Shoplifting Behavior

This system claims to detect suspicious behavior that indicates shoplifting:

Vaak, a Japanese startup, has developed artificial intelligence software that hunts for potential shoplifters, using footage from security cameras for fidgeting, restlessness and other potentially suspicious body language.

The article has no detail or analysis, so we don’t know how well it works. But this kind of thing is surely the future of video surveillance.

Posted on March 7, 2019 at 1:48 PM

Detecting Fake Videos

This story nicely illustrates the arms race between technologies to create fake videos and technologies to detect fake videos:

These fakes, while convincing if you watch a few seconds on a phone screen, aren’t perfect (yet). They contain tells, like creepily ever-open eyes, from flaws in their creation process. In looking into DeepFake’s guts, Lyu realized that the images that the program learned from didn’t include many with closed eyes (after all, you wouldn’t keep a selfie where you were blinking, would you?). “This becomes a bias,” he says. The neural network doesn’t get blinking. Programs also might miss other “physiological signals intrinsic to human beings,” says Lyu’s paper on the phenomenon, such as breathing at a normal rate, or having a pulse. (Autonomic signs of constant existential distress are not listed.) While this research focused specifically on videos created with this particular software, it is a truth universally acknowledged that even a large set of snapshots might not adequately capture the physical human experience, and so any software trained on those images may be found lacking.

Lyu’s blinking revelation revealed a lot of fakes. But a few weeks after his team put a draft of their paper online, they got anonymous emails with links to deeply faked YouTube videos whose stars opened and closed their eyes more normally. The fake content creators had evolved.
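Blink analysis of the kind Lyu describes is commonly built on the eye aspect ratio (EAR) computed from facial landmarks: the ratio collapses when the eye closes. A minimal sketch, assuming six (x, y) landmarks per eye in the usual ordering; the 0.2 threshold is a common heuristic, not a value from Lyu's paper.

```python
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, ordered
    corner, top-left, top-right, corner, bottom-right, bottom-left."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical eyelid distances over the horizontal eye width.
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

def count_blinks(ear_series, threshold=0.2):
    """Count transitions where the EAR drops below threshold."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= threshold:
            closed = False
    return blinks

# A video whose subject's EAR never dips (no blinks over many seconds)
# is a possible tell -- which is exactly the signal the fakers then learned
# to reproduce.
assert count_blinks([0.3, 0.31, 0.29, 0.3]) == 0
assert count_blinks([0.3, 0.1, 0.1, 0.3, 0.1, 0.3]) == 2
```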

I don’t know who will win this arms race, if there ever will be a winner. But the problem with fake videos goes deeper: they affect people even if they are later told that they are fake, and there always will be people that will believe they are real, despite any evidence to the contrary.

Posted on October 26, 2018 at 9:01 AM

Android Ad-Fraud Scheme

BuzzFeed is reporting on a scheme where fraudsters buy legitimate Android apps, track users’ behavior in order to mimic it in a way that evades bot detectors, and then use bots to perpetrate an ad-fraud scheme.

After being provided with a list of the apps and websites connected to the scheme, Google investigated and found that dozens of the apps used its mobile advertising network. Its independent analysis confirmed the presence of a botnet driving traffic to websites and apps in the scheme. Google has removed more than 30 apps from the Play store, and terminated multiple publisher accounts with its ad networks. Google said that prior to being contacted by BuzzFeed News it had previously removed 10 apps in the scheme and blocked many of the websites. It continues to investigate, and published a blog post to detail its findings.

The company estimates this operation stole close to $10 million from advertisers who used Google’s ad network to place ads in the affected websites and apps. It said the vast majority of ads being placed in these apps and websites came via other major ad networks.
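The reason the fraudsters bothered to record real users' behavior is that naive bots are easy to flag: human interaction timing is irregular, while scripted events tend to be nearly periodic. A minimal sketch of one such heuristic, the coefficient of variation of inter-event times (the 0.1 cutoff is an illustrative assumption, not any ad network's actual rule):

```python
import statistics

def looks_scripted(event_times, cv_cutoff=0.1):
    """Flag a session whose inter-event gaps are suspiciously regular.
    event_times: ascending timestamps in seconds."""
    gaps = [b - a for a, b in zip(event_times, event_times[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_cutoff

# A bot clicking every 5.0 seconds exactly gets flagged;
# jittery human timing does not.
assert looks_scripted([0, 5.0, 10.0, 15.0, 20.0])
assert not looks_scripted([0, 3.1, 9.8, 11.2, 19.5])
```

Replaying recorded human sessions defeats exactly this kind of check, which is why the scheme tracked real users first.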

Lots of details in both the BuzzFeed and the Google links.

The Internet advertising industry is rife with fraud, at all levels. This is just one scheme among many.

Posted on October 25, 2018 at 6:49 AM

People Who Need to Pee Are Better at Lying

No, really.

Abstract: The Inhibitory-Spillover-Effect (ISE) on a deception task was investigated. The ISE occurs when performance in one self-control task facilitates performance in another (simultaneously conducted) self-control task. Deceiving requires increased access to inhibitory control. We hypothesized that inducing liars to control urination urgency (physical inhibition) would facilitate control during deceptive interviews (cognitive inhibition). Participants drank small (low-control) or large (high-control) amounts of water. Next, they lied or told the truth to an interviewer. Third-party observers assessed the presence of behavioral cues and made true/lie judgments. In the high-control, but not the low-control condition, liars displayed significantly fewer behavioral cues to deception, more behavioral cues signaling truth, and provided longer and more complex accounts than truth-tellers. Accuracy detecting liars in the high-control condition was significantly impaired; observers revealed bias toward perceiving liars as truth-tellers. The ISE can operate in complex behaviors. Acts of deception can be facilitated by covert manipulations of self-control.

News article.

Posted on September 25, 2015 at 5:54 AM

TSA Behavioral Detection Statistics

Interesting data from the U.S. Government Accountability Office:

But congressional auditors have questions about other efficiencies as well, like having 3,000 “behavior detection” officers assigned to question passengers. The officers sidetracked 50,000 passengers in 2010, resulting in the arrests of 300 passengers, the GAO found. None turned out to be terrorists.

Yet in the same year, behavior detection teams apparently let at least 16 individuals allegedly involved in six subsequent terror plots slip through eight different airports. GAO said the individuals moved through protected airports on at least 23 different occasions.

I don’t believe the second paragraph. We haven’t had six terror plots between 2010 and today. And even if we did, how would the auditors know? But I’m sure the first paragraph is correct: the behavioral detection program is 0% effective at preventing terrorism.
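The arithmetic behind that 0% is the base-rate problem: with essentially no terrorists among hundreds of millions of annual passengers, even a startlingly accurate behavioral screen produces almost nothing but false positives. An illustrative calculation (all the accuracy figures are hypothetical, chosen to be far more generous than any real program):

```python
# Base-rate illustration with hypothetical numbers.
passengers = 700_000_000     # rough annual US enplanements
terrorists = 10              # generous hypothetical
hit_rate = 0.99              # P(flag | terrorist) -- implausibly good
false_positive_rate = 0.001  # P(flag | innocent) -- implausibly low

true_hits = terrorists * hit_rate
false_alarms = (passengers - terrorists) * false_positive_rate
p_terrorist_given_flag = true_hits / (true_hits + false_alarms)

# Even with these fantasy accuracy numbers, a flagged passenger is
# almost certainly innocent: under 1 in 10,000.
assert p_terrorist_given_flag < 0.0001
```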

The rest of the article is pretty depressing. The TSA refuses to back down on any of its security theater measures. At the same time, its budget is being cut and more people are flying. The result: longer waiting times at security.

Posted on April 20, 2012 at 6:19 AM

TSA Administrator John Pistole on the Future of Airport Security

There’s a lot here that’s worth watching. He talks about expanding behavioral detection. He talks about less screening for “trusted travelers.”

So, what do the next 10 years hold for transportation security? I believe it begins with TSA’s continued movement toward developing and implementing a more risk-based security system, a phrase you may have heard the last few months. When I talk about risk-based, intelligence-driven security it’s important to note that this is not about a specific program per se, or a limited initiative being evaluated at a handful of airports.

On the contrary, risk-based security is much more comprehensive. It means moving further away from what may have seemed like a one-size-fits-all approach to security. It means focusing our agency’s resources on those we know the least about, and using intelligence in better ways to inform the screening process.


Another aspect of our risk-based, intelligence-driven security system is the trusted traveler proof-of-concept that will begin this fall. As part of this proof-of-concept, we are looking at how to expedite the screening process for travelers we know and trust the most, and travelers who are willing to voluntarily share more information with us before they travel. Doing so will then allow our officers to more effectively prioritize screening and focus our resources on those passengers we know the least about and those of course on watch lists.


We’re also working with airlines already testing a known-crewmember concept, and we are evaluating changes to the security screening process for children 12-and-under. Both of these concepts reflect the principles of risk-based security, considering that airline pilots are among our country’s most trusted travelers and the preponderance of intelligence indicates that children 12-and-under pose little risk to aviation security.

Finally, we are also evaluating the value of expanding TSA’s behavior detection program, to help our officers identify people exhibiting signs that may indicate a potential threat. This reflects an expansion of the agency’s existing SPOT program, which was developed by adapting global best practices. This effort also includes additional, specialized training for our organization’s Behavior Detection Officers and is currently being tested at Boston’s Logan International airport, where the SPOT program was first introduced.

Posted on September 14, 2011 at 6:55 AM

Interview with TSA Administrator John Pistole

He’s more realistic than one normally hears:

So if they get through all those defenses, they get to Reagan [National Airport] over here, and they’ve got an underwear bomb, they got a body cavity bomb — what’s reasonable to expect TSA to do? Hopefully our behavior detection people will see somebody sweating, or they’re dancing on their shoes or something, or they’re fiddling with something. Our explosives specialists, they’ll do something – they do hand swabs at random, unpredictably. If that doesn’t work then they go through (the enhanced scanner). And these machines give the best opportunity to detect a non-metallic device, but they’re not foolproof.


We’re not in the risk elimination business. The only way you can eliminate car accidents from happening is by not driving. OK, that’s not acceptable. The only way you can eliminate the risk of planes blowing up is nobody flies.

He still ducks some of the hard questions.

I am reminded of my own interview from 2007 with then-TSA Administrator Kip Hawley.

Posted on December 22, 2010 at 12:27 PM

Behavioral Profiling at Airports

There’s a long article in Nature on the practice:

It remains unclear what the officers found anomalous about George’s behaviour, and why he was detained. The TSA’s parent agency, the Department of Homeland Security (DHS), has declined to comment on his case because it is the subject of a federal lawsuit that was filed on George’s behalf in February by the American Civil Liberties Union. But the incident has brought renewed attention to a burgeoning controversy: is it possible to know whether people are being deceptive, or planning hostile acts, just by observing them?

Some people seem to think so. At London’s Heathrow Airport, for example, the UK government is deploying behaviour-detection officers in a trial modelled in part on SPOT. And in the United States, the DHS is pursuing a programme that would use sensors to look at nonverbal behaviours, and thereby spot terrorists as they walk through a corridor. The US Department of Defense and intelligence agencies have expressed interest in similar ideas.

Yet a growing number of researchers are dubious, not just about the projects themselves, but about the science on which they are based. “Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin,” noted a 2007 report from a committee of credibility-assessment experts who reviewed research on portal screening.

“No scientific evidence exists to support the detection or inference of future behaviour, including intent,” declares a 2008 report prepared by the JASON defence advisory group. And the TSA had no business deploying SPOT across the nation’s airports “without first validating the scientific basis for identifying suspicious passengers in an airport environment”, stated a two-year review of the programme released on 20 May by the Government Accountability Office (GAO), the investigative arm of the US Congress.

Commentary from the MindHacks blog.

Also, the GAO has published a report on the U.S. DHS’s SPOT program: “Aviation Security: Efforts to Validate TSA’s Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges.”

As of March 2010, TSA deployed about 3,000 BDOs at an annual cost of about $212 million; this force increased almost fifteen-fold between March 2007 and July 2009. BDOs have been selectively deployed to 161 of the 457 TSA-regulated airports in the United States at which passengers and their property are subject to TSA-mandated screening procedures.

It seems pretty clear that the program only catches criminals, and no terrorists. You’d think there would be more important things to spend $200 million a year on.

EDITED TO ADD (6/14): In the comments, a couple of people asked how this compares with the Israeli model of airport security — concentrate on the person — and the idea that trained officers notice if someone is acting “hinky”: both things that I have written favorably about.

The difference is the experience of the detecting officer and the amount of time they spend with each person. If you read about the programs described above, they’re supposed to “spot terrorists as they walk through a corridor,” or possibly after a few questions. That’s very different from what happens when you check into a flight at Ben Gurion Airport.

The problem with fast detection programs is that they don’t work, and the problem with the Israeli security model is that it doesn’t scale.

Posted on June 14, 2010 at 6:23 AM

