Entries Tagged "profiling"


"Hinky" in Action

In Beyond Fear I wrote about trained officials recognizing “hinky” and how it differs from profiling:

Ressam had to clear customs before boarding the ferry. He had fake ID, in the name of Benni Antoine Noris, and the computer cleared him based on this ID. He was allowed to go through after a routine check of his car’s trunk, even though he was wanted by the Canadian police. On the other side of the Strait of Juan de Fuca, at Port Angeles, Washington, Ressam was approached by U.S. customs agent Diana Dean, who asked some routine questions and then decided that he looked suspicious. He was fidgeting, sweaty, and jittery. He avoided eye contact. In Dean’s own words, he was acting “hinky.” More questioning — there was no one else crossing the border, so two other agents got involved — and more hinky behavior. Ressam’s car was eventually searched, and he was finally discovered and captured. It wasn’t any one thing that tipped Dean off; it was everything encompassed in the slang term “hinky.” But the system worked. The reason there wasn’t a bombing at LAX around Christmas in 1999 was because a knowledgeable person was in charge of security and paying attention.

I wrote about this again in 2007:

The key difference is expertise. People trained to be alert for something hinky will do much better than any profiler, but people who have no idea what to look for will do no better than random.

Here’s another story from last year:

On April 28, 2014, Yusuf showed up alone at the Minneapolis Passport Agency and applied for an expedited passport. He wanted to go “sightseeing” in Istanbul, where he was planning to meet someone he recently connected with on Facebook, he allegedly told the passport specialist.

“It’s a guy, just a friend,” he told the specialist, according to court documents.

But when the specialist pressed him for more information about his “friend” in Istanbul and his plans while there, Yusuf couldn’t offer any details, the documents allege.

“[He] became visibly nervous, more soft-spoken, and began to avoid eye contact,” the documents say. “Yusuf did not appear excited or happy to be traveling to Turkey for vacation.”

In fact, the passport specialist “found his interaction with Yusuf so unusual that he contacted his supervisor who, in turn, alerted the FBI to Yusuf’s travel,” according to the court documents.

This is what works. Not profiling. Not bulk surveillance. Not defending against any particular tactics or targets. In the end, this is what keeps us safe.

Posted on April 22, 2015 at 8:40 AM

ISIS Threatens US with Terrorism

They’re openly mocking our profiling.

But in several telephone conversations with a Reuters reporter over the past few months, Islamic State fighters had indicated that their leader, Iraqi Abu Bakr al-Baghdadi, had several surprises in store for the West.

They hinted that attacks on American interests or even U.S. soil were possible through sleeper cells in Europe and the United States.

“The West are idiots and fools. They think we are waiting for them to give us visas to go and attack them or that we will attack with our beards or even Islamic outfits,” said one.

“They think they can distinguish us these days — they are fools and more than that they don’t know we can play their game in intelligence. They infiltrated us with those who pretend to be Muslims and we have also penetrated them with those who look like them.”

I am reminded of my debate on airport profiling with Sam Harris, particularly my initial response to his writings.

Posted on August 29, 2014 at 6:08 AM

Web Activity Used in Court to Portray State of Mind

I don’t care about the case, but look at this:

“Among the details police have released is that Harris and his wife, Leanna, told them they conducted Internet searches on how hot a car needed to be to kill a child. Stoddard testified Thursday that Ross Harris had visited a Reddit page called “child-free” and read four articles. He also did an Internet search on how to survive in prison, Stoddard said.

“Also, five days before Cooper died, Ross Harris twice viewed a sort of homemade public service announcement in which a veterinarian demonstrates on video the dangers of leaving someone or something inside a hot car.”

Stoddard is a police detective. It seems that they know about his web browsing because they seized and searched his computer:

…investigators confiscated Harris’ work computer at Home Depot following his arrest and discovered an Internet search about how long it would take for an animal to die in a hot car.

Stoddard also testified that Harris was “sexting” — is this a word we use in court now? — with several women on the day of his son’s death, and sent explicit pictures to one of them. I assume he knows that by looking at Harris’s message history.

A bunch of this would not be admissible in trial, but this was a probable-cause hearing, and the rules are different for those. CNN writes: “a prosecutor insisted that the testimony helped portray the defendant’s state of mind and spoke to the negligence angle and helped establish motive.”

This case aside, is there anyone reading this whose e-mails, text messages, and web searches couldn’t be cherry-picked to portray any state of mind a prosecutor might want to portray? (Qu’on me donne six lignes écrites de la main du plus honnête homme, j’y trouverai de quoi le faire pendre. “Give me six lines written by the hand of the most honest man, and I will find something in them to hang him.” — Cardinal Richelieu.)

Posted on July 4, 2014 at 6:24 AM

NSA Targets the Privacy-Conscious for Surveillance

Jake Appelbaum et al. are reporting on XKEYSCORE selection rules that target users of Tor, Tails, and similar services, and even people who merely visit those projects’ websites. This isn’t just metadata; this is “full take” content that’s stored forever.

This code demonstrates the ease with which an XKeyscore rule can analyze the full content of intercepted connections. The fingerprint first checks every message using the “email_address” function to see if the message is to or from “bridges@torproject.org”. Next, if the address matched, it uses the “email_body” function to search the full content of the email for a particular piece of text – in this case, “https://bridges.torproject.org/”. If the “email_body” function finds what it is looking for, it passes the full email text to a C++ program which extracts the bridge addresses and stores them in a database.


It is interesting to note that this rule specifically avoids fingerprinting users believed to be located in Five Eyes countries, while other rules make no such distinction. For instance, the following fingerprint targets users visiting the Tails and Linux Journal websites, or performing certain web searches related to Tails, and makes no distinction about the country of the user.


There are also rules that target users of numerous other privacy-focused internet services, including HotSpotShield, FreeNet, Centurian, FreeProxies.org, MegaProxy, privacy.li and an anonymous email service called MixMinion as well as its predecessor MixMaster. The appid rule for MixMinion is extremely broad as it matches all traffic to or from the IP address, a server located on the MIT campus.
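The two-stage match described in the first quoted rule (address check, then body search, then extraction) can be sketched in Python. This is a toy model: `email_address`, `email_body`, and the extractor below are hypothetical stand-ins, since XKeyscore’s actual internals aren’t public.

```python
import re

# Hypothetical stand-ins for the XKeyscore primitives described above.
def email_address(message):
    """Return the set of addresses the message is to or from."""
    return {message["from"], *message["to"]}

def email_body(message, needle):
    """Search the full message content for a particular piece of text."""
    return needle in message["body"]

def extract_bridges(body):
    """Stand-in for the C++ extractor: pull bridge addresses from the text."""
    return re.findall(r"\d{1,3}(?:\.\d{1,3}){3}:\d+", body)

bridge_db = []

def fingerprint(message):
    # Stage 1: is the message to or from the bridge-request address?
    if "bridges@torproject.org" not in email_address(message):
        return False
    # Stage 2: does the body contain the marker text?
    if not email_body(message, "https://bridges.torproject.org/"):
        return False
    # Matched: hand the full text to the extractor and store the result.
    bridge_db.extend(extract_bridges(message["body"]))
    return True

msg = {
    "from": "bridges@torproject.org",
    "to": ["user@example.com"],
    "body": "Here are your bridges (https://bridges.torproject.org/):\n"
            "10.0.0.1:443\n10.0.0.2:9001\n",
}
assert fingerprint(msg)
assert bridge_db == ["10.0.0.1:443", "10.0.0.2:9001"]
```

The point of the sketch is how cheap this is to express: two predicate checks gate a full-content extraction, and everything that matches is retained.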

It’s hard to tell how extensive this is. It’s possible that anyone who clicked on this link — with the embedded torproject.org URL above — is currently being monitored by the NSA. It’s possible that this only will happen to people who receive the link in e-mail, which will mean every Crypto-Gram subscriber in a couple of weeks. And I don’t know what else the NSA harvests about people who it selects in this manner.

Whatever the case, this is very disturbing.

EDITED TO ADD (7/3): The BoingBoing story says that this was first published on Tagesschau. Can someone who can read German please figure out where this originated?

And, since Cory said it, I do not believe that this came from the Snowden documents. I also don’t believe the TAO catalog came from the Snowden documents. I think there’s a second leaker out there.

EDITED TO ADD (7/3): More news stories. Thread on Reddit. I don’t expect this to get much coverage in the US mainstream media.

EDITED TO ADD (7/3): Here is the code. In part:

These variables define terms and websites relating to the TAILs (The Amnesic
Incognito Live System) software program, a comsec mechanism advocated by
extremists on extremist forums.

$TAILS_terms=word('tails' or 'Amnesic Incognito Live System') and word('linux'
or ' USB ' or ' CD ' or 'secure desktop' or ' IRC ' or 'truecrypt' or '
tor ');
$TAILS_websites=('tails.boum.org/') or ('linuxjournal.com/content/linux*');

This fingerprint identifies users searching for the TAILs (The Amnesic
Incognito Live System) software program, viewing documents relating to TAILs,
or viewing websites that detail TAILs.
fingerprint('documents/comsec/tails_doc') or web_search($TAILS_terms) or
url($TAILS_websites) or html_title($TAILS_websites);

Hacker News and Slashdot threads. ArsTechnica and Wired articles.

EDITED TO ADD (7/4): EFF points out that it is illegal to target someone for surveillance solely based on their reading:

The idea that it is suspicious to install, or even simply want to learn more about, tools that might help to protect your privacy and security underlies these definitions — and it’s a problem. Everyone needs privacy and security, online and off. It isn’t suspicious to buy curtains for your home or lock your front door. So merely reading about curtains certainly shouldn’t qualify you for extra scrutiny.

Even the U.S. Foreign Intelligence Surveillance Court recognizes this, as the FISA prohibits targeting people or conducting investigations based solely on activities protected by the First Amendment. Regardless of whether the NSA is relying on FISA to authorize this activity or conducting the spying overseas, it is deeply problematic.

Posted on July 3, 2014 at 11:01 AM

Another Perspective on the Value of Privacy

A philosophical perspective:

But while Descartes’s overall view has been rightly rejected, there is something profoundly right about the connection between privacy and the self, something that recent events should cause us to appreciate. What is right about it, in my view, is that to be an autonomous person is to be capable of having privileged access (in the two senses defined above) to information about your psychological profile: your hopes, dreams, beliefs and fears. A capacity for privacy is a necessary condition of autonomous personhood.

To get a sense of what I mean, imagine that I could telepathically read all your conscious and unconscious thoughts and feelings — I could know about them in as much detail as you know about them yourself — and further, that you could not, in any way, control my access. You don’t, in other words, share your thoughts with me; I take them. The power I would have over you would of course be immense. Not only could you not hide from me, I would know instantly a great amount about how the outside world affects you, what scares you, what makes you act in the ways you do. And that means I could not only know what you think, I could to a large extent control what you do.

That is the political worry about the loss of privacy: it threatens a loss of freedom. And the worry, of course, is not merely theoretical. Targeted ad programs, like Google’s, which track your Internet searches for the purpose of sending you ads that reflect your interests, can create deeply complex psychological profiles — especially when one conducts searches for emotional or personal advice information: Am I gay? What is terrorism? What is atheism? If the government or some entity should request the identity of the person making these searches for national security purposes, we’d be on the way to having a real-world version of our thought experiment.

But the loss of privacy doesn’t just threaten political freedom. Return for a moment to our thought experiment where I telepathically know all your thoughts whether you like it or not. From my perspective — the perspective of the knower — your existence as a distinct person would begin to shrink. Our relationship would be so lopsided that there might cease to be, at least to me, anything subjective about you. As I learn what reactions you will have to stimuli, why you do what you do, you will become like any other object to be manipulated. You would be, as we say, dehumanized.

Posted on July 9, 2013 at 6:24 AM

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system is wrong, it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance of someone who tests positive actually being a sociopath is only about 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive, but so will 10% of the 960 non-sociopaths.) You have to postulate a test with an amazing 99% accuracy — only a 1% false positive rate — even to have an 80% chance of someone testing positive actually being a sociopath.
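The arithmetic is a direct application of Bayes’ theorem and easy to check; this minimal sketch uses the assumed 4% base rate and the two accuracy figures from the paragraph above:

```python
def p_sociopath_given_positive(base_rate, sensitivity, false_positive_rate):
    """Bayes' theorem: P(sociopath | positive test)."""
    true_pos = base_rate * sensitivity            # sociopaths who test positive
    false_pos = (1 - base_rate) * false_positive_rate  # non-sociopaths who do too
    return true_pos / (true_pos + false_pos)

# A test that's 90% accurate both ways, with a 4% base rate:
print(f"{p_sociopath_given_positive(0.04, 0.90, 0.10):.0%}")  # 27%

# Even at 99% accuracy, a positive result is only 80% reliable:
print(f"{p_sociopath_given_positive(0.04, 0.99, 0.01):.0%}")  # 80%
```

The driver is the denominator: the rarer the condition, the more the false positives from the large "normal" population swamp the true positives.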

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM

The Importance of Security Engineering

In May, neuroscientist and popular author Sam Harris and I debated the issue of profiling Muslims at airport security. We each wrote essays, then went back and forth on the issue. I don’t recommend reading the entire discussion; we spent 14,000 words talking past each other. But what’s interesting is how our debate illustrates the differences between a security engineer and an intelligent layman. Harris was uninterested in the detailed analysis required to understand a security system and unwilling to accept that security engineering is a specialized discipline with a body of knowledge and relevant expertise. He trusted his intuition.

Many people have researched how intuition fails us in security: Paul Slovic and Bill Burns on risk perception, Daniel Kahneman on cognitive biases in general, Rick Wash on folk computer-security models. I’ve written about the psychology of security, and Daniel Gardner has written more. Basically, our intuitions are based on things like antiquated fight-or-flight models, and these increasingly fail in our technological world.

This problem isn’t unique to computer security, or even security in general. But this misperception about security matters now more than it ever has. We’re no longer asking people to make security choices only for themselves and their businesses; we need them to make security choices as a matter of public policy. And getting it wrong has increasingly bad consequences.

Computers and the Internet have collided with public policy. The entertainment industry wants to enforce copyright. Internet companies want to continue freely spying on users. Law enforcement wants its own laws imposed on the Internet: laws that make surveillance easier, prohibit anonymity, mandate the removal of objectionable images and texts, and require ISPs to retain data about their customers’ Internet activities. Militaries want laws regarding cyber weapons, laws enabling wholesale surveillance, and laws mandating an Internet kill switch. “Security” is now a catch-all excuse for all sorts of authoritarianism, as well as for boondoggles and corporate profiteering.

Cory Doctorow recently spoke about the coming war on general-purpose computing. I talked about it in terms of the entertainment industry and Jonathan Zittrain discussed it more generally, but Doctorow sees it as a much broader issue. Preventing people from copying digital files is only the first skirmish; just wait until the DEA wants to prevent chemical printers from making certain drugs, or the FBI wants to prevent 3D printers from making guns.

I’m not here to debate the merits of any of these policies, but instead to point out that people will debate them. Elected officials will be expected to understand security implications, both good and bad, and will make laws based on that understanding. And if they aren’t able to understand security engineering, or even accept that there is such a thing, the result will be ineffective and harmful policies.

So what do we do? We need to establish security engineering as a valid profession in the minds of the public and policy makers. This is less about certifications and (heaven forbid) licensing, and more about perception — and cultivating a security mindset. Amateurs produce amateur security, which costs more in dollars, time, liberty, and dignity while giving us less — or even no — security. We need everyone to know that.

We also need to engage with real-world security problems, and apply our expertise to the variety of technical and socio-technical systems that affect broader society. Everything involves computers, and almost everything involves the Internet. More and more, computer security is security.

Finally, and perhaps most importantly, we need to learn how to talk about security engineering to a non-technical audience. We need to convince policy makers to follow a logical approach instead of an emotional one — an approach that includes threat modeling, failure analysis, searching for unintended consequences, and everything else in an engineer’s approach to design. Powerful lobbying forces are attempting to force security policies on society, largely for non-security reasons, and sometimes in secret. We need to stand up for security.

A shorter version of this essay appeared in the September/October 2012 issue of IEEE Security & Privacy.

Posted on August 28, 2012 at 10:38 AM

Apple Patents Data-Poisoning

It’s not a new idea, but Apple has received a patent on “Techniques to pollute electronic profiling”:

Abstract: Techniques to pollute electronic profiling are provided. A cloned identity is created for a principal. Areas of interest are assigned to the cloned identity, where a number of the areas of interest are divergent from true interests of the principal. One or more actions are automatically processed in response to the assigned areas of interest. The actions appear to network eavesdroppers to be associated with the principal and not with the cloned identity.

Claim 1:

A device-implemented method, comprising: cloning, by a device, an identity for a principal to form a cloned identity; configuring, by the device, areas of interest to be associated with the cloned identity, the areas of interest are divergent from true areas of interest for a true identity for the principal; and automatically processing actions associated with the areas of interest for the cloned identity over a network to pollute information gathered by eavesdroppers performing dataveillance on the principal and refraining from processing the actions when the principal is detected as being logged onto the network and also refraining from processing the actions when the principal is unlikely to be logged onto the network.
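The claimed behavior can be illustrated with a toy sketch. Every name and topic here is hypothetical; this is the concept as described in the claim, not Apple’s implementation:

```python
import random

TRUE_INTERESTS = {"cryptography", "sailing"}         # the principal's real profile
DECOY_TOPICS = ["knitting", "monster trucks", "opera",
                "day trading", "bird watching"]      # divergent areas of interest

def clone_interests(n=3):
    """Assign the cloned identity areas of interest divergent from the true ones."""
    candidates = [t for t in DECOY_TOPICS if t not in TRUE_INTERESTS]
    return random.sample(candidates, n)

def should_emit(detected_logged_on, likely_logged_on):
    """Per claim 1: refrain while the principal is detected as logged on,
    and also refrain when the principal is unlikely to be logged on;
    decoy traffic runs only when it could plausibly be the principal."""
    return not detected_logged_on and likely_logged_on

def decoy_actions(decoys, detected_logged_on, likely_logged_on):
    """Emit search actions attributed to the principal by eavesdroppers."""
    if not should_emit(detected_logged_on, likely_logged_on):
        return []
    return [f"search: {topic}" for topic in decoys]

decoys = clone_interests()
assert decoy_actions(decoys, detected_logged_on=True, likely_logged_on=True) == []
assert len(decoy_actions(decoys, detected_logged_on=False, likely_logged_on=True)) == 3
```

The odd-looking timing condition comes straight from the claim: decoy actions are suppressed both when the principal is known to be online and when the principal is unlikely to be online, so the chaff is emitted only in the window where it is plausibly attributable to the principal.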

EDITED TO ADD (7/12): Similar technology and concept has already been developed by Breadcrumbs Solutions, and will be out as a free beta software in a few months.

Posted on June 21, 2012 at 5:51 AM

The Trouble with Airport Profiling

Why do otherwise rational people think it’s a good idea to profile people at airports? Recently, neuroscientist and best-selling author Sam Harris related a story of an elderly couple being given the twice-over by the TSA, pointed out how these two were obviously not a threat, and recommended that the TSA focus on the actual threat: “Muslims, or anyone who looks like he or she could conceivably be Muslim.”

This is a bad idea. It doesn’t make us any safer — and it actually puts us all at risk.

The right way to look at security is in terms of cost-benefit trade-offs. If adding profiling to airport checkpoints allowed us to detect more threats at a lower cost, then we should implement it. If it didn’t, we’d be foolish to do so.

Sometimes profiling works. Consider a sheep in a meadow, happily munching on grass. When he spies a wolf, he’s going to judge that individual wolf based on a bunch of assumptions related to the past behavior of its species. In short, that sheep is going to profile…and then run away. This makes perfect sense, and is why evolution produced sheep — and other animals — that react this way. But this sort of profiling doesn’t work with humans at airports, for several reasons.

First, in the sheep’s case the profile is accurate, in that all wolves are out to eat sheep. Maybe a particular wolf isn’t hungry at the moment, but enough wolves are hungry enough of the time to justify the occasional false alarm. However, it isn’t true that almost all Muslims are out to blow up airplanes. In fact, almost none of them are. Post 9/11, we’ve had two Muslim terrorists on U.S. airplanes: the shoe bomber and the underwear bomber. If you assume 0.8% (that’s one estimate of the percentage of Muslim Americans) of the 630 million annual airplane fliers are Muslim and triple it to account for others who look Semitic, then the chances any profiled flier will be a Muslim terrorist is 1 in 80 million. Add the 19 9/11 terrorists — arguably a singular event — and that number drops to 1 in 8 million. Either way, because the number of actual terrorists is so low, almost everyone selected by the profile will be innocent. This is called the “base rate fallacy,” and it dooms any type of broad terrorist profiling, including the TSA’s behavioral profiling.
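Those odds can be roughly reproduced. This is a sketch under stated assumptions the paragraph doesn’t spell out; in particular, a ten-year post-9/11 window of flying is assumed here to make the published figures come out:

```python
FLIERS_PER_YEAR = 630_000_000   # annual U.S. airplane fliers, per the essay
YEARS = 10                      # assumed post-9/11 window (not stated in the essay)
PROFILED_FRACTION = 0.008 * 3   # 0.8% Muslim-American estimate, tripled

profiled = FLIERS_PER_YEAR * YEARS * PROFILED_FRACTION  # fliers matching the profile

# Two terrorists (shoe bomber, underwear bomber): roughly 1 in 80 million.
print(f"1 in {profiled / 2:,.0f}")

# Adding the 19 9/11 hijackers: roughly 1 in 8 million.
print(f"1 in {profiled / 21:,.0f}")
```

Either way the conclusion is the same: the profile selects tens of millions of innocents for every terrorist it could possibly catch.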

Second, sheep can safely ignore animals that don’t look like the few predators they know. On the other hand, to assume that only Arab-appearing people are terrorists is dangerously naive. Muslims are black, white, Asian, and everything else — most Muslims are not Arab. Recent terrorists have been European, Asian, African, Hispanic, and Middle Eastern; male and female; young and old. Underwear bomber Umar Farouk Abdul Mutallab was Nigerian. Shoe bomber Richard Reid was British with a Jamaican father. One of the London subway bombers, Germaine Lindsay, was Afro-Caribbean. Dirty bomb suspect Jose Padilla was Hispanic-American. The 2002 Bali terrorists were Indonesian. Both Timothy McVeigh and the Unabomber were white Americans. The Chechen terrorists who blew up two Russian planes in 2004 were female. Focusing on a profile increases the risk that TSA agents will miss those who don’t match it.

Third, wolves can’t deliberately try to evade the profile. A wolf in sheep’s clothing is just a story, but humans are smart and adaptable enough to put the concept into practice. Once the TSA establishes a profile, terrorists will take steps to avoid it. The Chechens deliberately chose female suicide bombers because Russian security was less thorough with women. Al Qaeda has tried to recruit non-Muslims. And terrorists have given bombs to innocent — and innocent-looking — travelers. Randomized secondary screening is more effective, especially since the goal isn’t to catch every plot but to create enough uncertainty that terrorists don’t even try.

And fourth, sheep don’t care if they offend innocent wolves; the two species are never going to be friends. At airports, though, there is an enormous social and political cost to the millions of false alarms. Beyond the societal harms of deliberately harassing a minority group, singling out Muslims alienates the very people who are in the best position to discover and alert authorities about Muslim plots before the terrorists even get to the airport. This alone is reason enough not to profile.

I too am incensed — but not surprised — when the TSA singles out four-year-old girls, children with cerebral palsy, pretty women, the elderly, and wheelchair users for humiliation, abuse, and sometimes theft. Any bureaucracy that processes 630 million people per year will generate stories like this. When people propose profiling, they are really asking for a security system that can apply judgment. Unfortunately, that’s really hard. Rules are easier to explain and train. Zero tolerance is easier to justify and defend. Judgment requires better-educated, more expert, and much-higher-paid screeners. And the personal career risks to a TSA agent of being wrong when exercising judgment far outweigh any benefits from being sensible.

The proper reaction to screening horror stories isn’t to subject only “those people” to it; it’s to subject no one to it. (Can anyone even explain what hypothetical terrorist plot could successfully evade normal security, but would be discovered during secondary screening?) Invasive TSA screening is nothing more than security theater. It doesn’t make us safer, and it’s not worth the cost. Even more strongly, security isn’t our society’s only value. Do we really want the full power of government to act out our stereotypes and prejudices? Have we Americans ever done something like this and not been ashamed later? This is what we have a Constitution for: to help us live up to our values and not down to our fears.

This essay previously appeared on Forbes.com and Sam Harris’s blog.

Posted on May 14, 2012 at 6:19 AM

