Entries Tagged "lies"


Leaders Make Better Liars

According to new research:

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars—the leaders—resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars—the subordinates—showed the usual signs of stress and slower reaction times. “Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal,” Carney explains.

[…]

Carney emphasizes that these results don’t mean that all people in high positions find lying easier: people need only feel powerful, regardless of the real power they have or their position in a hierarchy. “There are plenty of CEOs who act like low-power people and there are plenty of people at every level in organizations who feel very high power,” Carney says. “It can cross rank, every strata of society, any job.”

Posted on March 30, 2010 at 1:59 PM

The Problems with Unscientific Security

From the Open Access Journal of Forensic Psychology, by a whole list of authors: “A Call for Evidence-Based Security Tools”:

Abstract: Since the 2001 attacks on the twin towers, policies on security have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Many of the solutions offered today, however, lack scientific underpinning.

We recommend two important changes to improve the (cost) effectiveness of security policy. To begin with, the emphasis of deception research should shift from technological to behavioural sciences. Secondly, the burden of proof should lie with the manufacturers of the security tools. Governments should not rely on security tools that have not passed scientific scrutiny, and should only employ those methods that have been proven effective. After all, the use of tools that do not work will only get us further from the truth.

One excerpt:

In absence of systematic research, users will base their evaluation on data generated by field use. Because people tend to follow heuristics rather than the rules of probability theory, perceived effectiveness can substantially differ from true effectiveness (Tversky & Kahneman, 1973). For example, one well-known problem associated with field studies is that of selective feedback. Investigative authorities are unlikely to receive feedback from liars who are erroneously considered truthful. They will occasionally receive feedback when correctly detecting deception, for example through confessions (Patrick & Iacono, 1991; Vrij, 2008). The perceived effectiveness that follows from this can be further reinforced through confirmation bias: Evidence confirming one’s preconception is weighted more heavily than evidence contradicting it (Lord, Ross, & Lepper, 1979). As a result, even techniques that perform at chance level may be perceived as highly effective (Iacono, 1991). This unwarranted confidence can have profound effects on citizens’ safety and civil liberty: Criminals may escape detection while innocents may be falsely accused. The Innocence Project (Unvalidated or improper science, no date) demonstrates that unvalidated or improper forensic science can indeed lead to wrongful convictions (see also Saks & Koehler, 2005).
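The selective-feedback mechanism the authors describe is easy to demonstrate numerically. Below is a toy simulation (my illustration, not from the paper; the rates are invented): a detector that performs exactly at chance, where feedback arrives mostly through confessions from correctly flagged liars and almost never from liars who were erroneously cleared. The accuracy an investigator would infer from feedback alone vastly exceeds the true accuracy.

```python
import random

random.seed(0)

def simulate(n_suspects=10_000, p_liar=0.3,
             confession_rate=0.5, miss_discovery_rate=0.02):
    """Chance-level lie detector with selective feedback:
    hits surface via confessions; misses are almost never discovered."""
    correct = hits_seen = misses_seen = 0
    for _ in range(n_suspects):
        is_liar = random.random() < p_liar
        flagged = random.random() < 0.5           # coin-flip "detector"
        correct += (flagged == is_liar)
        if is_liar and flagged and random.random() < confession_rate:
            hits_seen += 1                        # confession confirms the call
        if is_liar and not flagged and random.random() < miss_discovery_rate:
            misses_seen += 1                      # cleared liar later exposed (rare)
    true_accuracy = correct / n_suspects
    perceived_accuracy = hits_seen / (hits_seen + misses_seen)
    return true_accuracy, perceived_accuracy

true_acc, perceived_acc = simulate()
print(f"true accuracy:      {true_acc:.2f}")      # ~0.50, i.e. chance
print(f"perceived accuracy: {perceived_acc:.2f}") # ~0.95 from feedback alone
```

The inflated perceived accuracy comes entirely from the asymmetry in what gets observed, not from any skill in the detector.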

Article on the paper.

Posted on November 5, 2009 at 6:11 AM

Developments in Lie Detection

Interesting:

Scientists looking for better ways to detect lies have found a promising one: increasing suspects’ “cognitive load.” For a host of reasons, their theory goes, lying is more mentally taxing than telling the truth. Performing an extra task while lying or telling the truth should therefore affect the liars more.

To test this idea, deception researchers led by psychologist Aldert Vrij of the University of Portsmouth in England asked one group to lie convincingly and another group to tell the truth about a staged theft scenario that only the truth-tellers had experienced. A second pair of groups had to do the same but with a crucial twist: both the liars and the truth-tellers had to maintain eye contact while telling their stories.

Later, as researchers watched videotapes of the suspects’ accounts, they tallied verbal signs of cognitive load (such as fewer spatial details in the suspects’ stories) and nonverbal ones (such as fewer eyeblinks). The eyeblinks are particularly interesting because whereas rapid blinking suggests nervousness, fewer blinks are a sign of cognitive load, Vrij explains—and contrary to what police are taught, liars tend to blink less. Although the effect was subtle, the instruction to maintain eye contact did magnify the differences between the truth-tellers and the liars.

So do these differences actually make it easier for others to distinguish liars from truth-tellers? They do—but although students watching the videos had an easier time spotting a liar in the eye-contact condition, their accuracy rates were still poor. Any group differences between liars and truth-tellers were dwarfed by differences between individual participants. (For example, some people blink far less than others whether or not they are lying—and some are simply better able to carry a higher cognitive load.)
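That last point, group differences dwarfed by individual differences, can be made concrete with a toy simulation (mine, with invented numbers, not the researchers’ data): give liars a slightly lower mean blink count than truth-tellers but a large person-to-person spread, and even the best single threshold barely beats chance.

```python
import random
import statistics

random.seed(1)

N = 1000
# invented numbers: small group effect (liars blink ~2 fewer times)
# swamped by large individual variation (sd = 6)
truth_tellers = [random.gauss(20, 6) for _ in range(N)]
liars         = [random.gauss(18, 6) for _ in range(N)]

# classify with the best simple rule: threshold midway between group means
threshold = (statistics.mean(truth_tellers) + statistics.mean(liars)) / 2
correct = sum(b > threshold for b in truth_tellers) \
        + sum(b <= threshold for b in liars)
accuracy = correct / (2 * N)
print(f"classification accuracy: {accuracy:.2f}")  # only modestly above 0.50
```

Widening the gap between the groups helps, which is precisely what raising cognitive load is meant to do; but as long as within-person variation dominates, accuracy stays far from certainty.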

Posted on August 20, 2009 at 6:59 AM

Second SHB Workshop Liveblogging (2)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University (suggested reading: Understanding victims: Six principles for systems security), presented research with Paul Wilson, who films actual scams for “The Real Hustle.” His point is that we build security systems based on our “logic,” but users don’t always follow our logic. It’s fraudsters who really understand what people do, so we need to understand what the fraudsters understand. Things like distraction, greed, unknown accomplices, and social compliance are important.

David Livingstone Smith, University of New England (suggested reading: Less than human: self-deception in the imagining of others; Talk on Lying at La Ciudad de Las Ideas; a subsequent discussion; Why War?), is a philosopher by training, and goes back to basics: “What are we talking about?” A theoretical definition of deception—“that which something has to have to fall under a term”—is difficult to construct. “Cause to have a false belief,” from the Oxford English Dictionary, is inadequate. “To deceive is to intentionally cause someone to have a false belief” also doesn’t work. “Intentionally causing someone to have a false belief that the speaker knows to be false” still isn’t good enough. The fundamental problem is that these are anthropocentric definitions. Deception is not unique to humans; it gives organisms an evolutionary edge. For example, the mirror orchid fools a wasp into landing on it by looking like, and giving off chemicals that mimic, the female wasp. This example shows that we need a broader definition of “purpose.” His formal definition: “For systems A and B, A deceives B iff A possesses some character C with proper function F, and B possesses a mechanism C* with the proper function F* of producing representations, such that the proper function of C is to cause C* to fail to perform F* by causing C* to form false representations, and C does so in virtue of performing F, and B’s falsely representing enables some feature of A to perform its proper function.”

I spoke next, about the psychology of Conficker, how the human brain buys security, and why science fiction writers shouldn’t be hired to think about terrorism risks (to be published on Wired.com next week).

Dominic Johnson, University of Edinburgh (suggested reading: Paradigm Shifts in Security Strategy; Perceptions of victory and defeat), talked about his chapter in the book Natural Security: A Darwinian Approach to a Dangerous World. Life has 3.5 billion years of experience in security innovation; let’s look at how biology approaches security. Biomimicry, ecology, paleontology, animal behavior, evolutionary psychology, immunology, epidemiology, selection, and adaptation are all relevant. Redundancy is a very important survival tool for species. Here’s an adaptation example: The 9/11 threat was real and we knew about it, but we didn’t do anything. His thesis: Adaptation to novel security threats tends to occur after major disasters. There are many historical examples of this; Pearl Harbor, for example. Causes include sensory biases, psychological biases, leadership biases, organizational biases, and political biases—all pushing us towards maintaining the status quo. So it’s natural for us to poorly adapt to security threats in the modern world. A questioner from the audience asked whether control theory had any relevance to this model.

Jeff Hancock, Cornell University (suggested reading: On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication; Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles), studies interpersonal deception: how the way we lie to each other intersects with communications technologies, how technologies change the way we lie, and whether technology can be used to detect lying. Despite new technology, people lie for traditional reasons. For example: on dating sites, men tend to lie about their height and women tend to lie about their weight. The recordability of the Internet also changes how we lie. The use of the first person singular tends to go down the more people lie. He verified this in many spheres, such as how people describe themselves in chat rooms, and true versus false statements that the Bush administration made about 9/11 and Iraq. The effect was more pronounced when administration officials were answering questions than when they were reading prepared remarks.

EDITED TO ADD (6/11): Adam Shostack liveblogged this session, too. And Ross’s liveblogging is in his blog post’s comments.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 9:37 AM

Detecting Liars by Content

Interesting:

Kevin Colwell, a psychologist at Southern Connecticut State University, has advised police departments, Pentagon officials and child protection workers, who need to check the veracity of conflicting accounts from parents and children. He says that people concocting a story prepare a script that is tight and lacking in detail.

“It’s like when your mom busted you as a kid, and you made really obvious mistakes,” Dr. Colwell said. “Well, now you’re working to avoid those.”

By contrast, people telling the truth have no script, and tend to recall more extraneous details and may even make mistakes. They are sloppier.

[…]

In several studies, Dr. Colwell and Dr. Hiscock-Anisman have reported one consistent difference: People telling the truth tend to add 20 to 30 percent more external detail than do those who are lying. “This is how memory works, by association,” Dr. Hiscock-Anisman said. “If you’re telling the truth, this mental reinstatement of contexts triggers more and more external details.”

Not so if you’ve got a concocted story and you’re sticking to it. “It’s the difference between a tree in full flower in the summer and a barren stick in winter,” said Dr. Charles Morgan, a psychiatrist at the National Center for Post-Traumatic Stress Disorder, who has tested it for trauma claims and among special-operations soldiers.

This is new research, and there are limitations to the approach, but it’s interesting.

Posted on May 14, 2009 at 1:30 PM

Lie Detector Charlatans

This is worth reading:

Five years ago I wrote a Language Log post entitled “BS conditional semantics and the Pinocchio effect” about the nonsense spouted by a lie detection company, Nemesysco. I was disturbed by the marketing literature of the company, which suggested a 98% success rate in detecting evil intent of airline passengers, and included crap like this:

The LVA uses a patented and unique technology to detect “Brain activity finger prints” using the voice as a “medium” to the brain and analyzes the complete emotional structure of your subject. Using wide range spectrum analysis and micro-changes in the speech waveform itself (not micro tremors!) we can learn about any anomaly in the brain activity, and furthermore, classify it accordingly. Stress (“fight or flight” paradigm) is only a small part of this emotional structure

The 98% figure, as I pointed out, and as Mark Liberman made even clearer in a follow up post, is meaningless. There is no type of lie detector in existence whose performance can reasonably be compared to the performance of finger printing. It is meaningless to talk about someone’s “complete emotional structure”, and there is no interesting sense in which any current technology can analyze it. It is not the case that looking at speech will provide information about “any anomaly in the brain activity”: at most it will tell you about some anomalies. Oh, the delicious irony, a lie detector company that engages in wanton deception.

So, ok, Nemesysco, as I said in my earlier post, is clearly trying to pull the wool over people’s eyes. Disturbing, yes, but it doesn’t follow from the fact that its marketing is wildly misleading that the company’s technology is of no merit. However, we now know that the company’s technology is, in fact, of no merit. How do we know? Because two phoneticians, Anders Eriksson and Francisco Lacerda, studied the company’s technology, based largely on the original patent, and provided a thorough analysis in a 2007 article Charlatanry in forensic speech science: A problem to be taken seriously, which appeared in the International Journal of Speech Language and the Law (IJSLL), vol 14.2 2007, 169–193, Equinox Publishing. Eriksson and Lacerda conclude, regarding the original technology on which Nemesysco’s products are based, Layered Voice Analysis (LVA), that:

Any qualified speech scientist with some computer background can see at a glance, by consulting the documents, that the methods on which the program is based have no scientific validity.

Most of the lie detector industry is based on, well, lies.
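Even granting the 98% figure, it would say little about usefulness for screening passengers, because of the base-rate fallacy: when what you are hunting for is rare, even an accurate test produces almost entirely false alarms. A quick Bayes’-theorem sketch (the base rate here is my illustrative guess, not a measured number):

```python
# Take the marketing claim at face value in both directions:
sensitivity = 0.98           # P(flagged | evil intent)
specificity = 0.98           # P(cleared | innocent)
base_rate   = 1 / 1_000_000  # illustrative: one bad actor per million passengers

# P(flagged) over the whole passenger population
p_flagged = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
# P(evil intent | flagged), by Bayes' theorem
p_guilty_given_flag = sensitivity * base_rate / p_flagged

print(f"passengers flagged:                   {p_flagged:.3%}")
print(f"chance a flagged passenger is guilty: {p_guilty_given_flag:.4%}")
# about 2% of all passengers get flagged, and roughly 1 in 20,000
# of those flags is a true positive -- the rest are innocent travellers
```

So even a detector that really did work as advertised would drown screeners in false positives; one that doesn’t work at all is strictly worse.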

EDITED TO ADD (5/13): The paper is available here. More details here. Nemesysco’s systems are being used to bully people out of receiving government aid in the UK.

Posted on May 6, 2009 at 12:14 PM

Unfair and Deceptive Data Trade Practices

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There’s a basic consumer protection principle at work here, and it’s the concept of “unfair and deceptive” trade practices. Basically, a company shouldn’t be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren’t generally available, claim features that don’t exist, and so on.

Buried in RealAge’s 2,400-word privacy policy is this disclosure: “If you elect to say yes to becoming a free RealAge Member, we will periodically send you free newsletters and e-mails that directly promote the use of our site(s) or the purchase of our products or services and may contain, in whole or in part, advertisements for third parties which relate to marketed products of selected RealAge partners.”

They maintain that when you join the website, you consent to receiving pharmaceutical company spam. But since that isn’t spelled out, it’s not really informed consent. That’s deceptive.

Cloud computing is another technology where users entrust their data to service providers. Salesforce.com, Gmail, and Google Docs are examples; your data isn’t on your computer—it’s out in the “cloud” somewhere—and you access it from your web browser. Cloud computing has significant benefits for customers and huge profit potential for providers. It’s one of the fastest growing IT market segments—69% of Americans now use some sort of cloud computing services—but the business is rife with shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information Center (I’m on its board of directors) filed a complaint with the Federal Trade Commission concerning Google’s cloud computing services. On its website, Google repeatedly assures customers that their data is secure and private, while published vulnerabilities demonstrate that it is not. Google’s not foolish, though; its Terms of Service explicitly disavow any warranty or any liability for harm that might result from Google’s negligence, recklessness, malevolent intent, or even purposeful disregard of existing legal obligations to protect the privacy and security of user data. EPIC claims that’s deceptive.

Facebook isn’t much better. Its plainly written (and not legally binding) Statement of Principles contains an admirable set of goals, but its denser and more legalistic Statement of Rights and Responsibilities undermines a lot of it. One research group that studies these documents called it “democracy theater”: Facebook wants the appearance of involving users in governance, without the messiness of actually having to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your data. Google is trying to both assure you that your data is safe and duck any responsibility when it’s not. Facebook wants to market a democracy but run a dictatorship. But they all involve trying to deceive the customer.

Cloud computing services like Google Docs, and social networking sites like RealAge and Facebook, bring with them significant privacy and security risks over and above traditional computing models. Unlike data on my own computer, which I can protect to whatever level I believe prudent, I have no control over any of these sites, nor any real knowledge of how these companies protect my privacy and security. I have to trust them.

This may be fine—the advantages might very well outweigh the risks—but users often can’t weigh the trade-offs because these companies are going out of their way to hide the risks.

Of course, companies don’t want people to make informed decisions about where to leave their personal data. RealAge wouldn’t get 27 million members if its webpage clearly stated “you are signing up to receive e-mails containing advertising from pharmaceutical companies,” and Google Docs wouldn’t get five million users if its webpage said “We’ll take some steps to protect your privacy, but you can’t blame us if something goes wrong.”

And of course, trust isn’t black and white. If, for example, Amazon tried to use customer credit card info to buy itself office supplies, we’d all agree that that was wrong. If it used customer names to solicit new business from their friends, most of us would consider this wrong. When it uses buying history to try to sell customers new books, many of us appreciate the targeted marketing. Similarly, no one expects Google’s security to be perfect. But if it didn’t fix known vulnerabilities, most of us would consider that a problem.

This is why understanding is so important. For markets to work, consumers need to be able to make informed buying decisions. They need to understand both the costs and benefits of the products and services they buy. Allowing sellers to manipulate the market by outright lying, or even by hiding vital information, about their products breaks capitalism—and that’s why the government has to step in to ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC’s Bureau of Consumer Protection said: “a company’s marketing materials must be consistent with the nature of the product being offered. It’s not enough to disclose the information only in a fine print of a lengthy online user agreement.” She was speaking about Digital Rights Management and, specifically, an incident where Sony used a music copy protection scheme without disclosing that it secretly installed software on customers’ computers. DRM is different from cloud computing or even online surveys and quizzes, but the principle is the same.

Engle again: “if your advertising giveth and your EULA [license agreement] taketh away don’t be surprised if the FTC comes calling.” That’s the right response from government.

A version of this article originally appeared on The Wall Street Journal.

EDITED TO ADD (2/29): Two rebuttals.

Posted on April 27, 2009 at 6:16 AM

India Using Brain Scans to Prove Guilt in Court

This seems like a whole lot of pseudo-science:

The technologies, generally regarded as promising but unproved, have yet to be widely accepted as evidence—except in India, where in recent years judges have begun to admit brain scans. But it was only in June, in a murder case in Pune, in Maharashtra State, that a judge explicitly cited a scan as proof that the suspect’s brain held “experiential knowledge” about the crime that only the killer could possess, sentencing her to life in prison.

[…]

This latest Indian attempt at getting past criminals’ defenses begins with an electroencephalogram, or EEG, in which electrodes are placed on the head to measure electrical waves. The suspect sits in silence, eyes shut. An investigator reads aloud details of the crime—as prosecutors see it—and the resulting brain images are processed using software built in Bangalore.

The software tries to detect whether, when the crime’s details are recited, the brain lights up in specific regions—the areas that, according to the technology’s inventors, show measurable changes when experiences are relived, their smells and sounds summoned back to consciousness. The inventors of the technology claim the system can distinguish between people’s memories of events they witnessed and deeds they committed.

EDITED TO ADD (10/13): An expert committee said it is unscientific, but its findings weren’t accepted.

Posted on September 22, 2008 at 6:10 AM

New TSA ID Requirement

The TSA has a new photo ID requirement:

Beginning Saturday, June 21, 2008 passengers that willfully refuse to provide identification at security checkpoint will be denied access to the secure area of airports. This change will apply exclusively to individuals that simply refuse to provide any identification or assist transportation security officers in ascertaining their identity.

This new procedure will not affect passengers that may have misplaced, lost or otherwise do not have ID but are cooperative with officers. Cooperative passengers without ID may be subjected to additional screening protocols, including enhanced physical screening, enhanced carry-on and/or checked baggage screening, interviews with behavior detection or law enforcement officers and other measures.

That’s right; people who refuse to show ID on principle will not be allowed to fly, but people who claim to have lost their ID will. I feel well-protected against terrorists who can’t lie.

I don’t think any further proof is needed that the ID requirement has nothing to do with security, and everything to do with control.

EDITED TO ADD (6/11): Daniel Solove comments.

Posted on June 11, 2008 at 1:42 PM

Crossing Borders with Laptops and PDAs

Last month a US court ruled that border agents can search your laptop, or any other electronic device, when you’re entering the country. They can take your computer and download its entire contents, or keep it for several days. Customs and Border Protection has not published any rules regarding this practice, and I and others have written a letter to Congress urging it to investigate and regulate this practice.

But the US is not alone. British customs agents search laptops for pornography. And there are reports on the internet of this sort of thing happening at other borders, too. You might not like it, but it’s a fact. So how do you protect yourself?

Encrypting your entire hard drive, something you should certainly do for security in case your computer is lost or stolen, won’t work here. The border agent is likely to start this whole process with a “please type in your password”. Of course you can refuse, but the agent can search you further, detain you longer, refuse you entry into the country and otherwise ruin your day.

You’re going to have to hide your data. Set a portion of your hard drive to be encrypted with a different key – even if you also encrypt your entire hard drive – and keep your sensitive data there. Lots of programs allow you to do this. I use PGP Disk. TrueCrypt is also good, and free.

While customs agents might poke around on your laptop, they’re unlikely to find the encrypted partition. (You can make the icon invisible, for some added protection.) And if they download the contents of your hard drive to examine later, you won’t care.

Be sure to choose a strong encryption password. Details are too complicated for a quick tip, but basically anything easy to remember is easy to guess. (My advice is here.) Unfortunately, this isn’t a perfect solution. Your computer might have left a copy of the password on the disk somewhere, and (as I also describe at the above link) smart forensic software will find it.
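“Anything easy to remember is easy to guess” is at bottom a claim about entropy: the strength of a password is the (log2) size of the pool it was randomly drawn from, not how complicated it looks. A small sketch of the arithmetic, plus a randomly generated passphrase (the word list here is a tiny stand-in; a real list such as Diceware has 7,776 words):

```python
import math
import secrets

def entropy_bits(pool_size: int, length: int) -> float:
    """Entropy of a secret drawn uniformly: length symbols from a pool."""
    return length * math.log2(pool_size)

# an 8-character password over lowercase letters + digits (36 symbols)
print(f"8 chars, 36-symbol pool: {entropy_bits(36, 8):.1f} bits")   # ~41 bits
# a 6-word passphrase from a 7776-word Diceware-style list
print(f"6 words, 7776-word list: {entropy_bits(7776, 6):.1f} bits") # ~78 bits

# toy word list standing in for a real Diceware list
words = ["correct", "horse", "battery", "staple",
         "anchor", "violet", "marble", "tundra"]
passphrase = " ".join(secrets.choice(words) for _ in range(6))
print("sample passphrase:", passphrase)
# the entropy comes from the *random draw*, not from the words themselves;
# a phrase you invent yourself comes from a far smaller effective pool
```

The same logic explains why forensic password-guessing tools work: they search the small pools people actually draw from first.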

So your best defence is to clean up your laptop. A customs agent can’t read what you don’t have. You don’t need five years’ worth of email and client data. You don’t need your old love letters and those photos (you know the ones I’m talking about). Delete everything you don’t absolutely need. And use a secure file erasure program to do it. While you’re at it, delete your browser’s cookies, cache and browsing history. It’s nobody’s business what websites you’ve visited. And turn your computer off – don’t just put it to sleep – before you go through customs; that deletes other things. Think of all this as the last thing to do before you stow your electronic devices for landing.

Some companies now give their employees forensically clean laptops for travel, and have them download any sensitive data over a virtual private network once they’ve entered the country. They send any work back the same way, and delete everything again before crossing the border to go home. This is a good idea if you can do it.

If you can’t, consider putting your sensitive data on a USB drive or even a camera memory card: even 16GB cards are reasonably priced these days. Encrypt it, of course, because it’s easy to lose something that small. Slip it in your pocket, and it’s likely to remain unnoticed even if the customs agent pokes through your laptop. If someone does discover it, you can try saying: “I don’t know what’s on there. My boss told me to give it to the head of the New York office.” If you’ve chosen a strong encryption password, you won’t care if he confiscates it.
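As for the secure file erasure advice above, a minimal sketch of what such a tool does is: overwrite the file’s bytes in place, force them to disk, and only then unlink. This is illustrative rather than something to rely on; journaling and copy-on-write filesystems, and SSD wear-levelling, can keep old copies around, which is why purpose-built erasure tools and full-disk encryption exist.

```python
import os
import secrets

def wipe_file(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes, flush to disk, then delete it.
    Best-effort only: journaling/COW filesystems and SSDs may keep copies."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))
            f.flush()
            os.fsync(f.fileno())   # push the overwrite past OS caches
    os.remove(path)

# usage
with open("love_letters.txt", "wb") as f:
    f.write(b"nobody's business")
wipe_file("love_letters.txt")
print(os.path.exists("love_letters.txt"))  # False
```

A plain `os.remove` alone only drops the directory entry; the data blocks stay on disk until reused, which is exactly what forensic tools recover.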

Lastly, don’t forget your phone and PDA. Customs agents can search those too: emails, your phone book, your calendar. Unfortunately, there’s nothing you can do here except delete things.

I know this all sounds like work, and that it’s easier to just ignore everything here and hope you don’t get searched. Today, the odds are in your favour. But new forensic tools are making automatic searches easier and easier, and the recent US court ruling is likely to embolden other countries. It’s better to be safe than sorry.

This essay originally appeared in The Guardian.

Some other advice here.

EDITED TO ADD (5/18): Many people have pointed out to me that I advise people to lie to a government agent. That is, of course, illegal in the U.S. and probably most other countries—and probably not the best advice for me to be on record as giving. So be sure you clear your story first with both your boss and the New York office.

Posted on May 16, 2008 at 6:10 AM
