Entries Tagged "lies"

"How Stories Deceive"

Fascinating New Yorker article about Samantha Azzopardi, serial con artist and deceiver.

The article is really about how our brains allow stories to deceive us:

Stories bring us together. We can talk about them and bond over them. They are shared knowledge, shared legend, and shared history; often, they shape our shared future. Stories are so natural that we don’t notice how much they permeate our lives. And stories are on our side: they are meant to delight us, not deceive us—an ever-present form of entertainment.

That’s precisely why they can be such a powerful tool of deception. When we’re immersed in a story, we let down our guard. We focus in a way we wouldn’t if someone were just trying to catch us with a random phrase or picture or interaction. (“He has a secret” makes for a far more intriguing proposition than “He has a bicycle.”) In those moments of fully immersed attention, we may absorb things, under the radar, that would normally pass us by or put us on high alert. Later, we may find ourselves thinking that some idea or concept is coming from our own brilliant, fertile minds, when, in reality, it was planted there by the story we just heard or read.

Posted on January 8, 2016 at 12:54 PM

People Who Need to Pee Are Better at Lying

No, really.

Abstract: The Inhibitory-Spillover-Effect (ISE) on a deception task was investigated. The ISE occurs when performance in one self-control task facilitates performance in another (simultaneously conducted) self-control task. Deceiving requires increased access to inhibitory control. We hypothesized that inducing liars to control urination urgency (physical inhibition) would facilitate control during deceptive interviews (cognitive inhibition). Participants drank small (low-control) or large (high-control) amounts of water. Next, they lied or told the truth to an interviewer. Third-party observers assessed the presence of behavioral cues and made true/lie judgments. In the high-control, but not the low-control condition, liars displayed significantly fewer behavioral cues to deception, more behavioral cues signaling truth, and provided longer and more complex accounts than truth-tellers. Accuracy detecting liars in the high-control condition was significantly impaired; observers revealed bias toward perceiving liars as truth-tellers. The ISE can operate in complex behaviors. Acts of deception can be facilitated by covert manipulations of self-control.

News article.

Posted on September 25, 2015 at 5:54 AM

The Secrecy of the Snowden Documents

Last weekend, the Sunday Times published a front-page story (full text here), citing anonymous British sources claiming that both China and Russia have copies of the Snowden documents. It’s a terrible article, filled with factual inaccuracies and unsubstantiated claims about both Snowden’s actions and the damage caused by his disclosure, and others have thoroughly refuted the story. I want to focus on the actual question: Do countries like China and Russia have copies of the Snowden documents?

I believe the answer is certainly yes, but that it’s almost certainly not Snowden’s fault.

Snowden has claimed that he gave nothing to China while he was in Hong Kong, and brought nothing to Russia. He has said that he encrypted the documents in such a way that even he no longer has access to them, and that he did this before the US government stranded him in Russia. I have no doubt he did as he said, because A) it’s the smart thing to do, and B) it’s easy. All he would have had to do was encrypt the file with a long random key, break the encrypted text up into a few parts and mail them to trusted friends around the world, then forget the key. He probably added some security embellishments, but—regardless—the first sentence of the Times story simply makes no sense: “Russia and China have cracked the top-secret cache of files…”
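
To make the mechanics concrete, here is a minimal sketch of that kind of scheme in Python, using the cryptography library's Fernet cipher. The function name, the choice of cipher, and the part count are my illustrative assumptions; this shows the general technique described, not Snowden's actual procedure:

```python
# Sketch only: encrypt under a long random key, split the ciphertext
# among separate custodians, then discard the key.
from cryptography.fernet import Fernet

def lock_away(plaintext: bytes, n_parts: int) -> list[bytes]:
    key = Fernet.generate_key()                 # long random key
    ciphertext = Fernet(key).encrypt(plaintext)
    chunk = -(-len(ciphertext) // n_parts)      # ceiling division
    parts = [ciphertext[i:i + chunk]
             for i in range(0, len(ciphertext), chunk)]
    # "Forget the key": once key is discarded, no single part, and not
    # even all parts reassembled, can be decrypted by anyone, the owner
    # included.
    return parts
```

Mailing the parts to different people adds a second barrier: an adversary would need both the forgotten key and every part, and recovering either alone yields nothing.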

But while cryptography is strong, computer security is weak. The vulnerability is not Snowden; it’s everyone who has access to the files.

First, the journalists working with the documents. I’ve handled some of the Snowden documents myself, and even though I’m a paranoid cryptographer, I know how difficult it is to maintain perfect security. It’s been open season on the computers of the journalists Snowden shared documents with since this story broke in June 2013. And while they have been taking extraordinary pains to secure those computers, it’s almost certainly not enough to keep out the world’s intelligence services.

There is a lot of evidence for this belief. We know from other top-secret NSA documents that, as far back as 2008, the agency’s Tailored Access Operations group has had extraordinary capabilities to hack into and “exfiltrate” data from specific computers, even if those computers are highly secured and not connected to the Internet.

These NSA capabilities are not unique, and it’s reasonable to assume both that other countries had similar capabilities in 2008 and that everyone has improved their attack techniques in the seven years since then. Last week, we learned that Israel had successfully hacked a wide variety of networks, including that of a major computer antivirus company. We also learned that China successfully hacked US government personnel databases. And earlier this year, Russia successfully hacked the White House’s network. These sorts of stories are now routine.

Which brings me to the second potential source of these documents to foreign intelligence agencies: the US and UK governments themselves. I believe that both China and Russia had access to all the files that Snowden took well before Snowden took them because they’ve penetrated the NSA networks where those files reside. After all, the NSA has been a prime target for decades.

Those government hacking examples above were against unclassified networks, but the nation-state techniques we’re seeing work against classified and unconnected networks as well. In general, it’s far easier to attack a network than it is to defend the same network. This isn’t a statement about willpower or budget; it’s how computer and network security work today. A former NSA deputy director recently said that if we were to score cyber the way we score soccer, the tally would be 462 to 456 twenty minutes into the game. In other words, it’s all offense and no defense.

In this kind of environment, we simply have to assume that even our classified networks have been penetrated. Remember that Snowden was able to wander through the NSA’s networks with impunity, and that the agency had so few controls in place that the only way they can guess what has been taken is to extrapolate based on what has been published. Does anyone believe that Snowden was the first to take advantage of that lax security? I don’t.

This is why I find allegations that Snowden was working for the Russians or the Chinese simply laughable. What makes you think those countries waited for Snowden? And why do you think someone working for the Russians or the Chinese would go public with their haul?

I am reminded of a comment made to me in confidence by a US intelligence official. I asked him what he was most worried about, and he replied: “I know how deep we are in our enemies’ networks without them having any idea that we’re there. I’m worried that our networks are penetrated just as deeply.”

Seems like a reasonable worry to me.

The open question is which countries have sophisticated enough cyberespionage operations to mount a successful attack against one of the journalists or against the intelligence agencies themselves. And while I have my own mental list, the truth is that I don’t know. But certainly Russia and China are on the list, and it’s just as certain they didn’t have to wait for Snowden to get access to the files. While it might be politically convenient to blame Snowden because, as the Sunday Times reported an anonymous source saying, “we have now seen our agents and assets being targeted,” the NSA and GCHQ should first take a look into their mirrors.

This essay originally appeared on Wired.com.

EDITED TO ADD: I wrote about this essay on Lawfare:

A Twitter user commented: “Surely if agencies accessed computers of people Snowden shared with then is still his fault?”

Yes, that’s right. Snowden took the documents out of the well-protected NSA network and shared them with people who don’t have those levels of computer security. Given what we’ve seen of the NSA’s hacking capabilities, I think the odds are zero that other nations were unable to hack at least one of those journalists’ computers. And yes, Snowden has to own that.

The point I make in the article is that those nations didn’t have to wait for Snowden. More specifically, GCHQ claims that “we have now seen our agents and assets being targeted.” One, agents and assets are not discussed in the Snowden documents. Two, it’s two years after Snowden handed those documents to reporters. Whatever is happening, it’s unlikely to be related to Snowden.

EDITED TO ADD: Slashdot thread. Hacker News thread.

EDITED TO ADD (7/13): Two threads on Reddit.

EDITED TO ADD (7/14): Another refutation.

Posted on June 22, 2015 at 6:13 AM

Defending Against Liar Buyer Fraud

It’s a common fraud on sites like eBay: buyers falsely claim that they never received a purchased item in the mail. Here’s a paper on defending against this fraud through basic psychological security measures. It’s preliminary research, but probably worth following up with further experiments.

We have tested a collection of possible user-interface enhancements aimed at reducing liar buyer fraud. We have found that showing users in the process of filing a dispute that (1) their computer is recognized, and (2) that their location is known dramatically reduces the willingness to file false claims. We believe the reason for the reduction is that the would-be liars can visualize their lack of anonymity at a time when they are deciding whether to perform a fraudulent action. Interestingly, we also showed that users were not affected by knowing that their computer was recognized, but without their location being pin-pointed, or the other way around. We also determined that a reasonably accurate map was necessary—but that an inaccurate map does not seem to increase the willingness to lie.
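
As a rough illustration of that result (the function and strings below are hypothetical, not the paper's actual interface), the deterrent only works when both cues are presented together at the moment of filing:

```python
# Hypothetical sketch of the paper's intervention: show the would-be
# liar that the device is recognized AND the location is mapped.
def dispute_banner(device_recognized: bool, city: str | None) -> str:
    if device_recognized and city:
        # Per the study, only the combination of both cues
        # measurably reduced false claims.
        return (f"We recognize this computer, and this dispute was "
                f"filed from near {city} (see map below).")
    # Either cue alone had no measurable deterrent effect.
    return ""
```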

Posted on January 21, 2015 at 6:31 AM

Fidgeting as Lie Detection

Sophie Van Der Zee and colleagues have a new paper on using body movement as a lie detector:

Abstract: We present a new robust signal for detecting deception: full body motion. Previous work on detecting deception from body movement has relied either on human judges or on specific gestures (such as fidgeting or gaze aversion) that are coded or rated by humans. The results are characterized by inconsistent and often contradictory findings, with small-stakes lies under lab conditions detected at rates only slightly better than guessing. Building on previous work that uses automatic analysis of facial videos and rhythmic body movements to diagnose stress, we set out to see whether a full body motion capture suit, which records the position, velocity and orientation of 23 points in the subject’s body, could yield a better signal of deception. Interviewees of South Asian (n = 60) or White British culture (n = 30) were required to either tell the truth or lie about two experienced tasks while being interviewed by somebody from their own (n = 60) or different culture (n = 30). We discovered that full body motion—the sum of joint displacements—was indicative of lying approximately 75% of the time. Furthermore, movement was guilt-related, and occurred independently of anxiety, cognitive load and cultural background. Further analyses indicate that including individual limb data in our full body motion measurements, in combination with appropriate questioning strategies, can increase its discriminatory power to around 82%. This culture-sensitive study provides an objective and inclusive view on how people actually behave when lying. It appears that full body motion can be a robust nonverbal indicator of deceit, and suggests that lying does not cause people to freeze. However, should full body motion capture become a routine investigative technique, liars might freeze in order not to give themselves away; but this in itself should be a telltale.

This is a first research study, and the results might not be robust. But it certainly is interesting.
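
The paper's headline signal is simple enough to state in a few lines. Here is a toy version (not the authors' code; the array layout is an assumption) of "full body motion as the sum of joint displacements":

```python
import numpy as np

def total_body_motion(frames: np.ndarray) -> float:
    """frames: shape (n_frames, n_joints, 3), the x/y/z positions of
    the tracked points (23 in the study) at each capture frame."""
    # Distance each joint moves between consecutive frames...
    step = np.linalg.norm(np.diff(frames, axis=0), axis=2)
    # ...summed over all joints and all frames.
    return float(step.sum())
```

A subject whose total exceeds a calibrated baseline would be flagged as more likely to be lying, at roughly the 75% accuracy the abstract reports.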

Blog post. News article. Slashdot thread.

Posted on January 6, 2015 at 2:44 PM

More Crypto Wars II

FBI Director James Comey again called for an end to secure encryption by putting in a backdoor. Here’s his speech:

There is a misconception that building a lawful intercept solution into a system requires a so-called “back door,” one that foreign adversaries and hackers may try to exploit.

But that isn’t true. We aren’t seeking a back-door approach. We want to use the front door, with clarity and transparency, and with clear guidance provided by law. We are completely comfortable with court orders and legal process—front doors that provide the evidence and information we need to investigate crime and prevent terrorist attacks.

Cyber adversaries will exploit any vulnerability they find. But it makes more sense to address any security risks by developing intercept solutions during the design phase, rather than resorting to a patchwork solution when law enforcement comes knocking after the fact. And with sophisticated encryption, there might be no solution, leaving the government at a dead end—all in the name of privacy and network security.

I’m not sure why he believes he can have a technological means of access that somehow only works for people of the correct morality with the proper legal documents, but he seems to believe that’s possible. As Jeffrey Vagle and Matt Blaze point out, there’s no technical difference between Comey’s “front door” and a “back door.”
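
One way to see why the distinction is empty: any "lawful access" design ultimately wraps the message key for an extra party, and the cryptography cannot check who holds the extra key or why. A minimal sketch using PyNaCl (the names are mine; this is the generic key-escrow pattern, not any specific government proposal):

```python
from nacl.public import PublicKey, SealedBox
from nacl.secret import SecretBox
from nacl.utils import random

def encrypt_with_escrow(message: bytes,
                        recipient_pk: PublicKey,
                        escrow_pk: PublicKey):
    msg_key = random(SecretBox.KEY_SIZE)
    ciphertext = SecretBox(msg_key).encrypt(message)
    # The same message key, wrapped twice. Call the second wrap a
    # "front door" or a "back door"; the ciphertext cannot tell.
    for_recipient = SealedBox(recipient_pk).encrypt(msg_key)
    for_escrow = SealedBox(escrow_pk).encrypt(msg_key)
    # Whoever holds the escrow private key (an agency, an insider, or
    # an adversary who steals it) decrypts exactly as the intended
    # recipient does. The math cannot verify a warrant.
    return ciphertext, for_recipient, for_escrow
```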

As in all of these sorts of speeches, Comey gave examples of crimes that could have been solved had only the police been able to decrypt the defendant’s phone. Unfortunately, none of the three stories is true. The Intercept tracked down each story, and none of them is actually a case where encryption foiled an investigation, arrest, or conviction:

In the most dramatic case that Comey invoked—the death of a 2-year-old Los Angeles girl—not only was cellphone data a non-issue, but records show the girl’s death could actually have been avoided had government agencies involved in overseeing her and her parents acted on the extensive record they already had before them.

In another case, of a Louisiana sex offender who enticed and then killed a 12-year-old boy, the big break had nothing to do with a phone: The murderer left behind his keys and a trail of muddy footprints, and was stopped nearby after his car ran out of gas.

And in the case of a Sacramento hit-and-run that killed a man and his girlfriend’s four dogs, the driver was arrested in a traffic stop because his car was smashed up, and immediately confessed to involvement in the incident.

[…]

His poor examples, however, were reminiscent of one cited by Ronald T. Hosko, a former assistant director of the FBI’s Criminal Investigative Division, in a widely cited—and thoroughly debunked—Washington Post opinion piece last month.

In that case, the Post was eventually forced to have Hosko rewrite the piece, with the following caveat appended:

Editor’s note: This story incorrectly stated that Apple and Google’s new encryption rules would have hindered law enforcement’s ability to rescue the kidnap victim in Wake Forest, N.C. This is not the case. The piece has been corrected.

Hadn’t Comey found anything better since then? In a question-and-answer session after his speech, Comey both denied trying to use scare stories to make his point—and admitted that he had launched a nationwide search for better ones, to no avail.

This is important. All the FBI talk about “going dark” and losing the ability to solve crimes is absolute bullshit. There is absolutely no evidence, either statistically or even anecdotally, that criminals are going free because of encryption.

So why are we even discussing the possibility of forcing companies to provide insecure encryption to their users and customers?

The EFF points out that companies are protected by law from being required to provide insecure security to make the FBI happy.

Sadly, I don’t think this is going to go away anytime soon.

My first post on these new Crypto Wars is here.

Posted on October 21, 2014 at 6:17 AM

How Did the Feds Identify Dread Pirate Roberts?

Last month, I wrote that the FBI identified Ross W. Ulbricht as the Silk Road’s Dread Pirate Roberts through a leaky CAPTCHA. Seems that story doesn’t hold water:

The FBI claims that it found the Silk Road server by examining plain text Internet traffic to and from the Silk Road CAPTCHA, and that it visited the address using a regular browser and received the CAPTCHA page. But [Nicholas] Weaver says the traffic logs from the Silk Road server (PDF) that also were released by the government this week tell a different story.

“The server logs which the FBI provides as evidence show that, no, what happened is the FBI didn’t see a leakage coming from that IP,” he said. “What happened is they contacted that IP directly and got a PHPMyAdmin configuration page.” See this PDF file for a look at that PHPMyAdmin page. Here is the PHPMyAdmin server configuration.

But this is hardly a satisfying answer to how the FBI investigators located the Silk Road servers. After all, if the FBI investigators contacted the PHPMyAdmin page directly, how did they know to do that in the first place?

“That’s still the $64,000 question,” Weaver said. “So both the CAPTCHA couldn’t leak in that configuration, and the IP the government visited wasn’t providing the CAPTCHA, but instead a PHPMyAdmin interface. Thus, the leaky CAPTCHA story is full of holes.”
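
Weaver's check is straightforward to reproduce in spirit: request the suspect IP address directly and look at what it actually serves. A sketch (the address below is a documentation placeholder, not the Silk Road server):

```python
import requests

# Fetch the bare IP, as the FBI's own logs show it did.
resp = requests.get("http://192.0.2.1/", timeout=10)
print(resp.status_code, resp.headers.get("Server"))
# A phpMyAdmin page here, rather than the CAPTCHA, is what the
# released server logs show the FBI received.
print("phpmyadmin" in resp.text.lower())
```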

My guess is that the NSA provided the FBI with this information. We know that the NSA provides surveillance data to the FBI and the DEA, under the condition that they lie about where it came from in court.

NSA whistleblower William Binney explained how it’s done:

…when you can’t use the data, you have to go out and do a parallel construction, [which] means you use what you would normally consider to be investigative techniques, [and] go find the data. You have a little hint, though. NSA is telling you where the data is…

Posted on October 20, 2014 at 6:19 AM

Building an Online Lie Detector

There’s an interesting project to detect false rumors on the Internet.

The EU-funded project aims to classify online rumours into four types: speculation—such as whether interest rates might rise; controversy—as over the MMR vaccine; misinformation, where something untrue is spread unwittingly; and disinformation, where it’s done with malicious intent.

The system will also automatically categorise sources to assess their authority, such as news outlets, individual journalists, experts, potential eye witnesses, members of the public or automated ‘bots’. It will also look for a history and background, to help spot where Twitter accounts have been created purely to spread false information.

It will search for sources that corroborate or deny the information, and plot how the conversations on social networks evolve, using all of this information to assess whether it is true or false. The results will be displayed to the user in a visual dashboard, to enable them to easily see whether a rumour is taking hold.
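
The taxonomy translates directly into data types. A sketch of how such a system's categories might be modeled (the names are mine, not the project's):

```python
from enum import Enum

class RumourType(Enum):
    SPECULATION = "speculation"        # e.g., whether interest rates might rise
    CONTROVERSY = "controversy"        # e.g., the MMR vaccine
    MISINFORMATION = "misinformation"  # untrue, spread unwittingly
    DISINFORMATION = "disinformation"  # untrue, spread with malicious intent

class SourceKind(Enum):
    NEWS_OUTLET = "news outlet"
    JOURNALIST = "individual journalist"
    EXPERT = "expert"
    EYEWITNESS = "potential eyewitness"
    PUBLIC = "member of the public"
    BOT = "automated bot"
```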

I have no idea how well it will work, or even whether it will work, but I like research in this direction. Of the three primary Internet mechanisms for social control, surveillance and censorship have received a lot more attention than propaganda. Anything that can potentially detect propaganda is a good thing.

Three news articles.

Posted on February 21, 2014 at 8:34 AM
