Entries Tagged "lies"

The Office of the Director of National Intelligence Defends NSA Surveillance Programs

Here’s a transcript of a panel discussion about NSA surveillance. There’s a lot worth reading here, but I want to quote Bob Litt’s opening remarks. He’s the General Counsel for ODNI, and he has a lot to say about the programs revealed so far in the Snowden documents.

I’m reminded a little bit of a quote that, like many quotes, is attributed to Mark Twain but in fact is not Mark Twain’s, which is that a lie can get halfway around the world before the truth gets its boots on. And unfortunately, there’s been a lot of misinformation that’s come out about these programs. And what I would like to do in the next couple of minutes is actually go through and explain what the programs are and what they aren’t.

I particularly want to emphasize that I hope you come away from this with the understanding that neither of the programs that have been leaked to the press recently are indiscriminate sweeping up of information without regard to privacy or constitutional rights or any kind of controls. In fact, from my boss, the director of national intelligence, on down through the entire intelligence community, we are in fact sensitive to privacy and constitutional rights. After all, we are citizens of the United States. These are our rights too.

So as I said, we’re talking about two types of intelligence collection programs. I want to start discussing them by making the point that in order to target the emails or the phone calls or the communications of a United States citizen or a lawful permanent resident of the United States, wherever that person is located, or of any person within the United States, we need to go to court, and we need to get an individual order based on probable cause, the equivalent of an electronic surveillance warrant.

That does not mean and nobody has ever said that that means we never acquire the contents of an email or telephone call to which a United States person is a party. Whenever you’re doing any collection of information, you’re going to—you can’t avoid some incidental acquisition of information about nontargeted persons. Think of a wiretap in a criminal case. You’re wiretapping somebody, and you intercept conversations that are innocent as well as conversations that are inculpatory. If we seize somebody’s computer, there’s going to be information about innocent people on that. This is just a necessary incident.

What we do is we impose controls on the use of that information. But what we cannot do—and I’m repeating this—is go out and target the communications of Americans for collection without an individual court order.

So the first of the programs that I want to talk about that was leaked to the press is what’s been called Section 215, or business record collection. It’s called Section 215 because that was the section of the Patriot Act that put the current version of that statute into place. And under this statute, we collect telephone metadata, using a court order which is authorized by the Foreign Intelligence Surveillance Act, under a provision which allows the government to obtain business records for intelligence and counterterrorism purposes. Now, by metadata, in this context, I mean data that describes the phone calls, such as the telephone number making the call, the telephone number dialed, the date and time the call was made and the length of the call. These are business records of the telephone companies in question, which is why they can be collected under this provision.

Despite what you may have read about this program, we do not collect the content of any communications under this program. We do not collect the identity of any participant to any communication under this program. And while there seems to have been some confusion about this as recently as today, I want to make perfectly clear we do not collect cellphone location information under this program, either GPS information or cell site tower information. I’m not sure why it’s been so hard to get people to understand that because it’s been said repeatedly.

When the court approves collection under this statute, it issues two orders. One order, which is the one that was leaked, is an order to providers directing them to turn the relevant information over to the government. The other order, which was not leaked, is the order that spells out the limitations on what we can do with the information after it’s been collected, who has access, what purposes they can access it for and how long it can be retained.

Some people have expressed concern, which is quite a valid concern in the abstract, that if you collect large quantities of metadata about telephone calls, you could subject it to sophisticated analysis, and using those kinds of analytical tools, you can derive a lot of information about people that would otherwise not be discoverable.

The fact is, we are specifically not allowed to do that kind of analysis of this data, and we don’t do it. The metadata that is acquired and kept under this program can only be queried when there is reasonable suspicion, based on specific, articulable facts, that a particular telephone number is associated with specified foreign terrorist organizations. And the only purpose for which we can make that query is to identify contacts. All that we get under this program, all that we collect, is metadata. So all that we get back from one of these queries is metadata.

Each determination of a reasonable suspicion under this program must be documented and approved, and only a small portion of the data that is collected is ever actually reviewed, because the vast majority of that data is never going to be responsive to one of these terrorism-related queries.

In 2012 fewer than 300 identifiers were approved for searching this data. Nevertheless, we collect all the data because if you want to find a needle in the haystack, you need to have the haystack, especially in the case of a terrorism-related emergency, which is—and remember that this database is only used for terrorism-related purposes.

And if we want to pursue any further investigation as a result of a number that pops up in one of these queries, we have to do so pursuant to other authorities. In particular, if we want to conduct electronic surveillance of any number within the United States, as I said before, we have to go to court, and we have to get an individual order based on probable cause.

That’s one of the two programs.

The other program is very different. This is a program that’s sometimes referred to as PRISM, which is a misnomer. PRISM is actually the name of a database. The program is collection under Section 702 of the Foreign Intelligence Surveillance Act, which is a public statute that is widely known to everybody. There’s really no secret about this kind of collection.

This permits the government to target a non-U.S. person, somebody who’s not a citizen or a permanent resident alien, located outside of the United States, for foreign intelligence purposes without obtaining a specific warrant for each target, under the programmatic supervision of the FISA Court.

And it’s important here to step back and note that historically and at the time FISA was originally passed in 1978, this particular kind of collection, targeting non-U.S. persons outside of the United States for foreign intelligence purposes, was not intended to be covered by FISA at all. It was totally outside of the supervision of the FISA Court and totally within the prerogative of the executive branch. So in that respect, Section 702 is properly viewed as an expansion of FISA Court authority, rather than a contraction of that authority.

So Section 702, as I said, is limited to targeting foreigners outside the United States to acquire foreign intelligence information. And there is a specific provision in this statute that prohibits us from making an end run around this requirement, because we are expressly prohibited from targeting somebody outside of the United States in order to obtain some information about somebody inside the United States. That is to say, if we know that somebody outside of the United States is communicating with Spike Bowman, and we really want to get Spike Bowman’s communications, we’ve got to get an electronic surveillance order on Spike Bowman. We cannot target the person outside of the United States to collect on Spike.

In order to use Section 702, the government has to obtain approval from the FISA Court for the plan it intends to use to conduct the collection. This plan includes, first of all, identification of the foreign intelligence purposes of the collection; second, the plan and the procedures for ensuring that the individuals targeted for collection are in fact non-U.S. persons who are located outside of the United States. These are referred to as targeting procedures. And in addition, we have to get approval of the government’s procedures for what it will do with information about a U.S. person or someone inside the United States if we get that information through this collection. These procedures, which are called minimization procedures, determine what we can keep and what we can disseminate to other government agencies and impose limitations on that. And in particular, dissemination of information about U.S. persons is expressly prohibited unless that information is necessary to understand foreign intelligence or to assess its importance or is evidence of a crime or indicates a—an imminent threat of death or serious bodily harm.

And again, these procedures, the targeting and minimization procedures, have to be approved by the FISA court as consistent with the statute and consistent with the Fourth Amendment. And that’s what the Section 702 collection is.

The last thing I want to talk about a little bit is the myth that this is sort of unchecked authority, because we have extensive oversight and control over the collection, which involves all three branches of government. First, NSA has extensive technological processes, including segregated databases, limited access and audit trails, and they have extensive internal oversight, including their own compliance officer, who oversees compliance with the rules.

Second, the Department of Justice and my office, the Office of the Director of National Intelligence, are specifically charged with overseeing NSA’s activities to make sure that there are no compliance problems. And we report to the Congress twice a year on the use of these collection authorities and compliance problems. And if we find a problem, we correct it. Inspectors general, independent inspectors general, who, as you all know, also have an independent reporting responsibility to Congress, also are charged with undertaking a review of how these surveillance programs are carried out.

Any time that information is collected in violation of the rules, it’s reported immediately to the FISA court and is also reported to the relevant congressional oversight committees. It doesn’t matter how small the—or technical the violation is. And information that’s collected in violation of the rules has to be purged, with very limited exceptions.

Both the FISA court and the congressional oversight committees, which are Intelligence and Judiciary, take a very active role in overseeing this program and ensuring that we adhere to the requirements of the statutes and the court orders. And let me just stop and say that the suggestion that the FISA court is a rubber stamp is a complete canard, as anybody who’s ever had the privilege of appearing before Judge Bates or Judge Walton can attest.

Now, this is a complex system, and like any complex system, it’s not error free. But as I said before, every time we have found a mistake, we’ve fixed it. And the mistakes are self-reported. We find them ourselves in the exercise of our oversight. No one has ever found that there has ever been—and by no one, I mean the people at NSA, the people at the Department of Justice, the people at the Office of the Director of National Intelligence, the inspectors general, the FISA court and the congressional oversight committees, all of whom have visibility into this—nobody has ever found that there has ever been any intentional effort to violate the law or any intentional misuse of these tools.

As always, the fundamental issue is trust. If you believe Litt, this is all very comforting. If you don’t, it’s more lies and misdirection. Taken at face value, it explains why so many tech executives were able to say they had never heard of PRISM: it’s the internal NSA name for the database, and not the name of the program. I also note that Litt uses the word “collect” to mean what it actually means, and not the way his boss, Director of National Intelligence James Clapper, Jr., used it to deliberately lie to Congress.

Posted on July 4, 2013 at 7:07 AM

New Details on Skype Eavesdropping

This article, on the cozy relationship between the commercial personal-data industry and the intelligence industry, has new information on the security of Skype.

Skype, the Internet-based calling service, began its own secret program, Project Chess, to explore the legal and technical issues in making Skype calls readily available to intelligence agencies and law enforcement officials, according to people briefed on the program who asked not to be named to avoid trouble with the intelligence agencies.

Project Chess, which has never been previously disclosed, was small, limited to fewer than a dozen people inside Skype, and was developed as the company had sometimes contentious talks with the government over legal issues, said one of the people briefed on the project. The project began about five years ago, before most of the company was sold by its parent, eBay, to outside investors in 2009. Microsoft acquired Skype in an $8.5 billion deal that was completed in October 2011.

A Skype executive denied last year in a blog post that recent changes in the way Skype operated were made at the behest of Microsoft to make snooping easier for law enforcement. It appears, however, that Skype figured out how to cooperate with the intelligence community before Microsoft took over the company, according to documents leaked by Edward J. Snowden, a former contractor for the N.S.A. One of the documents about the Prism program made public by Mr. Snowden says Skype joined Prism on Feb. 6, 2011.

Reread that Skype denial from last July, knowing that at the time the company knew that they were giving the NSA access to customer communications. Notice how it is precisely worded to be technically accurate, yet leave the reader with the wrong conclusion. This is where we are with all the tech companies right now; we can’t trust their denials, just as we can’t trust the NSA—or the FBI—when it denies programs, capabilities, or practices.

Back in January, we wondered who Skype lets spy on its users. Now we know.

Posted on June 20, 2013 at 2:42 PM

Why We Lie

This, by Judge Kozinski, is from a federal court ruling about false statements and First Amendment protection:

Saints may always tell the truth, but for mortals living means lying. We lie to protect our privacy (“No, I don’t live around here”); to avoid hurt feelings (“Friday is my study night”); to make others feel better (“Gee you’ve gotten skinny”); to avoid recriminations (“I only lost $10 at poker”); to prevent grief (“The doc says you’re getting better”); to maintain domestic tranquility (“She’s just a friend”); to avoid social stigma (“I just haven’t met the right woman”); for career advancement (“I’m sooo lucky to have a smart boss like you”); to avoid being lonely (“I love opera”); to eliminate a rival (“He has a boyfriend”); to achieve an objective (“But I love you so much”); to defeat an objective (“I’m allergic to latex”); to make an exit (“It’s not you, it’s me”); to delay the inevitable (“The check is in the mail”); to communicate displeasure (“There’s nothing wrong”); to get someone off your back (“I’ll call you about lunch”); to escape a nudnik (“My mother’s on the other line”); to namedrop (“We go way back”); to set up a surprise party (“I need help moving the piano”); to buy time (“I’m on my way”); to keep up appearances (“We’re not talking divorce”); to avoid taking out the trash (“My back hurts”); to duck an obligation (“I’ve got a headache”); to maintain a public image (“I go to church every Sunday”); to make a point (“Ich bin ein Berliner”); to save face (“I had too much to drink”); to humor (“Correct as usual, King Friday”); to avoid embarrassment (“That wasn’t me”); to curry favor (“I’ve read all your books”); to get a clerkship (“You’re the greatest living jurist”); to save a dollar (“I gave at the office”); or to maintain innocence (“There are eight tiny reindeer on the rooftop”)….

An important aspect of personal autonomy is the right to shape one’s public and private persona by choosing when to tell the truth about oneself, when to conceal, and when to deceive. Of course, lies are often disbelieved or discovered, and that, too, is part of the push and pull of social intercourse. But it’s critical to leave such interactions in private hands, so that we can make choices about who we are. How can you develop a reputation as a straight shooter if lying is not an option?

Two related books on the evolutionary psychology of lying: David Livingstone Smith’s Why We Lie, and Dan Ariely’s The Honest Truth about Dishonesty.

Posted on May 30, 2013 at 6:31 AM

Detecting Deception in Conference Calls

Research paper: Detecting Deceptive Discussions in Conference Calls, by David F. Larcker and Anastasia A. Zakolyukina.

Abstract: We estimate classification models of deceptive discussions during quarterly earnings conference calls. Using data on subsequent financial restatements (and a set of criteria to identify especially serious accounting problems), we label the Question and Answer section of each call as “truthful” or “deceptive”. Our models are developed with the word categories that have been shown by previous psychological and linguistic research to be related to deception. Using conservative statistical tests, we find that the out-of-sample performance of the models that are based on CEO or CFO narratives is significantly better than random by 4%–6% (with 50%–65% accuracy) and provides a significant improvement to a model based on discretionary accruals and traditional controls. We find that answers of deceptive executives have more references to general knowledge, fewer non-extreme positive emotions, and fewer references to shareholder value and value creation. In addition, deceptive CEOs use significantly fewer self-references, more third person plural and impersonal pronouns, more extreme positive emotions, fewer extreme negative emotions, and fewer certainty and hesitation words.
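The word-category approach the abstract describes can be sketched in a few lines of code. To be clear, this is not the authors' actual model; the categories, word lists, and weights below are invented for illustration, with the signs of the weights chosen to mirror the qualitative findings reported in the abstract (deceptive executives use fewer self-references, more third-person plural pronouns, more extreme positive emotion, and fewer certainty and hesitation words).

```python
# Hypothetical word categories in the spirit of deception research;
# real studies use validated lexicons, not these toy lists.
CATEGORIES = {
    "self_reference": {"i", "me", "my", "mine"},
    "third_person_plural": {"they", "them", "their"},
    "extreme_positive": {"fantastic", "superb", "tremendous"},
    "certainty": {"always", "never", "definitely"},
    "hesitation": {"maybe", "perhaps", "guess"},
}

def features(text):
    """Fraction of words in the text falling into each category."""
    words = text.lower().split()
    n = max(len(words), 1)
    return [sum(w in vocab for w in words) / n for vocab in CATEGORIES.values()]

# Hand-set illustrative weights: positive weight means the category is
# associated with deception, negative with truthfulness. A real model
# would estimate these from labeled transcripts.
WEIGHTS = [-2.0, 1.5, 1.5, -1.0, -1.0]

def deception_score(text):
    """Linear score over category features; higher suggests deception."""
    return sum(w * f for w, f in zip(WEIGHTS, features(text)))
```

In the paper, the labels come from subsequent financial restatements and the model is fit statistically; this sketch only shows the feature-extraction idea.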

Posted on August 26, 2010 at 6:15 AM

Behavioral Profiling at Airports

There’s a long article in Nature on the practice:

It remains unclear what the officers found anomalous about George’s behaviour, and why he was detained. The TSA’s parent agency, the Department of Homeland Security (DHS), has declined to comment on his case because it is the subject of a federal lawsuit that was filed on George’s behalf in February by the American Civil Liberties Union. But the incident has brought renewed attention to a burgeoning controversy: is it possible to know whether people are being deceptive, or planning hostile acts, just by observing them?

Some people seem to think so. At London’s Heathrow Airport, for example, the UK government is deploying behaviour-detection officers in a trial modelled in part on SPOT. And in the United States, the DHS is pursuing a programme that would use sensors to look at nonverbal behaviours, and thereby spot terrorists as they walk through a corridor. The US Department of Defense and intelligence agencies have expressed interest in similar ideas.

Yet a growing number of researchers are dubious, not just about the projects themselves, but about the science on which they are based. “Simply put, people (including professional lie-catchers with extensive experience of assessing veracity) would achieve similar hit rates if they flipped a coin,” noted a 2007 report from a committee of credibility-assessment experts who reviewed research on portal screening.

“No scientific evidence exists to support the detection or inference of future behaviour, including intent,” declares a 2008 report prepared by the JASON defence advisory group. And the TSA had no business deploying SPOT across the nation’s airports “without first validating the scientific basis for identifying suspicious passengers in an airport environment”, stated a two-year review of the programme released on 20 May by the Government Accountability Office (GAO), the investigative arm of the US Congress.

Commentary from the MindHacks blog.

Also, the GAO has published a report on the U.S. DHS’s SPOT program: “Aviation Security: Efforts to Validate TSA’s Passenger Screening Behavior Detection Program Underway, but Opportunities Exist to Strengthen Validation and Address Operational Challenges.”

As of March 2010, TSA deployed about 3,000 BDOs at an annual cost of about $212 million; this force increased almost fifteen-fold between March 2007 and July 2009. BDOs have been selectively deployed to 161 of the 457 TSA-regulated airports in the United States at which passengers and their property are subject to TSA-mandated screening procedures.

It seems pretty clear that the program catches only criminals, and no terrorists. You’d think there would be more important things to spend $200 million a year on.

EDITED TO ADD (6/14): In the comments, a couple of people asked how this compares with the Israeli model of airport security—concentrate on the person—and the idea that trained officers notice if someone is acting “hinky”: both things that I have written favorably about.

The difference is the experience of the detecting officer and the amount of time they spend with each person. If you read about the programs described above, they’re supposed to “spot terrorists as they walk through a corridor,” or possibly after a few questions. That’s very different from what happens when you check in for a flight at Ben Gurion Airport.

The problem with fast detection programs is that they don’t work, and the problem with the Israeli security model is that it doesn’t scale.

Posted on June 14, 2010 at 6:23 AM

Leaders Make Better Liars

According to new research:

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars—the leaders—resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars—the subordinates—showed the usual signs of stress and slower reaction times. “Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal,” Carney explains.

[…]

Carney emphasizes that these results don’t mean that all people in high positions find lying easier: people need only feel powerful, regardless of the real power they have or their position in a hierarchy. “There are plenty of CEOs who act like low-power people and there are plenty of people at every level in organizations who feel very high power,” Carney says. “It can cross rank, every strata of society, any job.”

Posted on March 30, 2010 at 1:59 PM

The Problems with Unscientific Security

From the Open Access Journal of Forensic Psychology, by a whole list of authors: “A Call for Evidence-Based Security Tools”:

Abstract: Since the 2001 attacks on the twin towers, policies on security have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Many of the solutions offered today, however, lack scientific underpinning.

We recommend two important changes to improve the (cost) effectiveness of security policy. To begin with, the emphasis of deception research should shift from technological to behavioural sciences. Secondly, the burden of proof should lie with the manufacturers of the security tools. Governments should not rely on security tools that have not passed scientific scrutiny, and should only employ those methods that have been proven effective. After all, the use of tools that do not work will only get us further from the truth.

One excerpt:

In absence of systematic research, users will base their evaluation on data generated by field use. Because people tend to follow heuristics rather than the rules of probability theory, perceived effectiveness can substantially differ from true effectiveness (Tversky & Kahneman, 1973). For example, one well-known problem associated with field studies is that of selective feedback. Investigative authorities are unlikely to receive feedback from liars who are erroneously considered truthful. They will occasionally receive feedback when correctly detecting deception, for example through confessions (Patrick & Iacono, 1991; Vrij, 2008). The perceived effectiveness that follows from this can be further reinforced through confirmation bias: Evidence confirming one’s preconception is weighted more heavily than evidence contradicting it (Lord, Ross, & Lepper, 1979). As a result, even techniques that perform at chance level may be perceived as highly effective (Iacono, 1991). This unwarranted confidence can have profound effects on citizens’ safety and civil liberty: Criminals may escape detection while innocents may be falsely accused. The Innocence Project (Unvalidated or improper science, no date) demonstrates that unvalidated or improper forensic science can indeed lead to wrongful convictions (see also Saks & Koehler, 2005).
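The selective-feedback effect the excerpt describes is easy to demonstrate with a toy simulation: a "detector" that performs exactly at chance looks much better than chance when confirming feedback (confessions from correctly flagged liars) arrives far more often than disconfirming feedback (exonerations of wrongly flagged truth-tellers). All the rates below are invented for illustration, not drawn from the paper.

```python
import random

def simulate(n_cases=100_000, seed=1):
    """Compare a chance-level detector's true accuracy with the accuracy
    investigators would perceive from selectively reported feedback."""
    rng = random.Random(seed)
    correct = confirm = disconfirm = 0
    for _ in range(n_cases):
        is_liar = rng.random() < 0.2      # assumed base rate of deception
        flagged = rng.random() < 0.5      # the "detector" is a coin flip
        correct += (flagged == is_liar)
        if flagged and is_liar and rng.random() < 0.5:
            confirm += 1                  # confession: confirming feedback
        elif flagged and not is_liar and rng.random() < 0.05:
            disconfirm += 1               # rare exoneration: disconfirming feedback
    true_acc = correct / n_cases
    perceived_acc = confirm / (confirm + disconfirm)
    return true_acc, perceived_acc

true_acc, perceived_acc = simulate()
```

With these assumed rates, true accuracy comes out near 50% while the feedback that actually reaches investigators confirms their judgments roughly 70% of the time, which is exactly the gap between true and perceived effectiveness the authors warn about.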

Article on the paper.

Posted on November 5, 2009 at 6:11 AM
