Entries Tagged "trust"


The Effect of Money on Trust

Money reduces trust in small groups, but increases it in larger groups. Basically, the introduction of money allows society to scale.

The team devised an experiment where subjects in small and large groups had the option to give gifts in exchange for tokens.

They found that there was a social cost to introducing this incentive. When all tokens were “spent”, a potential gift-giver was less likely to help than they had been in a setting where tokens had not yet been introduced.

The same effect was found in smaller groups, which were less generous when there was the option of receiving a token.

“Subjects basically latched on to monetary exchange, and stopped helping unless they received immediate compensation in the form of an intrinsically worthless object [a token].

“Using money does help large societies to achieve larger levels of co-operation than smaller societies, but it does so at the cost of displacing the norms of voluntary help that are the bread and butter of smaller societies, in which everyone knows each other,” said Prof Camera.

But he said that this negative result was not found in larger, anonymous groups of 32; instead, co-operation increased with the use of tokens.

“This is exciting because we introduced something that adds nothing to the economy, but it helped participants converge on a behaviour that is more trustworthy.”

He added that the study reflected monetary exchange in daily life: “Global interaction expands the set of trade opportunities, but it dilutes the level of information about others’ past behaviour. In this sense, one can view tokens in our experiment as a parable for global monetary exchange.”

Posted on September 5, 2013 at 1:57 PM

Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. When I left, I dropped only the chain through the slot, using my body as a shield so the guard couldn’t see that the badge wasn’t attached; the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn’t happen. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong and the machine portion doesn’t communicate well, the human portion trusts it a little less.

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.
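The requirement that a machine clearly flag the unexpected can be made concrete with a small sketch. This is entirely hypothetical (no real badge system is being described): a state machine that reports which badge failed and why, instead of silently blocking the next person at the gate.

```python
# Hypothetical sketch: a badge-tracking state machine that surfaces
# unexpected transitions to the human guard instead of failing opaquely.
# All names here are illustrative, not taken from any real system.

EXPECTED = {
    ("checked_in", "exit_gate"): "checked_out",
}

class BadgeTracker:
    def __init__(self):
        self.state = {}    # badge_id -> current state
        self.alerts = []   # messages a guard's console would display

    def check_in(self, badge_id):
        self.state[badge_id] = "checked_in"

    def event(self, badge_id, event_name):
        current = self.state.get(badge_id, "unknown")
        nxt = EXPECTED.get((current, event_name))
        if nxt is None:
            # Communicate *what* is wrong and *which* badge, so the
            # guard doesn't blame whoever happens to be at the gate.
            self.alerts.append(
                f"badge {badge_id}: unexpected '{event_name}' in state '{current}'"
            )
            return False
        self.state[badge_id] = nxt
        return True

tracker = BadgeTracker()
tracker.check_in("A123")
tracker.event("A123", "exit_gate")       # normal check-out succeeds
ok = tracker.event("A123", "exit_gate")  # badge already out: flagged, not silent
```

The design point is the alert message: it names the badge and the state, which is exactly the detail the guards in the story above were missing.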

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM

Protecting Against Leakers

Ever since Edward Snowden walked out of a National Security Agency facility in May with electronic copies of thousands of classified documents, the finger-pointing has concentrated on the government’s security failures. Yet the debacle illustrates the challenge of trusting people in any organization.

The problem is easy to describe. Organizations require trusted people, but they can’t be sure whether those people are trustworthy. These individuals are essential, yet they are also in a position to betray the organization.

So how does an organization protect itself?

Securing trusted people requires three basic mechanisms (as I describe in my book Beyond Fear). The first is compartmentalization. Trust doesn’t have to be all or nothing; it makes sense to give relevant workers only the access, capabilities and information they need to accomplish their assigned tasks. In the military, even if they have the requisite clearance, people are only told what they “need to know.” The same policy occurs naturally in companies.

This isn’t simply a matter of always granting more senior employees a higher degree of trust. For example, only authorized armored-car delivery people can unlock automated teller machines and put money inside; even the bank president can’t do so. Think of an employee as operating within a sphere of trust—a set of assets and functions he or she has access to. Organizations act in their best interest by making that sphere as small as possible.

The idea is that if someone turns out to be untrustworthy, he or she can only do so much damage. This is where the NSA failed with Snowden. As a system administrator, he had access to many of the agency’s computer systems—and to everything on those machines. This allowed him to make copies of documents he didn’t need to see.

The second mechanism for securing trust is defense in depth: Make sure a single person can’t compromise an entire system. NSA Director General Keith Alexander has said he is doing this inside the agency by instituting what is called two-person control: There will always be two people performing system-administration tasks on highly classified computers.

Defense in depth reduces the ability of a single person to betray the organization. If this system had been in place and Snowden’s superior had been notified every time he downloaded a file, Snowden would have been caught well before his flight to Hong Kong.
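Two-person control is easy to sketch in code. This is an illustration of the principle only; the administrator IDs and action names are invented:

```python
# Hypothetical two-person-control check: a sensitive action proceeds only
# when two *distinct* administrators have approved it.

def two_person_control(action, approvals):
    """approvals: set of admin IDs who signed off on this action."""
    if len(set(approvals)) < 2:
        raise PermissionError(f"{action!r} requires two distinct approvers")
    return f"executed {action}"

two_person_control("copy_classified_file", {"admin_a", "admin_b"})  # allowed
# two_person_control("copy_classified_file", {"admin_a"})  # raises PermissionError
```

The check on *distinct* approvers matters: a single administrator approving twice must not satisfy the rule.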

The final mechanism is to try to ensure that trusted people are, in fact, trustworthy. The NSA does this through its clearance process, which at high levels includes lie-detector tests (even though they don’t work) and background investigations. Many organizations perform reference and credit checks and drug tests when they hire new employees. Companies may refuse to hire noncitizens or people with criminal records; they might hire only those with a particular certification or membership in certain professional organizations. Some of these measures aren’t very effective—it’s pretty clear that personality profiling doesn’t tell you anything useful, for example—but the general idea is to verify, certify and test individuals to increase the chance they can be trusted.

These measures are expensive. It costs the U.S. government about $4,000 to qualify someone for top-secret clearance. Even in a corporation, background checks and screenings are expensive and add considerable time to the hiring process. Giving employees access to only the information they need can hamper them in an agile organization in which needs constantly change. Security audits are expensive, and two-person control is even more expensive: it can double personnel costs. We’re always making trade-offs between security and efficiency.

The best defense is to limit the number of trusted people needed within an organization. Alexander is doing this at the NSA—albeit too late—by trying to reduce the number of system administrators by 90 percent. This is just a tiny part of the problem; in the U.S. government, as many as 4 million people, including contractors, hold top-secret or higher security clearances. That’s far too many.

More surprising than Snowden’s ability to get away with taking the information he downloaded is that there haven’t been dozens more like him. His uniqueness—along with the few who have gone before him and how rare whistle-blowers are in general—is a testament to how well we normally do at building security around trusted people.

Here’s one last piece of advice, specifically about whistle-blowers. It’s much harder to keep secrets in a networked world, and whistle-blowing has become the civil disobedience of the information age. A public or private organization’s best defense against whistle-blowers is to refrain from doing things it doesn’t want to read about on the front page of the newspaper. This may come as a shock in a market-based system, in which morally dubious behavior is often rewarded as long as it’s legal, and illegal activity is rewarded as long as you can get away with it.

No organization, whether it’s a bank entrusted with the privacy of its customer data, an organized-crime syndicate intent on ruling the world, or a government agency spying on its citizens, wants to have its secrets disclosed. In the information age, though, it may be impossible to avoid.

This essay previously appeared on Bloomberg.com.

EDITED TO ADD 8/22: A commenter on the Bloomberg site added another security measure: pay your people more. Better paid people are less likely to betray the organization that employs them. I should have added that, especially since I make that exact point in Liars and Outliers.

Posted on August 26, 2013 at 1:19 PM

Lavabit E-Mail Service Shut Down

Lavabit, the more-secure e-mail service that Edward Snowden—among others—used, has abruptly shut down. From the message on their homepage:

I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot….

This experience has taught me one very important lesson: without congressional action or a strong judicial precedent, I would strongly recommend against anyone trusting their private data to a company with physical ties to the United States.

In case something happens to the homepage, the full message is recorded here.

More about the public/private surveillance partnership. And another news article.

Also yesterday, Silent Circle shut down its email service:

We see the writing on the wall, and we have decided that it is best for us to shut down Silent Mail now. We have not received subpoenas, warrants, security letters, or anything else by any government, and this is why we are acting now.

More news stories.

This illustrates the difference between a business owned by a person, and a public corporation owned by shareholders. Ladar Levison can decide to shutter Lavabit—a move that will personally cost him money—because he believes it’s the right thing to do. I applaud that decision, but it’s one he’s only able to make because he doesn’t have to answer to public shareholders. Could you imagine what would happen if Mark Zuckerberg or Larry Page decided to shut down Facebook or Google rather than answer National Security Letters? They couldn’t. They would be fired.

When the small companies can no longer operate, it’s another step in the consolidation of the surveillance society.

Posted on August 9, 2013 at 11:45 AM

Restoring Trust in Government and the Internet

In July 2012, responding to allegations that the video-chat service Skype—owned by Microsoft—was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company’s blog to deny it.

Turns out that wasn’t quite true.

Or at least he—or the company’s lawyers—carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn’t changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.

At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn’t collect data on hundreds of millions of Americans. He was lying, too. He later defended his lie by inventing a new definition of the word “collect,” an excuse that didn’t even pass the laugh test.

As Edward Snowden’s documents reveal more about the NSA’s activities, it’s becoming clear that we can’t trust anything anyone official says about these programs.

Google and Facebook insist that the NSA has no “direct access” to their servers. Of course not; the smart way for the NSA to get all the data is through sniffers.

Apple says it’s never heard of PRISM. Of course not; that’s the internal name of the NSA database. Companies are publishing reports purporting to show how few requests for customer-data access they’ve received, a meaningless number when a single Verizon request can cover all of their customers. The Guardian reported that Microsoft secretly worked with the NSA to subvert the security of Outlook, something it carefully denies. Even President Obama’s justifications and denials are phrased with the intent that the listener will take his words very literally and not wonder what they really mean.

NSA Director Gen. Keith Alexander has claimed that the NSA’s massive surveillance and data mining programs have helped stop more than 50 terrorist plots, 10 inside the U.S. Do you believe him? I think it depends on your definition of “helped.” We’re not told whether these programs were instrumental in foiling the plots or whether they just happened to be of minor help because the data was there. It also depends on your definition of “terrorist plots.” An examination of plots that the FBI claims to have foiled since 9/11 reveals that would-be terrorists have commonly been delusional, and most have been egged on by FBI undercover agents or informants.

Left alone, few were likely to have accomplished much of anything.

Both government agencies and corporations have cloaked themselves in so much secrecy that it’s impossible to verify anything they say; revelation after revelation demonstrates that they’ve been lying to us regularly and tell the truth only when there’s no alternative.

There’s much more to come. Right now, the press has published only a tiny percentage of the documents Snowden took with him. And Snowden’s files are only a tiny percentage of the number of secrets our government is keeping, awaiting the next whistle-blower.

Ronald Reagan once said “trust but verify.” That works only if we can verify. In a world where everyone lies to us all the time, we have no choice but to trust blindly, and we have no reason to believe that anyone is worthy of blind trust. It’s no wonder that most people are ignoring the story; it’s just too much cognitive dissonance to try to cope with it.

This sort of thing can destroy our country. Trust is essential in our society. And if we can’t trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.

Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government’s actions.

Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it’s probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.

This is the only way we can restore trust. A market economy doesn’t work unless consumers can make intelligent buying decisions based on accurate product information. That’s why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.

In the same way, democracy can’t work unless voters know what the government is doing in their name. That’s why we have open-government laws. Secret courts making secret rulings on secret laws, and companies flagrantly lying to consumers about the insecurity of their products and services, undermine the very foundations of our society.

Since the Snowden documents became public, I have been receiving e-mails from people seeking advice on whom to trust. As a security and privacy expert, I’m expected to know which companies protect their users’ privacy and which encryption programs the NSA can’t break. The truth is, I have no idea. No one outside the classified government world does. I tell people that they have no choice but to decide whom they trust and to then trust them as a matter of faith. It’s a lousy answer, but until our government starts down the path of regaining our trust, it’s the only thing we can do.

This essay originally appeared on CNN.com.

EDITED TO ADD (8/7): Two more links describing how the US government lies about NSA surveillance.

Posted on August 7, 2013 at 6:29 AM

Secret Information Is More Trusted

This is an interesting, if slightly disturbing, result:

In one experiment, we had subjects read two government policy papers from 1995, one from the State Department and the other from the National Security Council, concerning United States intervention to stop the sale of fighter jets between foreign countries.

The documents, both of which were real papers released through the Freedom of Information Act, argued different sides of the issue. Depending on random assignment, one was described as having been previously classified, the other as being always public. Most people in the study thought that whichever document had been “classified” contained more accurate and well-reasoned information than the public document.

In another experiment, people read a real government memo from 1978 written by members of the National Security Council about the sale of fighter jets to Taiwan; we then explained that the council used the information to make decisions. Again, depending on random assignment, some people were told that the document had been secret and for exclusive use by the council, and that it had been recently declassified under the Freedom of Information Act. Others were told that the document had always been public.

As we expected, people who thought the information was secret deemed it more useful, important and accurate than did those who thought it was public. And people judged the National Security Council’s actions based on the information as more prudent and wise when they believed the document had been secret.

[…]

Our study helps explain the public’s support for government intelligence gathering. A recent poll by the Pew Research Center for the People and the Press reported that a majority of Americans thought it was acceptable for the N.S.A. to track Americans’ phone activity to investigate terrorism. Some frustrated commentators have concluded that Americans have much less respect for their own privacy than they should.

But our research suggests another conclusion: the secret nature of the program itself may lead the public to assume that the information it gathers is valuable, without even examining what that information is or how it might be used.

Original paper abstract; the full paper is behind a paywall.

Posted on July 26, 2013 at 6:25 AM

How the FISA Court Undermines Trust

This is a succinct explanation of how the secrecy of the FISA court undermines trust.

Surveillance types make a distinction between secrecy of laws, secrecy of procedures and secrecy of operations. The expectation is that the laws that empower or limit the government’s surveillance powers are always public. The programs built atop those laws are often secret. And the individual operations are almost always secret. As long as the public knows about and agrees to the law, the thinking goes, it’s okay for the government to build a secret surveillance architecture atop it.

But the FISA court is, in effect, breaking the first link in that chain. The public no longer knows about the law itself, and most of Congress may not know, either. The courts have remade the law, but they’ve done so secretly, without public comment or review.

Reminds me of the two types of secrecy I wrote about last month.

Posted on July 23, 2013 at 1:00 PM

Violence as a Source of Trust in Criminal Societies

This is interesting:

If I know that you have committed a violent act, and you know that I have committed a violent act, we each have information on each other that we might threaten to use if relations go sour (Schelling notes that one of the most valuable rights in business relations is the right to be sued—this is a functional equivalent).

Abstract of original paper; full paper is behind a paywall.

Posted on July 22, 2013 at 6:36 AM

The Office of the Director of National Intelligence Defends NSA Surveillance Programs

Here’s a transcript of a panel discussion about NSA surveillance. There’s a lot worth reading here, but I want to quote Bob Litt’s opening remarks. He’s the General Counsel for ODNI, and he has a lot to say about the programs revealed so far in the Snowden documents.

I’m reminded a little bit of a quote that, like many quotes, is attributed to Mark Twain but in fact is not Mark Twain’s, which is that a lie can get halfway around the world before the truth gets its boots on. And unfortunately, there’s been a lot of misinformation that’s come out about these programs. And what I would like to do in the next couple of minutes is actually go through and explain what the programs are and what they aren’t.

I particularly want to emphasize that I hope you come away from this with the understanding that neither of the programs that have been leaked to the press recently are indiscriminate sweeping up of information without regard to privacy or constitutional rights or any kind of controls. In fact, from my boss, the director of national intelligence, on down through the entire intelligence community, we are in fact sensitive to privacy and constitutional rights. After all, we are citizens of the United States. These are our rights too.

So as I said, we’re talking about two types of intelligence collection programs. I want to start discussing them by making the point that in order to target the emails or the phone calls or the communications of a United States citizen or a lawful permanent resident of the United States, wherever that person is located, or of any person within the United States, we need to go to court, and we need to get an individual order based on probable cause, the equivalent of an electronic surveillance warrant.

That does not mean and nobody has ever said that that means we never acquire the contents of an email or telephone call to which a United States person is a party. Whenever you’re doing any collection of information, you’re going to—you can’t avoid some incidental acquisition of information about nontargeted persons. Think of a wiretap in a criminal case. You’re wiretapping somebody, and you intercept conversations that are innocent as well as conversations that are inculpatory. If we seize somebody’s computer, there’s going to be information about innocent people on that. This is just a necessary incident.

What we do is we impose controls on the use of that information. But what we cannot do—and I’m repeating this—is go out and target the communications of Americans for collection without an individual court order.

So the first of the programs that I want to talk about that was leaked to the press is what’s been called Section 215, or business record collection. It’s called Section 215 because that was the section of the Patriot Act that put the current version of that statute into place. And under that—this statute, we collect telephone metadata, using a court order which is authorized by the Foreign Intelligence Surveillance Act, under a provision which allows the government to obtain business records for intelligence and counterterrorism purposes. Now, by metadata, in this context, I mean data that describes the phone calls, such as the telephone number making the call, the telephone number dialed, the date and time the call was made and the length of the call. These are business records of the telephone companies in question, which is why they can be collected under this provision.

Despite what you may have read about this program, we do not collect the content of any communications under this program. We do not collect the identity of any participant to any communication under this program. And while there seems to have been some confusion about this as recently as today, I want to make perfectly clear we do not collect cellphone location information under this program, either GPS information or cell site tower information. I’m not sure why it’s been so hard to get people to understand that because it’s been said repeatedly.

When the court approves collection under this statute, it issues two orders. One order, which is the one that was leaked, is an order to providers directing them to turn the relevant information over to the government. The other order, which was not leaked, is the order that spells out the limitations on what we can do with the information after it’s been collected, who has access, what purposes they can access it for and how long it can be retained.

Some people have expressed concern, which is quite a valid concern in the abstract, that if you collect large quantities of metadata about telephone calls, you could subject it to sophisticated analysis, and using those kind of analytical tools, you can derive a lot of information about people that would otherwise not be discoverable.

The fact is, we are specifically not allowed to do that kind of analysis of this data, and we don’t do it. The metadata that is acquired and kept under this program can only be queried when there is reasonable suspicion, based on specific, articulable facts, that a particular telephone number is associated with specified foreign terrorist organizations. And the only purpose for which we can make that query is to identify contacts. All that we get under this program, all that we collect, is metadata. So all that we get back from one of these queries is metadata.

Each determination of a reasonable suspicion under this program must be documented and approved, and only a small portion of the data that is collected is ever actually reviewed, because the vast majority of that data is never going to be responsive to one of these terrorism-related queries.

In 2012 fewer than 300 identifiers were approved for searching this data. Nevertheless, we collect all the data because if you want to find a needle in the haystack, you need to have the haystack, especially in the case of a terrorism-related emergency, which is—and remember that this database is only used for terrorism-related purposes.
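The rules Litt describes (metadata only, queries only for identifiers with a documented reasonable suspicion) can be sketched as follows. This models only the *stated* policy; every name and number here is hypothetical, and nothing in it describes any real system:

```python
from dataclasses import dataclass

# Illustrative only: a record holding just the fields the stated policy
# permits. Note what is absent: no call content, no subscriber identity,
# no GPS or cell-site location.

@dataclass(frozen=True)
class CallRecord:
    caller: str      # telephone number making the call
    callee: str      # telephone number dialed
    timestamp: str   # date and time the call was made
    duration_s: int  # length of the call

# Hypothetical set of identifiers with documented, approved
# reasonable-articulable-suspicion determinations.
APPROVED_IDENTIFIERS = {"+15550100"}

def query_contacts(db, identifier):
    """Return contact metadata only for an approved identifier."""
    if identifier not in APPROVED_IDENTIFIERS:
        raise PermissionError("no documented reasonable-suspicion approval")
    return [r for r in db if identifier in (r.caller, r.callee)]

db = [CallRecord("+15550100", "+15550199", "2012-06-01T12:00", 60)]
hits = query_contacts(db, "+15550100")  # the one matching contact record
```

The point of the sketch is the gate on the query, not the data model: under the described rules, the bulk of the "haystack" is collected but never looked at.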

And if we want to pursue any further investigation as a result of a number that pops up from one of these queries, we have to do so pursuant to other authorities; in particular, if we want to conduct electronic surveillance of any number within the United States, as I said before, we have to go to court, and we have to get an individual order based on probable cause.

That’s one of the two programs.

The other program is very different. This is a program that’s sometimes referred to as PRISM, which is a misnomer. PRISM is actually the name of a database. The program is collection under Section 702 of the Foreign Intelligence Surveillance Act, which is a public statute that is widely known to everybody. There’s really no secret about this kind of collection.

This permits the government to target a non-U.S. person, somebody who’s not a citizen or a permanent resident alien, located outside of the United States, for foreign intelligence purposes without obtaining a specific warrant for each target, under the programmatic supervision of the FISA Court.

And it’s important here to step back and note that historically and at the time FISA was originally passed in 1978, this particular kind of collection, targeting non-U.S. persons outside of the United States for foreign intelligence purposes, was not intended to be covered by FISA at all. It was totally outside of the supervision of the FISA Court and totally within the prerogative of the executive branch. So in that respect, Section 702 is properly viewed as an expansion of FISA Court authority, rather than a contraction of that authority.

So Section 702, as I—as I said, it’s—is limited to targeting foreigners outside the United States to acquire foreign intelligence information. And there is a specific provision in this statute that prohibits us from making an end run around this requirement, because we are expressly prohibited from targeting somebody outside of the United States in order to obtain some information about somebody inside the United States. That is to say, if we know that somebody outside of the United States is communicating with Spike Bowman, and we really want to get Spike Bowman’s communications, we’ve got to get an electronic surveillance order on Spike Bowman. We cannot target the person outside of the United States to collect on Spike.

In order to use Section 702, the government has to obtain approval from the FISA Court for the plan it intends to use to conduct the collection. This plan includes, first of all, identification of the foreign intelligence purposes of the collection; second, the plan and the procedures for ensuring that the individuals targeted for collection are in fact non-U.S. persons who are located outside of the United States. These are referred to as targeting procedures. And in addition, we have to get approval of the government’s procedures for what it will do with information about a U.S. person or someone inside the United States if we get that information through this collection. These procedures, which are called minimization procedures, determine what we can keep and what we can disseminate to other government agencies, and impose limitations on that. And in particular, dissemination of information about U.S. persons is expressly prohibited unless that information is necessary to understand foreign intelligence or to assess its importance, or is evidence of a crime, or indicates an imminent threat of death or serious bodily harm.

And again, these procedures, the targeting and minimization procedures, have to be approved by the FISA court as consistent with the statute and consistent with the Fourth Amendment. And that’s what the Section 702 collection is.

The last thing I want to talk about a little bit is the myth that this is sort of unchecked authority, because we have extensive oversight and control over the collection, which involves all three branches of government. First, NSA has extensive technological processes, including segregated databases, limited access and audit trails, and they have extensive internal oversight, including their own compliance officer, who oversees compliance with the rules.

Second, the Department of Justice and my office, the Office of the Director of National Intelligence, are specifically charged with overseeing NSA’s activities to make sure that there are no compliance problems. And we report to the Congress twice a year on the use of these collection authorities and compliance problems. And if we find a problem, we correct it. Inspectors general, independent inspectors general, who, as you all know, also have an independent reporting responsibility to Congress, also are charged with undertaking a review of how these surveillance programs are carried out.

Any time that information is collected in violation of the rules, it’s reported immediately to the FISA court and is also reported to the relevant congressional oversight committees. It doesn’t matter how small or technical the violation is. And information that’s collected in violation of the rules has to be purged, with very limited exceptions.

Both the FISA court and the congressional oversight committees, which are Intelligence and Judiciary, take a very active role in overseeing this program and ensuring that we adhere to the requirements of the statutes and the court orders. And let me just stop and say that the suggestion that the FISA court is a rubber stamp is a complete canard, as anybody who’s ever had the privilege of appearing before Judge Bates or Judge Walton can attest.

Now, this is a complex system, and like any complex system, it’s not error free. But as I said before, every time we have found a mistake, we’ve fixed it. And the mistakes are self-reported. We find them ourselves in the exercise of our oversight. No one has ever found that there has ever been—and by no one, I mean the people at NSA, the people at the Department of Justice, the people at the Office of the Director of National Intelligence, the inspectors general, the FISA court and the congressional oversight committees, all of whom have visibility into this—nobody has ever found that there has ever been any intentional effort to violate the law or any intentional misuse of these tools.

As always, the fundamental issue is trust. If you believe Litt, this is all very comforting. If you don’t, it’s more lies and misdirection. Taken at face value, it explains why so many tech executives were able to say they had never heard of PRISM: it’s the internal NSA name for the database, and not the name of the program. I also note that Litt uses the word “collect” to mean what it actually means, and not the way his boss, Director of National Intelligence James Clapper, Jr., used it to deliberately lie to Congress.

Posted on July 4, 2013 at 7:07 AM • View Comments

Preventing Cell Phone Theft through Benefit Denial

Adding a remote kill switch to cell phones would deter theft.

Here we can see how the rise of the surveillance state permeates everything about computer security. On the face of it, this is a good idea. Assuming it works—that 1) it’s not possible for thieves to resurrect phones in order to resell them, and 2) it’s not possible to turn this system into a denial-of-service attack tool—it would deter crime. The general category of security is “benefit denial,” like ink tags attached to garments in retail stores and car radios that no longer function if removed. But given what we now know, do we trust that the government wouldn’t abuse this system and kill phones for other reasons? Do we trust that media companies won’t kill phones they decide are sharing copyrighted materials? Do we trust that phone companies won’t kill phones of delinquent customers? What might have been a straightforward security system becomes a dangerous tool of control when you don’t trust those in power.

Posted on June 28, 2013 at 1:37 PM • View Comments

