Essays in the Category “National Security Policy”
The Lessons of WannaCry
There is plenty of blame to go around for the WannaCry ransomware that spread throughout the Internet earlier this month, disrupting work at hospitals, factories, businesses, and universities. First, there are the writers of the malicious software, which blocks victims' access to their computers until they pay a fee. Then there are the users who didn't install the Windows security patch that would have prevented an attack. A small portion of the blame falls on Microsoft, which wrote the insecure code in the first place.
On Monday, the TSA announced a peculiar new security measure to take effect within 96 hours. Passengers flying into the US on foreign airlines from eight Muslim countries would be prohibited from carrying aboard any electronics larger than a smartphone. They would have to be checked and put into the cargo hold. And now the UK is following suit.
Solutions require both corporate regulation and international cooperation
This essay appeared on Time.com as part of a special section called Let's Talk About the Issues.
On today's Internet, too much power is concentrated in too few hands. In the early days of the Internet, individuals were empowered. Now governments and corporations hold the balance of power. If we are to leave a better Internet for the next generations, governments need to rebalance Internet power more towards the individual.
The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others' computers. Those vulnerabilities aren't being reported, and aren't getting fixed, making your computers and networks unsafe.
When Johns Hopkins discovered a different security flaw, it notified Apple so the problem could be fixed. The FBI is keeping its newly found breach a secret from everyone.
The FBI's legal battle with Apple is over, but the way it ended may not be good news for anyone.
Federal agents had been seeking to compel Apple to break the security of an iPhone 5c that had been used by one of the San Bernardino, Calif., terrorists. Apple had been fighting a court order to cooperate with the FBI, arguing that the authorities' request was illegal and that creating a tool to break into the phone was itself harmful to the security of every iPhone user worldwide.
Last week, the FBI told the court it had learned of a possible way to break into the phone using a third party's solution, without Apple's help.
Both the "going dark" metaphor of FBI Director James Comey and the contrasting "golden age of surveillance" metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.
It's a myopic framing that focuses only on one threat—criminals, including domestic terrorists—and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.
When the National Security Agency (NSA)—or any government agency—discovers a vulnerability in a popular computer system, should it disclose it or not? The debate exists because vulnerabilities have both offensive and defensive uses. Offensively, vulnerabilities can be exploited to penetrate others' computers and networks, either for espionage or destructive purposes. Defensively, publicly revealing security flaws can be used to make our own systems less vulnerable to those same attacks.
News that the Transportation Security Administration missed a whopping 95% of guns and bombs in recent airport security "red team" tests was justifiably shocking. It's clear that we're not getting value for the $7 billion we're paying the TSA annually.
But there's another conclusion, inescapable and disturbing to many, but good news all around: We don't need $7 billion worth of airport security. These results demonstrate that there isn't much risk of airplane terrorism, and we should ratchet security down to pre-9/11 levels.
Last month, Moscow-based security software maker Kaspersky Labs published detailed information on what it calls the Equation Group and how the U.S. National Security Agency and its U.K. counterpart, GCHQ, have figured out how to embed spyware deep inside computers, gaining almost total control of those machines and the ability to eavesdrop on most of the world's computers, even in the face of reboots, operating system reinstalls, and commercial anti-virus products. The details are impressive, and I urge anyone interested in tech to read the Kaspersky documents, or these very detailed articles.
American history is littered with examples of classified information pointing us towards aggression against other countries—think WMDs—only to later learn that the evidence was wrong.
When you're attacked by a missile, you can follow its trajectory back to where it was launched from. When you're attacked in cyberspace, figuring out who did it is much harder. The reality of international aggression in cyberspace will change how we approach defense.
Many of us in the computer-security field are skeptical of the U.S.
The Intercept has published an article—based on the Snowden documents—about AURORAGOLD, an NSA surveillance operation against cell phone network operators and standards bodies worldwide. This is not a typical NSA surveillance operation where agents identify the bad guys and spy on them. This is an operation where the NSA spies on people designing and building a general communications infrastructure, looking for weaknesses and vulnerabilities that will allow it to spy on the bad guys at some later date.
In that way, AURORAGOLD is similar to the NSA's program to hack sysadmins around the world, just in case that access will be useful at some later date; and to the GCHQ's hacking of the Belgian phone company Belgacom.
Chinese hacking of American computer networks is old news. For years we've known about their attacks against U.S. government and corporate targets. We've seen detailed reports of how they hacked The New York Times.
There's a debate going on about whether the U.S. government—specifically, the NSA and United States Cyber Command—should stockpile Internet vulnerabilities or disclose and fix them. It's a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.
A software vulnerability is a programming mistake that allows an adversary access to the system running that software.
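To make that definition concrete, here is a hedged illustration (the function names, directory, and inputs are invented for this sketch, not taken from the essay): a path-traversal bug, where a program trusts user input when building a filesystem path, and an attacker uses that trust to reach files the program never meant to expose.

```python
import os

# Hypothetical example: a naive file server builds a filesystem path
# straight from user input. The missing check is the "programming mistake."
BASE_DIR = "/var/www/files"

def resolve_vulnerable(name):
    # BUG: input like "../../etc/passwd" walks out of BASE_DIR entirely
    return os.path.normpath(os.path.join(BASE_DIR, name))

def resolve_fixed(name):
    # Normalize the path, then verify it still lives under BASE_DIR
    path = os.path.normpath(os.path.join(BASE_DIR, name))
    if not path.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal attempt blocked")
    return path

print(resolve_vulnerable("../../etc/passwd"))  # resolves outside BASE_DIR
print(resolve_fixed("report.txt"))             # safe path under BASE_DIR
```

The one-line check in `resolve_fixed` is the difference between a working program and an exploitable one, which is why such mistakes are so easy to ship and so valuable to an attacker who finds them first.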
According to NSA documents published in Glenn Greenwald's new book "No Place to Hide," we now know that the NSA spies on embassies and missions all over the world, including those of Brazil, Bulgaria, Colombia, the European Union, France, Georgia, Greece, India, Italy, Japan, Mexico, Slovakia, South Africa, South Korea, Taiwan, Venezuela and Vietnam.
This will certainly strain international relations, as happened when it was revealed that the United States is eavesdropping on German Chancellor Angela Merkel's cell phone—but is anyone really surprised? Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA spies on entire populations.
In addition to turning the Internet into a worldwide surveillance platform, the NSA has surreptitiously weakened the products, protocols, and standards we all use to protect ourselves. By doing so, it has destroyed the trust that underlies the Internet. We need that trust back.
Trust is inherently social.
Back when we first started getting reports of the Chinese breaking into U.S. computer networks for espionage purposes, we described it in some very strong language. We called the Chinese actions cyber-attacks. We sometimes even invoked the word cyberwar, and declared that a cyber-attack was an act of war.
Ever since reporters began publishing stories about NSA activities, based on documents provided by Edward Snowden, we've been repeatedly assured by government officials that it's "only metadata." This might fool the average person, but it shouldn't fool those of us in the security field. Metadata equals surveillance data, and collecting metadata on people means putting them under surveillance.
An easy thought experiment demonstrates this. Imagine that you hired a private detective to eavesdrop on a subject.
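The same point can be made in a few lines of code. This is an illustrative sketch with entirely invented records (no real data format or dataset is implied): even with zero voice content, the pattern of who was called, from where, and for how long is itself revealing.

```python
from collections import Counter

# Invented call-detail records: (caller, callee, cell tower, duration in
# seconds). Note there is no conversation content here at all.
records = [
    ("alice", "oncologist_office", "tower_hospital", 600),
    ("alice", "oncologist_office", "tower_hospital", 900),
    ("alice", "insurance_claims", "tower_home", 1200),
    ("alice", "support_hotline", "tower_home", 1800),
]

# Whom does Alice talk to most, and from where does she call?
contacts = Counter(callee for _, callee, _, _ in records)
towers = Counter(tower for _, _, tower, _ in records)

top_contact, _ = contacts.most_common(1)[0]
print(top_contact)   # repeated calls to an oncologist's office...
print(dict(towers))  # ...often placed from a tower near a hospital
```

Four metadata rows and two frequency counts are enough to suggest a serious illness, insurance trouble, and emotional distress: exactly the kind of inference a private detective would sell you, drawn without hearing a single word.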
Increasingly, we are watched not by people but by algorithms. Amazon and Netflix track the books we buy and the movies we stream, and suggest other books and movies based on our habits. Google and Facebook watch what we do and what we say, and show us advertisements based on our behavior. Google even modifies our web search results based on our previous behavior.
The NSA has become too big and too powerful. What was supposed to be a single agency with a dual mission—protecting the security of U.S. communications and eavesdropping on the communications of our enemies—has become unbalanced in the post-Cold War, all-terrorism-all-the-time era.
Putting the U.S.
Giving it to private companies will only make privacy intrusion worse.
One of the recommendations by the president's Review Group on Intelligence and Communications Technologies on reforming the National Security Agency—No. 5, if you're counting—is that the government should not collect and store telephone metadata. Instead, a private company—either the phone companies themselves or some other third party—should store the metadata and provide it to the government only upon a court order.
This isn't a new idea. Over the past decade, several countries have enacted mandatory data retention laws, in which companies are required to save Internet or telephony data about customers for a specified period of time, in case the government needs it for an investigation.
Glenn Greenwald is back reporting about the NSA, now with Pierre Omidyar's news organization FirstLook and its introductory publication, The Intercept. Writing with national security reporter Jeremy Scahill, his first article covers how the NSA helps target individuals for assassination by drone.
Leaving aside the extensive political implications of the story, the article and the NSA source documents reveal additional information about how the agency's programs work. From this and other articles, we can now piece together how the NSA tracks individuals in the real world through their actions in cyberspace.
Secret NSA eavesdropping is still in the news. Details about once-secret programs continue to leak. The Director of National Intelligence has recently declassified additional information, and the President's Review Group has just released its report and recommendations.
With all this going on, it's easy to become inured to the breadth and depth of the NSA's activities.
In the Information Age, it's easier than ever to steal and publish data. Corporations and governments have to adjust to their secrets being exposed, regularly.
When massive amounts of government documents are leaked, journalists sift through them to determine which pieces of information are newsworthy, and confer with government agencies over what needs to be redacted.
Managing this reality is going to require that governments actively engage with members of the press who receive leaked secrets, helping them secure those secrets—even while being unable to prevent them from publishing.
The basic government defense of the NSA's bulk-collection programs—whether it be the list of all the telephone calls you made, your email address book and IM buddy list, or the messages you send your friends—is that what the agency is doing is perfectly legal, and doesn't really count as surveillance, until a human being looks at the data.
It's what Director of National Intelligence James R. Clapper meant when he lied to Congress. When asked, "Does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" he replied, "No sir, not wittingly." To him, the definition of "collect" requires that a human look at it. So when the NSA collects—using the dictionary definition of the word—data on hundreds of millions of Americans, it's not really collecting it, because only computers process it.
Historically, surveillance was difficult and expensive.
Over the decades, as technology advanced, surveillance became easier and easier. Today, we find ourselves in a world of ubiquitous surveillance, where everything is collected, saved, searched, correlated and analyzed.
But while technology allowed for an increase in both corporate and government surveillance, the private and public sectors took very different paths to get there.
The National Security Agency has made repeated attempts to develop attacks against people using Tor, a popular tool designed to protect online anonymity, despite the fact that the software is primarily funded and promoted by the US government itself.
Top-secret NSA documents, disclosed by whistleblower Edward Snowden, reveal that the agency's current successes against Tor rely on identifying users and then attacking vulnerable software on their computers. One technique developed by the agency targeted the Firefox web browser used with Tor, giving the agency full control over targets' computers, including access to files, all keystrokes and all online activity.
But the documents suggest that the fundamental security of the Tor service remains intact.
Secret servers and a privileged position on the internet's backbone used to identify users and attack target computers
The online anonymity network Tor is a high-priority target for the National Security Agency. The work of attacking Tor is done by the NSA's application vulnerabilities branch, which is part of the systems intelligence directorate, or SID. The majority of NSA employees work in SID, which is tasked with collecting data from communications systems around the world.
According to a top-secret NSA presentation provided by the whistleblower Edward Snowden, one successful technique the NSA has developed involves exploiting the Tor browser bundle, a collection of programs designed to make it easy for people to install and use the software.
By reporting on the agency's actions, the vulnerabilities in our computer systems can be fixed. It's the only way to force change
Today, the Guardian is reporting on how the NSA targets Tor users, along with details of how it uses centrally placed servers on the internet to attack individual computers. This builds on a Brazilian news story from last week that, in part, shows that the NSA is impersonating Google servers to users; a German story on how the NSA is hacking into smartphones; and a Guardian story from two weeks ago on how the NSA is deliberately weakening common security algorithms, protocols, and products.
The common thread among these stories is that the NSA is subverting the internet and turning it into a massive surveillance tool. The NSA's actions are making us all less safe, because its eavesdropping mission is degrading its ability to protect the US.
As I report in The Guardian today, the NSA has secret servers on the Internet that hack into other computers, codename FOXACID. These servers provide an excellent demonstration of how the NSA approaches risk management, and expose flaws in how the agency thinks about the secrecy of its own programs.
Here are the FOXACID basics: By the time the NSA tricks a target into visiting one of those servers, it already knows exactly who that target is, who wants him eavesdropped on, and the expected value of the data it hopes to receive. Based on that information, the server can automatically decide what exploit to serve the target, taking into account the risks associated with attacking the target, as well as the benefits of a successful attack.
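The source documents describe that weighing of risks and benefits, not any actual code. As a purely hypothetical sketch (every exploit name, probability, and weight below is invented), the automatic decision might amount to a simple expected-value calculation:

```python
# Hypothetical exploit inventory: (name, chance of success, chance the
# exploit is discovered and "burned" if used). All values are invented
# for illustration; nothing here comes from the FOXACID documents.
EXPLOITS = [
    ("rare_zero_day", 0.95, 0.80),
    ("patched_last_year", 0.60, 0.20),
    ("old_reliable", 0.30, 0.05),
]

def choose_exploit(target_value, target_sophistication):
    """Pick the exploit whose expected benefit most exceeds its expected cost.

    A sophisticated target is more likely to detect the attack, so using
    a valuable zero-day against one risks burning it for little gain.
    """
    best_name, best_score = None, float("-inf")
    for name, p_success, p_burn in EXPLOITS:
        benefit = target_value * p_success
        cost = p_burn * target_sophistication * 10  # burned tools are costly
        if benefit - cost > best_score:
            best_name, best_score = name, benefit - cost
    return best_name

# A low-value, well-defended target gets the cheap, already-known exploit...
print(choose_exploit(target_value=1, target_sophistication=9))
# ...while a high-value target can justify risking the zero-day.
print(choose_exploit(target_value=100, target_sophistication=2))
```

The interesting design consequence, and the one the essay draws out, is that this calculus only works if the server already knows who the visitor is; the exploit choice is a function of the target's identity, not just of the software being attacked.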
The nation can survive the occasional terrorist attack, but our freedoms can't survive an invulnerable leader like Keith Alexander operating within inadequate constraints.
Leaks from the whistleblower Edward Snowden have catapulted the NSA into newspaper headlines and demonstrated that it has become one of the most powerful government agencies in the country. From the secret court rulings that allow it to collect data on all Americans to its systematic subversion of the entire Internet as a surveillance platform, the NSA has amassed an enormous amount of power.
There are two basic schools of thought about how this came to pass. The first focuses on the agency's power.
We recently learned that U.S. intelligence agencies had at least three days' warning that Syrian President Bashar al-Assad was preparing to launch a chemical attack on his own people, but weren't able to stop it. At least that's what an intelligence briefing from the White House reveals. With the combined abilities of our national intelligence apparatus—the CIA, National Security Agency, National Reconnaissance Office and all the rest—it's not surprising that we had advance notice.
The NSA has huge capabilities – and if it wants in to your computer, it's in. With that in mind, here are five ways to stay safe
Now that we have enough details about how the NSA eavesdrops on the internet, including today's disclosures of the NSA's deliberate weakening of cryptographic systems, we can finally start to figure out how to protect ourselves.
For the past two weeks, I have been working with the Guardian on NSA stories, and have read hundreds of top-secret NSA documents provided by whistleblower Edward Snowden. I wasn't part of today's story—it was in process well before I showed up—but everything I read confirms what the Guardian is reporting.
At this point, I feel I can provide some advice for keeping secure against such an adversary.
The NSA has undermined a fundamental social contract. We engineers built the internet – and now we have to fix it
Government and industry have betrayed the internet, and us.
By subverting the internet at every level to make it a vast, multi-layered and robust surveillance platform, the NSA has undermined a fundamental social contract. The companies that build and manage our internet infrastructure, the companies that create and sell us our hardware and software, or the companies that host our data: we can no longer trust them to be ethical internet stewards.
This is not the internet the world needs, or the internet its creators envisioned.
Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5m people in the US have security clearances, with 1.4m at the top-secret level or higher, according to the Office of the Director of National Intelligence.
Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley who was convicted for giving material to WikiLeaks. But a lot of them do—and that may prove the Achilles heel of government.
I've recently seen two articles speculating on the NSA's capability, and practice, of spying on members of Congress and other elected officials. The evidence is all circumstantial and smacks of conspiracy thinking—and I have no idea whether any of it is true or not—but it's a good illustration of what happens when trust in a public institution fails.
The NSA has repeatedly lied about the extent of its spying program. James R. Clapper, the director of national intelligence, has lied about it to Congress.
We Need Protection from Intelligence-Gathering Run Amok
This essay also appeared in the Livingston Daily and the Daily Journal.
If there's any confirmation that the U.S. government has commandeered the Internet for worldwide surveillance, it is what happened with Lavabit earlier this month.
Lavabit is—well, was—an e-mail service that offered more privacy than the typical large-Internet-corporation services that most of us use.
The scariest explanation of all? That the NSA and GCHQ are just showing they don't want to be messed with.
Last Sunday, David Miranda was detained by British authorities while changing planes at London Heathrow Airport and held for nine hours under a controversial British law—the maximum time allowable without making an arrest. Much has been made of the fact that he's the partner of Glenn Greenwald, the Guardian reporter whom Edward Snowden trusted with many of his NSA documents and the most prolific reporter of the surveillance abuses disclosed in those documents. There's less discussion of what I feel was the real reason for Miranda's detention: he was ferrying documents between Greenwald and Laura Poitras, a filmmaker and Greenwald's co-reporter on Snowden and his information.
Technology companies have to fight for their users, or they'll eventually lose them.
It turns out that the NSA's domestic and world-wide surveillance apparatus is even more extensive than we thought. Bluntly: The government has commandeered the Internet. Most of the largest Internet companies provide information to the NSA, betraying their users. Some, as we've learned, fight and lose.
In one Maryland county, SWAT teams were deployed once a day on average in 2009, most often to serve search or arrest warrants.
War as a rhetorical concept is firmly embedded in American culture. Over the past several decades, federal and local law enforcement has been enlisted in a war on crime, a war on drugs and a war on terror. These wars are more than just metaphors designed to rally public support and secure budget appropriations. They change the way we think about what the police do.
In July 2012, responding to allegations that the video-chat service Skype—owned by Microsoft—was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company's blog to deny it.
Turns out that wasn't quite true.
Or at least he—or the company's lawyers—carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn't changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.
At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn't collect data on hundreds of millions of Americans.
This essay also appeared in The Memphis Commercial Appeal, Stuff, The Guardian Comment Is Free, and Veterans Today.
Imagine the government passed a law requiring all citizens to carry a tracking device. Such a law would immediately be found unconstitutional. Yet we all carry mobile phones.
Whenever national cybersecurity policy is discussed, the same stories come up again and again. Whether the examples are called acts of cyberwar, cyberespionage, hacktivism, or cyberterrorism, they all affect national interest, and there is a corresponding call for some sort of national cyberdefence.
Unfortunately, it is very difficult to identify attackers and their motivations in cyberspace. As a result, nations are classifying all serious cyberattacks as cyberwar.
NSA apologists say spying is only used for menaces like "weapons of mass destruction" and "terror." But those terms have been radically redefined.
One of the assurances I keep hearing about the U.S. government's spying on American citizens is that it's only used in cases of terrorism. Terrorism is, of course, an extraordinary crime, and its horrific nature is supposed to justify permitting all sorts of excesses to prevent it. But there's a problem with this line of reasoning: mission creep.
Today, the United States is conducting offensive cyberwar actions around the world.
More than passively eavesdropping, we're penetrating and damaging foreign networks for espionage and to ready them for attack. We're creating custom-designed Internet weapons, pre-targeted and ready to be "fired" against some piece of another country's electronic infrastructure on a moment's notice.
This is much worse than what we're accusing China of doing to us.
Edward Snowden broke the law by releasing classified information. This isn't under debate; it's something everyone with a security clearance knows. It's written in plain English on the documents you have to sign when you get a security clearance, and it's part of the culture. The law is there for a good reason, and secrecy has an important role in military defense.
The NSA's surveillance of cell-phone calls shows how badly we need to protect the whistle-blowers who provide transparency and accountability.
Yesterday, we learned that the NSA received all calling records from Verizon customers for a three-month period starting in April. That's everything except the voice content: who called whom, where they were, how long the call lasted—for millions of people, both Americans and foreigners. This "metadata" allows the government to track the movements of everyone during that period, and build a detailed picture of who talks to whom. It's exactly the same data the Justice Department collected about AP journalists.
Terrorism causes fear, and we overreact to that fear. Our brains aren't very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are, and we fear them more than probability indicates we should.
As part of the fallout of the Boston bombings, we're probably going to get some new laws that give the FBI additional investigative powers. As with the Patriot Act after 9/11, the debate over whether these new laws are helpful will be minimal, but the effects on civil liberties could be large. Even though most people are skeptical about sacrificing personal freedoms for security, it's hard for politicians to say no to the FBI right now, and it's politically expedient to demand that something be done.
If our leaders can't say no—and there's no reason to believe they can—there are two concepts that need to be part of any new counterterrorism laws, and investigative laws in general: transparency and accountability.
The FBI and the CIA are being criticized for not keeping better track of Tamerlan Tsarnaev in the months before the Boston Marathon bombings. How could they have ignored such a dangerous person? How do we reform the intelligence community to ensure this kind of failure doesn't happen again?
It's an old song by now, one we heard after the 9/11 attacks in 2001 and after the Underwear Bomber's failed attack in 2009.
It is easy to feel scared and powerless in the wake of attacks like those at the Boston Marathon. But it also plays into the perpetrators' hands.
As the details about the bombings in Boston unfold, it'd be easy to be scared. It'd be easy to feel powerless and demand that our elected leaders do something—anything—to keep us safe.
It'd be easy, but it'd be wrong. We need to be angry and empathize with the victims without being scared.
A core, not side, effect of technology is its ability to magnify power and multiply force—for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new identity-theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.
The problem is that it's not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage.
Cyber-espionage is old news. What's new is the rhetoric, which is reaching a fever pitch right now.
For technology that was supposed to ignore borders, bring the world closer together, and sidestep the influence of national governments, the Internet is fostering an awful lot of nationalism right now. We've started to see increased concern about the country of origin of IT products and services; U.S. companies are worried about hardware from China; European companies are worried about cloud services in the U.S.; no one is sure whether to trust hardware and software from Israel; Russia and China might each be building their own operating systems out of concern about using foreign ones.
I see this as an effect of all the cyberwar saber-rattling that's going on right now.
Against Security: How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger, by Harvey Molotch, Princeton University Press, 278 pages, $35.
Security is both a feeling and a reality, and the two are different things. People can feel secure when they’re actually not, and they can be secure even when they believe otherwise.
This discord explains much of what passes for our national discourse on security policy.
We're in the early years of a cyberwar arms race. It's expensive, it's destabilising and it threatens the very fabric of the internet we use every day. Cyberwar treaties, as imperfect as they might be, are the only way to contain the threat.
If you read the press and listen to government leaders, we're already in the middle of a cyberwar.
A lot of the debate around President Obama's cybersecurity initiative centers on how much of a burden it would be on industry, and how that should be financed. As important as that debate is, it obscures some of the larger issues surrounding cyberwar, cyberterrorism, and cybersecurity in general.
It's difficult to have any serious policy discussion amid the fearmongering. Secretary Panetta's recent comments are just the latest; search the Internet for "cyber 9/11," "cyber Pearl Harbor," "cyber Katrina," or -- my favorite -- "cyber Armageddon."
There's an enormous amount of money and power that results from pushing cyberwar and cyberterrorism: power within the military, the Department of Homeland Security, and the Justice Department; and lucrative government contracts supporting those organizations.
The Department of Homeland Security is getting rid of the color-coded threat level system. It was introduced after 9/11, and was supposed to tell you how likely a terrorist attack might be. Except that it never did.
Attacks happened more often when the level was yellow ("significant risk") than when it was orange ("high risk").
A heavily edited version of this essay appeared in the New York Daily News.
Securing the Washington Monument from terrorism has turned out to be a surprisingly difficult job. The concrete fence around the building protects it from attacking vehicles, but there's no visually appealing way to house the airport-level security mechanisms the National Park Service has decided are a must for visitors. It is considering several options, but I think we should close the monument entirely. Let it stand, empty and inaccessible, as a monument to our fears.
The world is gearing up for cyberwar. The US Cyber Command became operational in November. Nato has enshrined cyber security among its new strategic priorities. The head of Britain's armed forces said recently that boosting cyber capability is now a huge priority for the UK.
Last month, Sen. Joe Lieberman, I-Conn., introduced a bill that might -- we're not really sure -- give the president the authority to shut down all or portions of the Internet in the event of an emergency. It's not a new idea. Sens. Jay Rockefeller, D-W.Va., and Olympia Snowe, R-Maine, proposed the same thing last year, and some argue that the president can already do something like this. If this or a similar bill ever passes, the details will change considerably and repeatedly.
There's a power struggle going on in the U.S. government right now.
It's about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.
This essay appeared as the second half of a point/counterpoint with Marcus Ranum. Marcus's half is here.
Information technology is increasingly everywhere, and it's the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping.
Google made headlines when it went public with the fact that Chinese hackers had penetrated some of its services, such as Gmail, in a politically motivated attempt at intelligence gathering. The news here isn't that Chinese hackers engage in these activities or that their attempts are technically sophisticated -- we knew that already -- it's that the U.S. government inadvertently aided the hackers.
In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts.
President Obama in his speech last week rightly focused on fixing the intelligence failures that resulted in Umar Farouk Abdulmutallab being ignored, rather than on technologies targeted at the details of his underwear-bomb plot. But while Obama's instincts are right, reforming intelligence for this new century and its new threats is a more difficult task than he might like.
We don't need new technologies, new laws, new bureaucratic overlords, or -- for heaven's sake -- new agencies. What prevents information sharing among intelligence organizations is the culture of the generation that built those organizations.
There are two kinds of profiling. There's behavioral profiling based on how someone acts, and there's automatic profiling based on name, nationality, method of ticket purchase, and so on. The first one can be effective, but is very hard to do right. The second one makes us all less safe.
Sometimes mediocre encryption is better than strong encryption, and sometimes no encryption is better still.
The Wall Street Journal reported this week that Iraqi, and possibly also Afghan, militants are using commercial software to eavesdrop on U.S. Predators, other unmanned aerial vehicles, or UAVs, and even piloted planes. The systems weren't "hacked" -- the insurgents can't control them -- but because the downlink is unencrypted, they can watch the same video stream as the coalition troops on the ground.
It's been months since the Transportation Security Administration has had a permanent director. If, during the job interview (no, I didn't get one), President Obama asked me how I'd fix airport security in one sentence, I would reply: "Get rid of the photo ID check, and return passenger screening to pre-9/11 levels."
Okay, that's a joke. While showing ID, taking your shoes off and throwing away your water bottles isn't making us much safer, I don't expect the Obama administration to roll back those security measures anytime soon. Airport security is more about CYA than anything else: defending against what the terrorists did last time.
A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?
I discounted the exercise at the time, calling it "embarrassing." I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: a failure of centralized coordination, a failure of local control within the FBI, and some lucky breaks on the part of the attackers.
This essay appeared as part of a round table about Obama's cybersecurity speech on The New York Times's Room for Debate blog.
I am optimistic about President Obama's new cybersecurity policy and the appointment of a new "cybersecurity coordinator," though much depends on the details. What we do know is that the threats are real, from identity theft to Chinese hacking to cyberwar.
His principles were all welcome -- securing government networks, coordinating responses, working to secure the infrastructure in private hands (the power grid, the communications networks, and so on) -- although I think he's overly optimistic that legislation won't be required. I was especially heartened to hear his commitment to funding research.
Terrorists attacking our food supply is a nightmare scenario that has been given new life during the recent swine flu outbreak. Although it seems easy to do, understanding why it hasn't happened is important. GR Dalziel, at the Nanyang Technological University in Singapore, has written a report chronicling every confirmed case of malicious food contamination in the world since 1950: 365 cases in all, plus 126 additional unconfirmed cases. What he found reveals how rare such attacks actually are.
U.S. government cybersecurity is an insecure mess, and fixing it is going to take considerable attention and resources. Trying to make sense of this, President Barack Obama ordered a 60-day review of government cybersecurity initiatives. Meanwhile, the U.S.
An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.
I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision.
This essay also appeared in The Hindu, Brisbane Times, and The Sydney Morning Herald.
It regularly comes as a surprise to people that our own infrastructure can be used against us. And in the wake of terrorist attacks or plots, there are fear-induced calls to ban, disrupt or control that infrastructure. According to officials investigating the Mumbai attacks, the terrorists used images from Google Earth to help learn their way around.
As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt's immigration status. And in November, Verizon employees peeked at his cellphone records.
When he becomes president, Barack Obama will have to give up his BlackBerry. Aides are concerned that his unofficial conversations would become part of the presidential record, subject to subpoena and eventually made public as part of the country's historical record.
This reality of the information age might be particularly stark for the president, but it's no less true for all of us. Conversation used to be ephemeral.
Since the UK's Criminal Records Bureau (CRB) was established in 2002, an ever-increasing number of people are required to undergo a "CRB check" before they can interact with children. It's not only teachers and daycare providers, but football coaches, scoutmasters and Guiders, church volunteers, bus drivers, and school janitors -- 3.4 million checks in 2007, 15 million since 2002. In 2009, it will include anyone who works or volunteers in a position where he or she comes into contact with children: 11.3 million people, or a quarter of the adult population.
This might make sense if it worked, but it doesn't.
This essay also appeared in the Taipei Times.
Airport security found a bottle of saline in my luggage at Heathrow Airport last month. It was a 4-ounce bottle, slightly above the 100 ml limit. Airport security in the United States lets me through with it all the time, but UK security was stricter.
It's not true that no one worries about terrorists attacking chemical plants. It's just that our politics seem to leave us unable to deal with the threat. Toxins such as ammonia, chlorine, propane and flammable mixtures are being produced or stored as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar.
No-fly lists and photo IDs are supposed to help protect the flying public from terrorists. Except that they don't work.
The TSA is tightening its photo ID rules at airport security. Previously, people with expired IDs or who claimed to have lost their IDs were subjected to secondary screening. Then the Transportation Security Administration realized that meant someone on the government's no-fly list -- the list that is supposed to keep our planes safe from terrorists -- could just fly with no ID.
Now, people without ID must also answer personal questions from their credit history to ascertain their identity.
Obama has a cybersecurity plan.
It's basically what you would expect: Appoint a national cybersecurity adviser, invest in math and science education, establish standards for critical infrastructure, spend money on enforcement, establish national standards for securing personal data and data-breach disclosure, and work with industry and academia to develop a bunch of needed technologies.
I could comment on the plan, but with security, the devil is always in the details -- and, of course, at this point there are few details. But since he brought up the topic -- McCain supposedly is "working on the issues" as well -- I have three pieces of policy advice for the next president, whoever he is.
Pervasive security cameras don't substantially reduce crime. There are exceptions, of course, and that's what gets the press. Most famously, CCTV cameras helped catch James Bulger's murderers in 1993. And earlier this year, they helped convict Steve Wright of murdering five women in the Ipswich area.
What is it with photographers these days? Are they really all terrorists, or does everyone just think they are?
Since 9/11, there has been an increasing war on photography. Photographers have been harassed, questioned, detained, arrested or worse, and declared to be unwelcome.
Last month a US court ruled that border agents can search your laptop, or any other electronic device, when you're entering the country. They can take your computer and download its entire contents, or keep it for several days. Customs and Border Protection has not published any rules regarding this practice, and I and others have written a letter to Congress urging it to investigate and regulate this practice.
But the US is not alone.
On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and -- in many cases -- shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.
It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace.
Book Review of Access Denied
China restricts Internet access by keyword.
In 1993, Internet pioneer John Gilmore said "the net interprets censorship as damage and routes around it," and we believed him. In 1996, cyberlibertarian John Perry Barlow issued his "Declaration of the Independence of Cyberspace" at the World Economic Forum in Davos, Switzerland, and online. He told governments: "You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear."
At the time, many shared Barlow's sentiments.
When I write and speak about privacy, I am regularly confronted with the mutual disclosure argument. Explained in books like David Brin's The Transparent Society, the argument goes something like this: In a world of ubiquitous surveillance, you'll know all about me, but I will also know all about you. The government will be watching us, but we'll also be watching the government. This is different than before, but it's not automatically worse.
National ID System Is Not Worth The $23 Billion Price Tag
The argument was so obvious it hardly needed repeating: We would all be safer if we had a better ID card. A good, hard-to-forge national ID is a no-brainer (or so the argument goes), and it's ridiculous that a modern country such as the United States doesn't have one. One result of this line of thinking is the planned Real ID Act, which forces all states to conform to common and more stringent rules for issuing driver's licenses.
But security is always a tradeoff; it must be balanced with the cost.
Many people say that allowing illegal aliens to obtain state driver's licenses helps them and encourages them to remain illegally in this country. Michigan Attorney General Mike Cox late last year issued an opinion that licenses could be issued only to legal state residents, calling it "one more tool in our initiative to bolster Michigan's border and document security."
In reality, we are a much more secure nation if we do issue driver's licenses and/or state IDs to every resident who applies, regardless of immigration status. Issuing them doesn't make us any less secure, and refusing puts us at risk.
The state driver's license databases are the only comprehensive databases of U.S.
If there's a debate that sums up post-9/11 politics, it's security versus privacy. Which is more important? How much privacy are you willing to give up for security? Can we even afford privacy in this age of insecurity?
I live in Minneapolis, so the collapse of the Interstate 35W bridge over the Mississippi River earlier this month hit close to home, and was covered in both my local and national news.
Much of the initial coverage consisted of human interest stories, centered on the victims of the disaster and the incredible bravery shown by first responders: the policemen, firefighters, EMTs, divers, National Guard soldiers and even ordinary people, who all risked their lives to save others. (Just two weeks later, three rescue workers died in their almost-certainly futile attempt to save six miners in Utah.)
Perhaps the most amazing aspect of these stories is that there's nothing particularly amazing about it. No matter what the disaster -- hurricane, earthquake, terrorist attack -- the nation's first responders get to the scene soon after.
This essay appeared as part of a point-counterpoint with Marcus Ranum. Marcus's side, to which this is a response, can be found on his website.
Big Brother isn't what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today's information society looks nothing like Orwell's world, and watching and intimidating a population today isn't anything like what Winston Smith experienced.
Last month Marine Gen. James Cartwright told the House Armed Services Committee that the best cyberdefense is a good offense.
As reported in Federal Computer Week, Cartwright said: "History teaches us that a purely defensive posture poses significant risks," and that if "we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests."
The general isn't alone. In 2003, the entertainment industry tried to get a law passed giving it the right to attack any computer suspected of distributing copyright-protected material. And there probably isn't a sysadmin in the world who doesn't want to strike back at computers that are blindly and repeatedly attacking their networks.
This article was published under the title "They're Watching."
If you've traveled abroad recently, you've been investigated. You've been assigned a score indicating what kind of terrorist threat you pose. That score is used by the government to determine the treatment you receive when you return to the U.S. and for other purposes as well.
This essay appeared as part of a point-counterpoint with Marcus Ranum.
Regulation is all about economics. Here's the theory. In a capitalist system, companies make decisions based on their own self-interest. This isn't a bad thing; it's actually a very good thing.
This essay also appeared in San Jose Mercury News, Sacramento Bee, Concord Monitor, Fort Worth Star Telegram, Dallas Morning News, Contra Costa Times, Statesman Journal, and The Clarion-Ledger.
If you have a passport, now is the time to renew it -- even if it's not set to expire anytime soon. If you don't have a passport and think you might need one, now is the time to get it. In many countries, including the United States, passports will soon be equipped with RFID chips.
This essay appeared as part of a point-counterpoint with Marcus Ranum. Marcus's side can be found on his website.
If you define "critical infrastructure" as "things essential for the functioning of a society and economy," then software is critical infrastructure. For many companies and individuals, if their computers stop working, then they stop working.
It's a situation that sneaked up on us.
Better to Put People, Not Computers, in Charge of Investigating Potential Plots
Collecting information about every American's phone calls is an example of data mining. The basic idea is to collect as much information as possible on everyone, sift through it with massive computers, and uncover terrorist plots. It's a compelling idea, and convinces many. But it's wrong.
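The flaw in the data-mining idea is a matter of arithmetic: terrorists are so rare that even a highly accurate system will bury real plots under false alarms. A back-of-the-envelope sketch, using purely hypothetical numbers chosen only to illustrate the base-rate effect (none come from the essay):

```python
# Illustrative base-rate arithmetic: applied to an entire population,
# even a very accurate detector produces mostly false alarms.
# All numbers are hypothetical, chosen only to show the effect.

population = 300_000_000      # people whose records are mined
real_plotters = 1_000         # actual plotters among them (hypothetical)
false_positive_rate = 0.001   # system wrongly flags 0.1% of innocents
detection_rate = 0.99         # system catches 99% of real plotters

true_alarms = real_plotters * detection_rate
false_alarms = (population - real_plotters) * false_positive_rate

# Probability that any given flagged person is actually a plotter
precision = true_alarms / (true_alarms + false_alarms)

print(f"false alarms: {false_alarms:,.0f}")
print(f"chance a flagged person is a real plotter: {precision:.2%}")
```

With these assumptions the system generates roughly 300,000 false alarms for under a thousand real hits, so fewer than one flagged person in three hundred is an actual plotter -- which is why the essay argues for investigation-driven approaches instead.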
California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.
Except that it won't do the same thing: The federal bill has become so watered down that it won't be very effective.
Does it make sense to surrender management, including security, of six U.S. ports to a Dubai-based company? This question has set off a heated debate between the administration and Congress, as members of both parties condemned the deal.
Most of the rhetoric is political posturing, but there's an interesting security issue embedded in the controversy.
Bush may have bypassed federal wiretap law to deploy more high-tech methods of surveillance.
When President Bush directed the National Security Agency to secretly eavesdrop on American citizens, he transferred an authority previously under the purview of the Justice Department to the Defense Department and bypassed the very laws put in place to protect Americans against widespread government eavesdropping. The reason may have been to tap the NSA's capability for data mining and widespread surveillance.
Illegal wiretapping of Americans is nothing new. In the 1950s and '60s, in a program called "Project Shamrock," the NSA intercepted every single telegram coming in or going out of the United States.
In the weeks after 9/11, while America and the world were grieving, President Bush built a legal rationale for a dictatorship. Then he started using it to avoid the law.
This past Thursday, the New York Times exposed the most significant violation of federal surveillance law in the post-Watergate era. President Bush secretly authorized the National Security Agency to engage in domestic spying, wiretapping thousands of Americans and bypassing the legal procedures regulating this activity.
This isn't about the spying, although that's a major issue in itself. This is about the Fourth Amendment protections against illegal search.
Spying tools are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism.
Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year's Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database -- probably close to a million people overall -- that the FBI's computers analyzed, looking for links to known terrorists.
In 2004, when the U.S. State Department first started talking about embedding RFID chips in passports, the outcry from privacy advocates was huge. When the State Department issued its draft regulation in February, it got 2,335 comments, 98.5 percent negative. In response, the final State Department regulations, issued last week, contain two features that attempt to address security and privacy concerns.
At John Roberts' confirmation hearings last week, there weren't enough discussions about science fiction. Technologies that are science fiction today will become constitutional questions before Roberts retires from the bench. The same goes for technologies that cannot even be conceived of now. And many of these questions involve privacy.
Leaving aside the political posturing and the finger-pointing, how did our nation mishandle Katrina so badly? After spending tens of billions of dollars on homeland security (hundreds of billions, if you include the war in Iraq) in the four years after 9/11, what did we do wrong? Why were there so many failures at the local, state and federal levels?
These are reasonable questions.
Reports are coming in torrents. Criminals are known to have downloaded personal credit information of over 145,000 Americans from ChoicePoint's network. Hackers took over one of LexisNexis's databases, gaining access to personal files of 32,000 people. Bank of America Corp. lost computer data tapes that contained personal information on 1.2 million federal employees, including members of the U.S.
Opinion: The courts need to recognize that in the information age, virtual privacy and physical privacy don't have the same boundaries.
For at least seven months last year, a hacker had access to T-Mobile's customer network. He is known to have accessed information belonging to 400 customers -- names, Social Security numbers, voice mail messages, SMS messages, photos -- and probably had the ability to access data belonging to any of T-Mobile's 16.3 million U.S. customers.
Much of the political rhetoric surrounding the US presidential election centers around the relative security posturings of President George W. Bush and Senator John Kerry, with each side loudly proclaiming that his opponent will do irrevocable harm to national security.
Terrorism is a serious issue facing our nation in the early 21st century, and the contrasting views of these candidates are important. But this debate obscures another security risk, one much more central to the US: the increasing centralisation of American political power in the hands of the executive branch of the government.
Over 200 years ago, the framers of the US Constitution established an ingenious security device against tyrannical government: they divided government power among three different bodies.
How would we know? An essay by one of the world's busiest security experts.
As I read the litany of terror threat warnings that the government has issued in the past three years, the thing that jumps out at me is how vague they are. The careful wording implies everything without actually saying anything. We hear "terrorists might try to bomb buses and rail lines in major U.S.
U.S. Security Blocks Free Exchange of Ideas
Cryptography is the science of secret codes, and it is a primary Internet security tool to fight hackers, cyber crime, and cyber terrorism. CRYPTO is the world's premier cryptography conference. It's held every August in Santa Barbara.
Want to learn how to create and sustain psychosis on a national scale? Look carefully at the public statements made by the Department of Homeland Security.
Here are a few random examples: "Weapons of mass destruction, including those containing chemical, biological or radiological agents or materials, cannot be discounted." "At least one of these attacks could be executed by the end of the summer 2003." "These credible sources suggest the possibility of attacks against the homeland around the holiday season and beyond."
The DHS's threat warnings have been vague, indeterminate, and unspecific. The threat index goes from yellow to orange and back again, although no one is entirely sure what either level means.
Intended as a counterterrorism tool, it doesn't work and tramples on travelers' rights
Imagine a list of suspected terrorists so dangerous that we can't ever let them fly, yet so innocent that we can't arrest them -- even under the draconian provisions of the Patriot Act.
This is the federal government's "no-fly" list. First circulated in the weeks after 9/11 as a counterterrorism tool, its details are shrouded in secrecy.
But, because the list is filled with inaccuracies and ambiguities, thousands of innocent, law-abiding Americans have been subjected to lengthy interrogations and invasive searches every time they fly, and sometimes forbidden to board airplanes.
In the wake of the U.S. Department of Homeland Security's awarding of its largest contract, for a system to fingerprint and to keep tabs on foreign visitors in the United States, it makes sense to evaluate our country's response to terrorism. Are we getting good value for all the money that we're spending?
US-VISIT is a government program to help identify the 23 million foreigners who visit the United States every year.
It's been said that all business-to-business sales are motivated by either fear or greed. Traditionally, security products and services have been a fear sell: fear of burglars, murders, kidnappers, and -- more recently -- hackers. Despite repeated attempts by the computer security industry to position itself as a greed sell -- "better Internet security will make your company more profitable because you can better manage your risks" -- fear remains the primary motivator for the purchase of network security products and services.
The problem is that many security risks are not borne by the organization making the purchasing decision.
As the U.S. Supreme Court decides three legal challenges to the Bush administration's legal maneuverings against terrorism, it is important to keep in mind how critical these cases are to our nation's security. Security is multifaceted; there are many threats from many different directions. It includes the security of people against terrorism, and also the security of people against tyrannical government.
National security is a hot political topic right now, as both presidential candidates are asking us to decide which one of them is better fit to secure the country.
Many large and expensive government programs--the CAPPS II airline profiling system, the US-VISIT program that fingerprints foreigners entering our country, and the various data-mining programs in research and development--take as a given the need for more security.
At the end of 2005, when many provisions of the controversial Patriot Act expire, we will again be asked to sacrifice certain liberties for security, as many legislators seek to make those provisions permanent.
As a security professional, I see a vital component missing from the debate.
Posturing, pontifications, and partisan politics aside, the one clear generalization that emerges from the 9/11 hearings is that information -- timely, accurate, and free-flowing -- is critical in our nation's fight against terrorism. Our intelligence and law-enforcement agencies need this information to better defend our nation, and our citizens need this information to better debate massive financial expenditures for anti-terrorist measures, changes in law that aid law enforcement and diminish civil liberties, and the upcoming presidential election.
The problem is that the current administration has consistently used terrorism information for political gain. Again and again, the Bush administration has exaggerated the terrorist threat for political purposes. It has embarked on a re-election strategy that involves a scared electorate voting for the party perceived to be better able to protect them.
This essay also appeared, in a slightly different form, in The Mercury News.
As a security technologist, I regularly encounter people who say the United States should adopt a national ID card. How could such a program not make us more secure, they ask?
The suggestion, when it's made by a thoughtful civic-minded person like Nicholas Kristof (Star-Tribune, March 18), often takes on a tone that is regretful and ambivalent: Yes, indeed, the card would be a minor invasion of our privacy, and undoubtedly it would add to the growing list of interruptions and delays we encounter every day; but we live in dangerous times, we live in a new world ... .
Every day, some 82,000 foreign visitors set foot in the US with a visa, and since early this year, most of them have been fingerprinted and photographed in the name of security. But despite the money spent, the inconveniences suffered, and the international ill will caused, these new measures, like most instituted in the wake of September 11, are mostly ineffectual.
Terrorist attacks are very rare. So rare, in fact, that the odds of being the victim of one in an industrialized country are almost nonexistent.
Last week the Supreme Court let stand the Justice Department's right to secretly arrest noncitizen residents.
Combined with the government's power to designate foreign prisoners of war as "enemy combatants" in order to ignore international treaties regulating their incarceration, and their power to indefinitely detain U.S. citizens without charge or access to an attorney, the United States is looking more and more like a police state.
Since the Sept. 11 attacks, the Justice Department has asked for, and largely received, additional powers that allow it to perform an unprecedented amount of surveillance of American citizens and visitors.
Imagine that you're going on vacation to some exotic country.
You get your visa, plan your trip and take a long flight. How would you feel if, at the border, you were photographed and fingerprinted? How would you feel if your biometrics stayed in that country's computers for years?
A joint congressional intelligence inquiry has concluded that 9/11 could have been prevented if our nation's intelligence agencies shared information better and coordinated more effectively. This is both a trite platitude and a profound prescription.
Intelligence is easy to understand after the fact. With the benefit of hindsight, it's easy to draw lines from people in flight school here, to secret meetings in foreign countries there, over to interesting tips from informants, and maybe to INS records.
Testimony and Statement for the Record of Bruce Schneier
Chief Technical Officer, Counterpane Internet Security, Inc.
Hearing on "Overview of the Cyber Problem-A Nation Dependent and Dealing with Risk"
Before the Subcommittee on Cybersecurity, Science, and Research and Development
Committee on Homeland Security
United States House of Representatives
June 25, 2003
2318 Rayburn House Office Building
Mr. Chairman, members of the Committee, thank you for the opportunity to testify today regarding cybersecurity, particularly in its relation to homeland defense and our nation's critical infrastructure. My name is Bruce Schneier, and I have worked in the field of computer security for my entire career. I am the author of seven books on the topic, including the best-selling Secrets and Lies: Digital Security in a Networked World.
In April 2003, the US Justice Department exempted the FBI from its statutory duty to ensure the accuracy and completeness of the National Crime Information Center (NCIC) database. This enormous database contains over 39 million criminal records and information on wanted persons, missing persons, and gang members, as well as information about stolen cars and boats. More than 80,000 law enforcement agencies have access to this database. On average, the database processes 2.8 million transactions each day.
Forget It: Bland PR Document Has Only Recommendations
At 60 pages, the White House's National Strategy to Secure Cyberspace is an interesting read, but it won't help to secure cyberspace. It's a product of consensus, so it doesn't make any of the hard choices necessary to radically increase cyberspace security. Consensus doesn't work in security design, and invariably results in bad decisions. It's the compromises that are harmful, because the more parties you have in the discussion, the more interests there are that conflict with security.
In the wake of the devastating attacks on New York's World Trade Center and the Pentagon, Sen. Judd Gregg (R-N.H.), with backing from other high-ranking government officials, quickly seized the opportunity to propose limits on strong encryption and "key-escrow" systems that ensure government access. This is a bad move: it will do little to thwart terrorist activities, and it will reduce the security of our critical infrastructure.
As more and more of our nation's critical infrastructure goes digital, cryptography is more important than ever. We need all the digital security we can get; the government shouldn't be doing things that actually reduce it.
The arrest of a Russian computer security researcher was a major setback for computer security research. The FBI nabbed Dmitry Sklyarov after he presented a paper at DefCon, the hacker community convention in Las Vegas, on the strengths and the weaknesses of software to encrypt an electronic book.
Although I'm certain the FBI's case will never hold up in court, it shows that free speech is secondary to the entertainment industry's paranoia about copyright protection.
Sklyarov is accused of violating the Digital Millennium Copyright Act (DMCA), which makes publishing critical research on this technology a more serious offense than publishing design information on nuclear weapons.
Testimony and Statement for the Record of Bruce Schneier
Chief Technical Officer, Counterpane Internet Security, Inc.
Hearing on Internet Security before the Subcommittee on Science, Technology, and Space of the Committee on Commerce, Science and Transportation
United States Senate
July 16, 2001
253 Russell Senate Office Building
My name is Bruce Schneier. I am the founder and Chief Technical Officer of Counterpane Internet Security, Inc. Counterpane was founded to address the immediate need for increased Internet security, and essentially provides burglar alarm services for computer networks.
The author of a pioneering work on the NSA delivers a new book of revelations about the mysterious agency's coverups, eavesdropping and secret missions.
In 1982, James Bamford published "The Puzzle Palace," his first exposé on the National Security Agency. His new exposé on the NSA is called "Body of Secrets." Twenty years makes a lot of difference in the intelligence biz.
During those 20 years, the Reagan military buildup came and went, the Soviet Union fell and the Cold War ended, and a bevy of new military enemies emerged. Electronic communications exploded through faxes, cellphones, the Internet, etc.
One of the stranger justifications of U.S. export controls is that they prevent the spread of cryptographic expertise. Years ago, the Administration argued that there were no cryptographic products available outside the U.S. When several studies proved that there were hundreds of products designed, built, and marketed outside the U.S., the Administration changed its story.
A version of this essay appeared on ZDNet.com.
AES is the Advanced Encryption Standard, the encryption algorithm that will eventually replace DES. In 1997, the U.S. government (NIST, actually) solicited candidate algorithms for this standard. By the June 1998 submission deadline, NIST had received fifteen submissions.
Key recovery is like trying to fit a square peg into a round hole. No matter how much you finagle it, it's simply not going to work.
In the September issue of Information Security, Commerce Undersecretary William Reinsch suggests that U.S. crypto export policy hinges on the concept of "balance" (Q&A: "Crypto's Key Man").
For key recovery policy to be successful, he argues, it must achieve a balance between privacy and access, between the needs of consumers and the requirements of the law-enforcement community.
For those who have followed the key recovery debate, Reinsch's comments will have a familiar ring.
The U.S. State Department recently ruled that some forms of electronic speech are not protected by the First Amendment and can be prohibited from export. This decision raises questions about freedom of speech on the information superhighway. As business communications continue to migrate from paper mail to electronic mail, these questions will become more important.
Good news! The federal government respects and is working to protect your privacy... just as long as you don't want privacy from the government itself.
In April 1994, the Clinton administration, cleaning up old business from the Bush administration, introduced a new cryptography initiative that ensures the government's ability to conduct electronic surveillance.
In April, the Clinton administration, cleaning up business left over from the Bush administration, introduced a cryptography initiative that gives the government the ability to conduct electronic surveillance. The first fruit of this initiative is Clipper, a National Security Agency (NSA)-designed, tamper-resistant VLSI chip. The stated purpose of this chip is to secure telecommunications.
Clipper uses a classified encryption algorithm.