Entries Tagged "DHS"


Altimeter Watches Now a Terrorism Threat

This story is so idiotic that I have trouble believing it’s true. According to MSNBC:

An advisory issued Monday by the Department of Homeland Security and the FBI urges the Transportation Security Administration to have airport screeners keep an eye out for wristwatches containing cigarette lighters or altimeters.

The notice says “recent intelligence suggests al-Qaida has expressed interest in obtaining wristwatches with a hidden butane-lighter function and Casio watches with an altimeter function. Casio watches have been extensively used by al-Qaida and associated organizations as timers for improvised explosive devices. The Casio brand is likely chosen due to its worldwide availability and inexpensive price.”

Clocks and watches make good timers for remotely triggered bombs. In this scenario, the person carrying the watch is an innocent. (Otherwise he wouldn’t need a remote trigger; he could set the bomb off himself.) That implies the bomb is stuffed inside a still-functional watch. But if a bomb small enough to fit in the unused space of a wristwatch can blow up an airplane, you’ve got problems far bigger than one particular brand of wristwatch. This story simply makes no sense.

And, like most of the random “alerts” from the DHS, it’s not based on any real facts:

The advisory notes that there is no specific information indicating any terrorist plans to use the devices, but it urges screeners to watch for them.

I wish the DHS were half as good at keeping people safe as it is at scaring them. (I’ve written more about that here.)

Posted on January 5, 2005 at 12:34 PM

Airline Passenger Profiling

From an anonymous reader who works for the airline industry in the United States:

There are two initiatives in the works, neither of which leaves me feeling very good about privacy rights.

The first is being put together by the TSA and is called the “Secure Flight Initiative.” An initial test of this program was performed recently and involved each airline listed in the document having to send in passenger information (aka PNR data) for every passenger that “completed a successful domestic trip” during June 2004. A sample of some of the fields that were required to be sent: name, address, phone (if available), itinerary, any comments in the PNR record made by airline personnel, credit card number and expiration date, and any changes made to the booking before the actual flight.

This test data was transmitted to the TSA via physical CD. The requirement was that we “encrypt” it using pkzip (or equivalent) before putting it on the CD. We were to then e-mail the password to the Secure Flight Initiative e-mail address. Although this is far from ideal, it is in fact a big step up. The original process was going to have people simply e-mail the above data to the TSA. They claim to have a secure facility where the data is stored.
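To see why a password-protected archive, with the password then e-mailed, adds so little real protection, consider a minimal sketch (nothing here comes from the TSA process; the password, salt, and function names are all illustrative, and PBKDF2 stands in for whatever key derivation the archive tool uses). The protection is exactly as strong as the password, and guessable passwords fall to a trivial dictionary attack:

```python
import hashlib

# Hypothetical stand-in for an archive's password-based key derivation:
# the ciphertext is only as strong as the password that protects it.
def derive_key(password: str, salt: bytes = b"pnr-test-cd") -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

# The sender "protects" the data with a guessable password...
secret_key = derive_key("winter04")

# ...and anyone who obtains the CD simply tries common passwords
# until the derived key matches.
candidates = ["password", "tsa2004", "letmein", "winter04"]
recovered = next(p for p in candidates if derive_key(p) == secret_key)
print(recovered)  # → winter04
```

The e-mailed password changes nothing here: the attack never needs to intercept it.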

As far as the TSA’s retention of the data, the only information we have been given is that as soon as the test phase is over, they will securely delete the data. We were given no choice but had to simply take their word for it.

Rollout of the Secure Flight initiative is scheduled for “next year” sometime. They’re going to start with larger carriers and work their way down to the smaller carriers. It hasn’t been formalized (as far as I know) yet as to what data will be required to be transmitted when. My suspicion is that upon flight takeoff, all PNR data for all passengers on board will be required to be sent. At this point, I still have not heard as to what method will be used for data transmission.

There is another initiative being implemented by the Customs and Border Protection, which is part of the Department of Homeland Security. This (unnamed) initiative is essentially the same thing as the Secure Flight program. That’s right — two government agencies are requiring us to transmit the information separately to each of them. So much for information sharing within the government.

Most larger carriers are complying with this directive by simply allowing the CBP direct access to their records within their reservation systems (often hosted by companies like Sabre, Worldspan, and Galileo). Others (such as the airline I work for) are opting to transmit only the bare requirements without giving direct access to our systems. The data is transmitted over a proprietary data network used by the airline industry.

There are a couple of differences between the Secure Flight program and the one being instituted by the CBP. The CBP’s program requires that PNR data for all booked passengers be transmitted:

  • 72 hours before flight time
  • 24 hours before flight time
  • 8 hours before flight time
  • and then again immediately after flight departure
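That schedule is simple to mechanize. As a sketch only (the function name and flight time are illustrative, not drawn from any CBP specification):

```python
from datetime import datetime, timedelta

# Illustrative helper: given a departure time, list the times at which
# PNR data must be sent under the 72/24/8-hour schedule described above.
def pnr_transmission_times(departure: datetime) -> list[datetime]:
    hours_before = [72, 24, 8]
    times = [departure - timedelta(hours=h) for h in hours_before]
    times.append(departure)  # and again immediately after departure
    return times

departure = datetime(2004, 12, 22, 10, 0)
for t in pnr_transmission_times(departure):
    print(t.isoformat(sep=" "))
```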

The other major difference is that it looks as though we will be required to operate in a way that allows them to request data for any flight at any time, which we must return in an automated fashion.

Oh, and just as a kick in the pants, the airlines are expected to pay the costs for all these data transmissions (to the tune of several thousand dollars a month).

Posted on December 22, 2004 at 10:06 AM

Airline Security and the TSA

Recently I received this e-mail from an anonymous Transportation Security Administration employee — those are the guys who screen you at airports — about something I wrote about airline security:

I was going through my email archives and found a link to a story. Apparently you enjoy attacking TSA, and relish in stories where others will do it for you. I work for TSA, and understand that a lot of what they do is little more than “window dressing” (your words). However, very few can argue that they are a lot more effective than the rent-a-cop agencies that were supposed to be securing the airports pre-9/11.

Specifically to the story, it has all the overtones of Urban Legend: overly emotional, details about the event but only giving names of self and “pet,” overly verbose, etc. Bottom line, that the TSA screener and supervisor told our storyteller that the fish was “in no way… allowed to pass through security” is in direct violation of publicly accessible TSA policy. Fish may be unusual, but they’re certainly not forbidden.

I’m disappointed, Bruce. Usually you’re well researched. Your articles and books are very well documented and cross-referenced. However, when it comes to attacking TSA, you seem to take some stories at face value without verifying the facts and TSA policies. I’m also disappointed that you would popularize a story that implicitly tells people to hide their “prohibited items” from security. I have personally witnessed people get arrested for thinking they were clever in hiding something they shouldn’t be carrying anyway.

For those who don’t want to follow the story, it’s about a college student who was told by TSA employees that she could not take her fish on the airplane for security reasons. She then smuggled the fish aboard by hiding it in her carry-on luggage. Final score: fish 1, TSA 0.

To the points in the letter:

  1. You may be right that the story is an urban legend. But it did appear in a respectable newspaper, and I hope the newspaper did at least some fact-checking. I may have been overly optimistic.

  2. You are certainly right that pets are allowed on board airplanes. But just because something is official TSA policy doesn’t mean it’s necessarily followed in the field. There have been many instances of TSA employees inventing rules. It doesn’t surprise me in the least that one of them refused to allow a fish on an airplane.

  3. I am happy to popularize a story that implicitly tells people to hide prohibited items from airline security. I’m even happy to explicitly tell people to hide prohibited items from airline security. A friend of mine recently figured out how to reliably sneak her needlepoint scissors through security — they’re the foldable kind, and she slips them against a loose leaf binder — and I am pleased to publicize that. Hell, I’ve even explained how to fly on someone else’s airline ticket and make your own knife on board an airplane [Beyond Fear, page 85].

  4. I think airline passenger screening is inane. It’s invasive, expensive, time-consuming, and doesn’t make us safer. I think that civil disobedience is a perfectly reasonable reaction.

  5. Honestly, you won’t get arrested if you simply play dumb when caught. Unless, that is, you’re smuggling an actual gun or bomb aboard an aircraft, in which case you probably deserve to get arrested.

Posted on December 6, 2004 at 9:15 AM

RFID Passports

Since the terrorist attacks of 2001, the Bush administration — specifically, the Department of Homeland Security — has wanted the world to agree on a standard for machine-readable passports. Countries whose citizens currently do not need visas to enter the United States will have to issue passports that conform to the standard or risk losing their visa-free status.

These future passports, currently being tested, will include an embedded computer chip. This chip will allow the passport to contain much more information than a simple machine-readable character font, and will allow passport officials to quickly and easily read that information. That is a reasonable requirement and a good idea for bringing passport technology into the 21st century.

But the Bush administration is advocating radio frequency identification (RFID) chips for both U.S. and foreign passports, and that’s a very bad thing.

These chips are like smart cards, but they can be read from a distance. A receiving device can “talk” to the chip remotely, without any need for physical contact, and get whatever information is on it. Passport officials envision being able to download the information on the chip simply by bringing it within a few centimeters of an electronic reader.

Unfortunately, RFID chips can be read by any reader, not just the ones at passport control. The upshot of this is that travelers carrying around RFID passports are broadcasting their identity.

Think about what that means for a minute. It means that passport holders are continuously broadcasting their name, nationality, age, address and whatever else is on the RFID chip. It means that anyone with a reader can learn that information, without the passport holder’s knowledge or consent. It means that pickpockets, kidnappers and terrorists can easily — and surreptitiously — pick Americans or nationals of other participating countries out of a crowd.

It is a clear threat to both privacy and personal safety, and quite simply, that is why it is a bad idea. Proponents of the system claim that the chips can be read only from within a distance of a few centimeters, so there is no potential for abuse. This is a spectacularly naïve claim. All wireless protocols can work at much longer ranges than specified. In tests, RFID chips have been read by receivers 20 meters away. Improvements in technology are inevitable.

Security is always a trade-off. If the benefits of RFID outweighed the risks, then maybe it would be worth it. Certainly, there isn’t a significant benefit when people present their passport to a customs official. If that customs official is going to take the passport and bring it near a reader, why can’t he go those extra few centimeters that a contact chip — one the reader must actually touch — would require?

The Bush administration is deliberately choosing a less secure technology without justification. If there were a good offsetting reason to choose that technology over a contact chip, then the choice might make sense.

Unfortunately, there is only one possible reason: The administration wants surreptitious access itself. It wants to be able to identify people in crowds. It wants to surreptitiously pick out the Americans, and pick out the foreigners. It wants to do the very thing that it insists, despite demonstrations to the contrary, can’t be done.

Normally I am very careful before I ascribe such sinister motives to a government agency. Incompetence is the norm, and malevolence is much rarer. But this seems like a clear case of the Bush administration putting its own interests above the security and privacy of its citizens, and then lying about it.

This article originally appeared in the 4 October 2004 edition of the International Herald Tribune.

Posted on October 4, 2004 at 7:20 PM

Do Terror Alerts Work?

As I read the litany of terror threat warnings that the government has issued in the past three years, the thing that jumps out at me is how vague they are. The careful wording implies everything without actually saying anything. We hear “terrorists might try to bomb buses and rail lines in major U.S. cities this summer,” and there’s “increasing concern about the possibility of a major terrorist attack.” “At least one of these attacks could be executed by the end of the summer 2003.” Warnings are based on “uncorroborated intelligence,” and issued even though “there is no credible, specific information about targets or method of attack.” And, of course, “weapons of mass destruction, including those containing chemical, biological, or radiological agents or materials, cannot be discounted.”

Terrorists might carry out their attacks using cropdusters, helicopters, scuba divers, even prescription drugs from Canada. They might be carrying almanacs. They might strike during the Christmas season, disrupt the “democratic process,” or target financial buildings in New York and Washington.

It’s been more than two years since the government instituted a color-coded terror alert system, and the Department of Homeland Security has issued about a dozen terror alerts in that time. How effective have they been in preventing terrorism? Have they made us any safer, or are they causing harm? Are they, as critics claim, just a political ploy?

When Attorney General John Ashcroft came to Minnesota recently, he said the fact that there had been no terrorist attacks in America in the three years since September 11th was proof that the Bush administration’s anti-terrorist policies were working. I thought: There were no terrorist attacks in America in the three years before September 11th, and we didn’t have any terror alerts. What does that prove?

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans–as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

It’s true too that people want to make their own decisions. Regardless of what the government suggests, people are going to independently assess the situation. They’re going to decide for themselves whether or not changing their behavior seems like a good idea. If there’s no rational information to base their independent assessment on, they’re going to come to conclusions based on fear, prejudice, or ignorance.

We’re already seeing this in the U.S. We see it when Muslim men are assaulted on the street. We see it when a woman on an airplane panics because a Syrian pop group is flying with her. We see it again and again, as people react to rumors about terrorist threats from Al Qaeda and its allies endlessly repeated by the news media.

This all implies that if the government is going to issue a threat warning at all, it should provide as many details as possible. But this is a catch-22: there’s an absolute limit to how much information the government can reveal. The classified nature of the intelligence that goes into these threat alerts precludes the government from giving the public all the information it would need to be meaningfully prepared. And maddeningly, the current administration occasionally compromises the intelligence assets it does have, in the interest of politics. It recently released the name of a Pakistani agent working undercover in Al Qaeda, blowing ongoing counterterrorist operations both in Pakistan and the U.K.

Still, ironically, most of the time the administration projects a “just trust me” attitude. And there are those in the U.S. who trust it, and there are those who do not. Unfortunately, there are good reasons not to trust it. There are two reasons government likes terror alerts. Both are self-serving, and neither has anything to do with security.

The first is such a common impulse of bureaucratic self-protection that it has achieved a popular acronym in government circles: CYA. If the worst happens and another attack occurs, the American public isn’t going to be as sympathetic to the current administration as it was last time. After the September 11th attacks, the public reaction was primarily shock and disbelief. In response, the government vowed to fight the terrorists. They passed the draconian USA PATRIOT Act, invaded two countries, and spent hundreds of billions of dollars. Next time, the public reaction will quickly turn into anger, and those in charge will need to explain why they failed. The public is going to demand to know what the government knew and why it didn’t warn people, and they’re not going to look kindly on someone who says: “We didn’t think the threat was serious enough to warn people.” Issuing threat warnings is a way to cover themselves. “What did you expect?” they’ll say. “We told you it was Code Orange.”

The second purpose is even more self-serving: Terror threat warnings are a publicity tool. They’re a method of keeping terrorism in people’s minds. Terrorist attacks on American soil are rare, and unless the topic stays in the news, people will move on to other concerns. There is, of course, a hierarchy to these things. Threats against U.S. soil are most important, threats against Americans abroad are next, and terrorist threats–even actual terrorist attacks–against foreigners in foreign countries are largely ignored.

Since the September 11th attacks, Republicans have made “tough on terror” the centerpiece of their reelection strategies. Study after study has shown that Americans who are worried about terrorism are more likely to vote Republican. In 2002, Karl Rove specifically told Republican legislators to run on that platform, and strength in the face of the terrorist threat is the basis of Bush’s reelection campaign. For that strategy to work, people need to be reminded constantly about the terrorist threat and how the current government is keeping them safe.

It has to be the right terrorist threat, though. Last month someone exploded a pipe bomb in a stem-cell research center near Boston, but the administration didn’t denounce this as a terrorist attack. In April 2003, the FBI disrupted a major terrorist plot in the U.S., arresting William Krar and seizing automatic weapons, pipe bombs, bombs disguised as briefcases, and at least one cyanide bomb–an actual chemical weapon. But because Krar was a member of a white supremacist group and not Muslim, Ashcroft didn’t hold a press conference, Tom Ridge didn’t announce how secure the homeland was, and Bush never mentioned it.

Threat warnings can be a potent tool in the fight against terrorism–when there is a specific threat at a specific moment. There are times when people need to act, and act quickly, in order to increase security. But this is a tool that can easily be abused, and when it’s abused it loses its effectiveness.

It’s instructive to look at the European countries that have been dealing with terrorism for decades, like the United Kingdom, Ireland, France, Italy, and Spain. None of these has a color-coded terror-alert system. None calls a press conference on the strength of “chatter.” Even Israel, which has seen more terrorism than any other nation in the world, issues terror alerts only when there is a specific imminent attack and they need people to be vigilant. And these alerts include specific times and places, with details people can use immediately. They’re not dissimilar from hurricane warnings.

A terror alert that instills a vague feeling of dread or panic echoes the very tactics of the terrorists. There are essentially two ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear with the threat of doing something horrible. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

There’s another downside to incessant threat warnings, one that happens when everyone realizes that they have been abused for political purposes. Call it the “Boy Who Cried Wolf” problem. After too many false alarms, the public will become inured to them. Already this has happened. Many Americans ignore terrorist threat warnings; many even ridicule them. The Bush administration lost considerable respect when it was revealed that August’s New York/Washington warning was based on three-year-old information. And the more recent warning that terrorists might target cheap prescription drugs from Canada was assumed universally to be politics-as-usual.

Repeated warnings do more harm than good, by needlessly creating fear and confusion among those who still trust the government, and anesthetizing everyone else to any future alerts that might be important. And every false alarm makes the next terror alert less effective.

Fighting global terrorism is difficult, and it’s not something that should be played for political gain. Countries that have been dealing with terrorism for decades have realized that much of the real work happens outside of public view, and that often the most important victories are the most secret. The elected officials of these countries take the time to explain this to their citizens, who in return have a realistic view of what the government can and can’t do to keep them safe.

By making terrorism the centerpiece of his reelection campaign, President Bush and the Republicans play a very dangerous game. They’re making many people needlessly fearful. They’re attracting the ridicule of others, both domestically and abroad. And they’re distracting themselves from the serious business of actually keeping Americans safe.

This article was originally published in the October 2004 edition of The Rake.

Posted on October 4, 2004 at 7:08 PM

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets — keys — but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
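The point can be made concrete with a short sketch (the message and key here are illustrative). HMAC-SHA256 is a completely public algorithm, yet forging a tag requires the key, and replacing a compromised key is a one-line change — exactly the properties that make keys, and only keys, good secrets:

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is entirely public; the key is the only
# secret, and it is short, easy to generate, and easy to replace.
key = secrets.token_bytes(32)
message = b"network outage report"
tag = hmac.new(key, message, hashlib.sha256).digest()

# Anyone holding the key can verify the tag; knowing the algorithm
# alone helps an attacker not at all.
assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest())

# If the key leaks, recovery is trivial: rotate it and re-tag.
key = secrets.token_bytes(32)
assert hmac.new(key, message, hashlib.sha256).digest() != tag
print("ok")
```

Contrast that with a secret algorithm: once it leaks, there is nothing to rotate.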

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies simply denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures — and even routine government operations — secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry — to whom the government is ultimately accountable — is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
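As a toy illustration of why hiding hubs is futile (the graph and node names are invented for the example): a simple degree count over a network’s edges exposes the hub immediately, and an attacker can run the same count a defender can.

```python
from collections import Counter

# A toy infrastructure graph. Scale-free networks have a few highly
# connected hubs; counting node degrees makes them obvious.
edges = [
    ("hub", "a"), ("hub", "b"), ("hub", "c"), ("hub", "d"),
    ("a", "b"), ("c", "relay"), ("relay", "e"),
]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

# The node to harden (and attack) first is the highest-degree one.
print(degree.most_common(1)[0][0])  # → hub
```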

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October 2004 issue of Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM


