Blog: October 2004 Archives

Getting Out the Vote: Why is it so hard to run an honest election?

Four years after the Florida debacle of 2000 and two years after Congress passed the Help America Vote Act, voting problems are again in the news: confusing ballots, malfunctioning voting machines, problems over who’s registered and who isn’t. All this brings up a basic question: Why is it so hard to run an election?

A fundamental requirement for a democratic election is a secret ballot, and that’s the first reason. Computers regularly handle multimillion-dollar financial transactions, but much of their security comes from the ability to audit the transactions after the fact and correct problems that arise. Much of what they do can be done the next day if the system is down. Neither of these solutions works for elections.

American elections are particularly difficult because they’re so complicated. One ballot might have 50 different things to vote on, all but one different in each state and many different in each district. It’s much easier to hold national elections in India, where everyone casts a single vote, than in the United States. Additionally, American election systems need to be able to handle 100 million voters in a single day—an immense undertaking in the best of circumstances.

Speed is another factor. Americans demand election results before they go to sleep; we won’t stand for waiting more than two weeks before knowing who won, as happened in India and Afghanistan this year.

To make matters worse, voting systems are used infrequently, at most a few times a year. Systems that are used every day improve because people familiarize themselves with them, discover mistakes and figure out improvements. It seems as if we all have to relearn how to vote every time we do it.

It should be no surprise that there are problems with voting. What’s surprising is that there aren’t more problems. So how do we make the system work better?

Simplicity: This is the key to making voting better. Registration should be as simple as possible. The voting process should be as simple as possible. Ballot designs should be simple, and they should be tested. The computer industry understands the science of user-interface design; that knowledge should be applied to ballot design.

Uniformity: Simplicity leads to uniformity. The United States doesn’t have one set of voting rules or one voting system. It has 51 different sets of voting rules—one for every state and the District of Columbia—and even more systems. The more systems are standardized around the country, the more we can learn from each other’s mistakes.

Verifiability: Computerized voting machines might have a simple user interface, but complexity hides behind the screen and keyboard. To avoid even more problems, these machines should have a voter-verifiable paper ballot. This isn’t a receipt; it’s not something you take home with you. It’s a paper “ballot” with your votes—one that you verify for accuracy and then put in a ballot box. The machine provides quick tallies, but the paper is the basis for any recounts.

Transparency: All computer code used in voting machines should be public. This allows interested parties to examine the code and point out errors, resulting in continually improving security. Any voting-machine company that claims its code must remain secret for security reasons is lying. Security in computer systems comes from transparency—open systems that pass public scrutiny—and not from secrecy.

But those are all solutions for the future. If you’re a voter this year, your options are fewer. My advice is to vote carefully. Read the instructions carefully, and ask questions if you are confused. Follow the instructions carefully, checking every step as you go. Remember that it might be impossible to correct a problem once you’ve finished voting. In many states—including California—you can request a paper ballot if you have any worries about the voting machine.

And be sure to vote. This year, thousands of people are watching and waiting at the polls to help voters make sure their vote counts.

This essay originally appeared in the San Francisco Chronicle.

Also read Avi Rubin’s op-ed on the subject.

Posted on October 31, 2004 at 9:13 AM • 20 Comments

Mail-in Ballot Attack

Ampersand lives in Oregon, which does its voting entirely by mail. On Monday—the day a lot of Oregon voters got their ballots—someone knocked over Ampersand’s “No on 36” sign and stole his mailbox, presumably hoping to get his ballot and prevent him from voting “no” on Measure 36. In fact, he had received his ballot the previous Saturday, but the attack could easily have worked.

From “Alas A Blog”:

On Monday, someone came into our yard, knocked over our “No on 36” sign, and stole our mailbox (with Monday’s mail inside it).

I doubt this was just random vandalism; Oregon mailed out voter ballots last week (Oregon does the vote entirely by mail), and a huge number of Oregonians got their ballots on Monday. So I think someone grabbed our mailbox and ran hoping that they’d get our ballots and thus keep us from voting against measure 36.

I doubt this was part of any widespread effort. Surely anyone doing it on a large scale would get tired of hauling off mailboxes, and just steal the mail inside. It’s also hard to avoid getting caught, since you have to steal the mail during the day—after it’s delivered but before the resident comes home to get it.

Still, it is interesting how the predictably timed mailing of ballots, and the prevalence of political lawn signs, enables a very narrowly targeted attack.

Posted on October 29, 2004 at 2:12 PM • 6 Comments

The Security of Checks and Balances

Much of the political rhetoric surrounding the US presidential election centers around the relative security posturings of President George W. Bush and Senator John Kerry, with each side loudly proclaiming that his opponent will do irrevocable harm to national security.

Terrorism is a serious issue facing our nation in the early 21st century, and the contrasting views of these candidates are important. But this debate obscures another security risk, one much more central to the US: the increasing centralisation of American political power in the hands of the executive branch of the government.

Over 200 years ago, the framers of the US Constitution established an ingenious security device against tyrannical government: they divided government power among three different bodies. A carefully thought-out system of checks and balances among the executive, legislative, and judicial branches ensured that no single branch became too powerful. After watching tyrannies rise and fall throughout Europe, this seemed like a prudent way to form a government.

Since 9/11, the United States has seen an enormous power grab by the executive branch. From denying suspects the right to a trial—and sometimes to an attorney—to the law-free zone established at Guantanamo, from deciding which ratified treaties to ignore to flouting laws designed to foster open government, the Bush administration has consistently moved to increase its power at the expense of the rest of the government. The so-called “Torture Memos,” prepared at the request of the president, assert that the president can claim unlimited power as long as it is somehow connected with counterterrorism.

Presidential power as a security issue will not play a role in the upcoming US election. Bush has shown through his actions during his first term that he favours increasing the powers of the executive branch over the legislative and the judicial branches. Kerry’s words show that he is in agreement with the president on this issue. And largely, the legislative and judicial branches are allowing themselves to be trampled over.

In times of crisis, the natural human reaction is to look for safety in a single strong leader. This is why Bush’s rhetoric of strength has been so well-received by the American people, and why Kerry is also campaigning on a platform of strength. Unfortunately, consolidating power in one person is dangerous. History shows again and again that power is a corrupting influence, and that more power is more corrupting. The loss of the American system of checks and balances is more of a security danger than any terrorist risk.

The ancient Roman Senate had a similar way of dealing with major crises. When there was a serious military threat against the safety and security of the Republic, the long debates and compromise legislation that accompanied the democratic process seemed a needless luxury. The Senate would appoint a single person, called a “dictator” (Latin for “one who orders”) to have absolute power over Rome in order to more efficiently deal with the crisis. He was appointed for a period of six months or for the duration of the emergency, whichever period was shorter. Sometimes the process worked, but often the injustices that resulted from having a dictator were worse than the original crisis.

Today, the principles of democracy enshrined in the US constitution are more important than ever. In order to prevail over global terrorism while preserving the values that have made America great, the constitutional system of checks and balances is critical.

This is not a partisan issue; I don’t believe that John Kerry, if elected, would willingly lessen his own power any more than second-term President Bush would. What the US needs is a strong Congress and a strong court system to balance the presidency, not weak ones ceding ever more power to the presidency.

Originally published in the Sydney Morning Herald.

Posted on October 29, 2004 at 10:21 AM • 11 Comments

Foiling Metal Detectors

High school kids are sneaking cell phones past metal detectors.

From the New York Post:

Savvy students are figuring out all kinds of ways to get their cell phones past metal-detectors and school-security staff at city high schools, where the devices are banned.

Kids at Martin Luther King Jr. HS on the Upper West Side put the phones behind a belt buckle—and blame the buckle for the beeping metal-detector.

Some girls hide the phones where security guards won’t look—in their bras or between their legs.

Note that they’re not fooling the metal detectors; they’re fooling the people staffing the metal detectors.

Posted on October 27, 2004 at 1:44 PM • 53 Comments

A Sensible Elected Official

The new mayor of Madison, Alabama, has a surprisingly sensible attitude about security.

From the Huntsville Times:

City Hall security. Kirkindall, Atallo and Lacy agree the city may have gone a little overboard in the wake of the Sept. 11, 2001, terror attacks by eliminating 20 to 25 prime parking spaces near the building. Starting today, people will be allowed to park there again.

Atallo says the City Hall visitors log—another recent addition—also annoys people and doesn’t do anything to make Madison safer. To prove it, he’s been signing the names of famous terrorists – “O.B. Laden,” “Carlos T. Jackal” – in the book.

No one’s caught it.

As of today, the log is out.

I have no idea if he’s a Republican or a Democrat, but I wish there were more people like him in government.

Posted on October 26, 2004 at 11:36 AM • 5 Comments

World Series Security

The World Series is no stranger to security. Fans try to sneak into the ballpark without tickets, or with counterfeit tickets. Often food and alcohol are prohibited from being brought into the ballpark, to enforce the monopoly of the high-priced concessions. Violence is always a risk: both small fights and larger-scale riots that result from fans of both teams being in such close proximity—like the one that almost happened during the sixth game of the AL series.

Today, the new risk is terrorism. Security at the Olympics cost $1.5 billion. $50 million each was spent at the Democratic and Republican conventions. There has been no public statement about the security bill for the World Series, but it’s reasonable to assume it will be impressive.

In our fervor to defend ourselves, it’s important that we spend our money wisely. Much of what people think of as security against terrorism doesn’t actually make us safer. Even in a world of high-tech security, the most important solution is the guy watching to keep beer bottles from being thrown onto the field.

Generally, security measures that defend specific targets are wasteful, because they can be avoided simply by switching targets. If we completely defend the World Series from attack, and the terrorists bomb a crowded shopping mall instead, little has been gained.

Even so, some high-profile locations, like national monuments and symbolic buildings, and some high-profile events, like political conventions and championship sporting events, warrant additional security. What additional measures make sense?

ID checks don’t make sense. Everyone has an ID. Even the 9/11 terrorists had IDs. What we want is to somehow check intention; is the person going to do something bad? But we can’t do that, so we check IDs instead. It’s a complete waste of time and money, and does absolutely nothing to make us safer.

Automatic face recognition systems don’t work. Computers that automatically pick terrorists out of crowds are a great movie plot device, but they don’t work in the real world. We don’t have a comprehensive photographic database of known terrorists. Even worse, the face recognition technology is so faulty that it often can’t make the matches even when we do have decent photographs. We tried it at the 2001 Super Bowl; it was a failure.

Airport-like attendee screening doesn’t work. The terrorists who took over the Russian school sneaked their weapons in long before their attack. And screening fans is only a small part of the solution. There are simply too many people, vehicles, and supplies moving in and out of a ballpark regularly. This kind of security failed at the Olympics, as reporters proved again and again that they could sneak all sorts of things into the stadiums undetected.

What does work is people: smart security officials watching the crowds. It’s called “behavior recognition,” and it requires trained personnel looking for suspicious behavior. Does someone look out of place? Is he nervous, and not watching the game? Is he not cheering, hissing, booing, and waving like a sports fan would?

This is what good policemen do all the time. It’s what Israeli airport security does. It works because instead of relying on checkpoints that can be bypassed, it relies on the human ability to notice something that just doesn’t feel right. It’s intuition, and it’s far more effective than computerized security solutions.

Will this result in perfect security? Of course not. No security measures are guaranteed; all we can do is reduce the odds. And the best way to do that is to pay attention. A few hundred plainclothes policemen, walking around the stadium and watching for anything suspicious, will provide more security against terrorism than almost anything else we can reasonably do.

And the best thing about policemen is that they’re adaptable. They can deal with terrorist threats, and they can deal with more common security issues, too.

Most of the threats at the World Series have nothing to do with terrorism; unruly or violent fans are a much more common problem. And more likely than a complex 9/11-like plot is a lone terrorist with a gun, a bomb, or something that will cause panic. But luckily, the security measures ballparks have already put in place to protect against the former also help protect against the latter.

Originally published by UPI.

Posted on October 25, 2004 at 6:31 PM • 4 Comments

Security Information Management Systems (SIMS)

The computer security industry is guilty of overhyping and underdelivering. Again and again, it tells customers that they must buy a certain product to be secure. Again and again, they buy the products—and are still insecure.

Firewalls didn’t keep out network attackers—in fact, the notion of “perimeter” is severely flawed. Intrusion detection systems (IDSs) didn’t keep networks safe, and worms and viruses do considerable damage despite the prevalence of antivirus products. It’s in this context that I want to evaluate Security Information Management Systems, or SIMS, which promise to solve a serious network problem: log analysis.

Computer logs are a goldmine of security information, containing not just IDS alerts, but messages from firewalls, servers, applications, and other network devices. Your network produces megabytes of these logs every day, and hidden in them are attack footprints. The trick is finding and reacting to them fast enough.

Analyzing log messages can determine how the attacker broke in, what he accessed, whether any backdoors were added, and so on. The idea behind log analysis is that if you can read the log messages in real time, you can figure out what the attacker is doing. And if you can respond fast enough, you can kick him out before he does damage. It’s security detection and response. Log analysis works, whether or not you use SIMS.
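The mechanics of the detection side are simple to sketch. Below is a minimal, hypothetical Python illustration of real-time log monitoring; the log path and the three “attack footprint” patterns are my own inventions rather than anyone’s production rules, and the point of the essay is that the print statement at the bottom is really a trained analyst.

    # A rough sketch of the detection half of log analysis, not any vendor's
    # actual system. The log path and patterns below are invented for illustration;
    # real deployments watch many log sources at once.
    import re
    import time

    FOOTPRINTS = {
        "auth failure":  re.compile(r"Failed password|authentication failure"),
        "firewall deny": re.compile(r"\b(DENY|DROP)\b"),
        "web probe":     re.compile(r"\.\./\.\.|/etc/passwd|cmd\.exe"),
    }

    def follow(path):
        """Yield lines appended to a log file, roughly in real time."""
        with open(path, errors="replace") as f:
            f.seek(0, 2)                 # start at the end of the file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)      # nothing new yet; wait and retry
                    continue
                yield line.rstrip("\n")

    if __name__ == "__main__":
        for line in follow("/var/log/syslog"):       # hypothetical log source
            for label, pattern in FOOTPRINTS.items():
                if pattern.search(line):
                    # In practice this alert goes to a human analyst for triage;
                    # that human step is the one that can't be automated away.
                    print(f"ALERT [{label}]: {line}")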

Even better, it works against a wide variety of risks. Unlike point solutions, security monitoring is general. Log analysis can detect attackers regardless of their tactics.

But SIMS don’t live up to the hype, because they’re missing the essential ingredient that so many other computer security products lack: human intelligence. Firewalls often fail because they’re configured and maintained improperly. IDSs are often useless because there’s no one to respond to their alerts—or to separate the real attacks from the false alarms. SIMS have the same problem: unless there’s a human expert monitoring them, they’re not defending anything. The tools are only as effective as the people using them.

SIMS require vigilance: attacks can happen at any time of the day and any day of the year. Consequently, staffing requires five full-time employees; more, if you include supervisors and backup personnel with more specialized skills. Even if an organization could find the budget for all of these people, it would be very difficult to hire them in today’s job market. And attacks against a single organization don’t happen often enough to keep a team of this caliber engaged and interested.
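That five-person figure is mostly coverage arithmetic; here is a back-of-envelope version of it, assuming a standard 40-hour work week:

    # Back-of-envelope staffing arithmetic (my own illustration of the claim above).
    hours_per_week = 24 * 7            # a monitoring console must be covered 168 hours/week
    hours_per_analyst = 40             # one full-time analyst
    seats = hours_per_week / hours_per_analyst
    print(seats)                       # 4.2 -- five people once you round up, before
                                       # covering vacations, sick days, and training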

Back in 1999, I founded Counterpane Internet Security; we sell an outsourced service called Managed Security Monitoring, in which trained security analysts monitor IDS alerts and log messages. Because of the information our analysts receive from the network—in real time—as well as their training and expertise, they can detect attacks in progress and provide customers with a level of security they would be incapable of achieving otherwise.

When building the Counterpane monitoring service in 1999, we examined log-monitoring appliances from companies like Intellitactics and e-Security. Back then, they weren’t anywhere near good enough for us to use, so we developed our own proprietary system. Today, because of the caliber of the human analysts who use the Counterpane system, it’s much better than any commercial SIMS. We were able to design it with our expert detection-and-response analysts in mind, and not the general sysadmin market.

The key to network security is people, not products. Piling more security products, such as SIMS, onto our networks won’t help. This is why I believe that network security will eventually be outsourced. There’s no other cost-effective way to reliably get the experts you need, and therefore no other cost-effective way to reliably get security.

This originally appeared in the September/October 2004 issue of IEEE Security and Privacy Magazine.

Posted on October 20, 2004 at 6:03 PM • 18 Comments

Technology and Counterterrorism

Technology makes us safer.

Communications technologies ensure that emergency response personnel can communicate with each other in an emergency—whether police, fire or medical. Bomb-sniffing machines now routinely scan airplane baggage. Other technologies may someday detect contaminants in our water supply or our atmosphere.

Throughout law enforcement and intelligence investigation, different technologies are being harnessed for the good of defense. However, technologies designed to secure specific targets have a limited value.

By its very nature, defense against terrorism means we must be prepared for anything. This makes it expensive—if not nearly impossible—to deploy threat-specific technological advances at all the places where they’re likely needed. So while it’s good to have bomb-detection devices in airports and bioweapon detectors in crowded subways, defensive technology cannot be applied at every conceivable target for every conceivable threat. If we spent billions of dollars securing airports and the terrorists shifted their attacks to shopping malls, we wouldn’t gain any security as a society.

It’s far more effective to try and mitigate the general threat. For example, technologies that improve intelligence gathering and analysis could help federal agents quickly chase down information about suspected terrorists. The technologies could help agents more rapidly uncover terrorist plots of any type and aimed at any target, from nuclear plants to the food supply. In addition, technologies that foster communication, coordination and emergency response could reduce the effects of a terrorist attack, regardless of what form the attack takes. We get the most value for our security dollar when we can leverage technology to extend the capabilities of humans.

Just as terrorists can use technology more or less wisely, we as defenders can do the same. It is only by keeping in mind the strengths and limitations of technology that we can increase our security without wasting money, freedoms or civil liberties, and without making ourselves more vulnerable to other threats. Security is a trade-off, and it is important that we use technologies that enable us to make better trade-offs and not worse ones.

Originally published on CNet

Posted on October 20, 2004 at 4:35 PM • 3 Comments

News

Great moments in security screening

The U.S. government’s cybersecurity chief resigned with a day’s notice. I can understand his frustration; the position had no power and could only suggest, plead, and cheerlead.
Computerworld
Washington Post
FCW.com
CNet

North Korea has over 500 trained cyberwarriors, according to the South Korean Defense Ministry. Maybe this is true, and maybe it’s just propaganda—from either the North or the South. Certainly, though, any smart military will train people in the art of attacking enemy computer networks.
channelnewsasia.com

Posted on October 18, 2004 at 9:23 PM • 5 Comments

Schneier: Security outsourcing widespread by 2010

Bruce Schneier is founder and chief technology officer of Mountain View, Calif.-based MSSP Counterpane Internet Security Inc. and author of Applied Cryptography, Secrets and Lies, and Beyond Fear. He also publishes Crypto-Gram, a free monthly newsletter, and writes op-ed pieces for various publications. Schneier spoke to SearchSecurity.com about the latest threats, Microsoft’s ongoing security struggles and other topics in a two-part interview that took place by e-mail and phone last week. In this installment, he talks about the safety of open source vs. closed source, the future of security management and spread of blogs.

Are open source products more secure than closed source?

Schneier: It’s more complicated than that. To analyze the security of a software product you need to have software security experts analyze the code. You can do that in the closed-source model by hiring them, or you can do that in the open-source model by making the code public and hoping that they do so for free. Both work, but obviously the latter is cheaper. It’s also not guaranteed. There’s lots of open-source software out there that no one has analyzed and is no more secure than all the closed-source products that no one has analyzed. But then there are things like Linux, Apache or OpenBSD that get a lot of analysis. When open-source code is properly analyzed, there’s nothing better. But just putting the code out in public is no guarantee.

A recent Yankee Group report said enterprises will outsource 90% of their security management by 2010, and that more businesses have made security a priority to meet growing threats and comply with laws like HIPAA and Sarbanes-Oxley. Do you agree?

Schneier: I think that network security will largely be outsourced by 2010 regardless of compliance issues. It’s infrastructure, and infrastructure is always outsourced … eventually. I say eventually because it often takes years for companies to come to terms with it. But Internet security is no different than tax preparation, legal services, food services, cleaning services or phone service. It will be outsourced. I do believe that the various compliance issues, like the laws you mention, are causing companies to increase their security budgets. It’s the same economic driver that I talked about in your question about Microsoft. By increasing the penalties to companies if they don’t have adequate security, the laws induce companies to spend more on security. That’s good for everyone.

How is Crypto-Gram doing?

Schneier: Crypto-Gram currently has about 100,000 readers; 75,000 get it in e-mail every month and another 25,000 read it on the Web. When I started it in 1998, I had no idea it would get this big. I actually thought about charging for it, which would have been a colossal mistake. I think the key to Crypto-Gram’s success is that it’s both interesting and honest. Security is an amazingly rich topic, and there are always things in the news to talk about. Last month I talked about airline security, the Olympics and cellphones. This month I’m going to talk about academic freedom, the security of elections, and RFID chips in passports.

Some people compare Crypto-Gram to a blog. Is that a reasonable comparison?

Schneier: It’s reasonable in the sense that it’s one person writing on topics that interest him. But the form factor is different. Blogs are Web-based journals, updated regularly. Crypto-Gram is a monthly e-mail newsletter. Sometimes I wish I had the immediacy of a blog, but I like the discipline of a regular publishing schedule. And I think I have more readers because I push the content to my readers’ e-mail boxes.

Do you think blogs have become more useful than traditional media as a way to get the latest security news to IT managers?

Schneier: Blogs are faster, but they’re unfiltered. They’re definitely the fastest way to get the latest news—on security or any other topic—as long as you’re not too concerned about accuracy. Traditional news sources are slower, but there’s higher quality. So they’re both useful, as long as you understand their relative strengths and weaknesses.

By Bill Brenner, News Writer
05 Oct 2004 | SearchSecurity.com

Posted on October 8, 2004 at 4:52 PM • 1 Comment

Schneier: Microsoft still has work to do

Bruce Schneier is founder and chief technology officer of Mountain View, Calif.-based MSSP Counterpane Internet Security Inc. and author of Applied Cryptography, Secrets and Lies, and Beyond Fear. He also publishes Crypto-Gram, a free monthly newsletter, and writes op-ed pieces for various publications. Schneier spoke to SearchSecurity.com about the latest threats, Microsoft’s ongoing security struggles and other topics in a two-part interview that took place by e-mail and phone last month. In this installment, he talks about the “hype” of SP2 and explains why it’s “foolish” to use Internet Explorer.

What’s the biggest threat to information security at the moment?

Schneier: Crime. Criminals have discovered IT in a big way. We’re seeing a huge increase in identity theft and associated financial theft. We’re seeing a rise in credit card fraud. We’re seeing a rise in blackmail. Years ago, the people breaking into computers were mostly kids participating in the information-age equivalent of spray painting. Today there’s a profit motive, as those same hacked computers become launching pads for spam, phishing attacks and Trojans that steal passwords. Right now we’re seeing a crime wave against Internet consumers that has the potential to radically change the way people use their computers. When enough average users complain about having money stolen, the government is going to step in and do something. The results are unlikely to be pretty.

Which threats are overly hyped?

Schneier: Cyberterrorism. It’s not much of a threat. These attacks are very difficult to execute. The software systems controlling our nation’s infrastructure are filled with vulnerabilities, but they’re generally not the kinds of vulnerabilities that cause catastrophic disruptions. The systems are designed to limit the damage that occurs from errors and accidents. They have manual overrides. These systems have been proven to work; they’ve experienced disruptions caused by accident and natural disaster. We’ve been through blackouts, telephone switch failures and disruptions of air traffic control computers. The results might be annoying, and engineers might spend days or weeks scrambling, but it doesn’t spread terror. The effect on the general population has been minimal.

Microsoft has made much of the added security muscle in SP2. Has it measured up to the hype?

Schneier: SP2 is much more hype than substance. It’s got some cool things, but I was unimpressed overall. It’s a pity, though. They had an opportunity to do more, and I think they could have done more. But even so, this stuff is hard. I think the fact that SP2 was largely superficial speaks to how the poor security choices Microsoft made years ago are deeply embedded inside the operating system.

Is Microsoft taking security more seriously?

Schneier: Microsoft is certainly taking it more seriously than three years ago, when they ignored it completely. But they’re still not taking security seriously enough for me. They’ve made some superficial changes in the way they approach security, but they still treat it more like a PR problem than a technical problem. To me, the problem is economic. Microsoft—or any other software company—is not a charity, and we should not expect them to do something that hurts their bottom line. As long as we all are willing to buy insecure software, software companies don’t have much incentive to make their products secure. For years I have been advocating software liability as a way of changing that balance. If software companies could get sued for defective products, just as automobile manufacturers are, then they would spend much more money making their products secure.

After the Download.ject attack in June, voices advocating alternatives to Internet Explorer grew louder. Which browser do you use?

Schneier: I think it’s foolish to use Internet Explorer. It’s filled with security holes, and it’s too hard to configure it to have decent security. Basically, it seems to be written in the best interests of Microsoft and not in the best interests of the customer. I have used the Opera browser for years, and I am very happy with it. It’s much better designed, and I never have to worry about Explorer-based attacks.

By Bill Brenner, News Writer
4 Oct 2004 | SearchSecurity.com

Posted on October 8, 2004 at 4:45 PM • 7 Comments

Disrupting Air Travel with Arabic Writing

In August, I wrote about the stupidity of United Airlines returning a flight from Sydney to Los Angeles back to Sydney because a flight attendant found an airsickness bag with the letters “BOB” written on it in a lavatory. (“BOB” supposedly stood for “Bomb on Board.”)

I received quite a bit of mail about that. Most of it was supportive, but some people argued that the airline should do everything in its power to protect its passengers and that the airline was reasonable and acting prudently.

The problem with that line of reasoning is that it has no limits. In corresponding with people, I asked whether a flight should be diverted if one of the passengers was wearing an orange shirt: orange being the color of the DHS’s heightened alert level. If you believe that the airline should respond drastically to any threat, no matter how small, then they should.

That example was fanciful, and deliberately so. Here’s another, even more fanciful, example. Unfortunately, it’s a real one.

Last month in Milwaukee, a Midwest Airlines flight had already pulled away from the gate when someone, the articles don’t say who, found Arabic writing in his or her copy of the airline’s in-flight magazine.

I have no idea what sort of panic ensued, but the airplane turned around and returned to the gate. Everyone was taken off the plane and inspected. The plane and all the luggage were inspected. Surprise: nothing was found.

The passengers didn’t fly out until the next morning.

This kind of thing is idiotic. Terrorism is a serious problem, and we’re not going to protect ourselves by overreacting every time someone’s overactive imagination kicks in. We need to be alert to the real threats, instead of making up random ones. It simply makes no sense.

News article

My original essay

Posted on October 7, 2004 at 5:06 PM • 0 Comments

The Legacy of DES

The Data Encryption Standard, or DES, was a mid-’70s brainchild of the National Bureau of Standards: the first modern, public, freely available encryption algorithm. For over two decades, DES was the workhorse of commercial cryptography.

Over the decades, DES has been used to protect everything from databases in mainframe computers, to the communications links between ATMs and banks, to data transmissions between police cars and police stations. Whoever you are, I can guarantee that many times in your life, the security of your data was protected by DES.

Just last month, the former National Bureau of Standards—the agency is now called the National Institute of Standards and Technology, or NIST—proposed withdrawing DES as an encryption standard, signifying the end of the federal government’s most important technology standard, one more important than ASCII, I would argue.

Today, cryptography is one of the most basic tools of computer security, but 30 years ago it barely existed as an academic discipline. In the days when the Internet was little more than a curiosity, cryptography wasn’t even a recognized branch of mathematics. Secret codes were always fascinating, but they were pencil-and-paper codes based on alphabets. In the secret government labs during World War II, cryptography entered the computer era and became mathematics. But with no professors teaching it, and no conferences discussing it, all the cryptographic research in the United States was conducted at the National Security Agency.

And then came DES.

Back in the early 1970s, it was a radical idea. The National Bureau of Standards decided that there should be a free encryption standard. Because the agency wanted it to be non-military, they solicited encryption algorithms from the public. They got only one serious response—the Data Encryption Standard—from the labs of IBM. In 1976, DES became the government’s standard encryption algorithm for “sensitive but unclassified” traffic. This included things like personal, financial and logistical information. And simply because there was nothing else, companies began using DES whenever they needed an encryption algorithm. Of course, not everyone believed DES was secure.

When IBM submitted DES as a standard, no one outside the National Security Agency had any expertise to analyze it. The NSA made two changes to DES: It tweaked the algorithm, and it cut the key size by more than half.

The strength of an algorithm is based on two things: how good the mathematics is, and how long the key is. A sure way of breaking an algorithm is to try every possible key. Modern algorithms have a key so long that this is impossible; even if you built a computer out of all the silicon atoms on the planet and ran it for millions of years, you couldn’t do it. So cryptographers look for shortcuts. If the mathematics is weak, maybe there’s a way to find the key faster: “breaking” the algorithm.
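To get a sense of what key length alone buys, here is a small back-of-envelope calculation; the billion-keys-per-second rate is an arbitrary assumption of mine, not a claim about any real machine:

    # Worst-case brute-force times for a 56-bit (DES) key versus a 128-bit key.
    # The keys-per-second figure is an arbitrary assumption for illustration.
    KEYS_PER_SECOND = 10**9                 # suppose a machine tries a billion keys per second
    SECONDS_PER_YEAR = 60 * 60 * 24 * 365

    def years_to_exhaust(key_bits):
        """Years needed to try every possible key of the given length."""
        return 2 ** key_bits / KEYS_PER_SECOND / SECONDS_PER_YEAR

    print(f"56-bit key:  {years_to_exhaust(56):,.1f} years")   # about 2.3 years
    print(f"128-bit key: {years_to_exhaust(128):.1e} years")   # about 1.1e22 years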

The NSA’s changes caused outcry among the few who paid attention, both regarding the “invisible hand” of the NSA—the tweaks were not made public, and no rationale was given for the final design—and the short key length.

But with the outcry came research. It’s not an exaggeration to say that the publication of DES created the modern academic discipline of cryptography. The first academic cryptographers began their careers by trying to break DES, or at least trying to understand the NSA’s tweak. And almost all of the encryption algorithms—public-key cryptography, in particular—can trace their roots back to DES. Papers analyzing different aspects of DES are still being published today.

By the mid-1990s, it became widely believed that the NSA was able to break DES by trying every possible key. This ability was demonstrated in 1998, when a $220,000 machine was built that could brute-force a DES key in a few days. In 1985, the academic community proposed a DES variant with the same mathematics but a longer key, called triple-DES. This variant had been used in more secure applications in place of DES for years, but it was time for a new standard. In 1997, NIST solicited an algorithm to replace DES.

The process illustrates the complete transformation of cryptography from a secretive NSA technology to a worldwide public technology. NIST once again solicited algorithms from the public, but this time the agency got 15 submissions from 10 countries. My own algorithm, Twofish, was one of them. And after two years of analysis and debate, NIST chose a Belgian algorithm, Rijndael, to become the Advanced Encryption Standard.

It’s a different world in cryptography now than it was 30 years ago. We know more about cryptography, and have more algorithms to choose among. AES won’t become a ubiquitous standard in the same way that DES did. But it is finding its way into banking security products, Internet security protocols, even computerized voting machines. A NIST standard is an imprimatur of quality and security, and vendors recognize that.

So, how good is the NSA at cryptography? They’re certainly better than the academic world. They have more mathematicians working on the problems, they’ve been working on them longer, and they have access to everything published in the academic world, while they don’t have to make their own results public. But are they a year ahead of the state of the art? Five years? A decade? No one knows.

It took the academic community two decades to figure out that the NSA “tweaks” actually improved the security of DES. This means that back in the ’70s, the National Security Agency was two decades ahead of the state of the art.

Today, the NSA is still smarter, but the rest of us are catching up quickly. In 1999, the academic community discovered a weakness in another NSA algorithm, SHA, that the NSA claimed to have discovered only four years previously. And just last week there was a published analysis of the NSA’s SHA-1 that demonstrated weaknesses that we believe the NSA didn’t know about at all.

Maybe now we’re just a couple of years behind.

This essay was originally published on CNet.com

Posted on October 6, 2004 at 6:05 PM • 2 Comments

RFID Passports

Since the terrorist attacks of 2001, the Bush administration—specifically, the Department of Homeland Security—has wanted the world to agree on a standard for machine-readable passports. Countries whose citizens currently do not have visa requirements to enter the United States will have to issue passports that conform to the standard or risk losing their nonvisa status.

These future passports, currently being tested, will include an embedded computer chip. This chip will allow the passport to contain much more information than a simple machine-readable character font, and will allow passport officials to quickly and easily read that information. That is a reasonable requirement and a good idea for bringing passport technology into the 21st century.

But the Bush administration is advocating radio frequency identification (RFID) chips for both U.S. and foreign passports, and that’s a very bad thing.

These chips are like smart cards, but they can be read from a distance. A receiving device can “talk” to the chip remotely, without any need for physical contact, and get whatever information is on it. Passport officials envision being able to download the information on the chip simply by bringing it within a few centimeters of an electronic reader.

Unfortunately, RFID chips can be read by any reader, not just the ones at passport control. The upshot of this is that travelers carrying around RFID passports are broadcasting their identity.

Think about what that means for a minute. It means that passport holders are continuously broadcasting their name, nationality, age, address and whatever else is on the RFID chip. It means that anyone with a reader can learn that information, without the passport holder’s knowledge or consent. It means that pickpockets, kidnappers and terrorists can easily—and surreptitiously—pick Americans or nationals of other participating countries out of a crowd.

It is a clear threat to both privacy and personal safety, and quite simply, that is why it is a bad idea. Proponents of the system claim that the chips can be read only from within a distance of a few centimeters, so there is no potential for abuse. This is a spectacularly naïve claim. All wireless protocols can work at much longer ranges than specified. In tests, RFID chips have been read by receivers 20 meters away. Improvements in technology are inevitable.

Security is always a trade-off. If the benefits of RFID outweighed the risks, then maybe it would be worth it. Certainly, there isn’t a significant benefit when people present their passport to a customs official. If that customs official is going to take the passport and bring it near a reader, why can’t he go those extra few centimeters that a contact chip—one the reader must actually touch—would require?

The Bush administration is deliberately choosing a less secure technology without justification. If there were a good offsetting reason to choose that technology over a contact chip, then the choice might make sense.

Unfortunately, there is only one possible reason: The administration wants surreptitious access for itself. It wants to be able to identify people in crowds. It wants to surreptitiously pick out the Americans, and pick out the foreigners. It wants to do the very thing that it insists, despite demonstrations to the contrary, can’t be done.

Normally I am very careful before I ascribe such sinister motives to a government agency. Incompetence is the norm, and malevolence is much rarer. But this seems like a clear case of the Bush administration putting its own interests above the security and privacy of its citizens, and then lying about it.

This article originally appeared in the 4 October 2004 edition of the International Herald Tribune.

Posted on October 4, 2004 at 7:20 PM • 36 Comments

Aerial Surveillance to Detect Building Code Violations

The Baltimore housing department has a new tool to find homeowners who have been building rooftop decks without a permit: aerial mapping. Baltimore bought aerial photographs of the entire city and used software to correlate the images with databases of address information and permit records. Inspectors have just begun knocking on doors of residents who built decks without permission.
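The correlation step itself is easy to automate, which is part of what makes the approach so scalable. A toy sketch of that cross-check, using data invented purely for illustration:

    # A toy illustration of the cross-check the article describes, with invented data:
    # addresses where imagery analysis flagged a rooftop deck, joined against permits.
    detected_decks = {"12 Oak St", "48 Elm St", "301 Calvert St"}   # from aerial imagery
    permits_on_file = {"48 Elm St"}                                 # from permit records

    for address in sorted(detected_decks - permits_on_file):
        print(f"Possible unpermitted deck: {address}")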

On the face of it, this is nothing new. Police always have been able to inspect buildings for permit violations. The difference is they would do it manually, and that limited its use. It simply wasn’t feasible for the police to automatically document every building code violation in any city. What’s different isn’t the police tactic but the efficiency of the process.

Technology is fundamentally changing the nature of surveillance. Years ago, surveillance involved trench-coated detectives following people down streets. It was laborious and expensive, and was only used when there was reasonable suspicion of a crime. Modern surveillance is the police officer sitting at a computer with a satellite image of an entire neighborhood. It’s the same, but it’s completely different. It’s wholesale surveillance.

And it disrupts the balance between the powers of the police and the rights of the people.

Wholesale surveillance is fast becoming the norm. Security cameras are everywhere, even in places satellites can’t see. Automatic toll road devices track cars at tunnels and bridges. We can all be tracked by our cell phones. Our purchases are tracked by banks and credit card companies, our telephone calls by phone companies, our Internet surfing habits by Web site operators.

Like the satellite images, the electronic footprints we leave everywhere can be automatically correlated with databases. The data can be stored forever, allowing police to conduct surveillance backward in time.

The effects of wholesale surveillance on privacy and civil liberties are profound, but unfortunately, the debate often gets mischaracterized as a question about how much privacy we need to give up in order to be secure. This is wrong. It’s obvious that we are all safer when the police can use all possible crimefighting techniques. The Fourth Amendment already allows police to perform even the most intrusive searches of your home and person.

What we need are mechanisms to prevent abuse and hold the police accountable and assurances that the new techniques don’t place an unreasonable burden on the innocent. In many cases, the Fourth Amendment already provides for this in its requirement of a warrant.

The warrant process requires that a “neutral and detached magistrate” review the basis for the search and take responsibility for the outcome. The key is independent judicial oversight; the warrant process is itself a security measure that protects us from abuse and makes us more secure.

This works for some searches, but not for most wholesale surveillance. The courts already have ruled that the police cannot use thermal imaging to see through the walls of your home without a warrant, but that it’s OK for them to fly overhead and peer over your fences without a warrant. They need a warrant before opening your paper mail or listening in on your phone calls.

Wholesale surveillance calls for something else: lessening of criminal penalties. The reason criminal punishments are severe is to create a deterrent, because it is hard to catch wrongdoers. As they become easier to catch, a realignment is necessary. When the police can automate the detection of wrongdoing, perhaps there should no longer be any criminal penalty attached. For example, red-light cameras and speed-trap cameras issue citations without any “points” assessed against drivers.

Another obvious protection is notice. Baltimore should send mail to every homeowner announcing the use of aerial photography to document building code violations, urging individuals to come into compliance.

Wholesale surveillance is not simply a more efficient way for the police to do what they’ve always done. It’s a new police power, one made possible with today’s technology and one that will be made easier with tomorrow’s. And with any new police power, we as a society need to take an active role in establishing rules governing its use. To do otherwise is to cede ever more authority to the police.

This article was originally published in the 4 October 2004 edition of the Baltimore Sun.

Posted on October 4, 2004 at 7:18 PM • 3 Comments

Do Terror Alerts Work?

As I read the litany of terror threat warnings that the government has issued in the past three years, the thing that jumps out at me is how vague they are. The careful wording implies everything without actually saying anything. We hear “terrorists might try to bomb buses and rail lines in major U.S. cities this summer,” and there’s “increasing concern about the possibility of a major terrorist attack.” “At least one of these attacks could be executed by the end of the summer 2003.” Warnings are based on “uncorroborated intelligence,” and issued even though “there is no credible, specific information about targets or method of attack.” And, of course, “weapons of mass destruction, including those containing chemical, biological, or radiological agents or materials, cannot be discounted.”

Terrorists might carry out their attacks using cropdusters, helicopters, scuba divers, even prescription drugs from Canada. They might be carrying almanacs. They might strike during the Christmas season, disrupt the “democratic process,” or target financial buildings in New York and Washington.

It’s been more than two years since the government instituted a color-coded terror alert system, and the Department of Homeland Security has issued about a dozen terror alerts in that time. How effective have they been in preventing terrorism? Have they made us any safer, or are they causing harm? Are they, as critics claim, just a political ploy?

When Attorney General John Ashcroft came to Minnesota recently, he said the fact that there had been no terrorist attacks in America in the three years since September 11th was proof that the Bush administration’s anti-terrorist policies were working. I thought: There were no terrorist attacks in America in the three years before September 11th, and we didn’t have any terror alerts. What does that prove?

In theory, the warnings are supposed to cultivate an atmosphere of preparedness. If Americans are vigilant against the terrorist threat, then maybe the terrorists will be caught and their plots foiled. And repeated warnings brace Americans for the aftermath of another attack.

The problem is that the warnings don’t do any of this. Because they are so vague and so frequent, and because they don’t recommend any useful actions that people can take, terror threat warnings don’t prevent terrorist attacks. They might force a terrorist to delay his plan temporarily, or change his target. But in general, professional security experts like me are not particularly impressed by systems that merely force the bad guys to make minor modifications in their tactics.

And the alerts don’t result in a more vigilant America. It’s one thing to issue a hurricane warning, and advise people to board up their windows and remain in the basement. Hurricanes are short-term events, and it’s obvious when the danger is imminent and when it’s over. People can do useful things in response to a hurricane warning; then there is a discrete period when their lives are markedly different, and they feel there was utility in the higher alert mode, even if nothing came of it.

It’s quite another thing to tell people to be on alert, but not to alter their plans—as Americans were instructed last Christmas. A terrorist alert that instills a vague feeling of dread or panic, without giving people anything to do in response, is ineffective. Indeed, it inspires terror itself. Compare people’s reactions to hurricane threats with their reactions to earthquake threats. According to scientists, California is expecting a huge earthquake sometime in the next two hundred years. Even though the magnitude of the disaster will be enormous, people just can’t stay alert for two centuries. The news seems to have generated the same levels of short-term fear and long-term apathy in Californians that the terrorist warnings do. It’s human nature; people simply can’t be vigilant indefinitely.

It’s true too that people want to make their own decisions. Regardless of what the government suggests, people are going to independently assess the situation. They’re going to decide for themselves whether or not changing their behavior seems like a good idea. If there’s no rational information to base their independent assessment on, they’re going to come to conclusions based on fear, prejudice, or ignorance.

We’re already seeing this in the U.S. We see it when Muslim men are assaulted on the street. We see it when a woman on an airplane panics because a Syrian pop group is flying with her. We see it again and again, as people react to rumors about terrorist threats from Al Qaeda and its allies endlessly repeated by the news media.

This all implies that if the government is going to issue a threat warning at all, it should provide as many details as possible. Unfortunately, there’s an absolute limit to how much information the government can reveal. The classified nature of the intelligence that goes into these threat alerts precludes the government from giving the public all the information it would need to be meaningfully prepared. And maddeningly, the current administration occasionally compromises the intelligence assets it does have, in the interest of politics. It recently released the name of a Pakistani agent working undercover in Al Qaeda, blowing ongoing counterterrorist operations in both Pakistan and the U.K.

Still, ironically, most of the time the administration projects a “just trust me” attitude. And there are those in the U.S. who trust it, and there are those who do not. Unfortunately, there are good reasons not to trust it. There are two reasons government likes terror alerts. Both are self-serving, and neither has anything to do with security.

The first is such a common impulse of bureaucratic self-protection that it has achieved a popular acronym in government circles: CYA. If the worst happens and another attack occurs, the American public isn’t going to be as sympathetic to the current administration as it was last time. After the September 11th attacks, the public reaction was primarily shock and disbelief. In response, the government vowed to fight the terrorists. They passed the draconian USA PATRIOT Act, invaded two countries, and spent hundreds of billions of dollars. Next time, the public reaction will quickly turn into anger, and those in charge will need to explain why they failed. The public is going to demand to know what the government knew and why it didn’t warn people, and they’re not going to look kindly on someone who says: “We didn’t think the threat was serious enough to warn people.” Issuing threat warnings is a way to cover themselves. “What did you expect?” they’ll say. “We told you it was Code Orange.”

The second purpose is even more self-serving: Terror threat warnings are a publicity tool. They’re a method of keeping terrorism in people’s minds. Terrorist attacks on American soil are rare, and unless the topic stays in the news, people will move on to other concerns. There is, of course, a hierarchy to these things. Threats against U.S. soil are most important, threats against Americans abroad are next, and terrorist threats—even actual terrorist attacks—against foreigners in foreign countries are largely ignored.

Since the September 11th attacks, Republicans have made “tough on terror” the centerpiece of their reelection strategies. Study after study has shown that Americans who are worried about terrorism are more likely to vote Republican. In 2002, Karl Rove specifically told Republican legislators to run on that platform, and strength in the face of the terrorist threat is the basis of Bush’s reelection campaign. For that strategy to work, people need to be reminded constantly about the terrorist threat and how the current government is keeping them safe.

It has to be the right terrorist threat, though. Last month someone exploded a pipe bomb in a stem-cell research center near Boston, but the administration didn’t denounce this as a terrorist attack. In April 2003, the FBI disrupted a major terrorist plot in the U.S., arresting William Krar and seizing automatic weapons, pipe bombs, bombs disguised as briefcases, and at least one cyanide bomb—an actual chemical weapon. But because Krar was a member of a white supremacist group and not Muslim, Ashcroft didn’t hold a press conference, Tom Ridge didn’t announce how secure the homeland was, and Bush never mentioned it.

Threat warnings can be a potent tool in the fight against terrorism—when there is a specific threat at a specific moment. There are times when people need to act, and act quickly, in order to increase security. But this is a tool that can easily be abused, and when it’s abused it loses its effectiveness.

It’s instructive to look at the European countries that have been dealing with terrorism for decades, like the United Kingdom, Ireland, France, Italy, and Spain. None of these has a color-coded terror-alert system. None calls a press conference on the strength of “chatter.” Even Israel, which has seen more terrorism than any other nation in the world, issues terror alerts only when there is a specific imminent attack and they need people to be vigilant. And these alerts include specific times and places, with details people can use immediately. They’re not dissimilar from hurricane warnings.

A terror alert that instills a vague feeling of dread or panic echoes the very tactics of the terrorists. There are essentially two ways to terrorize people. The first is to do something spectacularly horrible, like flying airplanes into skyscrapers and killing thousands of people. The second is to keep people living in fear with the threat of doing something horrible. Decades ago, that was one of the IRA’s major aims. Inadvertently, the DHS is achieving the same thing.

There’s another downside to incessant threat warnings, one that happens when everyone realizes that they have been abused for political purposes. Call it the “Boy Who Cried Wolf” problem. After too many false alarms, the public will become inured to them. Already this has happened. Many Americans ignore terrorist threat warnings; many even ridicule them. The Bush administration lost considerable respect when it was revealed that August’s New York/Washington warning was based on three-year-old information. And the more recent warning that terrorists might target cheap prescription drugs from Canada was assumed universally to be politics-as-usual.

Repeated warnings do more harm than good, by needlessly creating fear and confusion among those who still trust the government, and anesthetizing everyone else to any future alerts that might be important. And every false alarm makes the next terror alert less effective.

Fighting global terrorism is difficult, and it’s not something that should be played for political gain. Countries that have been dealing with terrorism for decades have realized that much of the real work happens outside of public view, and that often the most important victories are the most secret. The elected officials of these countries take the time to explain this to their citizens, who in return have a realistic view of what the government can and can’t do to keep them safe.

By making terrorism the centerpiece of his reelection campaign, President Bush and the Republicans play a very dangerous game. They’re making many people needlessly fearful. They’re attracting the ridicule of others, both domestically and abroad. And they’re distracting themselves from the serious business of actually keeping Americans safe.

This article was originally published in the October 2004 edition of The Rake.

Posted on October 4, 2004 at 7:08 PM | 2 Comments

License Plate "Guns" and Privacy

New Haven police have a new law enforcement tool: a license-plate scanner. Similar to a radar gun, it reads the license plates of moving or parked cars and links with remote police databases, immediately providing information about the car and owner. Right now the police check if there are any taxes owed on the car, if the car or license plate is stolen, and if the car is unregistered or uninsured. A car that comes up positive is towed.

On the face of it, this is nothing new. The police have always been able to run a license plate. The difference is they would do it manually, and that limited its use. It simply wasn’t feasible for the police to run the plates of every car in a parking garage, or every car that passed through an intersection. What’s different isn’t the police tactic, but the efficiency of the process.
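To make that difference concrete, here is a minimal, hypothetical sketch of the kind of automated check a scanner makes possible; the record format, database contents, and plate numbers are invented for illustration and are not drawn from the New Haven system.

    # Hypothetical sketch: the automated checks a plate scanner enables.
    # The database and its contents are illustrative placeholders.
    FLAGS = ("taxes_owed", "stolen", "unregistered", "uninsured")

    def violations_for(plate, database):
        record = database.get(plate, {})
        return [flag for flag in FLAGS if record.get(flag)]

    database = {"ABC123": {"stolen": True}, "XYZ789": {}}

    # A manual lookup handles one plate per radio call; this loop could just
    # as easily run over every plate in a parking garage.
    for plate in ("ABC123", "XYZ789"):
        hits = violations_for(plate, database)
        if hits:
            print(plate, "flagged for tow:", ", ".join(hits))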

Technology is fundamentally changing the nature of surveillance. Years ago, surveillance meant trench-coated detectives following people down streets. It was laborious and expensive, and was only used when there was reasonable suspicion of a crime. Modern surveillance is the policeman with a license-plate scanner, or even a remote license-plate scanner mounted on a traffic light and a policeman sitting at a computer in the station. It’s the same, but it’s completely different. It’s wholesale surveillance.

And it disrupts the balance between the powers of the police and the rights of the people.

Wholesale surveillance is fast becoming the norm. New York’s E-Z Pass tracks cars at tunnels and bridges with tolls. We can all be tracked by our cell phones. Our purchases are tracked by banks and credit card companies, our telephone calls by phone companies, our Internet surfing habits by Web site operators. Security cameras are everywhere. If they wanted, the police could take the database of vehicles outfitted with the OnStar tracking system, and immediately locate all of those New Haven cars.

Like the license-plate scanners, the electronic footprints we leave everywhere can be automatically correlated with databases. The data can be stored forever, allowing police to conduct surveillance backwards in time.

The effect of wholesale surveillance on privacy and civil liberties is profound, but unfortunately the debate often gets mischaracterized as a question about how much privacy we need to give up in order to be secure. This is wrong. It’s obvious that we are all safer when the police can use all techniques at their disposal. What we need are corresponding mechanisms that prevent abuse and that don’t place an unreasonable burden on the innocent.

Throughout our nation’s history, we have maintained a balance between the necessary interests of the police and the civil rights of the people. The license plate itself is such a balance. Imagine the debate from the early 1900s: The police proposed affixing a plaque to every car with the car owner’s name, so they could better track cars used in crimes. Civil libertarians objected because that would reduce the privacy of every car owner. So a compromise was reached: a random string of letters and numbers that the police could use to look up the car’s owner. By deliberately designing a more cumbersome system, society balanced the needs of law enforcement against the public’s right to privacy.

The search warrant process, as prescribed in the Fourth Amendment, is another balancing method. So is the minimization requirement for telephone eavesdropping: the police must stop listening to a phone line if the suspect under investigation is not talking.

For license-plate scanners, one obvious protection is to require the police to erase data collected on innocent car owners immediately, and not save it. The police have no legitimate need to collect data on everyone’s driving habits. Another is to allow car owners access to the information about them used in these automated searches, and to allow them to challenge inaccuracies.
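As a sketch of what the first protection could look like in the scanner’s own software, reads that produce no hit would simply never be written to storage. The scan-record format below is hypothetical.

    # Hypothetical sketch of an "erase immediately" retention rule: only
    # scans that produced a hit are kept; reads of innocent cars are dropped.
    def retain_only_hits(scans):
        return [scan for scan in scans if scan["violations"]]

    scans = [
        {"plate": "ABC123", "time": "2004-10-04T09:12", "violations": ["stolen"]},
        {"plate": "XYZ789", "time": "2004-10-04T09:13", "violations": []},
    ]

    stored = retain_only_hits(scans)  # only the flagged car reaches the database
    print(stored)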

We need to go further. Criminal penalties are severe in order to create a deterrent, because it is hard to catch wrongdoers. As wrongdoers become easier to catch, a realignment is necessary. When the police can automate the detection of wrongdoing, perhaps there should no longer be any criminal penalty attached. For example, both red-light cameras and speed-trap cameras issue citations without any “points” assessed against the driver.

Wholesale surveillance is not simply a more efficient way for the police to do what they’ve always done. It’s a new police power, one made possible with today’s technology and one that will be made easier with tomorrow’s. And with any new police power, we as a society need to take an active role in establishing rules governing its use. To do otherwise is to cede ever more authority to the police.

This essay was originally published in the New Haven Register.

Posted on October 4, 2004 at 7:05 PM | 5 Comments

The Doghouse: Lexar JumpDrives

If you read Lexar’s documentation, their JumpDrive Secure product is secure. “If lost or stolen, you can rest assured that what you’ve saved there remains there with 256-bit AES encryption.” Sounds good, but security professionals are an untrusting sort. @Stake decided to check. They found that “the password can be observed in memory or read directly from the device, without evidence of tampering.” Even worse: the password “is stored in an XOR encrypted form and can be read directly from the device without any authentication.”
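For readers wondering why that matters: XOR with a fixed key is trivially reversible, and a single known plaintext leaks the key itself. A minimal sketch, with a made-up key and password:

    # Minimal sketch of why fixed-key XOR is not meaningful encryption.
    # The key and password here are invented for illustration.
    def xor_bytes(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = b"\x5a\xa5"                       # hypothetical device key
    stored = xor_bytes(b"hunter2", key)     # what ends up on the device

    # XOR is its own inverse, so anyone who reads the device recovers the
    # password with the same operation...
    print(xor_bytes(stored, key))           # b'hunter2'

    # ...and one known plaintext/ciphertext pair reveals the key directly.
    recovered = bytes(p ^ c for p, c in zip(b"hunter2", stored))
    print(recovered[:2] == key)             # True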

The moral of the story: don’t trust magic security words like “256-bit AES.” The devil is in the details, and it’s easy to screw up security.

Although screwing it up this badly is impressive.

Lexar’s product

@Stake’s analysis

Posted on October 1, 2004 at 9:45 PM

Academic Freedom and Security

Cryptography is the science of secret codes, and it is a primary Internet security tool to fight hackers, cyber crime, and cyber terrorism. CRYPTO is the world’s premier cryptography conference. It’s held every August in Santa Barbara.

This year, 400 people from 30 countries came to listen to dozens of talks. Lu Yi was not one of them. Her paper was accepted at the conference. But because she is a Chinese Ph.D. student in Switzerland, she was not able to get a visa in time to attend the conference.

In the three years since 9/11, the U.S. government has instituted a series of security measures at our borders, all designed to keep terrorists out. One of those measures was to tighten up the rules for foreign visas. Certainly this has hurt the tourism industry in the U.S., but the damage done to academic research is more profound and longer-lasting.

According to a survey by the Association of American Universities, many universities reported a drop of more than 10 percent in foreign student applications from last year. During the 2003 academic year, student visas were down 9 percent. Foreign applications to graduate schools were down 32 percent, according to another study by the Council of Graduate Schools.

There is an increasing trend for academic conferences, meetings and seminars to move outside of the United States simply to avoid visa hassles.

This affects all of high-tech, but ironically it particularly affects the very technologies that are critical in our fight against terrorism.

Also in August, on the other side of the country, the University of Connecticut held the second International Conference on Advanced Technologies for Homeland Security. The attendees came from a variety of disciplines—chemical trace detection, communications compatibility, X-ray scanning, sensors of various types, data mining, HAZMAT clothing, network intrusion detection, bomb defusing, remote-controlled drones—illustrating the enormous breadth of scientific know-how that can usefully be applied to counterterrorism.

It’s wrong to believe that the U.S. can conduct the research we need alone. At the Connecticut conference, the researchers presenting results included many foreigners studying at U.S. universities. Only 30 percent of the papers at CRYPTO had only U.S. authors. The most important discovery of the conference, a weakness in a mathematical function that protects the integrity of much of the critical information on the Internet, was made by four researchers from China.

Every time a foreign scientist can’t attend a U.S. technology conference, our security suffers. Every time we turn away a qualified technology graduate student, our security suffers. Technology is one of our most potent weapons in the war on terrorism, and we’re not fostering the international cooperation and development that is crucial for U.S. security.

Security is always a trade-off, and specific security countermeasures affect everyone, both the bad guys and the good guys. The new U.S. immigration rules may affect the few terrorists trying to enter the United States on visas, but they also affect honest people trying to do the same.

All scientific disciplines are international, and free and open information exchange—both in conferences and in academic programs at universities—will result in the maximum advance in the technologies vital to homeland security. The Soviet Union tried to restrict academic freedom along national lines, and it didn’t do the country any good. We should try not to follow in those footsteps.

This essay was originally published in the San Jose Mercury News.

Posted on October 1, 2004 at 9:44 PM | 0 Comments

News

Last month I wrote: “Long and interesting review of Windows XP SP2, including a list of missed opportunities for increased security. Worth reading: The Register.” Be sure you read this follow-up as well:
The Register

The author of the Sasser worm has been arrested:
Computerworld
The Register
And been offered a job:
Australian IT

Interesting essay on the psychology of terrorist alerts:
Philip Zimbardo

Encrypted e-mail client for the Treo:
Treo Central

The Honeynet Project is publishing a bi-annual CD-ROM and newsletter. If you’re involved in honeynets, it’s definitely worth getting. And even if you’re not, it’s worth supporting this endeavor.
Honeynet

CIO Magazine has published a survey of corporate information security. I have some issues with the survey, but it’s worth reading.
IT Security

At the Illinois State Capitol, someone shot an unarmed security guard and fled. The security upgrade after the incident is—get ready—to change the building admittance policy from a “check IDs” procedure to a “sign in” procedure. First off, identity checking does not increase security. And secondly, why do they think that an attacker would be willing to forge/steal an identification card, but would be unwilling to sign their name on a clipboard?
The Guardian

Neat research: a quantum-encrypted TCP/IP network:
MetroWest Daily News
Slashdot
And NEC has its own quantum cryptography research results:
InfoWorld

Security story about the U.S. embassy in New Zealand. It’s a good lesson about the pitfalls of not thinking beyond the immediate problem.
The Dominion

The future of worms:
Computerworld

Teacher arrested after a bookmark is called a concealed weapon:
St. Petersburg Times
Remember all those other things you can bring on an aircraft that can knock people unconscious: handbags, laptop computers, hardcover books. And that dental floss can be used as a garrote. And, and, oh…you get the idea.

Seems you can open Kryptonite bicycle locks with the cap from a plastic pen. The attack works on what locksmiths call the “impressioning” principle. Tubular locks are especially vulnerable because all the pins are exposed, and the tools needed are unsophisticated and require little skill to use. There have been commercial locksmithing products that do this to circular locks for a long time. Once you get the feel for it, it’s pretty easy. I find Kryptonite’s proposed solution—swapping for a smaller-diameter lock so a particular brand of pen won’t work—to be especially amusing.
Indystar.com
Wired
Bikeforums

I often talk about how most firewalls are ineffective because they’re not configured properly. Here’s some research on firewall configuration:
IEEE Computer

Reading RFID tags from three feet away:
Computerworld

AOL is offering two-factor authentication services. It’s not free: $10 plus $2 per month. It’s an RSA Security token, with a number that changes every 60 seconds.
PC World
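RSA’s token algorithm is proprietary, so purely as an illustration of the idea, here is a generic sketch of a time-based one-time code that changes every 60 seconds, derived from a shared secret and the current time step.

    # Generic sketch of a time-based one-time code (not RSA's actual
    # algorithm): HMAC the current 60-second time step with a shared secret.
    import hashlib, hmac, struct, time

    def one_time_code(secret, period=60, digits=6):
        step = int(time.time()) // period              # constant for 60 seconds
        digest = hmac.new(secret, struct.pack(">Q", step), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                     # dynamic truncation
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    print(one_time_code(b"secret-provisioned-with-the-token"))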

Counter-terrorism has its own snake oil:
Quantum Sleeper

Posted on October 1, 2004 at 9:40 PM | 3 Comments

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets—keys—but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
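That principle is easy to demonstrate with nothing but a standard library. The sketch below uses message authentication rather than encryption (Python ships no block cipher), but the point is the same: the algorithm, HMAC with SHA-256, is entirely public, and all of the security rests in a short key that is trivial to generate and replace.

    # Sketch of the "assume the algorithm is public" principle, using
    # message authentication since the standard library ships no cipher.
    import hashlib, hmac, secrets

    key = secrets.token_bytes(32)          # the only secret; short, easy to replace
    message = b"June network outage report"

    tag = hmac.new(key, message, hashlib.sha256).digest()

    # The HMAC and SHA-256 specifications are public documents; without the
    # key, an attacker still cannot forge a valid tag for a different message.
    print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest()))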

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures—and even routine government operations—secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry—to whom the government is ultimately accountable—is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
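A toy sketch of what “identify the hubs” means in practice: given a network as an edge list (the graph below is invented, not real infrastructure), the highest-degree nodes are the ones to harden and back up with redundant paths.

    # Toy sketch: find the hubs (highest-degree nodes) in a network.
    # The edge list is an invented example.
    from collections import Counter

    edges = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"),
             ("B", "C"), ("E", "F"), ("F", "G")]

    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1

    # In a scale-free network a few nodes carry most of the links; here "A"
    # is the hub that most needs hardening and redundancy.
    print(degree.most_common(3))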

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM | 2 Comments

New Blog, and Changes to Crypto-Gram

I’m in the process of making several changes to Crypto-Gram, all designed to give readers more reading options.

Blog: Crypto-Gram is now available in blog form. Called “Schneier on Security,” the blog will have the same content as Crypto-Gram but it will be posted continually rather than only on the 15th of the month. Initially, blog comments will be turned off. I’ll enable them as soon as my anti-blog-spam software is working.

RSS: The Crypto-Gram RSS feed has been working for about six months now. Current RSS subscribers will receive the blog version of Crypto-Gram instead of the once-a-month version.

E-Mail: Crypto-Gram will still be available as a once-a-month e-mail, and back issues of Crypto-Gram will still be available on the Web.

Many of these changes are based on a 400-person reader survey I conducted (making it more accurate than most political polls). Thank you to those who completed the survey, and to everyone for your continued support.

Posted on October 1, 2004 at 9:34 PM | 1 Comment
