May 2005 Archives

Eric Schmidt on Secrecy and Security

From Information Week:

InformationWeek: What about security? Have you been paying as much attention to security as, say Microsoft—you can debate whether or not they've been successful, but they've poured a lot of resources into it.

Schmidt: More people to a bad architecture does not necessarily make a more secure system. Why don't you define security so I can answer your question better?

InformationWeek: I suppose it's an issue of making the technology transparent enough that people can deploy it with confidence.

Schmidt: Transparency is not necessarily the only way you achieve security. For example, part of the encryption algorithms are not typically made available to the open source community, because you don't want people discovering flaws in the encryption.

Actually, he's wrong. Everything about an encryption algorithm should always be made available to everyone, because otherwise you'll invariably have exploitable flaws in your encryption.

My essay on the topic is here.

Posted on May 31, 2005 at 1:09 PM • 10 Comments

Major Israeli Computer Espionage Case

This is a fascinating story of computer espionage.

Dozens of leading companies and top private investigators were named yesterday as suspects in a massive industrial espionage investigation that local police have been conducting for the past six months.

The companies suspected of commissioning the espionage, which was carried out by planting Trojan horse software in their competitors' computers, include the satellite television company Yes, which is suspected of spying on cable television company HOT; cell-phone companies Pelephone and Cellcom, suspected of spying on their mutual rival Partner; and Mayer, which imports Volvos and Hondas to Israel and is suspected of spying on Champion Motors, importer of Audis and Volkswagens. Spy programs were also located in the computers of major companies such as Strauss-Elite, Shekem Electric and the business daily Globes.

Read the whole story; it's filled with interesting details. To me, the most interesting is that even though the Trojan was installed on computers at dozens of Israel's top companies, it was discovered only because the Trojan writer also used it to spy on his ex-in-laws.

There's a lesson here for all computer criminals.

Edited to add: Much more information here.

Posted on May 31, 2005 at 7:17 AM • 9 Comments

Holding Computer Files Hostage

This one has been predicted for years. Someone breaks into your network, encrypts your data files, and then demands a ransom to hand over the key.

I don't know how the attackers did it, but below is probably the best way. A worm could be programmed to do it.

1. Break into a computer.

2. Generate a random 256-bit file-encryption key.

3. Encrypt the file-encryption key with a common RSA public key.

4. Encrypt data files with the file-encryption key.

5. Wipe the plaintext data files and the local copy of the file-encryption key.

6. Wipe all free space on the drive.

7. Output a file containing the RSA-encrypted file-encryption key.

8. Demand ransom.

9. Receive ransom.

10. Receive encrypted file-encryption key.

11. Decrypt it and send it back.
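The eleven steps above amount to a hybrid-encryption scheme, and the cryptographic core can be sketched in a few lines. This is a toy model for illustration only: a SHA-256 counter keystream stands in for a real cipher like AES-256, and a symmetric stub stands in for the RSA operation (in the real scheme the worm carries only the attacker's public key, so nothing on the victim's machine can recover the file-encryption key); every name here is hypothetical.

```python
import os
import hashlib

def keystream_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (stand-in for AES-256): XOR the data with a
    SHA-256 counter-mode keystream derived from the key."""
    out = bytearray()
    for offset in range(0, len(data), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = data[offset:offset + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

keystream_decrypt = keystream_encrypt  # XOR cipher: same operation both ways

# Stand-in for the attacker's RSA keypair. In the real scheme this step MUST
# be asymmetric: the worm embeds only the public key, and the private key
# below never exists on the victim's machine.
ATTACKER_PRIVATE = b"attacker-held secret"

def rsa_encrypt_stub(data: bytes) -> bytes:
    return keystream_encrypt(ATTACKER_PRIVATE, data)

def rsa_decrypt_stub(data: bytes) -> bytes:
    return keystream_decrypt(ATTACKER_PRIVATE, data)

# Steps 2-4 and 7, on the victim's machine:
file_key = os.urandom(32)                            # step 2: random 256-bit key
plaintext = b"quarterly payroll records"
ciphertext = keystream_encrypt(file_key, plaintext)  # step 4
ransom_note = rsa_encrypt_stub(file_key)             # steps 3 and 7
del file_key                                         # step 5: key wiped locally

# Steps 10-11, on the attacker's machine after the ransom is paid:
recovered_key = rsa_decrypt_stub(ransom_note)
assert keystream_decrypt(recovered_key, ciphertext) == plaintext
```

The crucial design point is step 3: because only the attacker's private key can unwrap the file-encryption key, examining the worm's code or the victim's disk after the fact reveals nothing useful for recovery.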

In any situation like this, step 9 is the hardest. It's where you're most likely to get caught. I don't know much about anonymous money transfer, but I don't think Swiss bank accounts have the anonymity they used to.

You also might have to prove that you can decrypt the data, so an easy modification is to encrypt a small piece of the data under a second file-encryption key; decrypting that piece on request proves to the victim that you hold the RSA private key.

Internet attacks have changed over the last couple of years. They're no longer about hackers. They're about criminals. And we should expect to see more of this sort of thing in the future.

Posted on May 30, 2005 at 8:18 AM • 28 Comments

Analysis of the Witty Worm

Here's a very interesting analysis of the Witty Worm from March 2004. Among other things, the researchers found the initial infection point (patient 0). They also believe that the attack was, at least in part, a deliberate cyber-attack on the U.S. military; an army base was deliberately targeted in the worm's hit list.

Posted on May 27, 2005 at 8:23 AM • 14 Comments

Encryption as Evidence of Criminal Intent

An appeals court in Minnesota has ruled that the presence of encryption software on a computer may be viewed as evidence of criminal intent.

I am speechless.

Edited to add: The complete text is online.

Posted on May 26, 2005 at 8:17 AM • 73 Comments

Touch-Screen Voting

David Card and Enrico Moretti, both economists at UC Berkeley, have published an interesting analysis of electronic voting machines and the 2004 election: "Does Voting Technology Affect Election Outcomes? Touch-screen Voting and the 2004 Presidential Election."

Here's the abstract:

Supporters of touch-screen voting claim it is a highly reliable voting technology, while a growing number of critics argue that paperless electronic voting systems are vulnerable to fraud. In this paper we use county-level data on voting technologies in the 2000 and 2004 presidential elections to test whether voting technology affects electoral outcomes. We first show that there is a positive correlation between use of touch-screen voting and the level of electoral support for George Bush. This is true in models that compare the 2000-2004 changes in vote shares between adopting and non-adopting counties within a state, after controlling for income, demographic composition, and other factors. Although small, the effect could have been large enough to influence the final results in some closely contested states. While on the surface this pattern would appear to be consistent with allegations of voting irregularities, a closer examination suggests this interpretation is incorrect. If irregularities did take place, they would be most likely in counties that could potentially affect statewide election totals, or in counties where election officials had incentives to affect the results. Contrary to this prediction, we find no evidence that touch-screen voting had a larger effect in swing states, or in states with a Republican Secretary of State. Touch-screen voting could also indirectly affect vote shares by influencing the relative turnout of different groups. We find that the adoption of touch-screen voting has a negative effect on estimated turnout rates, controlling for state effects and a variety of county-level controls. This effect is larger in counties with a higher fraction of Hispanic residents (who tend to favor Democrats) but not in counties with more African Americans (who are overwhelmingly Democrat voters). 
Models for the adoption of touch-screen voting suggest it was more likely to be used in counties with a higher fraction of Hispanic and Black residents, especially in swing states. Nevertheless, the impact of non-random adoption patterns on vote shares is small.

Posted on May 25, 2005 at 8:13 AM • 38 Comments

Massive Data Theft

During a time when large thefts of personal data are a dime a dozen, this one stands out.

What is thought to be the largest U.S. banking security breach in history has gotten even bigger.

The number of bank accounts accessed illegally by a New Jersey cybercrime ring has grown to 676,000, according to police investigators. That's up from the initial estimate of 500,000 accounts police said last month had been breached.

Hackensack, N.J., police Det. Capt. Frank Lomia said today that an additional 176,000 accounts were found by investigators who have been probing the ring for several months. All 676,000 consumer accounts involve New Jersey residents who were clients at four different banks, he said.

Even before the latest account tally was made public, the U.S. Department of the Treasury labeled the incident the largest breach of banking security in the U.S. to date.

The case has already led to criminal charges against nine people, including seven former employees of the four banks. The crime ring apparently accessed the data illegally through the former bank workers. None of those employees were IT workers, police said.

One amazing thing about the story is how manual the process was.

The suspects pulled up the account data while working inside their banks, then printed out screen captures of the information or wrote it out by hand, Lomia said. The data was then provided to a company called DRL Associates Inc., which had been set up as a front for the operation. DRL advertised itself as a deadbeat-locator service and as a collection agency, but was not properly licensed for those activities by the state, police said.

And I'm not really sure what the data was stolen for:

The information was then allegedly sold to more than 40 collection agencies and law firms, police said.

Is collections really that big an industry?

Edited to add: Here is some good commentary by Adam Fields.

Posted on May 24, 2005 at 8:49 AM • 30 Comments

Paris Hilton Cellphone Hack

The inside story behind the hacking of Paris Hilton's T-Mobile cell phone.

Good paragraph:

"This was all done not by skilled 'hackers' but by kids who managed to 'social' their way into a company's system and gain access to it within one or two phone calls," said Hallissey, who asked that her current place of residence not be disclosed. "Major corporations have made social engineering way too easy for these kids. In their call centers they hire low-pay employees to man the phones, give them a minimum of training, most of which usually dwells on call times, canned scripts and sales. This isn't unique to T-Mobile or AOL. This has become common practice for almost every company."

How right she is.

EDITED TO ADD (11/11): Everyone, please stop asking me for Paris Hilton's -- or anyone else's, for that matter -- cellphone number or e-mail address. I don't have them.

Posted on May 23, 2005 at 12:41 PM

Fingerprint Library Cards

Biometric library cards are coming to Naperville, Illinois.

On the one hand, the library is just storing a data string derived from the fingerprint, and not the fingerprint itself. But I have a hard time believing the second paragraph below.

Library Deputy Director Mark West said the system will be implemented over the summer beginning with a public education campaign in June. West said he is confident the public will embrace the technology once it learns its limitations.

The stored numeric data cannot be used to reconstruct a fingerprint, West said, nor can it be cross-referenced with other fingerprint databases such as those kept by the FBI or the Illinois State Police.

Nor do I have any faith in this sentence:

Officials promise to protect the confidentiality of the fingerprint records.
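To see why the cross-referencing claim in particular deserves skepticism, here is a toy model. It uses exact-match digests, whereas real minutiae templates are fuzzy feature vectors, so it only illustrates the information flow: a one-way derivation does prevent reconstructing the print, but cross-referencing is blocked only if different databases derive their templates differently. All names and values here are hypothetical.

```python
import hashlib

def derive_template(fingerprint_features: bytes, salt: bytes) -> str:
    """Toy model: the stored 'numeric data' is a one-way digest of the
    extracted features. (Real templates are fuzzy, not exact hashes.)"""
    return hashlib.sha256(salt + fingerprint_features).hexdigest()

features = b"ridge-endings:17;bifurcations:9"  # hypothetical feature string

# The one-way derivation does support the first claim: the stored digest
# alone cannot be inverted to reconstruct the fingerprint.
library_record = derive_template(features, salt=b"naperville-library")

# But the second claim holds only if derivations differ. If another database
# derives templates the same way (same algorithm, same parameters), records
# can be cross-matched directly:
same_scheme_record = derive_template(features, salt=b"naperville-library")
assert library_record == same_scheme_record   # cross-reference succeeds

# A per-system parameter (here, the salt) is what actually blocks it:
other_agency_record = derive_template(features, salt=b"some-other-agency")
assert library_record != other_agency_record  # cross-reference fails
```

In other words, "cannot be cross-referenced" is a property of how every party derives its templates, not of the library's database alone.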

Posted on May 23, 2005 at 7:44 AM • 53 Comments

Social Engineering Via Voicemail

Here's a clever social engineering attack:

The Division has received a number of calls concerning a voicemail message left by an anonymous female caller urging them to purchase a particular penny stock. The message is intended to appear as if the caller is calling a close friend and has dialed the wrong number. The caller talks fast stating she has a great inside deal on a penny stock. The caller personalizes the conversation by saying the recommendation comes from a broker the woman is dating and that her father previously purchased stock and made a huge profit. The purpose of the call is to make you think you've received a hot stock tip by mistake.

Posted on May 20, 2005 at 8:37 AM • 29 Comments

New Feature: 100 Latest Comments

If you look to the right, under the "Recent Entries," you'll see a link to the "100 Latest Comments." This link takes you to a single page with the 100 most recent comments to this blog, in reverse chronological order.

It's a quick and easy way to stay abreast of the various conversations going on here at Schneier on Security. I wish other blogs would do this.

Posted on May 18, 2005 at 12:36 PM • 21 Comments

Insider Threats

CERT (at Carnegie Mellon) just released a study on insider threats. It analyzes 49 insider attacks between 1996 and 2002, and draws some conclusions about the attacks and attackers. It says nothing about the prevalence of these attacks, focusing instead on their particulars.

The report is mostly obvious, and isn't worth more than a skim. But the particular methodology only tells part of the story.

Because the study focuses on insider attacks on information systems rather than attacks using information systems, it's primarily about destructive acts. Of course the major motive is going to be revenge against the employer.

Near as I can tell, the report ignores attacks that use information systems to otherwise benefit the attacker. These attacks would include embezzlement -- which at a guess is much more common than revenge.

The report also doesn't seem to acknowledge that the researchers are only looking at attacks that were noticed. I'm not impressed by the fact that most of the attackers got caught, since those are the ones that were noticed. This reinforces the same bias: network disruption is far more noticeable than theft.

These are worrisome threats, but I'd be more concerned about insider attacks that aren't nearly so obvious.

Still, there are some interesting statistics about those who use information systems to get back at their employers.

As an example of the latter, the study's "executive summary" notes that in 62 percent of the cases, "a negative work-related event triggered most of the insiders' actions." The study also found that 82 percent of the time the people who hacked their company "exhibited unusual behavior in the workplace prior to carrying out their activities." The survey surmises that's probably because the insiders were angry at someone they worked with or for: 84 percent of attacks were motivated by a desire to seek revenge, and in 85 percent of the cases the insider had a documented grievance against their employer or a co-worker....

Some other interesting (although not particularly surprising) tidbits: Almost all -- 96 percent -- of the insiders were men, and 30 percent of them had previously been arrested, including arrests for violent offenses (18 percent), alcohol or drug-related offenses (11 percent), and non-financial-fraud related theft offenses (11 percent).

Posted on May 18, 2005 at 9:28 AM • 15 Comments

Fearmongering About Bot Networks

Bot networks are a serious security problem, but this is ridiculous. From the Independent:

The PC in your home could be part of a complex international terrorist network. Without you realising it, your computer could be helping to launder millions of pounds, attacking companies' websites or cracking confidential government codes.

This is not the stuff of science fiction or a conspiracy theory from a paranoid mind, but a warning from one of the world's most-respected experts on computer crime. Dr Peter Tippett is chief technology officer at Cybertrust, a US computer security company, and a senior adviser on the issue to President George Bush. His warning is stark: criminals and terrorists are hijacking home PCs over the internet, creating "bot" computers to carry out illegal activities.

Yes, bot networks are bad. They're used to send spam (both commercial and phishing), launch denial-of-service attacks (sometimes involving extortion), and stage attacks on other systems. Most bot networks are controlled by kids, but more and more criminals are getting into the act.

But your computer as part of an international terrorist network? Get real.

Once a criminal has gathered together what is known as a "herd" of bots, the combined computing power can be dangerous. "If you want to break the nuclear launch code then set a million computers to work on it. There is now a danger of nation state attacks," says Dr Tippett. "The vast majority of terrorist organisations will use bots."

I keep reading that last sentence, and wonder if "bots" is just a typo for "bombs." And the line about bot networks being used to crack nuclear launch codes is nothing more than fearmongering.

Clearly I need to write an essay on bot networks.

Posted on May 17, 2005 at 3:33 PM • 39 Comments

AES Timing Attack

Nice timing attack against AES.

For those of you who don't know, timing attacks are an example of side-channel cryptanalysis: cryptanalysis using additional information about the inner workings of the cryptographic algorithm. I wrote about them here.

What's the big idea here?

There are two ways to look at a cryptographic primitive (block cipher, digital signature function, whatever). The first is as a chunk of math. The second is a physical (or software) implementation of that math.

Traditionally, cryptanalysis has been directed solely against the math. Differential and linear cryptanalysis are good examples of this: high-powered mathematical tools that can be used to break different block ciphers.

On the other hand, timing attacks, power analysis, and fault analysis all make assumptions about the implementation, and use additional information garnered from attacking those implementations. Failure analysis assumes a one-bit feedback from the implementation -- was the message successfully decrypted -- in order to break the underlying cryptographic primitive. Timing attacks assume that an attacker knows how long a particular encryption operation takes.
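The AES attack exploits cache-timing variations inside the cipher's table lookups; the general phenomenon is easier to see in a smaller setting. Below is a minimal illustration (not the AES attack itself) of data-dependent running time: a naive byte-by-byte comparison whose duration leaks the length of the matching prefix, next to the constant-time comparison Python's standard library provides.

```python
import hmac

def variable_time_equal(a: bytes, b: bytes) -> bool:
    """Naive comparison: returns as soon as a byte differs, so the running
    time depends on how long the matching prefix is. An attacker who can
    time many guesses can recover the secret one byte at a time."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    """Examines every byte regardless of where the first mismatch occurs,
    removing the timing side channel."""
    return hmac.compare_digest(a, b)

secret = b"correct-mac-value"
assert variable_time_equal(secret, b"correct-mac-value")
assert constant_time_equal(secret, b"correct-mac-value")
assert not variable_time_equal(secret, b"carrect-mac-value")
assert not constant_time_equal(secret, b"carrect-mac-value")
```

Both functions compute the same boolean, which is exactly the point: the leak is in the implementation's behavior, not in the math.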

Posted on May 17, 2005 at 10:05 AM • 46 Comments

Surveillance Cameras in U.S. Cities

From EPIC:

The Department of Homeland Security (DHS) has requested more than $2 billion to finance grants to state and local governments for homeland security needs. Some of this money is being used by state and local governments to create networks of surveillance cameras to watch over the public in the streets, shopping centers, at airports and more. However, studies have found that such surveillance systems have little effect on crime, and that it is more effective to place more officers on the streets and improve lighting in high-crime areas. There are significant concerns about citizens' privacy rights and misuse or abuse of the system. A professor at the University of Nevada at Reno has alleged that the university used a homeland security camera system to surreptitiously watch him after he filed a complaint alleging that the university abused its research animals. Also, British studies have found there is a significant danger of racial discrimination and stereotyping by those monitoring the cameras.

Posted on May 16, 2005 at 9:00 AM • 31 Comments

Crypto-Gram

Seven years ago I started writing a monthly newsletter on security. Today, Crypto-Gram has over 120,000 readers, and is still growing.

If you read this blog every day, you don't need to subscribe to Crypto-Gram. Everything in Crypto-Gram appears first in this blog. But I often update the longer essays, based on new information and reader comments, before including them in Crypto-Gram. The blog is more timely, but Crypto-Gram is more polished.

Some of you might prefer to read my writing in a monthly digest rather than in bits and pieces. Some of you might prefer an email that comes to you, rather than having to remember to check this blog. If so, try Crypto-Gram.

Crypto-Gram comes out on the 15th of every month. You can read the current issue (it came out today) here. You can read back issues here.

And if you want to subscribe to the monthly email -- I promise no marketing e-mail ever -- here.

Posted on May 15, 2005 at 3:01 PM • 22 Comments

Identity-Theft Humor

From The Onion:

Arizona Man Steals Bush's Identity, Vetoes Bill, Meets with Mexican President

WASHINGTON, DC--Confusion and disbelief reigned at the White House after President Bush announced Monday that an Arizona man, known to authorities only as H4xX0r1337, stole his identity and used it to buy electronic goods, veto a bill, and meet with Mexican President Vicente Fox.

"This is incredibly frustrating," Bush told reporters Tuesday. "Not only does this guy have my credit-card information, he has my Social Security number, all my personal information, and the launch codes for a number of ballistic intercontinental nuclear missiles. I almost don't want to think about it."

For those readers who don't know, The Onion publishes fake funny news items.

Posted on May 13, 2005 at 4:39 PM • 18 Comments

Combating Spam

Spam is back in the news, and it has a new name. This time it's voice-over-IP spam, and it has the clever name of "spit" (spam over Internet telephony). Spit has the potential to completely ruin VoIP. No one is going to install the system if they're going to get dozens of calls a day from audio spammers. Or, at least, they're only going to accept phone calls from a white list of previously known callers.

VoIP spam joins the ranks of e-mail spam, Usenet newsgroup spam, instant message spam, cell phone text message spam, and blog comment spam. And, if you think broadly enough, these computer-network spam delivery mechanisms join the ranks of computer telemarketing (phone spam), junk mail (paper spam), billboards (visual space spam), and cars driving through town with megaphones (audio spam). It's all basically the same thing -- unsolicited marketing messages -- and only by understanding the problem at this level of generality can we discuss solutions.

In general, the goal of advertising is to influence people. Usually it's to influence people to purchase a product, but it could just as easily be to influence people to support a particular political candidate or position. Advertising does this by implanting a marketing message into the brain of the recipient. The mechanism of implantation is simply a tactic.

Tactics for unsolicited marketing messages rise and fall in popularity based on their cost and benefit. If the benefit is significant, people are willing to spend more. If the benefit is small, people will only do it if it is cheap. A 30-second prime-time television ad costs 1.8 cents per adult viewer, a full-page color magazine ad about 0.9 cents per reader. A highway billboard costs 0.21 cents per car. Direct mail is the most expensive, at over 50 cents per third-class letter mailed. (That's why targeted mailing lists are so valuable; they increase the per-piece benefit.)

Spam is such a common tactic not because it's particularly effective; the response rates for spam are very low. It's common because it's ridiculously cheap. Typically, spammers charge less than a hundredth of a cent per e-mail. (And that number is just what spamming houses charge their customers to deliver spam; if you're a clever hacker, you can build your own spam network for much less money.) If it is worth $10 for you to successfully influence one person -- to buy your product, vote for your guy, whatever -- then you only need a 1 in a 100,000 success rate. You can market really marginal products with spam.
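The break-even arithmetic above can be made explicit. The dollar figures are the ones quoted in the text; the computation simply shows why a tactic this cheap tolerates an absurdly low response rate.

```python
# Back-of-the-envelope spam economics, using the figures quoted above.
value_per_success = 10.0   # $10 of benefit per person successfully influenced
cost_per_email = 0.0001    # "less than a hundredth of a cent" per message

# The campaign profits whenever the response rate exceeds cost/value:
break_even_rate = cost_per_email / value_per_success   # roughly 1 in 100,000

# Compare with a prime-time TV ad at 1.8 cents per adult viewer: the same
# $10 payoff needs a response rate about 180 times higher to break even.
tv_break_even_rate = 0.018 / value_per_success
```

This ratio is why really marginal products that could never justify a TV buy still get spammed.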

So far, so good. But the cost/benefit calculation is missing a component: the "cost" of annoying people. Everyone who is not influenced by the marketing message is annoyed to some degree. The advertiser pays a partial cost for annoying people; they might boycott his product. But most of the time he does not, and the cost of the advertising is paid by the person: the beauty of the landscape is ruined by the billboard, dinner is disrupted by a telemarketer, spam costs money to ship around the Internet and time to wade through, etc. (Note that I am using "cost" very generally here, and not just monetarily. Time and happiness are both costs.)

This is why spam is so bad. For each e-mail, the spammer pays a cost and receives a benefit. But there is an additional cost paid by the e-mail recipient, and because so much spam is unwanted, that additional cost is huge -- and it's a cost the spammer never sees. If spammers could be made to bear the total cost of spam, then its level would be more along the lines of what society would find acceptable.

This economic analysis is important, because it's the only way to understand how effective different solutions will be. This is an economic problem, and the solutions need to change the fundamental economics. (The analysis is largely the same for VoIP spam, Usenet newsgroup spam, blog comment spam, and so on.)

The best solutions raise the cost of spam. Spam filters raise the cost by increasing the amount of spam that someone needs to send before someone will read it. If 99% of all spam is filtered into trash, then sending spam becomes 100 times more expensive. This is also the idea behind white lists -- lists of senders a user is willing to accept e-mail from -- and blacklists: lists of senders a user is not willing to accept e-mail from.

Filtering doesn't just have to be at the recipient's e-mail. It can be implemented within the network to clean up spam, or at the sender. Several ISPs are already filtering outgoing e-mail for spam, and the trend will increase.

Anti-spam laws raise the cost of spam to an intolerable level; no one wants to go to jail for spamming. We've already seen some convictions in the U.S. Unfortunately, this only works when the spammer is within the reach of the law, and is less effective against criminals who are using spam as a mechanism to commit fraud.

Other proposed solutions try to impose direct costs on e-mail senders. I have seen proposals for e-mail "postage," either for every e-mail sent or for every e-mail above a reasonable threshold. I have seen proposals where the sender of an e-mail posts a small bond, which the receiver can cash if the e-mail is spam. There are other proposals that involve "computational puzzles": time-consuming tasks the sender's computer must perform, unnoticeable to someone who is sending e-mail normally, but too much for someone sending e-mail in bulk. These solutions generally involve re-engineering the Internet, something that is not done lightly, and hence are in the discussion stages only.
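The "computational puzzle" idea is best known from Adam Back's hashcash proposal: the sender must find a nonce such that a hash of the message plus nonce has a required number of leading zero bits. Verification is one hash; production is expected exponential work in the difficulty. Here is a minimal sketch using SHA-256, with a deliberately tiny difficulty so it runs quickly; the header string is illustrative.

```python
import hashlib
from itertools import count

DIFFICULTY_BITS = 12  # real proposals suggested ~20 bits; kept small for speed

def check_puzzle(message: bytes, nonce: int) -> bool:
    """Verification is a single hash -- essentially free for the receiver."""
    digest = hashlib.sha256(message + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

def solve_puzzle(message: bytes) -> int:
    """Finding a valid nonce takes about 2**DIFFICULTY_BITS hashes on
    average -- unnoticeable for one e-mail, prohibitive for millions."""
    for nonce in count():
        if check_puzzle(message, nonce):
            return nonce

headers = b"From: alice@example.com\nTo: bob@example.com\n"  # illustrative
stamp = solve_puzzle(headers)
assert check_puzzle(headers, stamp)
```

The asymmetry (cheap to check, expensive to mint, and the stamp is bound to one message so it can't be reused across a bulk run) is the entire anti-spam argument; the re-engineering problem is getting every mail client and server to demand the stamp.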

All of these solutions work to a degree, and we end up with an arms race. Anti-spam products block a certain type of spam. Spammers invent a tactic that gets around those products. Then the products block that spam. Then the spammers invent yet another type of spam. And so on.

Blacklisting spammer sites forced the spammers to disguise the origin of spam e-mail. People recognizing e-mail from people they knew, and other anti-spam measures, forced spammers to hack into innocent machines and use them as launching pads. Scanning millions of e-mails looking for identical bulk spam forced spammers to individualize each spam message. Semantic spam detection forced spammers to design even more clever spam. And so on. Each defense is met with yet another attack, and each attack is met with yet another defense.

Remember that when you think about host identification, or postage, as an anti-spam measure. Spammers don't care about tactics; they want to send their e-mail. Techniques like this will simply force spammers to rely more on hacked innocent machines. As long as the underlying computers are insecure, we can't prevent spammers from sending.

This is the problem with another potential solution: re-engineering the Internet to prohibit the forging of e-mail headers. This would make it easier for spam detection software to detect spamming IP addresses, but spammers would just use hacked machines instead of their own computers.

Honestly, there's no end in sight for the spam arms race. Even so, spam is one of computer security's success stories. The current crop of anti-spam products work. I get almost no spam and very few legitimate e-mails end up in my spam trap. I wish they would work better -- Crypto-Gram is occasionally classified as spam by one service or another, for example -- but they're working pretty well. It'll be a long time before spam stops clogging up the Internet, but at least we don't have to look at it.

Posted on May 13, 2005 at 9:47 AM • 44 Comments

Should Terrorism be Reported in the News?

In the New York Times (read it here without registering), columnist John Tierney argues that the media is performing a public disservice by writing about all the suicide bombings in Iraq. This only serves to scare people, he claims, and serves the terrorists' ends.

Some liberal bloggers have jumped on this op-ed as furthering the administration's attempts to hide the horrors of the Iraqi war from the American people, but I think the argument is more subtle than that. Before you can figure out why Tierney is wrong, you need to understand that he has a point.

Terrorism is a crime against the mind. The real target of a terrorist is morale, and press coverage helps him achieve his goal. I wrote in Beyond Fear (pages 242-3):

Morale is the most significant terrorist target. By refusing to be scared, by refusing to overreact, and by refusing to publicize terrorist attacks endlessly in the media, we limit the effectiveness of terrorist attacks. Through the long spate of IRA bombings in England and Northern Ireland in the 1970s and 1980s, the press understood that the terrorists wanted the British government to overreact, and praised their restraint. The U.S. press demonstrated no such understanding in the months after 9/11 and made it easier for the U.S. government to overreact.

Consider this thought experiment. If the press did not report the 9/11 attacks, if most people in the U.S. didn't know about them, then the attacks wouldn't have been such a defining moment in our national politics. If we lived 100 years ago, and people only read newspaper articles and saw still photographs of the attacks, then people wouldn't have had such an emotional reaction. If we lived 200 years ago and all we had to go on was the written word and oral accounts, the emotional reaction would be even less. Modern news coverage amplifies the terrorists' actions by endlessly replaying them, with real video and sound, burning them into the psyche of every viewer.

Just as the media's attention to 9/11 scared people into accepting government overreactions like the PATRIOT Act, the media's attention to the suicide bombings in Iraq is convincing people that Iraq is more dangerous than it actually is.

Tierney writes:

I'm not advocating official censorship, but there's no reason the news media can't reconsider their own fondness for covering suicide bombings. A little restraint would give the public a more realistic view of the world's dangers.

Just as New Yorkers came to be guided by crime statistics instead of the mayhem on the evening news, people might begin to believe the statistics showing that their odds of being killed by a terrorist are minuscule in Iraq or anywhere else.

I pretty much said the same thing, albeit more generally, in Beyond Fear (page 29):

Modern mass media, specifically movies and TV news, has degraded our sense of natural risk. We learn about risks, or we think we are learning, not by directly experiencing the world around us and by seeing what happens to others, but increasingly by getting our view of things through the distorted lens of the media. Our experience is distilled for us, and it’s a skewed sample that plays havoc with our perceptions. Kids try stunts they’ve seen performed by professional stuntmen on TV, never recognizing the precautions the pros take. The five o’clock news doesn’t truly reflect the world we live in -- only a very few small and special parts of it.

Slices of life with immediate visual impact get magnified; those with no visual component, or that can’t be immediately and viscerally comprehended, get downplayed. Rarities and anomalies, like terrorism, are endlessly discussed and debated, while common risks like heart disease, lung cancer, diabetes, and suicide are minimized.

The global reach of today’s news further exacerbates this problem. If a child is kidnapped in Salt Lake City during the summer, mothers all over the country suddenly worry about the risk to their children. If there are a few shark attacks in Florida -- and a graphic movie -- suddenly every swimmer is worried. (More people are killed every year by pigs than by sharks, which shows you how good we are at evaluating risk.)

One of the things I routinely tell people is that if it's in the news, don't worry about it. By definition, "news" means that it hardly ever happens. If a risk is in the news, then it's probably not worth worrying about. When something is no longer reported -- automobile deaths, domestic violence -- when it's so common that it's not news, then you should start worrying.

Tierney is arguing his position as someone who thinks that the Bush administration is doing a good job fighting terrorism, and that the media's reporting of suicide bombings in Iraq is sapping Americans' will to fight. I am looking at the same issue from the other side, as someone who thinks the media's reporting of terrorist attacks and threats has increased public support for the Bush administration's draconian counterterrorism laws and dangerous and damaging foreign and domestic policies. If the media didn't report all of the administration's alerts and warnings and arrests, we would have a much more sensible counterterrorism policy in America and we would all be much safer.

So why is the argument wrong? It's wrong because the danger of not reporting terrorist attacks is greater than the risk of continuing to report them. Freedom of the press is a security measure. The only tool we have to keep government honest is public disclosure. Once we start hiding pieces of reality from the public -- either through legal censorship or self-imposed "restraint" -- we end up with a government that acts based on secrets. We end up with some sort of system that decides what the public should or should not know.

Here's one example. Last year I argued that the constant stream of terrorist alerts was a mechanism to keep Americans scared. This week, the media reported that the Bush administration repeatedly raised the terror threat level on flimsy evidence, against the recommendation of former DHS secretary Tom Ridge. If the media follows this story, we will learn -- too late for the 2004 election, but not too late for the future -- more about the Bush administration's terrorist propaganda machine.

Freedom of the press -- the unfettered publishing of all the bad news -- isn't without dangers. But anything else is even more dangerous. That's why Tierney is wrong.

And honestly, if anyone thinks they can get an accurate picture of anyplace on the planet by reading news reports, they're sadly mistaken.

Posted on May 12, 2005 at 9:49 AM | 46 Comments

Phishing and Identity Theft

I've already written about identity theft, and have said that the real problem is fraudulent transactions. This essay says much the same thing:

So, say your bank uses a username and password to login to your account. Conventional wisdom (?) says that you need to prevent the bad guys from stealing your username and password, right? WRONG! What you are trying to prevent is the bad guys STEALING YOUR MONEY. This distinction is very important. If you have an account with $0 dollars in it, which you never use, what does it matter if someone knows the access details? Your username and password are only valuable insofar as the bank allows anyone who knows them to take your money. And therein lies the REAL problem. The bank is too lazy (or incompetent) to do what Bruce Schneier describes as "authenticate the transaction, not the person". While it is incredibly difficult to prevent the bad guys from stealing access credentials (especially with browsers like Internet Explorer around), it is actually much simpler to prevent your money disappearing off to some foreign country....

When something goes wrong, the bank will tell you that you "authorised" the transaction, where in fact the party who ultimately "authorised" it is the bank, based on the information they chose to take as evidence that this transaction is the genuine desire of a legitimate customer.

The essay provides some recommendations as well.

  • Restrict IP addresses outside Australia
  • Restrict odd times of day (or at least be more vigilant)
  • Set cookies to identify machines
  • Record IP usually used
  • Record times of day usually accessed
  • Record days of week/month
  • Send emails when suspicious activity is detected
  • Lock accounts when fraud is suspected
  • Introduce a delay in transfers out -- for suspicious amounts, longer
  • Make care proportional to risk
  • Define risk relative to customer, not bank

These are good ideas, but need more refinement in the specifics. But they're a great start, and banks would do well to pay attention to them.
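Several of the recommendations above -- flagging unusual countries and times of day, and delaying suspicious transfers in proportion to risk -- amount to a simple risk-scoring pipeline. Here's a minimal sketch of what that might look like; the thresholds, score weights, and `Transfer` fields are my own illustrative assumptions, not anything from the essay:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Transfer:
    amount: float
    source_ip_country: str  # country inferred from the login IP
    when: datetime

def risk_score(profile_countries: set, usual_hours: range, t: Transfer) -> int:
    """Toy risk score implementing a few of the checks listed above."""
    score = 0
    if t.source_ip_country not in profile_countries:
        score += 2  # login from a country this customer never uses
    if t.when.hour not in usual_hours:
        score += 1  # odd time of day for this customer
    if t.amount > 10_000:
        score += 2  # unusually large transfer
    return score

def clearing_delay(score: int) -> timedelta:
    """Delay outbound transfers in proportion to risk --
    'for suspicious amounts, longer.'"""
    if score >= 4:
        return timedelta(days=2)   # hold for manual review
    if score >= 2:
        return timedelta(hours=4)
    return timedelta(0)
```

The point of the delay is that it makes care proportional to risk: a routine domestic payment clears immediately, while a large 3 a.m. transfer initiated from an unfamiliar country sits long enough for the customer to be notified and the fraud reversed.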

Posted on May 10, 2005 at 4:24 PM | 60 Comments

Company Continues Bad Information Security Practices

Stories about thefts of personal data are a dime a dozen these days, and are generally not worth writing about.

This one has an interesting coda, though.

An employee hoping to get extra work done over the weekend printed out 2004 payroll information for hundreds of SafeNet's U.S. employees, snapped it into a briefcase and placed the briefcase in a car.

The car was broken into over the weekend and the briefcase stolen -- along with the employees' names, bank account numbers and Social Security numbers that were on the printouts, a company spokeswoman confirmed yesterday.

My guess is that most readers can point out the bad security practices here. One, the Social Security numbers and bank account numbers should not be kept with the bulk of the payroll data. Ideally, they should use employee numbers and keep sensitive (but irrelevant for most of the payroll process) information separate from the bulk of the commonly processed payroll data. And two, hard copies of that sensitive information should never go home with employees.

But SafeNet won't learn from its mistake:

The company said no policies were violated, and that no new policies are being written as a result of this incident.

The irony here is that this is a security company.

Posted on May 10, 2005 at 3:00 PM | 20 Comments

The Potential for an SSH Worm

SSH, or secure shell, is the standard protocol for remotely accessing UNIX systems. It's used everywhere: universities, laboratories, and corporations (particularly in data-intensive back office services). Thanks to SSH, administrators can stack hundreds of computers close together into air-conditioned rooms and administer them from the comfort of their desks.

When a user's SSH client first establishes a connection to a remote server, it stores the name of the server and its public key in a known_hosts database. This database of names and keys allows the client to more easily identify the server in the future.

There are risks to this database, though. If an attacker compromises the user's account, the database can be used as a hit-list of follow-on targets. And if the attacker knows the username, password, and key credentials of the user, these follow-on targets are likely to accept them as well.

A new paper from MIT explores the potential for a worm to use this infection mechanism to propagate across the Internet. Already attackers are exploiting this database after cracking passwords. The paper also warns that a worm that spreads via SSH is likely to evade detection by the bulk of techniques currently coming out of the worm detection community.

While a worm of this type has not been seen since the first Internet worm of 1988, attacks have been growing in sophistication and most of the tools required are already in use by attackers. It's only a matter of time before someone writes a worm like this.

One of the countermeasures proposed in the paper is to store hashes of host names in the database, rather than the names themselves. This is similar to the way hashes of passwords are stored in password databases, so that security need not rely entirely on the secrecy of the database.
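This countermeasure works like OpenSSH's hashing option: each hostname is run through HMAC-SHA1 keyed with a random salt, and the entry is stored as `|1|base64(salt)|base64(digest)`. The client can still verify a host it reconnects to, but a worm reading the file gets no hit-list. Here's a sketch of the scheme (a simplified illustration, not OpenSSH's actual code):

```python
import base64, hashlib, hmac, os

def hash_hostname(hostname: str, salt=None) -> str:
    """Produce a hashed known_hosts entry in the '|1|salt|digest' style:
    HMAC-SHA1 over the hostname, keyed with a random 20-byte salt."""
    if salt is None:
        salt = os.urandom(20)
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|%s|%s" % (base64.b64encode(salt).decode(),
                         base64.b64encode(digest).decode())

def matches(entry: str, hostname: str) -> bool:
    """Check a candidate hostname against a hashed entry. The client can
    recognize a server it has seen before, but an attacker reading the
    file cannot recover hostnames without guessing them one by one."""
    _, _, salt_b64, digest_b64 = entry.split("|")
    salt = base64.b64decode(salt_b64)
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(digest, base64.b64decode(digest_b64))
```

The per-entry salt matters: without it, an attacker could precompute hashes of likely hostnames (everything in the compromised machine's DNS domain, say) and test the whole file at once.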

The authors of the paper have worked with the open source community, and version 4.0 of OpenSSH has the option of hashing the known-hosts database. There is also a patch for OpenSSH 3.9 that does the same thing.

The authors are also looking for more data to judge the extent of the problem. Details about the research, the patch, data collection, and whatever else they have going on can be found here.

Posted on May 10, 2005 at 9:06 AM | 31 Comments

REAL ID

The United States is getting a national ID card. The REAL ID Act (text of the bill and the Congressional Research Service's analysis of the bill) establishes uniform standards for state driver's licenses, effectively creating a national ID card. It's a bad idea, and is going to make us all less safe. It's also very expensive. And it's all happening without any serious debate in Congress.

I've already written about national IDs. I've written about the fallacies of identification as a security tool. I'm not going to repeat myself here, and I urge everyone who is interested to read those two essays (and even this older essay). A national ID is a lousy security trade-off, and everyone needs to understand why.

Aside from those generalities, there are specifics about REAL ID that make for bad security.

The REAL ID Act requires driver's licenses to include a "common machine-readable technology." This will, of course, make identity theft easier. Assume that this information will be collected by bars and other businesses, and that it will be resold to companies like ChoicePoint and Acxiom. It actually doesn't matter how well the states and federal government protect the data on driver's licenses, as there will be parallel commercial databases with the same information.

Even worse, the same specification for RFID chips embedded in passports includes details about embedding RFID chips in driver's licenses. I expect the federal government will require states to do this, with all of the associated security problems (e.g., surreptitious access).

REAL ID requires that driver's licenses contain actual addresses, and no post office boxes. There are no exceptions made for judges or police -- even undercover police officers. This seems like a major unnecessary security risk.

REAL ID also prohibits states from issuing driver's licenses to illegal aliens. This makes no sense, and will only result in these illegal aliens driving without licenses -- which isn't going to help anyone's security. (This is an interesting insecurity, and is a direct result of trying to take a document that is a specific permission to drive an automobile, and turning it into a general identification device.)

REAL ID is expensive. It's an unfunded mandate: the federal government is forcing the states to spend their own money to comply with the act. I've seen estimates that the cost to the states of complying with REAL ID will be $120 million. That's $120 million that can't be spent on actual security.

And the wackiest thing is that none of this is required. In December 2004, the Intelligence Reform and Terrorism Prevention Act of 2004 was signed into law. That law included stronger security measures for driver's licenses, the security measures recommended by the 9/11 Commission Report. That's already done. It's already law.

REAL ID goes way beyond that. It's a huge power-grab by the federal government over the states' systems for issuing driver's licenses.

REAL ID doesn't go into effect until three years after it becomes law, but I expect things to be much worse by then. One of my fears is that this new uniform driver's license will bring a new level of "show me your papers" checks by the government. Already you can't fly without an ID, even though no one has ever explained how that ID check makes airplane terrorism any harder. I have previously written about Secure Flight, another lousy security system that tries to match airline passengers against terrorist watch lists. I've already heard rumblings about requiring states to check identities against "government databases" before issuing driver's licenses. I'm sure Secure Flight will be used for cruise ships, trains, and possibly even subways. Combine REAL ID with Secure Flight and you have an unprecedented system for broad surveillance of the population.

Is there anyone who would feel safer under this kind of police state?

Americans overwhelmingly reject national IDs in general, and there's an enormous amount of opposition to the REAL ID Act. This is from the EPIC page on REAL ID and National IDs:

More than 600 organizations have expressed opposition to the Real ID Act. Only two groups--Coalition for a Secure Driver's License and Numbers USA--support the controversial national ID plan. Organizations such as the American Association of Motor Vehicle Administrators, National Association of Evangelicals, American Library Association, Association for Computing Machinery (pdf), National Council of State Legislatures, American Immigration Lawyers Association (pdf), and National Governors Association are among those against the legislation.

And this site is trying to coordinate individual action against the REAL ID Act, although time is running short. It has already passed the House, and the Senate votes tomorrow.

If you haven't heard much about REAL ID in the newspapers, that's not an accident. The politics of REAL ID is almost surreal. It was voted down last fall, but has been reintroduced and attached to legislation that funds military actions in Iraq. This is a "must-pass" piece of legislation, which means that there has been no debate on REAL ID. No hearings, no debates in committees, no debates on the floor. Nothing.

Near as I can tell, this whole thing is being pushed by Wisconsin Rep. Sensenbrenner primarily as an anti-immigration measure. The huge insecurities this will cause to everyone else in the United States seem to be collateral damage.

Unfortunately, I think this is a done deal. The legislation REAL ID is attached to must pass, and it will pass. Which means REAL ID will become law. But it can be fought in other ways: via funding, in the courts, etc. Those seriously interested in this issue are invited to attend an EPIC-sponsored event in Washington, DC, on the topic on June 6th. I'll be there.

Posted on May 9, 2005 at 9:06 AM

New U.S. Government Cybersecurity Position

From InfoWorld:

The Department of Homeland Security Cybersecurity Enhancement Act, approved by the House Subcommittee on Economic Security, Infrastructure Protection and Cybersecurity, would create the position of assistant secretary for cybersecurity at DHS. The bill, sponsored by Representatives Mac Thornberry, a Texas Republican, and Zoe Lofgren, a California Democrat, would also make the assistant secretary responsible for establishing a national cybersecurity threat reduction program and a national cybersecurity training program....

The top cybersecurity official at DHS has been the director of the agency's National Cyber Security Division, a lower-level position, and technology trade groups for several months have been calling for a higher-level position that could make cybersecurity a higher priority at DHS.

Sadly, this isn't going to amount to anything. Yes, it's good to have a higher-level official in charge of cybersecurity. But responsibility without authority doesn't work. A bigger bully pulpit isn't going to help without a coherent plan behind it, and we have none.

The absolute best thing the DHS could do for cybersecurity would be to coordinate the U.S. government's enormous purchasing power and demand more secure hardware and software.

Here's the text of the act, if anyone cares.

Posted on May 6, 2005 at 8:05 AM | 11 Comments

Lessons of the ChoicePoint Theft

Nice essay about the implications of the ChoicePoint data theft (and all the other data thefts, losses, and disclosures making headlines).

Posted on May 5, 2005 at 8:54 AM | 6 Comments

Detecting Nuclear Material in Transport

One of the few good things that's coming out of the U.S. terrorism policy is some interesting scientific research. This paper discusses detecting nuclear material in transport.

The authors believe that fixed detectors -- for example, at ports -- simply won't work. Terrorists are more likely to use highly enriched uranium (HEU), which is harder to detect, than plutonium. The difficulty of detection stems from HEU's low natural rate of radioactivity, not from some technological hurdle. "The gamma rays and neutrons useful for detecting shielded HEU permit detection only at short distances (2-4 feet or less) and require that there be sufficient time to count a sufficient number of particles (several minutes to hours)."

The authors conclude that the only way to reliably detect shielded HEU is to build detectors into the transport vehicles. These detectors could take hours to record any radioactivity.
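The "minutes to hours" figure falls out of simple Poisson counting statistics: to distinguish a weak source from background, the excess counts s·t must exceed several standard deviations of the background fluctuation, √(b·t), which gives t ≥ σ²·b/s². Here's that back-of-envelope calculation; the count rates below are my own illustrative assumptions, not numbers from the paper:

```python
def time_to_detect(signal_cps: float, background_cps: float,
                   sigma: float = 5.0) -> float:
    """Counting time (seconds) needed for a source adding signal_cps
    counts/sec to stand out from background_cps at `sigma` standard
    deviations, assuming background-dominated Poisson statistics:
    s*t >= sigma * sqrt(b*t)  =>  t >= sigma^2 * b / s^2."""
    return (sigma ** 2) * background_cps / signal_cps ** 2

# Hypothetical numbers: shielded HEU adding 0.1 counts/sec over a
# 2 counts/sec natural background, detected at 5-sigma confidence.
seconds = time_to_detect(0.1, 2.0)   # 5000 s, about 83 minutes
```

Note the quadratic penalty: halving the signal rate (more shielding, or more distance) quadruples the required counting time, which is exactly why a vehicle-mounted detector that rides along for hours beats a fixed portal that sees the cargo for seconds.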

Posted on May 4, 2005 at 7:48 AM | 20 Comments

PDF Redacting Failure

I wasn't going to even bother writing about this, but I got too many e-mails from people.

We all know that masking over the text of a PDF document doesn't actually erase the underlying text, right?

Don't we?

Seems like we don't.

Italian media have published classified sections of an official US military inquiry into the accidental killing of an Italian agent in Baghdad.

A Greek medical student at Bologna University who was surfing the web early on Sunday found that with two simple clicks of his computer mouse he could restore censored portions of the report.
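The reason this works is that a PDF's content stream stores text as drawing operators, and a black rectangle painted on top is just another drawing operator -- it covers the text visually without removing it. Here's a toy illustration using a hand-built, uncompressed content stream (real PDFs usually Flate-compress their streams, and the actual document and tools involved in this story are not shown here):

```python
import re

# A minimal, uncompressed PDF content stream: show two lines of text
# with Tj operators, then "redact" the second line by painting a
# filled black rectangle over it (0 0 0 rg ... re f).
content_stream = b"""
BT /F1 12 Tf 72 700 Td (This memo is releasable) Tj ET
BT /F1 12 Tf 72 680 Td (Agent name: REDACTED SECRET) Tj ET
0 0 0 rg 70 670 300 20 re f
"""

def extract_text(stream: bytes) -> list:
    """Pull the literal strings out of Tj (show-text) operators.
    The rectangle hides nothing from anyone who reads the file
    rather than looking at the rendered page."""
    return [m.decode() for m in re.findall(rb"\((.*?)\)\s*Tj", stream)]
```

Calling `extract_text(content_stream)` returns both lines, including the one under the black box -- which is essentially what "two simple clicks" (select all, copy) did to the Calipari report. Redaction has to remove the text operators themselves, not draw over them.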

Posted on May 3, 2005 at 9:11 AM | 24 Comments

Users Disabling Security

It's an old story: users disable a security measure because it's annoying, allowing an attacker to bypass the measure.

A rape defendant accused in a deadly courthouse rampage was able to enter the chambers of the judge slain in the attack and hold the occupants hostage because the door was unlocked and a buzzer entry system was not activated, a sheriff's report says.

Security doesn't work unless the users want it to work. This is true on the personal and national scale, with or without technology.

Posted on May 2, 2005 at 8:22 AM | 28 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc..