July 15, 2006
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0607.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
In this issue:
I'm sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.
I'm in this awkward situation because 1) this article is due tomorrow, and 2) I'm attending the fifth Workshop on the Economics of Information Security, or WEIS -- to my mind, the most interesting computer security conference of the year.
The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon the idea independently: he in his brilliant 2001 article, "Why Information Security Is Hard -- An Economic Perspective," and I in various essays and presentations from that same period.
WEIS began a year later at the University of California at Berkeley and has grown ever since. It's the only workshop where technologists get together with economists and lawyers and try to understand the problems of computer security.
And economics has a lot to teach computer security. We generally think of computer security as a problem of technology, but often systems fail because of misplaced economic incentives: the people who could protect a system are not the ones who suffer the costs of failure.
When you start looking, economic considerations are everywhere in computer security. Hospitals' medical-records systems provide comprehensive billing-management features for the administrators who specify them, but are not so good at protecting patients' privacy. Automated teller machines suffered from fraud in countries like the United Kingdom and the Netherlands, where poor regulation left banks without sufficient incentive to secure their systems, and allowed them to pass the cost of fraud along to their customers. And one reason the internet is insecure is that liability for attacks is so diffuse.
In all of these examples, the economic considerations of security are more important than the technical considerations.
More generally, many of the most basic security questions are at least as much economic as technical. Do we spend enough on keeping hackers out of our computer systems? Or do we spend too much? For that matter, do we spend appropriate amounts on police and military services? And are we spending our security budgets on the right things? In the shadow of 9/11, questions like these have a heightened importance.
Economics can actually explain many of the puzzling realities of internet security. Firewalls are common, e-mail encryption is rare: not because of the relative effectiveness of the technologies, but because of the economic pressures that drive companies to install them. Corporations rarely publicize information about intrusions; that's because of economic incentives against doing so. And an insecure operating system is the international standard, in part, because its economic effects are largely borne not by the company that builds the operating system, but by the customers that buy it.
Some of the most controversial cyberpolicy issues also sit squarely between information security and economics. For example, the issue of digital rights management: Is copyright law too restrictive -- or not restrictive enough -- to maximize society's creative output? And if it needs to be more restrictive, will DRM technologies benefit the music industry or the technology vendors? Is Microsoft's Trusted Computing Initiative a good idea, or just another way for the company to lock its customers into Windows, Media Player and Office? Any attempt to answer these questions becomes rapidly entangled with both information security and economic arguments.
WEIS encourages papers on these and other issues in economics and computer security. We heard papers presented on the economics of digital forensics of cell phones -- if you have an uncommon phone, the police probably don't have the tools to perform forensic analysis -- and the effect of stock spam on stock prices: It actually works in the short term. We learned that more-educated wireless network users are not more likely to secure their access points, and that the best predictor of wireless security is the default configuration of the router.
Other researchers presented economic models to explain patch management, peer-to-peer worms, investment in information security technologies and opt-in versus opt-out privacy policies. There was a field study that tried to estimate the cost to the U.S. economy for information infrastructure failures: less than you might think. And one of the most interesting papers looked at economic barriers to adopting new security protocols, specifically DNS Security Extensions.
This is all heady stuff. In the early years, there was a bit of a struggle as the economists and the computer security technologists tried to learn each other's languages. But now it seems that there's a lot more synergy, and more collaborations between the two camps.
I've long said that the fundamental problems in computer security are no longer about technology; they're about applying technology. Workshops like WEIS are helping us understand why good security technologies fail and bad ones succeed, and that kind of insight is critical if we're going to improve security in the information age.
Links to all the WEIS papers are available here.
Ross Anderson's "Why Information Security Is Hard -- An Economic Perspective":
Crypto-Gram is currently in its ninth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram-back.html>. These are a selection of articles that appeared in this calendar month in other years.
CardSystems Exposes 40 Million Identities:
Due Process and Security:
Coca-Cola and the NSA:
How to Fight:
Embedded Control Systems and Security:
Phone Hacking: The Next Generation:
Full Disclosure and the CIA:
Security Risks of Unicode:
The Future of Crypto-Hacking:
Two quotes. "Authorities had also severely limited the cellular network for fear it could be used to trigger more attacks." And: "Some of the injured were seen frantically dialing their cell phones. The mobile phone network collapsed adding to the sense of panic."
Cell phones are useful to terrorists, but they're more useful to the rest of us.
Note: The story was changed online, and the second quote was deleted.
Google's $6B-a-year advertising business is at risk because it can't be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.
With network click fraud, you host GoogleAds on your own website. Google pays you every time someone clicks on its ad on your site. It's fraud if you sit at the computer and repeatedly click on the ad or -- better yet -- write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people's computers to generate the fake clicks.
The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money -- sometimes a lot of money -- on nothing. (Click Monkeys is a spoof site that offers to commit click fraud for you.)
Click fraud has become a classic security arms race. Google improves its fraud detection tools, so the fraudsters get cleverer ... and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn't doing enough. My guess is that everyone is right: it's in Google's interest both to solve and to downplay the importance of the problem.
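To see what one turn of that arms race looks like, here is a toy Python sketch of the crudest possible fraud filter: flag any source IP that clicks the same ad too many times in a short window. (This is purely illustrative -- Google's actual detection is not public -- and it's exactly the kind of check that IP simulation and Trojan-driven clicks are designed to evade.)

```python
from collections import defaultdict, deque

def flag_suspicious_clicks(events, window=60.0, threshold=5):
    """Flag IPs that click the same ad more than `threshold` times
    in any `window`-second span.  `events` is an iterable of
    (timestamp, source_ip, ad_id) tuples."""
    recent = defaultdict(deque)  # (ip, ad_id) -> recent click times
    flagged = set()
    for ts, ip, ad_id in sorted(events):
        q = recent[(ip, ad_id)]
        q.append(ts)
        while q and ts - q[0] > window:  # drop clicks outside the window
            q.popleft()
        if len(q) > threshold:
            flagged.add(ip)
    return flagged
```

A fraudster who spreads the same clicks across thousands of Trojan-infected machines never trips this filter, which is why the defenses -- and the attacks -- keep escalating.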
But the overarching problem is both hard to solve and important: how do you tell if there's an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn't automated his responses, and isn't being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens), or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that's infected with a Trojan.
This problem manifests itself in other areas, as well.
For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly, or see information they normally couldn't see.
Playing is less fun if everyone else is computer assisted, but unless there's a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players -- or even computers playing without a real person at all -- have the potential to drive all the human players away from the game.
Look around the internet, and you see this problem pop up again and again. The whole point of captchas is to ensure that it's a real person visiting a website, not just a bot on a computer. Standard testing doesn't work online, because the tester can't be sure that the test taker doesn't have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that's not always practical and obviates the benefits of internet testing.
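The server-side logic of a captcha is simple; the hard part is the rendering step, which must produce something a human can read but a bot can't. A minimal Python sketch of just the challenge/response round trip (the rendering is omitted, and "server-secret" is a stand-in for a real server-side key):

```python
import hashlib, secrets, string, time

SERVER_KEY = "server-secret"  # placeholder; a real server keeps this secret

def make_captcha(ttl_seconds=120):
    """Create a text challenge.  A real captcha would render `answer`
    as a distorted image before sending it to the client."""
    answer = "".join(secrets.choice(string.ascii_uppercase) for _ in range(6))
    token = hashlib.sha256((answer + SERVER_KEY).encode()).hexdigest()
    return answer, token, time.time() + ttl_seconds

def check_captcha(response, token, expires_at):
    """Verify the user's response without storing the answer server-side."""
    if time.time() > expires_at:
        return False  # stale challenges invite bot replay
    digest = hashlib.sha256((response.upper() + SERVER_KEY).encode()).hexdigest()
    return secrets.compare_digest(digest, token)
```

Note that nothing in this logic proves a *person* typed the response -- only that something solved the rendered puzzle, which is precisely the gap between computer and chair discussed above.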
This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant's computer committed some hacking offense, but the defense argued that it wasn't the defendant who did it -- that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties -- and it wasn't him who'd downloaded the porn.
Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer, or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it -- in both senses of that word.
And it's a problem that will get worse as computers get better at imitating people.
Google is testing a new advertising model to deal with click fraud: cost per action. Advertisers don't pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It's a hard model to make work -- Google would become more of a partner in the final sale instead of an indifferent displayer of advertising -- but it's the right security response to click fraud: change the rules of the game so that click fraud doesn't matter.
That's how to solve a security problem.
Lawsuits against Google:
Google cost-per-action testing:
Surreal story about a person coming into the U.S. from Iraq who is held up at the border because he used to sell copyrighted images on T-shirts.
There are a variety of encryption technologies that allow you to analyze data without knowing details of the data. Think of it as privacy-enhanced data mining.
"How to build a low-cost, extended-range RFID skimmer" by Ilan Kirschenbaum and Avishai Wool. To appear in 15th USENIX Security Symposium, Vancouver, Canada, August 2006.
Fascinating paper on Xbox security. The conclusion: "The security system of the Xbox has been a complete failure."
Random identity generator:
More information about the Greek wiretapping scandal:
I've long known about the possible Unix date issue, but this is the first I've heard of an actual bug due to the Unix time epoch rolling over in 2038.
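The bug class itself is easy to demonstrate. A signed 32-bit time_t counts seconds since the Unix epoch (January 1, 1970 UTC) and runs out early on January 19, 2038; one second later the counter wraps to a large negative number, which naive code reads as a date in 1901. A short Python illustration (of the general rollover, not the specific bug in the linked story):

```python
import datetime
import struct

# The largest value a signed 32-bit time_t can hold.
MAX_32BIT = 2**31 - 1

epoch = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)
rollover = epoch + datetime.timedelta(seconds=MAX_32BIT)
print(rollover)  # 2038-01-19 03:14:07+00:00

# One second past the limit, a 32-bit counter wraps negative.
wrapped = struct.unpack("<i", struct.pack("<I", MAX_32BIT + 1))[0]
print(wrapped)  # -2147483648
```

Any code that does date arithmetic reaching 32 years into the future -- mortgage schedules, certificate expirations -- can hit this today, which is presumably how a real bug surfaced in 2006.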
MySpace is increasing security.
Excellent analysis on applying CALEA to VoIP: "Security Implications of Applying the Communications Assistance to Law Enforcement Act to Voice over IP," by Steve Bellovin, Matt Blaze, Ernie Brickell, Clint Brooks, Vint Cerf, Whit Diffie, Susan Landau, Jon Peterson, and John Treichler. At least read the Executive Summary.
Maybe I shouldn't have said this: "'I have a completely open Wi-Fi network,' Schneier told ZDNet UK. 'Firstly, I don't care if my neighbors are using my network. Secondly, I've protected my computers. Thirdly, it's polite. When people come over they can use it.'" For the record, I have an ultra-secure wireless network that automatically reports all hacking attempts to unsavory men with bitey dogs.
More true than funny, unfortunately. A template for news stories on data gathering:
I can't believe I forgot to blog this great article about the communications intercept trade show in DC:
Loading ActiveX controls on Vista without administrator privileges.
A song: Facial Recognition Technology Blues
Annual Report from the Privacy Commissioner of Canada
In this attack, you can seize control of someone's computer using his WiFi interface, even if he's not connected to a network. No details yet; the researchers are presenting their results at BlackHat on August 2nd.
Here's a new patent issued to the U.S. Navy. It sounds like they've patented the firewall.
I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here's a more formal explanation:
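The core of the argument is base-rate arithmetic: when the thing you're looking for is extremely rare, even an extremely accurate detector buries its true hits under false alarms. A quick back-of-the-envelope calculation with illustrative numbers (mine, not the linked paper's):

```python
population = 300_000_000       # roughly the U.S. population
terrorists = 1_000             # a deliberately generous assumption
true_positive_rate = 0.99      # detector catches 99% of terrorists
false_positive_rate = 0.0001   # and flags only 1 in 10,000 innocents

hits = terrorists * true_positive_rate
false_alarms = (population - terrorists) * false_positive_rate
precision = hits / (hits + false_alarms)

print(f"{false_alarms:,.0f} false alarms for {hits:,.0f} real hits")
print(f"{precision:.1%} of flagged people are actual terrorists")
```

Even with these implausibly optimistic accuracy figures, roughly 30,000 innocent people get flagged for every thousand terrorists, and about 97 percent of everyone flagged is innocent. Investigating them all is where the system drowns.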
One response to software liability is to deliberately program in such a way as to obscure liabilities. This blog entry on "unreliable programming" is satire, but it's perceptive.
A news article on the failure of two-factor authentication. Phishers are converting to man-in-the-middle attacks, which bypass the security measures.
The New York Times is running a scare story on the linkage between identity theft and methamphetamine users. Supposedly meth users are ideally suited to be computer hackers. I don't know if this is true or not, but I worry about Congressional intervention if hacking gets linked to the war on drugs.
The Galileo satellite codes have been cracked. Actually, the cracked codes are from a prototype satellite; the final Galileo codes will be different.
Spy gadgets you can buy. What's interesting to me is less what is available commercially today, and more what we can extrapolate is available to real spies.
Good article on how complexity greatly limits the effectiveness of terror investigations. The stories of wasted resources are all from the UK, but the morals are universal.
O2 is a UK cell phone network. The company gives you the option of setting up a PIN on your phone. The idea is that if someone steals your phone, they can't make calls. If they type the PIN incorrectly three times, the phone is blocked. To deal with the problems of phone owners mistyping their PIN -- or forgetting it -- they can contact O2 and get a Personal Unlock Code (PUK). Presumably, the operator goes through some authentication steps to ensure that the person calling is actually the legitimate owner of the phone.
So far, so good.
But O2 has decided to automate the PUK process. Now anyone on the Internet can visit an O2 website, type in a valid mobile telephone number, and get a valid PUK to reset the PIN -- without any authentication whatsoever.
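The lockout flow itself is sound, and worth spelling out to see exactly what the website breaks. A toy Python model of the PIN/PUK mechanism as described (the class and values are illustrative, not O2's implementation):

```python
class SimCard:
    """Toy model of the PIN/PUK lockout flow: three wrong PINs block
    the phone; only the PUK can unblock it and set a new PIN."""

    def __init__(self, pin, puk):
        self._pin, self._puk = pin, puk
        self.pin_tries, self.blocked = 3, False

    def enter_pin(self, guess):
        if self.blocked:
            return False
        if guess == self._pin:
            self.pin_tries = 3          # success resets the counter
            return True
        self.pin_tries -= 1
        if self.pin_tries == 0:
            self.blocked = True         # third wrong PIN blocks the phone
        return False

    def unblock(self, puk, new_pin):
        """The whole scheme rests on the PUK being hard to obtain --
        which is exactly what an unauthenticated website undoes."""
        if puk == self._puk:
            self._pin, self.pin_tries, self.blocked = new_pin, 3, False
            return True
        return False
```

Notice that the PIN's security is entirely delegated to the PUK: whoever can get the PUK owns the phone. Handing PUKs out to anyone who types in a phone number reduces the three-try lockout to theater.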
This seems like a bad idea, but after I posted it on my blog a representative from O2 sent me the following:
"Yes, it does seem there is a security risk by O2 supplying such a service, but in fact we believe this risk is very small. The risk is when a customer's phone is lost or stolen. There are two scenarios in that event:
The O2 website:
For a long time, the League of Women Voters (LWV) had been on the wrong side of the electronic voting machine issue. They were in favor of electronic machines, and didn't see the need for voter-verifiable paper trails. (They used to have a horrid and misleading Q&A about the issue on their website, but it's gone now. Barbara Simons published a rebuttal, which includes their original Q&A.)
The politics of the LWV are Byzantine, but basically there are local leagues under state leagues, which in turn are under the national (LWVUS) league. There is a national convention once every other year, and all sorts of resolutions are passed by the membership. But the national office can do a lot to undercut the membership and the state leagues. The politics of voting machines is an example of this.
At the 2004 convention, the LWV membership passed a resolution on electronic voting called "SARA," which stood for "Secure, Accurate, Recountable, and Accessible." Those in favor of the resolution thought that "recountable" meant auditable, which meant voter-verifiable paper trails. But the national LWV office decided to spin SARA to say that recountable does not imply paper. While they could no longer oppose paper outright, they refused to say that paper was desirable. For example, they held Georgia's system up as a model, and Georgia uses paperless Diebold DRE machines. It makes you wonder if the LWVUS leadership is in someone's pocket.
So at the 2006 convention, the LWV membership passed *another* resolution. This one was much more clearly worded: designed to make it impossible for the national office to pretend that the LWV was not in favor of voter-verified paper trails.
Unfortunately, the League of Women Voters has not issued a press release about this resolution. (There is a press release by VerifiedVoting.org about it.) I'm sure that the national office simply doesn't want to acknowledge the membership's position on the issue, and wishes the issue would just go away quietly. It's a pity; the resolution is a great one and worth publicizing.
Here's the text of the resolution:
"Resolution Related to Program Requiring a Voter-Verifiable Paper Ballot or Paper Record with Electronic Voting Machines
"Motion to adopt the following resolution related to program requiring a voter-verified paper ballot or paper record with electronic voting systems.
"Whereas: Some LWVs have had difficulty applying the SARA Resolution (Secure, Accurate, Recountable and Accessible) passed at the last Convention, and
"Whereas: Paperless electronic voting systems are not inherently secure, can malfunction, and do not provide a recountable audit trail,
"Therefore be it resolved that:
"The position on the Citizens' Right to Vote be interpreted to affirm that LWVUS supports only voting systems that are designed so that:
By the way, the 2006 LWV membership also voted on a resolution in favor of net neutrality (the Connecticut league issued a press release, because they spearheaded the issue), and one against the death penalty. The national LWV office hasn't issued a press release about those two issues, either.
Verified Voting press release:
Net neutrality press release by the Connecticut LWV:
Q&A with Barbara Simons' rebuttal:
I have been participating in the Brennan Center's Task Force on Voting Security. Earlier this month we released a report on electronic voting.
From the executive summary:
"In 2005, the Brennan Center convened a Task Force of internationally renowned government, academic, and private-sector scientists, voting machine experts and security professionals to conduct the nation's first systematic analysis of security vulnerabilities in the three most commonly purchased electronic voting systems. The Task Force spent more than a year conducting its analysis and drafting this report. During this time, the methodology, analysis, and text were extensively peer reviewed by the National Institute of Standards and Technology ("NIST")."
"The Task Force examined security threats to the technologies used in Direct Recording Electronic voting systems ("DREs"), DREs with a voter verified auditable paper trail ("DREs w/ VVPT") and Precinct Count Optical Scan ("PCOS") systems. The analysis assumes that appropriate physical security and accounting procedures are all in place."
"Three fundamental points emerge from the threat analysis in the Security Report:
"1. All three voting systems have significant security and reliability vulnerabilities, which pose a real danger to the integrity of national, state, and local elections.
"There are a number of steps that jurisdictions can take to address the vulnerabilities identified in the Security Report and make their voting systems significantly more secure. We recommend adoption of the following security measures:
"1. Conduct automatic routine audits comparing voter verified paper records to the electronic record following every election. A voter verified paper record accompanied by a solid automatic routine audit of those records can go a long way toward making the least difficult attacks much more difficult.
The report is long, but I think it's worth reading. If you're short on time, though, at least read the Executive Summary.
The report has generated some press. Unfortunately, the news articles recycle some of the lame points that Diebold continues to make in the face of this kind of analysis. From The Washington Post article:
"Voting machine vendors have dismissed many of the concerns, saying they are theoretical and do not reflect the real-life experience of running elections, such as how machines are kept in a secure environment.
"'It just isn't the piece of equipment,' said David Bear, a spokesman for Diebold Election Systems, one of the country's largest vendors. 'It's all the elements of an election environment that make for a secure election.'
"'This report is based on speculation rather than an examination of the record. To date, voting systems have not been successfully attacked in a live election,' said Bob Cohen, a spokesman for the Election Technology Council, a voting machine vendors' trade group. 'The purported vulnerabilities presented in this study, while interesting in theory, would be extremely difficult to exploit.'"
I wish The Washington Post had found someone to point out that there have been many, many irregularities with electronic voting machines over the years, and that the lack of convincing evidence of fraud is exactly the problem with these no-audit-possible systems. Or that the "it's all theoretical" argument is the same one software vendors used to discredit security vulnerabilities before the full-disclosure movement forced them to admit that their software had problems.
There are hundreds of comments -- many of them interesting -- on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane is the world's leading protector of networked information - the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.
Copyright (c) 2006 by Bruce Schneier.