Essays in the Category “Economics of Security”
Google recently announced that it would start including individual users' names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached—without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.
These changes come on the heels of Google's move to explore replacing tracking cookies with something that users have even less control over.
The public/private surveillance partnership between the NSA and corporate data collectors is starting to fray. The reason is sunlight. The publicity resulting from the Snowden documents has made companies think twice before allowing the NSA access to their users' and customers' data.
Pre-Snowden, there was no downside to cooperating with the NSA.
Last weekend a Texas couple apparently discovered that the electronic "baby monitor" in their children's bedroom had been hacked. According to a local TV station, the couple said they heard an unfamiliar voice coming from the room, went to investigate and found that someone had taken control of the camera monitor remotely and was shouting profanity-laden abuse. The child's father unplugged the monitor.
What does this mean for the rest of us? How secure are consumer electronic systems, now that they're all attached to the Internet?
Security Has Become a For-Profit Business
This is an edited version of a longer essay.
It's a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation come new possibilities, but also new concerns.
For one, the NYPD is testing a security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained back in January, "If something is obstructing the flow of that radiation, for example a weapon, the device will highlight that object."
Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential profiling.
In the eternal arms race between bad guys and those who police them, automated systems can have perverse effects.
A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else's stuff, then call the police.
I was reminded of this recently when a group of Israeli scientists demonstrated that it's possible to fabricate DNA evidence.
Reassuring people about privacy makes them more, not less, concerned. It's called "privacy salience", and Leslie John, Alessandro Acquisti, and George Loewenstein -- all at Carnegie Mellon University -- demonstrated this in a series of clever experiments. In one, subjects completed an online survey consisting of a series of questions about their academic behaviour -- "Have you ever cheated on an exam?" for example. Half of the subjects were first required to sign a consent warning -- designed to make privacy concerns more salient -- while the other half did not.
It's a sad, horrific story. A homeowner returns to find his house demolished. The demolition company had been hired legitimately, but a mistake led it to tear down the wrong house. The company relied on GPS co-ordinates, but requiring street addresses isn't a solution.
Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.
There's a basic consumer protection principle at work here, and it's the concept of "unfair and deceptive" trade practices. Basically, a company shouldn't be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren't generally available, claim features that don't exist, and so on.
Before his arrest, Tom Berge stole lead roof tiles from several buildings in south-east England, including the Honeywood Museum in Carshalton, the Croydon parish church, and Sutton High School for Girls. He then sold those tiles to scrap metal dealers.
As a security expert, I find this story interesting for two reasons. First, amid attempts to ban, or at least censor, Google Earth, lest it help the terrorists, here is an actual crime that relied on the service: Berge needed Google Earth for reconnaissance.
An employee of Whole Foods in Ann Arbor, Michigan, was fired in 2007 for apprehending a shoplifter. More specifically, he was fired for touching a customer, even though that customer had a backpack filled with stolen groceries and was running away with them.
I regularly see security decisions that, like the Whole Foods incident, seem to make absolutely no sense. However, in every case, the decisions actually make perfect sense once you understand the underlying incentives driving the decision.
Book Review of Here Comes Everybody: The Power of Organizing Without Organizations
In 1937, Ronald Coase answered one of the most perplexing questions in economics: if markets are so great, why do organizations exist? Why don't people just buy and sell their own services in a market instead? Coase, who won the 1991 Nobel Prize in Economics, answered the question by noting a market's transaction costs: buyers and sellers need to find one another, then reach agreement, and so on. Coase's insight implies that if these transaction costs are low enough, direct markets of individuals make a whole lot of sense.
In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of "full disclosure," asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.
The "Oyster card" used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston "T" was the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start -- despite facing an open-and-shut case of First Amendment prior restraint.
London's Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the card's chip, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won't be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.
It's not true that no one worries about terrorists attacking chemical plants; it's just that our politics seem to leave us unable to deal with the threat.
Hazardous chemicals such as ammonia, chlorine, propane, and flammable mixtures are constantly being produced or stored in the United States as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar. Phosgene is even more dangerous.
Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department's BlackBerrys or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision.
More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.
Encryption is the obvious solution for this problem -- I use PGPdisk -- but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts.
Full disclosure -- the practice of making the details of security vulnerabilities public -- is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.
Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (see "The Vulnerability Disclosure Game: Are We More Secure?").
This essay appeared as the second half of a point-counterpoint with Marcus Ranum. Marcus's side can be found on his website.
Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don't have the capability to protect that information.
The big news in professional bicycle racing is that Floyd Landis may be stripped of his Tour de France title because he tested positive for a banned performance-enhancing drug. Sidestepping the issues of whether professional athletes should be allowed to take performance-enhancing drugs, how dangerous those drugs are, and what constitutes a performance-enhancing drug in the first place, I'd like to talk about the security and economic issues surrounding doping in professional sports.
Drug testing is a security issue. Various sports federations around the world do their best to detect illegal doping, and players do their best to evade the tests.
Google's $6 billion-a-year advertising business is at risk because it can't be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.
With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site.
I'm sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.
I'm in this awkward situation because 1) this article is due tomorrow, and 2) I'm attending the fifth Workshop on the Economics of Information Security, or WEIS: to my mind, the most interesting computer security conference of the year.
The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon the idea independently.
Have you ever been to a retail store and seen this sign on the register: "Your purchase free if you don't get a receipt"? You almost certainly didn't see it in an expensive or high-end store. You saw it in a convenience store, or a fast-food restaurant. Or maybe a liquor store.
At a security conference last week, Howard Schmidt, the former White House cybersecurity adviser, took the bold step of arguing that software developers should be held personally accountable for the security of the code they write.
He's on the right track, but he's made a dangerous mistake. It's the software manufacturers that should be held liable, not the individual programmers. Getting this one right will result in more-secure software for everyone; getting it wrong will simply result in a lot of messy lawsuits.
Last week California became the first state to enact a law specifically addressing phishing. Phishing, for those of you who have been away from the internet for the past few years, is when an attacker sends you an e-mail falsely claiming to be from a legitimate business in order to trick you into giving away your account info -- passwords, mostly. When this is done by hacking DNS, it's called pharming.
Financial companies have until now avoided taking on phishers in a serious way, because it's cheaper and simpler to pay the costs of fraud.
An update to this essay was published in ENISA Quarterly in January 2007.
Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses.
It's been said that all business-to-business sales are motivated by either fear or greed. Traditionally, security products and services have been a fear sell: fear of burglars, murderers, kidnappers, and -- more recently -- hackers. Despite repeated attempts by the computer security industry to position itself as a greed sell -- "better Internet security will make your company more profitable because you can better manage your risks" -- fear remains the primary motivator for the purchase of network security products and services.
The problem is that many security risks are not borne by the organization making the purchasing decision.
Computer security is at a crossroads. It's failing, regularly, and with increasingly serious results. CEOs are starting to notice. When they finally get fed up, they'll demand improvements. (Either that or they'll abandon the Internet, but I don't believe that is a likely possibility.) And they'll get the improvements they demand; corporate America can be an enormously powerful motivator once it gets going.
Computer security is not a problem that technology can solve. Security solutions have a technological component, but security is fundamentally a people problem. Businesses approach security as they do any other business uncertainty: in terms of risk management. Organizations optimize their activities to minimize their cost-risk product, and understanding those motivations is key to understanding computer security today.
Network security is not a technological problem; it's a business problem. The only way to address it is to focus on business motivations. To improve the security of their products, companies - both vendors and users - must care; for companies to care, the problem must affect stock price. The way to make this happen is to start enforcing liabilities.
Underwriters Laboratories (UL) is an independent testing organization created in 1893, when William Henry Merrill was called in to find out why the Palace of Electricity at the Columbian Exposition in Chicago kept catching on fire (which is not the best way to tout the wonders of electricity). After making the exhibit safe, he realized he had a business model on his hands. Eventually, if your electrical equipment wasn't UL certified, you couldn't get insurance.
Today, UL rates all kinds of equipment, not just electrical.