Entries Tagged "economics of security"


Is Software Security a Waste of Money?

I worry that comments about the value of software security made at the RSA Conference last week will be taken out of context. John Viega did not say that software security wasn’t important. He said:

For large software companies or major corporations such as banks or health care firms with large custom software bases, investing in software security can prove to be valuable and provide a measurable return on investment, but that’s probably not the case for smaller enterprises, said John Viega, executive vice president of products, strategy and services at SilverSky and an authority on software security. Viega, who formerly worked on product security at McAfee and as a consultant at Cigital, said that when he was at McAfee he could not find a return on investment for software security.

I agree with that. For small companies, it’s not worth worrying much about software security. But for large software companies, it’s vital.

Posted on March 11, 2013 at 6:12 AM

Technologies of Surveillance

It’s a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation comes new possibilities but also new concerns.

For one, the NYPD is testing a new type of security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained to the Daily News back in January, "If something is obstructing the flow of that radiation—a weapon, for example—the device will highlight that object."

Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential racial profiling. Organizations like the American Civil Liberties Union are all over those, even though their opposition probably won’t make a difference. We’re scared of both terrorism and crime, even as the risks decrease; and when we’re scared, we’re willing to give up all sorts of freedoms to assuage our fears. Often, the courts go along.

A more pressing question is the effectiveness of technologies that are supposed to make us safer. These include the NYPD’s Domain Awareness System, developed by Microsoft, which aims to integrate massive quantities of data to alert cops when a crime may be taking place. Other innovations are surely in the pipeline, all promising to make the city safer. But are we being sold a bill of goods?

For example, press reports make the gun-detection machine look good. We see images from the camera that pretty clearly show a gun outlined under someone’s clothing. From that, we can imagine how this technology can spot gun-toting criminals as they enter government buildings or terrorize neighborhoods. Given the right inputs, we naturally construct these stories in our heads. The technology seems like a good idea, we conclude.

The reality is that we reach these conclusions much in the same way we decide that, say, drinking Mountain Dew makes you look cool. These are, after all, the products of for-profit companies, pushed by vendors looking to make sales. As such, they're marketed no less aggressively than soda pop and deodorant. Those images of criminals with concealed weapons were carefully created both to demonstrate maximum effectiveness and to push our fear buttons. These companies deliberately craft stories of their effectiveness, both through advertising and through placement in television shows and movies, where police are often shown using high-powered tools to catch high-value targets with minimum complication.

The truth is that many of these technologies are nowhere near as reliable as claimed. They end up costing us gazillions of dollars and open the door for significant abuse. Of course, the vendors hope that by the time we realize this, their technologies are too embedded in our security culture to be removed.

The current poster child for this sort of morass is the airport full-body scanner. Rushed into airports after the underwear bomber Umar Farouk Abdulmutallab nearly blew up a Northwest Airlines flight in 2009, they made us feel better, even though they don't work very well and, ironically, wouldn't have caught Abdulmutallab with his underwear bomb. Both the Transportation Security Administration and vendors repeatedly lied about their effectiveness, whether they stored images, and how safe they were. Finally, in January, backscatter X-ray scanners were removed from airports because the company that made them couldn't sufficiently blur the images so they didn't show travelers naked. Now, only millimeter-wave full-body scanners remain.

Another example is closed-circuit television (CCTV) cameras. These have been marketed as a technological solution to both crime and understaffed police and security organizations. London, for example, is rife with them, and New York has plenty of its own. To many, it seems apparent that they make us safer, despite cries of Big Brother. The problem is that in study after study, researchers have concluded that they don’t.

Counterterrorist data mining and fusion centers: nowhere near as useful as those selling the technologies claimed. It’s the same with DNA testing and fingerprint technologies: both are far less accurate than most people believe. Even torture has been oversold as a security system—this time by a government instead of a company—despite decades of evidence that it doesn’t work and makes us all less safe.

It’s not that these technologies are totally useless. It’s that they’re expensive, and none of them is a panacea. Maybe there’s a use for a terahertz radar, and maybe the benefits of the technology are worth the costs. But we should not forget that there’s a profit motive at work, too.

An edited version of this essay, without links, appeared in the New York Daily News.

EDITED TO ADD (2/13): IBM's version of a massive-data policing system is being tested in Rio de Janeiro.

Posted on March 5, 2013 at 6:28 AM

All Those Companies that Can't Afford Dedicated Security

This is interesting:

In the security practice, we have our own version of no-man’s land, and that’s midsize companies. Wendy Nather refers to these folks as being below the “Security Poverty Line.” These folks have a couple hundred to a couple thousand employees. That’s big enough to have real data interesting to attackers, but not big enough to have a dedicated security staff and the resources they need to really protect anything. These folks are caught between the baseline and the service box. They default to compliance mandates like PCI-DSS because they don’t know any better. And the attackers seem to sneak those passing shots by them on a seemingly regular basis.

[…]

Back when I was on the vendor side, I'd joke about how 800 security companies chased 1,000 customers—meaning most of the effort was focused on the 1,000 largest customers in the world. But I wasn't joking. Every VP of sales talks about how it takes the same amount of work to sell to a Fortune-class enterprise as it does to sell into the midmarket. They aren't wrong, and it leaves a huge gap in the applicable solutions for the midmarket.

[…]

To be clear, folks in security no-man’s land don’t go to the RSA Conference, probably don’t read security pubs, or follow the security echo chamber on Twitter. They are too busy fighting fires and trying to keep things operational. And that’s fine. But all of the industry gatherings just remind me that the industry’s machinery is geared toward the large enterprise, not the unfortunate 5 million other companies in the world that really need the help.

I’ve seen this trend, and I think it’s a result of the increasing sophistication of the IT industry. Today, it’s increasingly rare for organizations to have bespoke security, just as it’s increasingly rare for them to have bespoke IT. It’s only the larger organizations that can afford it. Everyone else is increasingly outsourcing its IT to cloud providers. These providers are taking care of security—although we can certainly argue about how good a job they’re doing—so that the organizations themselves don’t have to. A company whose email consists entirely of Gmail accounts, whose payroll is entirely outsourced to Paychex, whose customer tracking system is entirely on Salesforce.com, and so on—and who increasingly accesses those systems using specialized devices like iPads and Android tablets—simply doesn’t have any IT infrastructure to secure anymore.

To be sure, I think we're a long way off from this future being a secure one, but it's the one the industry is headed toward. Yes, vendors at the RSA Conference are only selling to the largest organizations. And, as I wrote back in 2008, soon they will only be selling to IT outsourcing companies (the term "cloud provider" hadn't been invented yet):

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure—power, water, cleaning service, tax preparation—customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

[…]

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products and new startups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users—and will no longer be a 17,000-person user conference.

Posted on February 22, 2013 at 6:03 AM

2013 U.S. Homeland Security Budget

Among other findings in this CBO report:

Funding for homeland security has dropped somewhat from its 2009 peak of $76 billion, in inflation-adjusted terms; funding for 2012 totaled $68 billion. Nevertheless, the nation is now spending substantially more than what it spent on homeland security in 2001.

Note that this is just direct spending on homeland security. This does not include DoD spending—which would include the costs of the wars in Iraq and Afghanistan—and Department of Justice spending. John Mueller estimates that we have spent $1.1 trillion over the ten years between 2002 and 2011.

Posted on October 2, 2012 at 9:41 AM

WEIS 2012

Last week I was at the Workshop on the Economics of Information Security in Berlin. Excellent conference, as always. Ross Anderson liveblogged the event; see the comments for summaries of the talks.

On the second day, Ross and I debated—well, discussed—cybersecurity spending. At the first WEIS, he and I had a similar discussion: I argued that we weren’t spending enough on cybersecurity, and he argued that we were spending too much. For this discussion, we reversed our positions.

Posted on July 2, 2012 at 6:20 AM

E-Mail Accounts More Valuable than Bank Accounts

This informal survey produced the following result: “45% of the users found their email accounts more valuable than their bank accounts.”

The author believes this is evidence of some sophisticated security reasoning on the part of users:

From a security standpoint, I can’t agree more with these people. Email accounts are used most commonly to reset other websites’ account passwords, so if it gets compromised, the others will fall like dominos.

I disagree. I think something a lot simpler is going on. People believe that if their bank account is hacked, the bank will help them clean up the mess and they’ll get their money back. And in most cases, they will. They know that if their e-mail is hacked, all the damage will be theirs to deal with. I think this is public opinion reflecting reality.

Posted on June 26, 2012 at 1:57 PM

Economic Analysis of Bank Robberies

Yes, it’s clever:

The basic problem is the average haul from a bank job: for the three-year period, it was only £20,330.50 (~$31,613). And it gets worse, as the average robbery involved 1.6 thieves. So the authors conclude, “The return on an average bank robbery is, frankly, rubbish. It is not unimaginable wealth. It is a very modest £12,706.60 per person per raid.”

“Given that the average UK wage for those in full-time employment is around £26,000, it will give him a modest life-style for no more than 6 months,” the authors note. If a robber keeps hitting banks at a rate sufficient to maintain that modest lifestyle, by a year and a half into their career, odds are better than not they’ll have been caught. “As a profitable occupation, bank robbery leaves a lot to be desired.”

Worse still, the success of a robbery was a bit like winning the lottery, as the standard deviation on the £20,330.50 was £53,510.20. That means some robbers did far better than average, but it also means that fully a third of robberies failed entirely.

(If, at this point, you’re thinking that the UK is just a poor location for the bank robbery industry, think again, as the authors use FBI figures to determine that the average heist in the States only nets $4,330.00.)

There are ways to increase your chance of getting a larger haul. "Every extra member of the gang raises the expected value of the robbery proceeds by £9,033.20, on average and other things being equal," the authors note. Brandishing some sort of firearm adds another £10,300.50, "again on average and other things being equal."
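The authors' arithmetic is easy to check in a few lines. This is a quick sketch in Python; the figures are the ones quoted above, and the variable names are my own:

```python
# Figures from the quoted UK study (three-year period, amounts in GBP).
avg_haul = 20_330.50      # average take per bank raid
avg_gang_size = 1.6       # average number of robbers per raid
per_person = avg_haul / avg_gang_size   # roughly £12,706 per robber per raid

# How long that funds the authors' "modest life-style":
uk_annual_wage = 26_000.00
months_funded = per_person / (uk_annual_wage / 12)   # just under six months

# Marginal effects the authors report, "other things being equal":
extra_member = 9_033.20   # expected increase per additional gang member
firearm = 10_300.50       # expected increase for brandishing a firearm

print(f"Per robber, per raid: £{per_person:,.2f}")
print(f"Months of modest living funded: {months_funded:.1f}")
```

The per-person split and the six-month horizon agree with the authors' figures to within rounding; the large standard deviation quoted above is what makes even this modest expectation unreliable.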

We all kind of knew this—that’s why most of us aren’t bank robbers. The interesting question, at least to me, is why anyone is a bank robber. Why do people do things that, by any rational economic analysis, are irrational?

The answer is that people are terrible at figuring this sort of stuff out. They're terrible at estimating the probability that any of their endeavors will succeed, and they're terrible at estimating what their reward will be if they do succeed. There is a lot of research supporting this, but the most entertaining thing I've seen on the topic recently is this TED talk by Daniel Gilbert.

Note the bonus discussion of terrorism at the very end.

EDITED TO ADD (7/14): Bank robbery and the Dunning-Kruger effect.

Posted on June 22, 2012 at 7:20 AM

The Vulnerabilities Market and the Future of Security

Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It's not just software companies, who sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it's not only criminal organizations, who pay for vulnerabilities they can exploit. Now there are governments, and companies who sell to governments, who buy vulnerabilities with the intent of keeping them secret so they can exploit them.

This market is larger than most people realize, and it's becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from "a U.S. government contractor." (At first I didn't believe the story or the price list, but I have been convinced that they are both true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.

This is very different from 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits, and from a 2010 survey that implied there wasn't much money in selling zero days. The market has matured substantially in the past few years.

This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full-disclosure—announcing the vulnerability publicly and damn the consequences—to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that—at least in most cases—is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on—and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies (like the FBI and the German police) that are trying to build targeted Internet surveillance tools, to intelligence agencies (like the NSA) that are trying to build mass Internet surveillance tools, to military organizations that are trying to build cyber-weapons.

All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the “equities issue,” and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone’s communications more secure, or they could exploit the flaw to eavesdrop on others—while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I’ve heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It’s not just that they patch the vulnerabilities that are made public—the fear of bad press makes them implement more secure software development processes. It’s another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I’d be the first to admit that this isn’t perfect—there’s a lot of very poorly written software still out there—but it’s the best incentive we have.

We've always expected the NSA, and those like them, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.
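This shrinking incentive can be written as a simple expected-cost comparison: the vendor invests up front only when the expected cost of a public disclosure outweighs the cost of building securely. An illustrative sketch only; the function, its parameter values, and the explicit disclosure-probability term are my own framing, not from the essay:

```python
def invests_in_secure_development(secure_dev_cost: float,
                                  disclosure_prob: float,
                                  bad_press_cost: float,
                                  patch_cost: float) -> bool:
    """A rational vendor builds securely up front only when the expected
    cost of a publicly disclosed vulnerability exceeds the up-front cost."""
    expected_disclosure_cost = disclosure_prob * (bad_press_cost + patch_cost)
    return secure_dev_cost < expected_disclosure_cost

# When disclosure is likely, the incentive holds:
print(invests_in_secure_development(100_000, 0.9, 500_000, 50_000))   # True
# When secret markets make disclosure rare, it evaporates:
print(invests_in_secure_development(100_000, 0.05, 500_000, 50_000))  # False
```

The point of the sketch is the `disclosure_prob` term: a market that pays hackers to keep vulnerabilities secret drives that probability toward zero, and with it the vendor's reason to invest.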

As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is “security for the 1%.” And it makes the rest of us less safe.

This essay previously appeared on Forbes.com.

Edited to add (6/6): Brazilian Portuguese translation here.

EDITED TO ADD (6/12): This presentation makes points similar to those in my essay.

Posted on June 1, 2012 at 6:48 AM

