Entries Tagged "economics of security"

Page 25 of 39

"Cyber Crime Toolkits" Hit the News

On the BBC website:

“They are starting to pop up left and right,” said Tim Eades from security company Sana, of the sites offering downloadable hacking tools. “It’s the classic verticalisation of a market as it starts to mature.”

Malicious hackers had evolved over the last few years, he said, and were now selling the tools they used to use to the growing numbers of fledgling cyber thieves.

Mr Eades said some hacking groups offer boutique virus writing services that produce malicious programs that security software will not spot. Individual malicious programs cost up to £17 (25 euros), he said.

At the top end of the scale, said Mr Eades, were tools like the notorious MPack which costs up to £500.

The regular updates for the software ensure it uses the latest vulnerabilities to help criminals hijack PCs via booby-trapped webpages. It also includes a statistical package that lets owners know how successful their attack has been and where victims are based.

In one sense, there’s nothing new here. There have been rootkits and virus construction kits available on the Internet for years. The very definition of a “script kiddie” is someone who uses these tools without really understanding them. What is new is the market: these new tools aren’t for wannabe hackers, they’re for criminals. And with the new market comes a for-profit business model.

Posted on September 5, 2007 at 7:10 AM

The TSA and the Case of the Strange Battery Charger

A TSA screener doesn’t like the look of a homemade battery charger, and refuses to let it on an airplane. Interesting story, both for the escalation procedure the TSA screener followed, and this final observation:

But these are the times we live in. A handful of people with no knowledge of physics, engineering, or pyrotechnics are responsible for determining what is and what is not safe to bring on a plane. They’re paid minimum wage and told to panic if they see something they don’t recognize. Does this make me feel safer? It doesn’t really matter. Implementing real security would bring the cost of flying up, which would likely cause a collapse of the airborne transportation network this country has worked so hard to build up.

The UK banned laptop computers in carry-on luggage for a few days and quickly reversed the idea. The lack of laptops would have made flying unattractive to business professionals: security would have cost more than money, and many passengers wouldn't have accepted it.

So the TSA finally let me onto my flight with the two devices they told me they weren’t going to let me take on my flight. They told me the device looked like an I.E.D., then let me on the plane with it.

Does that mean I can bring them on my flight next week?

And that’s the problem: the TSA is both arbitrary and capricious, and it’s impossible to follow the rules because no one knows how they will be applied.

Posted on July 19, 2007 at 6:53 AM

Security ROI

Interesting essay on security and return on investment (ROI):

Let’s get back to ROI. The major problem the ROSI crowd has is that they are trying to speak the language of their managers, who select projects based on ROI. There is no problem with selecting projects based on ROI, as long as the project is a wealth-creation project and not a wealth-preservation project.

Security managers shouldn’t be afraid to drop the term ROI and instead say, “My project will cost $1,000 but save the company $10,000.” Saving money, whether you call it wealth preservation or loss avoidance, is good.
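To make the distinction concrete, here's a minimal sketch of the loss-avoidance arithmetic, using the essay's own hypothetical figures; the `rosi` helper name is just for illustration:

```python
# The essay's example in return-on-security-investment (ROSI) terms:
# a $1,000 project expected to avoid $10,000 in losses.

def rosi(avoided_loss: float, cost: float) -> float:
    """Net avoided loss relative to project cost."""
    return (avoided_loss - cost) / cost

project_cost = 1_000
avoided_loss = 10_000

net_savings = avoided_loss - project_cost
print(f"Net savings: ${net_savings:,}")                 # Net savings: $9,000
print(f"ROSI: {rosi(avoided_loss, project_cost):.0%}")  # ROSI: 900%
```

Note that nothing here is a "return" in the wealth-creation sense: the $9,000 is a loss that doesn't happen, which is exactly the framing the essay recommends.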

Posted on July 14, 2007 at 6:54 AM

Cocktail Condoms

They’re protective covers that go over your drink and “protect” against someone trying to slip a Mickey Finn (or whatever they’re called these days):

The concept behind the cocktail cover is fairly simple. About the size of a coaster, it can be used to cap a drink that goes unattended. When a person returns to a beverage, there is a layer that can be pulled back, leaving a thin sheath protecting the cocktail. That can be punctured with a straw or pulled off entirely—either way the drinker will know that the cocktail has not been tampered with.

I’m sure there are many ways to defeat this security device if you’re so inclined: a syringe, affixing a new cover after you tamper with the drink, and so on. And this is exactly the sort of rare risk we’re likely to overreact to. But to me, the most interesting aspect of this story is the agenda. If these things become common, it won’t be because of security. It will be because of advertising:

Barry said that companies could advertise on the cocktail covers, likely covering the cost of production. Each cover, he said, costs less than 10 cents to make.

Posted on June 25, 2007 at 6:25 AM

Dan Geer on Trade-Offs and Monoculture

In the April 2007 issue of Queue, Dan Geer writes about security trade-offs, monoculture, and genetic diversity in honeybees:

Security people are never in charge unless an acute embarrassment has occurred. Otherwise, their advice is tempered by “economic reality,” which is to say that security is means, not an end. This is as it should be. Since means are about tradeoffs, security is about tradeoffs, but you already knew that.

Posted on May 17, 2007 at 6:58 AM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the 1968 Truth in Lending Law limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. What this will do is enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

1933 Anti-Spam Doorbell

Here’s a great description of an anti-spam doorbell from 1933. A visitor had to deposit a dime into a slot to make the doorbell ring. If the homeowner appreciated the visit, he would return the dime. Otherwise, the dime became the cost of disturbing the homeowner.

This kind of system has been proposed for e-mail as well: the sender has to pay the receiver—or someone else in the system—a nominal amount for each e-mail sent. This money is returned if the e-mail is wanted, and forfeited if it is spam. The result would be to raise the cost of sending spam to the point where it is uneconomical.
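The mechanics of that sender-pays scheme can be sketched as a toy simulation; everything here (the class, the account names, the ten-cent bond) is hypothetical, chosen only to mirror the doorbell's dime-in-the-slot design:

```python
# Toy model of the sender-pays ("attention bond") anti-spam scheme.
# Amounts are in cents. Each message escrows a small bond: refunded if
# the recipient marks the mail as wanted, forfeited to the recipient
# if it is spam.

class BondedMailbox:
    BOND = 10  # cents -- the electronic equivalent of the dime in the slot

    def __init__(self):
        self.escrow = {}  # message id -> the sender's account

    def send(self, sender: dict, msg_id: str) -> bool:
        """Debit the bond and hold it until the recipient judges the mail."""
        if sender["cents"] < self.BOND:
            return False  # sender can't afford to ring the doorbell
        sender["cents"] -= self.BOND
        self.escrow[msg_id] = sender
        return True

    def judge(self, msg_id: str, wanted: bool, recipient: dict) -> None:
        """Refund the bond for wanted mail; award it to the recipient otherwise."""
        sender = self.escrow.pop(msg_id)
        if wanted:
            sender["cents"] += self.BOND      # the homeowner returns the dime
        else:
            recipient["cents"] += self.BOND   # the sender forfeits it

friend, spammer, me = {"cents": 100}, {"cents": 100}, {"cents": 0}
box = BondedMailbox()

box.send(friend, "m1")
box.judge("m1", wanted=True, recipient=me)    # friend gets the dime back
box.send(spammer, "m2")
box.judge("m2", wanted=False, recipient=me)   # spammer forfeits the dime

print(friend["cents"], spammer["cents"], me["cents"])  # 100 90 10
```

The point of the sketch is that the economics only work if the `sender` account really belongs to the sender, which is precisely the assumption the rest of the post attacks.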

I think it’s worth comparing the two systems—the doorbell system and the e-mail system—to demonstrate why it won’t work for spam.

The doorbell system fails for three reasons: the percentage of annoying visitors is small enough to make the system largely unnecessary, visitors don’t generally have dimes on them (presumably fixable if the system becomes ubiquitous), and it’s too easy to successfully bypass the system by knocking (not true for an apartment building).

The anti-spam system doesn’t suffer from the first two problems: spam is an enormous percentage of total e-mail, and an automated accounting system makes the financial mechanics easy. But the anti-spam system is too easy to bypass, and it’s too easy to hack. And once you set up a financial system, you’re simply inviting hacks.

The anti-spam system fails because spammers don’t have to send e-mail directly—they can take over innocent computers and send it from them. So it’s the people whose computers have been hacked into, victims in their own right, who will end up paying for spam. This risk can be limited by letting people put an upper limit on the money in their accounts, but it is still serious.

And criminals can exploit the system in the other direction, too. They could hack into innocent computers and have them send “spam” to the criminals’ own e-mail addresses, collecting the money in the process.

Trying to impose some sort of economic penalty on unwanted e-mail is a good idea, but it won’t work unless the endpoints are trusted. And we’re nowhere near that trust today.

Posted on May 10, 2007 at 5:57 AM

Do We Really Need a Security Industry?

Last week I attended the Infosecurity Europe conference in London. Like at the RSA Conference in February, the show floor was chockablock full of network, computer and information security companies. As I often do, I mused about what it means for the IT industry that there are thousands of dedicated security products on the market: some good, more lousy, many difficult even to describe. Why aren’t IT products and services naturally secure, and what would it mean for the industry if they were?

I mentioned this in an interview with Silicon.com, and the published article seems to have caused a bit of a stir. Rather than letting people wonder what I really meant, I thought I should explain.

The primary reason the IT security industry exists is because IT products and services aren’t naturally secure. If computers were already secure against viruses, there wouldn’t be any need for antivirus products. If bad network traffic couldn’t be used to attack computers, no one would bother buying a firewall. If there were no more buffer overflows, no one would have to buy products to protect against their effects. If the IT products we purchased were secure out of the box, we wouldn’t have to spend billions every year making them secure.

Aftermarket security is actually a very inefficient way to spend our security dollars; it may compensate for insecure IT products, but doesn’t help improve their security. Additionally, as long as IT security is a separate industry, there will be companies making money based on insecurity—companies who will lose money if the internet becomes more secure.

Fold security into the underlying products, and the companies marketing those products will have an incentive to invest in security upfront, to avoid having to spend more cash obviating the problems later. Their profits would rise in step with the overall level of security on the internet. Initially we’d still be spending a comparable amount of money per year on security—on secure development practices, on embedded security and so on—but some of that money would be going into improving the quality of the IT products we’re buying, and would reduce the amount we spend on security in future years.

I know this is a utopian vision that I probably won’t see in my lifetime, but the IT services market is pushing us in this direction. As IT becomes more of a utility, users are going to buy a whole lot more services than products. And by nature, services are more about results than technologies. Service customers—whether home users or multinational corporations—care less and less about the specifics of security technologies, and increasingly expect their IT to be integrally secure.

Eight years ago, I formed Counterpane Internet Security on the premise that end users (big corporate users, in this case) really don’t want to have to deal with network security. They want to fly airplanes, produce pharmaceuticals or do whatever their core business is. They don’t want to hire the expertise to monitor their network security, and will gladly farm it out to a company that can do it for them. We provided an array of services that took day-to-day security out of the hands of our customers: security monitoring, security-device management, incident response. Security was something our customers purchased, but they purchased results, not details.

Last year BT bought Counterpane, further embedding network security services into the IT infrastructure. BT has customers that don’t want to deal with network management at all; they just want it to work. They want the internet to be like the phone network, or the power grid, or the water system; they want it to be a utility. For these customers, security isn’t even something they purchase: It’s one small part of a larger IT services deal. It’s the same reason IBM bought ISS: to be able to have a more integrated solution to sell to customers.

This is where the IT industry is headed, and when it gets there, there’ll be no point in user conferences like Infosec and RSA. They won’t go away; they’ll simply become industry conferences. If you want to measure progress, look at the demographics of these conferences. A shift toward infrastructure-geared attendees is a measure of success.

Of course, security products won’t disappear—at least, not in my lifetime. There’ll still be firewalls, antivirus software and everything else. There’ll still be startup companies developing clever and innovative security technologies. But the end user won’t care about them. They’ll be embedded within the services sold by large IT outsourcing companies like BT, EDS and IBM, or ISPs like EarthLink and Comcast. Or they’ll be a check-box item somewhere in the core switch.

IT security is getting harder—increasing complexity is largely to blame—and the need for aftermarket security products isn’t disappearing anytime soon. But there’s no earthly reason why users need to know what an intrusion-detection system with stateful protocol analysis is, or why it’s helpful in spotting SQL injection attacks. The whole IT security industry is an accident—an artifact of how the computer industry developed. As IT fades into the background and becomes just another utility, users will simply expect it to work—and the details of how it works won’t matter.

This was my 41st essay for Wired.com.

EDITED TO ADD (5/3): Commentary.

EDITED TO ADD (5/4): More commentary.

EDITED TO ADD (5/10): More commentary.

Posted on May 3, 2007 at 10:09 AM

Sponsor-Only Security at the 2012 London Olympics

If you want your security technology to be considered for the London Olympics, you have to be a major sponsor of the event.

…he casually revealed that because neither of these companies was a ‘major sponsor’ of the Olympics their technology could not be used.

Yes, you read that right: as far as the technology behind the security of the London Olympic Games is concerned, best of breed and suitability for purpose do not come into it; paying a large amount of money to the International Olympic Committee does.

I have repeatedly said that security is generally only part of a larger context, but this borders on ridiculous.

Posted on April 30, 2007 at 5:55 AM

