Entries Tagged "debates"


Me on Cyberwar

During the cyberwar debate a few months ago, I said this:

If we frame this discussion as a war discussion, then what you do when there’s a threat of war is you call in the military and you get military solutions. You get lockdown; you get an enemy that needs to be subdued. If you think about these threats in terms of crime, you get police solutions. And as we have this debate, not just on stage, but in the country, the way we frame it, the way we talk about it; the way the headlines read, determine what sort of solutions we want, make us feel better. And so the threat of cyberwar is being grossly exaggerated and I think it’s being done for a reason. This is a power grab by government. What Mike McConnell didn’t mention is that grossly exaggerating a threat of cyberwar is incredibly profitable.

More of my writings on cyberwar, and the debate, here.

Posted on October 1, 2010 at 12:10 PM

Consumerization and Corporate IT Security

If you’re a typical wired American, you’ve got a bunch of tech tools you like and a bunch more you covet. You have a cell phone that can easily text. You’ve got a laptop configured just the way you want it. Maybe you have a Kindle for reading, or an iPad. And when the next new thing comes along, some of you will line up on the first day it’s available.

So why can’t work keep up? Why are you forced to use an unfamiliar, and sometimes outdated, operating system? Why do you need a second laptop, maybe an older and clunkier one? Why do you need a second cell phone with a new interface, or a BlackBerry, when your phone already does e-mail? Or a second BlackBerry tied to corporate e-mail? Why can’t you use the cool stuff you already have?

More and more companies are letting you. They’re giving you an allowance and letting you buy whatever laptop you want, and connect to the corporate network with whatever device you choose. They’re letting you use whatever cell phone you have, whatever portable e-mail device you have, whatever you personally need to get your job done. And the security office is freaking out.

You can’t blame them, really. Security is hard enough when you have control of the hardware, operating system and software. Lose control of any of those things, and the difficulty goes through the roof. How do you ensure that the employee devices are secure, and have up-to-date security patches? How do you control what goes on them? How do you deal with the tech support issues when they fail? How do you even begin to manage this logistical nightmare? Better to dig your heels in and say “no.”

But security is on the losing end of this argument, and the sooner it realizes that, the better.

The meta-trend here is consumerization: cool technologies show up for the consumer market before they’re available to the business market. Every corporation is under pressure from its employees to allow them to use these new technologies at work, and that pressure is only getting stronger. Younger employees simply aren’t going to stand for using last year’s stuff, and they’re not going to carry around a second laptop. They’re either going to figure out ways around the corporate security rules, or they’re going to take another job with a more trendy company. Either way, senior management is going to tell security to get out of the way. It might even be the CEO, who wants to get to the company’s databases from his brand new iPad, driving the change. Either way, it’s going to be harder and harder to say no.

At the same time, cloud computing makes this easier. More and more, employee computing devices are nothing more than dumb terminals with a browser interface. When corporate e-mail is all webmail, when corporate documents are all on Google Docs, and when all the specialized applications have a web interface, it’s easier to allow employees to use any up-to-date browser. It’s what companies are already doing with their partners, suppliers, and customers.

Also on the plus side, technology companies have woken up to this trend and — from Microsoft and Cisco on down to the startups — are trying to offer security solutions. Like everything else, it’s a mixed bag: some of them will work and some of them won’t, most of them will need careful configuration to work well, and few of them will get it right. The result is that we’ll muddle through, as usual.

Security is always a tradeoff, and security decisions are often made for non-security reasons. In this case, the right decision is to sacrifice security for convenience and flexibility. Corporations want their employees to be able to work from anywhere, and they’re going to have to loosen control over the tools they allow in order to get it.

This essay first appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security Magazine. You can read Marcus’s half here.

Posted on September 7, 2010 at 7:25 AM

Hiring Hackers

Any essay on hiring hackers quickly gets bogged down in definitions. What is a hacker, and how is he different from a cracker? I have my own definitions, but I’d rather define the issue more specifically: Would you hire someone convicted of a computer crime to fill a position of trust in your computer network? Or, more generally, would you hire someone convicted of a crime for a job related to that crime?

The answer, of course, is “it depends.” It depends on the specifics of the crime. It depends on the ethics involved. It depends on the recidivism rate of the type of criminal. It depends a whole lot on the individual.

Would you hire a convicted pedophile to work at a day care center? Would you hire Bernie Madoff to manage your investment fund? The answer is almost certainly no to those two — but you might hire a convicted bank robber to consult on bank security. You might hire someone who was convicted of false advertising to write ad copy for your next marketing campaign. And you might hire someone who ran a chop shop to fix your car. It depends on the person and the crime.

It can get even murkier. Would you hire a CIA-trained assassin to be a bodyguard? Would you put a general who led a successful attack in charge of defense? What if they were both convicted of crimes in whatever country they were operating in? There are different legal and ethical issues, to be sure, but in both cases the people learned a certain set of skills regarding offense that could be transferable to defense.

Which brings us back to computers. Hacking is primarily a mindset: a way of thinking about security. Its primary focus is on attacking systems, but it’s invaluable to the defense of those systems as well. Because computer systems are so complex, defending them often requires people who can think like attackers.

Admittedly, there’s a difference between thinking like an attacker and acting like a criminal, and between researching vulnerabilities in fielded systems and exploiting those vulnerabilities for personal gain. But there is a huge variability in computer crime convictions, and — at least in the early days — many hacking convictions were unjust and unfair. And there’s also a difference between someone’s behavior as a teenager and his behavior later in life. Additionally, there might very well be a difference between someone’s behavior before and after a hacking conviction. It all depends on the person.

An employer’s goal should be to hire moral and ethical people with the skill set required to do the job. And while a hacking conviction is certainly a mark against a person, it isn’t always grounds for complete non-consideration.

“We don’t hire hackers” and “we don’t hire felons” are coarse generalizations, in the same way that “we only hire people with this or that security certification” is. They work — you’re less likely to hire the wrong person if you follow them — but they’re both coarse and flawed. Just as all potential employees with certifications aren’t automatically good hires, all potential employees with hacking convictions aren’t automatically bad hires. Sure, it’s easier to hire people based on things you can learn from checkboxes, but you won’t get the best employees that way. It’s far better to look at the individual, and put those checkboxes into context. But we don’t always have time to do that.

Last winter, a Minneapolis attorney who works to get felons a fair shake after they’ve served their time told of a sign he saw: “Snow shovelers wanted. Felons need not apply.” It’s not good for society if felons who have served their time can’t even get jobs shoveling snow.

This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus’s half is here.

Posted on June 10, 2010 at 6:34 AM

Preventing Terrorist Attacks in Crowded Areas

On the New York Times Room for Debate Blog, I — along with several other people — was asked about how to prevent terrorist attacks in crowded areas. This is my response.

In the wake of Saturday’s failed Times Square car bombing, it’s natural to ask how we can prevent this sort of thing from happening again. The answer is to stop focusing on the specifics of what actually happened, and instead think about the threat in general.

Think about the security measures commonly proposed. Cameras won’t help. They don’t prevent terrorist attacks, and their forensic value after the fact is minimal. In the Times Square case, surely there’s enough other evidence — the car’s identification number, the auto body shop the stolen license plates came from, the name of the fertilizer store — to identify the guy. We will almost certainly not need the camera footage. The images released so far, like the images in so many other terrorist attacks, may make for exciting television, but their value to law enforcement officers is limited.

Checkpoints won’t help, either. You can’t check everybody and everything. There are too many people to check, and too many train stations, buses, theaters, department stores and other places where people congregate. Patrolling guards, bomb-sniffing dogs, chemical and biological weapons detectors: they all suffer from similar problems. In general, focusing on specific tactics or defending specific targets doesn’t make sense. They’re inflexible; possibly effective if you guess the plot correctly, but completely ineffective if you don’t. At best, the countermeasures just force the terrorists to make minor changes in their tactics and targets.

It’s much smarter to spend our limited counterterrorism resources on measures that don’t focus on the specific. It’s more efficient to spend money on investigating and stopping terrorist attacks before they happen, and responding effectively to any that occur. This approach works because it’s flexible and adaptive; it’s effective regardless of what the bad guys are planning for next time.

After the Christmas Day airplane bombing attempt, I was asked how we can better protect our airplanes from terrorist attacks. I pointed out that the event was a security success — the plane landed safely, nobody was hurt, a terrorist was in custody — and that the next attack would probably have nothing to do with explosive underwear. After the Moscow subway bombing, I wrote that overly specific security countermeasures like subway cameras and sensors were a waste of money.

Now we have a failed car bombing in Times Square. We can’t protect against the next imagined movie-plot threat. Isn’t it time to recognize that the bad guys are flexible and adaptive, and that we need the same quality in our countermeasures?

I know, nothing I haven’t said many times before.

Steven Simon likes cameras, although his arguments are more movie-plot than real. Michael Black, Noah Shachtman, Michael Tarr, and Jeffrey Rosen all write about the limitations of security cameras. Paul Ekman wants more people. And Richard Clarke has a nice essay about how we shouldn’t panic.

Posted on May 4, 2010 at 1:31 PM

Anonymity and the Internet

Universal identification is portrayed by some as the holy grail of Internet security. Anonymity is bad, the argument goes; and if we abolish it, we can ensure only the proper people have access to their own information. We’ll know who is sending us spam and who is trying to hack into corporate networks. And when there are massive denial-of-service attacks, such as those against Estonia or Georgia or South Korea, we’ll know who was responsible and take action accordingly.

The problem is that it won’t work. Any design of the Internet must allow for anonymity. Universal identification is impossible. Even attribution — knowing who is responsible for particular Internet packets — is impossible. Attempting to build such a system is futile, and will only give criminals and hackers new ways to hide.

Imagine a magic world in which every Internet packet could be traced to its origin. Even in this world, our Internet security problems wouldn’t be solved. There’s a huge gap between proving that a packet came from a particular computer and that a packet was directed by a particular person. This is the exact problem we have with botnets, or pedophiles storing child porn on innocents’ computers. In these cases, we know the origins of the DDoS packets and the spam; they’re from legitimate machines that have been hacked. Attribution isn’t as valuable as you might think.

Implementing an Internet without anonymity is very difficult, and causes its own problems. In order to have perfect attribution, we’d need agencies — real-world organizations — to provide Internet identity credentials based on other identification systems: passports, national identity cards, driver’s licenses, whatever. Sloppier identification systems, based on things such as credit cards, are simply too easy to subvert. We have nothing that comes close to this global identification infrastructure. Moreover, centralizing information like this actually hurts security because it makes identity theft that much more profitable a crime.

And realistically, any theoretical ideal Internet would need to allow people access even without their magic credentials. People would still use the Internet at public kiosks and at friends’ houses. People would lose their magic Internet tokens just like they lose their driver’s licenses and passports today. The legitimate bypass mechanisms would allow even more ways for criminals and hackers to subvert the system.

On top of all this, the magic attribution technology doesn’t exist. Bits are bits; they don’t come with identity information attached to them. Every software system we’ve ever invented has been successfully hacked, repeatedly. We simply don’t have anywhere near the expertise to build an airtight attribution system.

Not that it really matters. Even if everyone could trace all packets perfectly, to the person of origin and not just the computer, anonymity would still be possible. It would just take one person to set up an anonymity server. If I wanted to send a packet anonymously to someone else, I’d just route it through that server. For even greater anonymity, I could route it through multiple servers. This is called onion routing and, with appropriate cryptography and enough users, it adds anonymity back to any communications system that prohibits it.
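To make that concrete, here’s a toy sketch of onion routing’s layered encryption in Python, using the cryptography package’s Fernet primitive as a stand-in for the public-key cryptography a real system would use. It’s purely illustrative, not how Tor or any deployed network actually works.

```python
# Toy illustration of onion routing's layered encryption (not a real
# anonymity system). Requires the 'cryptography' package.
from cryptography.fernet import Fernet

# Each relay has its own key. In a real system these would be the relays'
# public keys, learned from a directory; symmetric Fernet keys are used
# here only to keep the sketch short.
relays = [Fernet(Fernet.generate_key()) for _ in range(3)]

# The sender wraps the message in one layer per relay, innermost layer
# first, so each relay can peel off exactly one layer.
message = b"hello, anonymously"
onion = message
for relay in reversed(relays):
    onion = relay.encrypt(onion)

# Each relay in turn removes its layer and forwards what remains. Only
# the last relay sees the plaintext, and no single relay learns both who
# sent the message and what it says.
for relay in relays:
    onion = relay.decrypt(onion)

assert onion == message
```

The point is structural: each relay strips exactly one layer, so as long as the relays aren’t all colluding, the link between sender and content is broken.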

Attempts to banish anonymity from the Internet won’t affect those savvy enough to bypass it, would cost billions, and would have only a negligible effect on security. What such attempts would do is affect average users’ access to free speech, including those who use the Internet’s anonymity to survive: dissidents in Iran, China, and elsewhere.

Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you’ll never truly know where a packet came from. Work on the problems you can solve: software that’s secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we’re doing, and they’ll do more to improve security than trying to fix insoluble problems.

The whole attribution problem is very similar to the copy-protection/digital-rights-management problem. Just as it’s impossible to make specific bits not copyable, it’s impossible to know where specific bits came from. Bits are bits. They don’t naturally come with restrictions on their use attached to them, and they don’t naturally come with author information attached to them. Any attempts to circumvent this limitation will fail, and will increasingly need to be backed up by the sort of real-world police-state measures that the entertainment industry is demanding in order to make copy-protection work. That’s how China does it: police, informants, and fear.

Just as the music industry needs to learn that the world of bits requires a different business model, law enforcement and others need to understand that the old ideas of identification don’t work on the Internet. For good or for bad, whether you like it or not, there’s always going to be anonymity on the Internet.

This essay originally appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus’s response below my essay.

EDITED TO ADD (2/5): Microsoft’s Craig Mundie wants to abolish anonymity as well.

What Mundie is proposing is to impose authentication. He draws an analogy to automobile use. If you want to drive a car, you have to have a license (not to mention an inspection, insurance, etc.). If you do something bad with that car, like break a law, there is the chance that you will lose your license and be prevented from driving in the future. In other words, there is a legal and social process for imposing discipline. Mundie imagines three tiers of Internet ID: one for people, one for machines, and one for programs (which often act as proxies for the other two).

Posted on February 3, 2010 at 6:16 AM

Is Antivirus Dead?

This essay previously appeared in Information Security Magazine, as the second half of a point-counterpoint with Marcus Ranum. You can read his half here as well.

Security is never black and white. If someone asks, “for best security, should I do A or B?” the answer almost invariably is both. But security is always a trade-off. Often it’s impossible to do both A and B — there’s no time to do both, it’s too expensive to do both, or whatever — and you have to choose. In that case, you look at A and B and you make your best choice. But it’s almost always more secure to do both.

Yes, antivirus programs have been getting less effective as new viruses are more frequent and existing viruses mutate faster. Yes, antivirus companies are forever playing catch-up, trying to create signatures for new viruses. Yes, signature-based antivirus software won’t protect you when a virus is new, before the signature is added to the detection program. Antivirus is by no means a panacea.
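At its core, signature-based detection is just pattern matching against a list of known-bad byte sequences, which is why it can’t catch anything it hasn’t seen before. Here’s a deliberately simplified Python sketch of the idea; the signature names and byte patterns are invented for illustration, and real products add unpackers, heuristics, and far faster matching engines.

```python
# A toy signature scanner: flag a file if it contains any known-bad byte
# pattern. Signature names and patterns are invented for illustration.
SIGNATURES = {
    "Example.Worm.A": b"\xde\xad\xbe\xef",
    "Example.Trojan.B": b"EVIL_PAYLOAD",
}

def scan_file(path: str) -> list[str]:
    """Return the names of any signatures found in the file at 'path'."""
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

# A brand-new virus matches nothing in SIGNATURES, which is exactly the
# window of exposure described above.
```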

On the other hand, an antivirus program with up-to-date signatures will protect you from a lot of threats. It’ll protect you against viruses, against spyware, against Trojans — against all sorts of malware. It’ll run in the background, automatically, and you won’t notice any performance degradation at all. And — here’s the best part — it can be free. AVG won’t cost you a penny. To me, this is an easy trade-off, certainly for the average computer user who clicks on attachments he probably shouldn’t click on, downloads things he probably shouldn’t download, and doesn’t understand the finer workings of Windows Personal Firewall.

Certainly security would be improved if people used whitelisting programs such as Bit9 Parity and Savant Protection — and I personally recommend Malwarebytes’ Anti-Malware — but a lot of users are going to have trouble with this. The average user will probably just swat away the “you’re trying to run a program not on your whitelist” warning message or — even worse — wonder why his computer is broken when he tries to run a new piece of software. The average corporate IT department doesn’t have a good idea of what software is running on all the computers within the corporation, and doesn’t want the administrative overhead of managing all the change requests. And whitelists aren’t a panacea, either: they don’t defend against malware that attaches itself to data files (think Word macro viruses), for example.
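Whitelisting inverts that logic: instead of blocking known-bad programs, only known-good ones are allowed to run. Here’s a minimal sketch of the idea in Python, with an empty placeholder allow-list; real products also have to handle software updates, code signing, and policy exceptions, which is where the administrative overhead comes from.

```python
import hashlib
import subprocess

# SHA-256 digests of approved executables would go here; left empty as a
# placeholder for this sketch.
APPROVED_SHA256: set[str] = set()

def run_if_whitelisted(path: str) -> None:
    """Run the program only if its hash is on the approved list."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest in APPROVED_SHA256:
        subprocess.run([path], check=True)
    else:
        print(f"Blocked: {path} is not on the whitelist")
```

Note the failure mode: anything not on the list is blocked, which is the warning message the average user learns to swat away.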

One of the newest trends in IT is consumerization, and if you don’t already know about it, you soon will. It’s the idea that new technologies, the cool stuff people want, will become available for the consumer market before they become available for the business market. What it means to business is that people — employees, customers, partners — will access business networks from wherever they happen to be, with whatever hardware and software they have. Maybe it’ll be the computer you gave them when you hired them. Maybe it’ll be their home computer, the one their kids use. Maybe it’ll be their cell phone or PDA, or a computer in a hotel’s business center. Your business will have no way to know what they’re using, and — more importantly — you’ll have no control.

In this kind of environment, computers are going to connect to each other without a whole lot of trust between them. Untrusted computers are going to connect to untrusted networks. Trusted computers are going to connect to untrusted networks. The whole idea of “safe computing” is going to take on a whole new meaning — every man for himself. A corporate network is going to need a simple, dumb, signature-based antivirus product at the gateway of its network. And a user is going to need a similar program to protect his computer.

Bottom line: antivirus software is neither necessary nor sufficient for security, but it’s still a good idea. It’s not a panacea that magically makes you safe, nor is it obsolete in the face of current threats. As countermeasures go, it’s cheap, it’s easy, and it’s effective. I haven’t dumped my antivirus program, and I have no intention of doing so anytime soon.

Posted on November 10, 2009 at 6:31 AM

Real-World Access Control

Access control is difficult in an organizational setting. On one hand, every employee needs enough access to do his job. On the other hand, every time you give an employee more access, there’s more risk: he could abuse that access, or lose information he has access to, or be socially engineered into giving that access to a malfeasant. So a smart, risk-conscious organization will give each employee the exact level of access he needs to do his job, and no more.

Over the years, there’s been a lot of work put into role-based access control (RBAC). But despite the large number of academic papers and high-profile security products, most organizations don’t implement it at all, with the predictable security problems as a result.
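The mechanism itself is easy to sketch: users map to roles, roles map to permissions, and an access check walks that mapping. The few lines of Python below use invented role and permission names; as the rest of this essay argues, the hard part isn’t the code, it’s keeping those mappings accurate in a real organization.

```python
# Bare-bones role-based access control: users -> roles -> permissions.
ROLE_PERMISSIONS = {
    "hr_clerk": {"read_employee_record"},
    "hr_manager": {"read_employee_record", "update_employee_record"},
}

USER_ROLES = {
    "alice": {"hr_clerk"},
    "bob": {"hr_manager"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Return True if any of the user's roles grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

# is_allowed("alice", "update_employee_record") -> False
# is_allowed("bob", "update_employee_record")   -> True
```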

Regularly we read stories of employees abusing their database access-control privileges for personal reasons: medical records, tax records, passport records, police records. NSA eavesdroppers spy on their wives and girlfriends. Departing employees take corporate secrets with them.

A spectacular access control failure occurred in the UK in 2007. An employee of Her Majesty’s Revenue & Customs had to send a couple of thousand sample records from a database on all children in the country to the National Audit Office. But it was easier for him to copy the entire database of 25 million people onto a couple of discs and put them in the mail than it was to select out just the records needed. Unfortunately, the discs got lost in the mail, and the story was a huge embarrassment for the government.

Eric Johnson at Dartmouth’s Tuck School of Business has been studying the problem, and his results won’t startle anyone who has thought about it at all. RBAC is very hard to implement correctly. Organizations generally don’t even know who has what role. The employee doesn’t know, the boss doesn’t know (and these days the employee might have more than one boss), and senior management certainly doesn’t know. There’s a reason RBAC came out of the military; in that world, command structures are simple and well-defined.

Even worse, employees’ roles change all the time (Johnson chronicled one business group of 3,000 people that made 1,000 role changes in just three months), and it’s often not obvious what information an employee needs until he actually needs it. And information simply isn’t that granular. Just as it’s much easier to give someone access to an entire file cabinet than to only the particular files he needs, it’s much easier to give someone access to an entire database than only the particular records he needs.

This means that organizations either over-entitle or under-entitle employees. But since getting the job done is more important than anything else, organizations tend to over-entitle. Johnson estimates that 50 percent to 90 percent of employees are over-entitled in large organizations. In the uncommon instance where an employee needs access to something he normally doesn’t have, there’s generally some process for him to get it. And access is almost never revoked once it’s been granted. In large formal organizations, Johnson was able to predict how long an employee had worked there based on how much access he had.

Clearly, organizations can do better. Johnson’s current work involves building access-control systems with easy self-escalation, audit to make sure that power isn’t abused, violation penalties (Intel, for example, issues “speeding tickets” to violators), and compliance rewards. His goal is to implement incentives and controls that manage access without making people too risk-averse.
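Here’s a rough sketch of what self-escalation with audit could look like, in Python. This is an illustration of the concept, not Johnson’s actual system: access is granted immediately so work isn’t blocked, but every escalation is recorded so reviewers can hand out the equivalent of speeding tickets later.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("access_audit")

# (user, resource) pairs that have been granted through self-escalation.
granted: set[tuple[str, str]] = set()

def self_escalate(user: str, resource: str, reason: str) -> None:
    """Grant access immediately, but record who, what, when, and why."""
    granted.add((user, resource))
    audit_log.info(
        "ESCALATION user=%s resource=%s reason=%r time=%s",
        user,
        resource,
        reason,
        datetime.now(timezone.utc).isoformat(),
    )

# Example: self_escalate("alice", "payroll_db", "covering for Bob this week")
```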

In the end, a perfect access control system just isn’t possible; organizations are simply too chaotic for it to work. And any good system will allow a certain number of access control violations, if they’re made in good faith by people just trying to do their jobs. The “speeding ticket” analogy is better than it looks: we post limits of 55 miles per hour, but generally don’t start ticketing people unless they’re going over 70.

This essay previously appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus’s response here — after you answer some nosy questions to get a free account.

Posted on September 3, 2009 at 12:54 PM
