Entries Tagged "risk assessment"


Convicted Felons with Big Dogs

Here’s a security threat I’ll bet you never even considered before: convicted felons with large dogs:

The Contra Costa County board of supervisors [in California] unanimously supported on Tuesday prohibiting convicted felons from owning any dog that is aggressive or weighs more than 20 pounds, making it all but certain the proposal will become law when it formally comes before the board for approval Nov. 15.

These are not felons in jail. These are felons who have been released from jail after serving their time. They’re allowed to re-enter society, but letting them own a large dog would be just too much of a risk to the community?

Posted on October 28, 2005 at 12:17 PM

Research in Behavioral Risk Analysis

I am very interested in this kind of research:

Network Structure, Behavioral Considerations and Risk Management in Interdependent Security Games

Interdependent security (IDS) games model situations where each player has to determine whether or not to invest in protection or security against an uncertain event knowing that there is some chance s/he will be negatively impacted by others who do not follow suit. IDS games capture a wide variety of collective risk and decision-making problems that include airline security, corporate governance, computer network security and vaccinations against diseases. This research project will investigate the marriage of IDS models with network formation models developed from social network theory and apply these models to problems in network security. Behavioral and controlled experiments will examine how human participants actually make choices under uncertainty in IDS settings. Computational aspects of IDS models will also be examined. To encourage and induce individuals to invest in cost-effective protection measures for IDS problems, we will examine several risk management strategies designed to foster cooperative behavior that include providing risk information, communication with others, economic incentives, and tipping strategies.

The proposed research is interdisciplinary in nature and should serve as an exciting focal point for researchers in computer science, decision and management sciences, economics, psychology, risk management, and policy analysis. It promises to advance our understanding of decision-making under risk and uncertainty for problems that are commonly faced by individuals, organizations, and nations. Through advances in computational methods one should be able to apply IDS models to large-scale problems. The research will also focus on weak links in an interdependent system and suggest risk management strategies for reducing individual and societal losses in the interconnected world in which we live.
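
To make the abstract's core idea concrete: in its simplest two-player form, an interdependent security game weighs the cost of protecting yourself against both the direct risk you eliminate and the contagion risk from an unprotected partner that you cannot. Below is a minimal sketch in Python of that stylized two-player game; the parameter names and numbers (protection cost c, direct-loss probability p, contagion probability q, loss L) are illustrative choices of mine, not anything specified in the project description.

```python
# Stylized two-player interdependent security (IDS) game.
# All parameters are illustrative, not taken from the research abstract:
#   c: cost of investing in protection
#   p: probability of a direct loss if I do not protect
#   q: probability that a loss spreads to me from an unprotected partner
#   L: size of the loss

def expected_cost(i_invest: bool, other_invests: bool,
                  c: float, p: float, q: float, L: float) -> float:
    """Expected cost to one player, given both players' protection choices."""
    cost = c if i_invest else 0.0
    # Direct risk exists only if I do not protect myself.
    direct = 0.0 if i_invest else p * L
    # Contagion risk exists only if the other player is unprotected, and it
    # only adds a loss if I have not already suffered the direct loss myself.
    p_no_direct_loss = 1.0 if i_invest else 1.0 - p
    indirect = 0.0 if other_invests else p_no_direct_loss * q * L
    return cost + direct + indirect

def best_response(other_invests: bool, c: float, p: float, q: float, L: float) -> bool:
    """True if investing is cheaper in expectation, given the other's choice."""
    return (expected_cost(True, other_invests, c, p, q, L)
            < expected_cost(False, other_invests, c, p, q, L))

if __name__ == "__main__":
    c, p, q, L = 90.0, 0.2, 0.15, 500.0
    for other in (True, False):
        print(f"other invests={other}: my best response is to invest: "
              f"{best_response(other, c, p, q, L)}")
```

With these particular numbers, protecting yourself pays off only if the other player protects too; that tipping and coordination problem is exactly what the project proposes to study.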

Posted on September 15, 2005 at 7:05 AM

Talking to Strangers

In Beyond Fear I wrote: “Many children are taught never to talk to strangers, an extreme precaution with minimal security benefit.”

In talks, I’m even more direct. I think “don’t talk to strangers” is just about the worst possible advice you can give a child. Most people are friendly and helpful, and if a child is in distress, asking a stranger for help is probably the best possible thing he can do.

This advice would have helped Brennan Hawkins, the 11-year-old boy who was lost in the Utah wilderness for four days.

The parents said Brennan had seen people searching for him on horse and ATV, but avoided them because of what he had been taught.

“He stayed on the trail, he avoided strangers,” Jody Hawkins said. “His biggest fear, he told me, was that someone would steal him.”

They said they hadn’t talked to Brennan and his four siblings about what they should do about strangers if they were lost. “This may have come to a faster conclusion had we discussed that,” Toby Hawkins said.

In a world where good guys are common and bad guys are rare, assuming a random person is a good guy is a smart security strategy. We need to help children develop their natural intuition about risk, and not give them overbroad rules.

Also in Beyond Fear, I wrote:

As both individuals and a society, we can make choices about our security. We can choose more security or less security. We can choose greater impositions on our lives and freedoms, or fewer impositions. We can choose the types of risks and security solutions we’re willing to tolerate and decide that others are unacceptable.

As individuals, we can decide to buy a home alarm system to make ourselves more secure, or we can save the money because we don’t consider the added security to be worth it. We can decide not to travel because we fear terrorism, or we can decide to see the world because the world is wonderful. We can fear strangers because they might be attackers, or we can talk to strangers because they might become friends.

Posted on June 23, 2005 at 2:40 PM

Security Trade-Offs

An essay by an anonymous CSO. This is how it begins:

On any given day, we CSOs come to work facing a multitude of security risks. They range from a sophisticated hacker breaching the network to a common thug picking a lock on the loading dock and making off with company property. Each of these scenarios has a probability of occurring and a payout (in this case, a cost to the company) should it actually occur. To guard against these risks, we have a finite budget of resources in the way of time, personnel, money and equipment—poker chips, if you will.

If we’re good gamblers, we put those chips where there is the highest probability of winning a high payout. In other words, we guard against risks that are most likely to occur and that, if they do occur, will cost the company the most money. We could always be better, but as CSOs, I think we’re getting pretty good at this process. So lately I’ve been wondering—as I watch spending on national security continue to skyrocket, with diminishing marginal returns—why we as a nation can’t apply this same logic to national security spending. If we did this, the war on terrorism would look a lot different. In fact, it might even be over.

The whole thing is worth reading.
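
The poker-chip metaphor is expected-loss prioritization: weight each risk by its probability times its cost, then spend the finite budget on the largest expected losses first. A minimal sketch of that calculation, with risks and figures invented purely for illustration:

```python
# Rank risks by expected annual loss (probability x cost), the "poker chip"
# logic from the essay. The risks and figures below are invented examples.

risks = [
    # (name, annual probability of occurrence, cost if it occurs, in dollars)
    ("Network breach by a skilled attacker", 0.05, 2_000_000),
    ("Theft from the loading dock",          0.40,    25_000),
    ("Laptop lost with customer data",       0.20,   300_000),
]

ranked = sorted(
    ((name, prob * cost) for name, prob, cost in risks),
    key=lambda item: item[1],
    reverse=True,
)

for name, expected_loss in ranked:
    print(f"{name}: expected annual loss ${expected_loss:,.0f}")
```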

Posted on April 22, 2005 at 12:32 PM

Destroying the Earth

This is a fascinating—and detailed—analysis of what would be required to destroy the earth: materials, methods, feasibility, schedule. While the DHS might view this as a terrorist manual and get it removed from the Internet, the good news is that obliterating the planet isn’t an easy task.

Posted on March 15, 2005 at 5:30 PM

Linux Security

I’m a big fan of the Honeynet Project (and a member of their board of directors). They don’t have a security product; they do security research. Basically, they wire computers up with sensors, put them on the Internet, and watch hackers attack them.

They just released a report about the security of Linux:

Recent data from our honeynet sensor grid reveals that the average life expectancy to compromise for an unpatched Linux system has increased from 72 hours to 3 months. This means that an unpatched Linux system with commonly used configurations (such as server builds of RedHat 9.0 or Suse 6.2) has an online mean life expectancy of 3 months before being successfully compromised.

This is much greater than that of Windows systems, which have average life expectancies on the order of a few minutes.
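
The statistic itself is simple: the mean of the observed time from deployment to first compromise across the honeypots. A minimal sketch with invented observations (not the Honeynet Project's data or methodology):

```python
# Mean "life expectancy to compromise": average time from deployment until the
# first successful compromise. The observations are invented for illustration;
# honeypots never compromised during the study would need censoring-aware
# statistics, which this sketch ignores.
from datetime import datetime
from statistics import mean

observations = [
    # (deployed, first compromised)
    (datetime(2004, 8, 1), datetime(2004, 10, 28)),
    (datetime(2004, 8, 1), datetime(2004, 11, 15)),
    (datetime(2004, 9, 1), datetime(2004, 12, 20)),
]

days_to_compromise = [(hit - up).days for up, hit in observations]
print(f"mean time to compromise: {mean(days_to_compromise):.1f} days")
```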

It’s also important to remember that this paper focuses on vulnerable systems. The Honeynet researchers deployed almost 20 vulnerable systems to monitor hacker tactics, and found that no one was hacking the systems. That’s the real story: the hackers aren’t bothering with Linux. Two years ago, a vulnerable Linux system would be hacked in less than three days; now it takes three months.

Why? My guess is a combination of two reasons. One, Linux is that much more secure than Windows. Two, the bad guys are focusing on Windows—more bang for the buck.

See also here and here.

Posted on January 6, 2005 at 1:45 PM

World Series Security

The World Series is no stranger to security. Fans try to sneak into the ballpark without tickets, or with counterfeit tickets. Often food and alcohol are prohibited from being brought into the ballpark, to enforce the monopoly of the high-priced concessions. Violence is always a risk: both small fights and larger-scale riots that result from fans of both teams being in such close proximity—like the one that almost happened during the sixth game of the AL series.

Today, the new risk is terrorism. Security at the Olympics cost $1.5 billion, and $50 million each was spent at the Democratic and Republican conventions. There has been no public statement about the security bill for the World Series, but it’s reasonable to assume it will be impressive.

In our fervor to defend ourselves, it’s important that we spend our money wisely. Much of what people think of as security against terrorism doesn’t actually make us safer. Even in a world of high-tech security, the most important solution is the guy watching to keep beer bottles from being thrown onto the field.

Generally, security measures that defend specific targets are wasteful, because they can be avoided simply by switching targets. If we completely defend the World Series from attack, and the terrorists bomb a crowded shopping mall instead, little has been gained.

Even so, some high-profile locations, like national monuments and symbolic buildings, and some high-profile events, like political conventions and championship sporting events, warrant additional security. What additional measures make sense?

ID checks don’t make sense. Everyone has an ID. Even the 9/11 terrorists had IDs. What we want is to somehow check intention; is the person going to do something bad? But we can’t do that, so we check IDs instead. It’s a complete waste of time and money, and does absolutely nothing to make us safer.

Automatic face recognition systems don’t work. Computers that automatically pick terrorists out of crowds are a great movie plot device, but they don’t work in the real world. We don’t have a comprehensive photographic database of known terrorists. Even worse, the face recognition technology is so faulty that it often can’t make the matches even when we do have decent photographs. We tried it at the 2001 Super Bowl; it was a failure.

Airport-like attendee screening doesn’t work. The terrorists who took over the Russian school sneaked their weapons in long before their attack. And screening fans is only a small part of the solution. There are simply too many people, vehicles, and supplies moving in and out of a ballpark regularly. This kind of security failed at the Olympics, as reporters proved again and again that they could sneak all sorts of things into the stadiums undetected.

What does work is people: smart security officials watching the crowds. It’s called “behavior recognition,” and it requires trained personnel looking for suspicious behavior. Does someone look out of place? Is he nervous, and not watching the game? Is he not cheering, hissing, booing, and waving like a sports fan would?

This is what good policemen do all the time. It’s what Israeli airport security does. It works because instead of relying on checkpoints that can be bypassed, it relies on the human ability to notice something that just doesn’t feel right. It’s intuition, and it’s far more effective than computerized security solutions.

Will this result in perfect security? Of course not. No security measures are guaranteed; all we can do is reduce the odds. And the best way to do that is to pay attention. A few hundred plainclothes policemen, walking around the stadium and watching for anything suspicious, will provide more security against terrorism than almost anything else we can reasonably do.

And the best thing about policemen is that they’re adaptable. They can deal with terrorist threats, and they can deal with more common security issues, too.

Most of the threats at the World Series have nothing to do with terrorism; unruly or violent fans are a much more common problem. And more likely than a complex 9/11-like plot is a lone terrorist with a gun, a bomb, or something that will cause panic. But luckily, the security measures ballparks have already put in place to protect against the former also help protect against the latter.

Originally published by UPI.

Posted on October 25, 2004 at 6:31 PM

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets—keys—but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
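
As a small illustration of that principle, here is a sketch using Python's standard-library hmac module: the algorithm, HMAC-SHA256, is public and exhaustively documented, while all of the secrecy lives in a short key that is trivial to generate and to replace.

```python
# Kerckhoffs's principle in miniature: the algorithm (HMAC-SHA256) is public;
# only the short key is secret, and rotating it is easy.
import hashlib
import hmac
import secrets

key = secrets.token_bytes(32)          # the only secret: 32 random bytes
message = b"example message to authenticate"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("authentication tag:", tag)

# Verification uses the same public algorithm and a constant-time comparison.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print("verified:", hmac.compare_digest(tag, expected))

# Losing or leaking the key costs nothing permanent: generate a new one.
key = secrets.token_bytes(32)
```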

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies routinely denied their existence and wouldn’t bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures—and even routine government operations—secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry—to whom the government is ultimately accountable—is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
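
As an illustration only, the sketch below uses the third-party networkx library (my choice of tool, not anything referenced here) to generate a graph with the Barabási-Albert preferential-attachment model and list its most connected nodes, the hubs a defender would identify, harden, and back up with redundancy.

```python
# Sketch: generate a scale-free-style network and identify its hubs.
# networkx is an assumed third-party dependency (pip install networkx).
import networkx as nx

# Preferential attachment produces a few very highly connected hubs.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

# Rank nodes by degree; the top few are the hubs to protect first.
hubs = sorted(G.degree, key=lambda pair: pair[1], reverse=True)[:5]
for node, degree in hubs:
    print(f"node {node}: degree {degree}")
```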

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM
