Entries Tagged "theory of security"


How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM

Risks of Networked Systems

Interesting research:

Helbing’s publication illustrates how cascade effects and complex dynamics amplify the vulnerability of networked systems. For example, just a few long-distance connections can largely decrease our ability to mitigate the threats posed by global pandemics. Initially beneficial trends, such as globalization, increasing network densities, higher complexity, and an acceleration of institutional decision processes may ultimately push human-made or human-influenced systems towards systemic instability, Helbing finds. Systemic instability refers to a system, which will get out of control sooner or later, even if everybody involved is well skilled, highly motivated and behaving properly. Crowd disasters are shocking examples illustrating that many deaths may occur even when everybody tries hard not to hurt anyone.

Posted on May 2, 2013 at 1:09 PM

When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force—for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new identity-theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.

The problem is that it’s not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They’re more nimble and adaptable than defensive institutions like police forces. They’re not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side—it’s easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can’t do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don’t think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious…and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

As the destructive power of individual actors and fringe groups increases, so do the calls for—and society’s acceptance of—increased security.

Traditional security largely works "after the fact". We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they’re exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).

When that isn’t enough, we resort to "before-the-fact" security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.

And in the global interconnected world we live in, they’re not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We’re already almost entirely living in a surveillance state, though we don’t realize it or won’t admit it to ourselves. This will only get worse as technology advances… today’s Ph.D. theses are tomorrow’s high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn’t that these security measures won’t work—even as they shred our freedoms and liberties—it’s that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We’ll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn’t kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
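
To see the arithmetic behind that first sentence, consider a minimal sketch (the per-person probability below is an invented parameter, not an estimate): if each member of a group independently has even a tiny chance of launching a catastrophic attack in some period, the chance that at least one member does so climbs toward certainty as the group grows.

    # Toy model: probability that at least one member of a group "defects,"
    # assuming independent members and a hypothetical per-person probability p.
    def prob_at_least_one(p, group_size):
        return 1 - (1 - p) ** group_size

    p = 1e-8  # invented per-person probability, for illustration only
    for size in (1_000, 1_000_000, 1_000_000_000, 7_000_000_000):
        print(f"group of {size:>13,}: {prob_at_least_one(p, size):.4f}")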

If security won’t work in the end, what is the solution?

Resilience—building systems able to survive unexpected and devastating attacks—is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city—witness New Orleans after Hurricane Katrina or even New York after Sandy—we need to start acting like it, and planning for it. Still, it’s hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don’t know how to adapt any defenses—including resilience—fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We’re going to have to figure this out if we want to survive, and I’m not sure how many decades we have left.

This essay originally appeared on Wired.com.

Commentary.

Posted on March 21, 2013 at 7:02 AM

Rules for Radicals

It was written in 1971, but this still seems like a cool book:

For an elementary illustration of tactics, take parts of your face as the point of reference; your eyes, your ears, and your nose. First the eyes: if you have organized a vast, mass-based people’s organization, you can parade it visibly before the enemy and openly show your power. Second the ears; if your organization is small in numbers, then do what Gideon did: conceal the members in the dark but raise a din and clamor that will make the listener believe that your organization numbers many more than it does. Third, the nose; if your organization is too tiny even for noise, stink up the place.

Always remember the first rule of power tactics: Power is not only what you have but what the enemy thinks you have.

The second rule is: Never go outside the experience of your people. When an action or tactic is outside the experience of the people, the result is confusion, fear, and retreat. It also means a collapse of communication, as we have noted.

The third rule is: Wherever possible go outside the experience of the enemy. Here you want to cause confusion, fear, and retreat.

The fourth rule is: Make the enemy live up to their own book of rules. You can kill them with this, for they can no more obey their own rules than the Christian church can live up to Christianity.

The fourth rule carries within it the fifth rule: Ridicule is man’s most potent weapon. It is almost impossible to counterattack ridicule. Also it infuriates the opposition, who then react to your advantage.

The sixth rule is: A good tactic is one that your people enjoy. If your people are not having a ball doing it, there is something very wrong with the tactic.

The seventh rule: A tactic that drags on too long becomes a drag.

[…]

The twelfth rule: The price of a successful attack is a constructive alternative. You cannot risk being trapped by the enemy in his sudden agreement with your demand and saying “You’re right—we don’t know what to do about this issue. Now you tell us.”

The thirteenth rule: Pick the target, freeze it, personalize it, and polarize it.

Posted on May 17, 2012 at 7:20 AM

How Changing Technology Affects Security

Security is a tradeoff, a balancing act between attacker and defender. Unfortunately, that balance is never static. Changes in technology affect both sides. Society uses new technologies to decrease what I call the scope of defection—what attackers can get away with—and attackers use new technologies to increase it. What’s interesting is the difference between how the two groups incorporate new technologies.

Changes in security systems can be slow. Society has to implement any new security technology as a group, which implies agreement and coordination and—in some instances—a lengthy bureaucratic procurement process. Meanwhile, an attacker can just use the new technology. For example, at the end of the horse-and-buggy era, it was easier for a bank robber to use his new motorcar as a getaway vehicle than it was for a town’s police department to decide it needed a police car, get the budget to buy one, choose which one to buy, buy it, and then develop training and policies for it. And if only one police department did this, the bank robber could just move to another town. Defectors are more agile and adaptable, making them much better at being early adopters of new technology.

We saw it in law enforcement’s initial inability to deal with Internet crime. Criminals were simply more flexible. Traditional criminal organizations like the Mafia didn’t immediately move onto the Internet; instead, new Internet-savvy criminals sprang up. They set up websites like CardersMarket and DarkMarket, and established new organized crime groups within a decade or so of the Internet’s commercialization. Meanwhile, law enforcement simply didn’t have the organizational fluidity to adapt as quickly. Cities couldn’t fire their old-school detectives and replace them with people who understood the Internet. The detectives’ natural inertia and tendency to sweep problems under the rug slowed things even more. They spent the better part of a decade playing catch-up.

There’s one more problem: defenders are in what military strategist Carl von Clausewitz calls “the position of the interior.” They have to defend against every possible attack, while the defector only has to find one flaw that allows one way through the defenses. As systems get more complicated due to technology, more attacks become possible. This means defectors have a first-mover advantage; they get to try the new attack first. Consequently, society is constantly responding: shoe scanners in response to the shoe bomber, harder-to-counterfeit money in response to better counterfeiting technologies, better antivirus software to combat new computer viruses, and so on. The attacker’s clear advantage increases the scope of defection even further.

Of course, there are exceptions. There are technologies that immediately benefit the defender and are of no use at all to the attacker—for example, fingerprint technology allowed police to identify suspects after they left the crime scene and didn’t provide any corresponding benefit to criminals. The same thing happened with immobilizing technology for cars, alarm systems for houses, and computer authentication technologies. Some technologies benefit both but still give more advantage to the defenders. The radio allowed street policemen to communicate remotely, which increased our level of safety more than the corresponding downside of criminals communicating remotely endangers us.

Still, we tend to be reactive in security, and only implement new measures in response to an increased scope of defection. We’re slow about doing it and even slower about getting it right.

This essay originally appeared in IEEE Security & Privacy. It was adapted from Chapter 16 of Liars and Outliers.

Posted on March 7, 2012 at 6:14 AM

Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.
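
A back-of-the-envelope sketch makes that arithmetic concrete (the size of the latent pool below is an invented figure, purely for illustration): if a lousy design process leaves thousands of flaws in the code, the difference between patching 10 of them and patching 100 of them is noise.

    # Toy model: how much does patching shrink the attacker's options,
    # assuming a hypothetical latent pool of flaws left by the design process?
    latent_flaws = 5000  # invented figure, for illustration only

    for patched in (10, 100):
        remaining = latent_flaws - patched
        print(f"{patched:3d} patched -> {remaining} still unfound "
              f"({remaining / latent_flaws:.1%} of the pool)")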

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM

Tactics, Targets, and Objectives

If you encounter an aggressive lion, stare him down. But not a leopard; avoid his gaze at all costs. In both cases, back away slowly; don’t run. If you stumble on a pack of hyenas, run and climb a tree; hyenas can’t climb trees. But don’t do that if you’re being chased by an elephant; he’ll just knock the tree down. Stand still until he forgets about you.

I spent the last few days on safari in a South African game park, and this was just some of the security advice we were all given. What’s interesting about this advice is how well-defined it is. The defenses might not be terribly effective—you still might get eaten, gored or trampled—but they’re your best hope. Doing something else isn’t advised, because animals do the same things over and over again. These are security countermeasures against specific tactics.

Lions and leopards learn tactics that work for them, and I was taught tactics to defend myself. Humans are intelligent, and that means we are more adaptable than animals. But we’re also, generally speaking, lazy and stupid; and, like a lion or hyena, we will repeat tactics that work. Pickpockets use the same tricks over and over again. So do phishers, and school shooters. If improvised explosive devices didn’t work often enough, Iraqi insurgents would do something else.

So security against people generally focuses on tactics as well.

A friend of mine recently asked me where she should hide her jewelry in her apartment, so that burglars wouldn’t find it. Burglars tend to look in the same places all the time—dresser tops, night tables, dresser drawers, bathroom counters—so hiding valuables somewhere else is more likely to be effective, especially against a burglar who is pressed for time. Leave decoy cash and jewelry in an obvious place so a burglar will think he’s found your stash and then leave. Again, there’s no guarantee of success, but it’s your best hope.

The key to these countermeasures is to find the pattern: the common attack tactic that is worth defending against. That takes data. A single instance of an attack that didn’t work—liquid bombs, shoe bombs—or one instance that did—9/11—is not a pattern. Implementing defensive tactics against them is the same as my safari guide saying: “We’ve only ever heard of one tourist encountering a lion. He stared it down and survived. Another tourist tried the same thing with a leopard, and he got eaten. So when you see a lion….” The advice I was given was based on thousands of years of collective wisdom from people encountering African animals again and again.

Compare this with the Transportation Security Administration’s approach. With every unique threat, TSA implements a countermeasure with no basis to say that it helps, or that the threat will ever recur.

Furthermore, human attackers can adapt more quickly than lions. A lion won’t learn that he should ignore people who stare him down, and eat them anyway. But people will learn. Burglars now know the common “secret” places people hide their valuables—the toilet, cereal boxes, the refrigerator and freezer, the medicine cabinet, under the bed—and look there. I told my friend to find a different secret place, and to put decoy valuables in a more obvious place.

This is the arms race of security. Common attack tactics result in common countermeasures. Eventually, those countermeasures will be evaded and new attack tactics developed. These, in turn, require new countermeasures. You can easily see this in the constant arms race that is credit card fraud, ATM fraud or automobile theft.

The result of these tactic-specific security countermeasures is to make the attacker go elsewhere. For the most part, the attacker doesn’t particularly care about the target. Lions don’t care who or what they eat; to a lion, you’re just a conveniently packaged bag of protein. Burglars don’t care which house they rob, and terrorists don’t care who they kill. If your countermeasure makes the lion attack an impala instead of you, or if your burglar alarm makes the burglar rob the house next door instead of yours, that’s a win for you.

Tactics matter less if the attacker is after you personally. If, for example, you have a priceless painting hanging in your living room and the burglar knows it, he’s not going to rob the house next door instead—even if you have a burglar alarm. He’s going to figure out how to defeat your system. Or he’ll stop you at gunpoint and force you to open the door. Or he’ll pose as an air-conditioner repairman. What matters is the target, and a good attacker will consider a variety of tactics to reach his target.

This approach requires a different kind of countermeasure, but it’s still well-understood in the security world. For people, it’s what alarm companies, insurance companies and bodyguards specialize in. President Bush needs a different level of protection against targeted attacks than Bill Gates does, and I need a different level of protection than either of them. It would be foolish of me to hire bodyguards in case someone was targeting me for robbery or kidnapping. Yes, I would be more secure, but it’s not a good security trade-off.

Al-Qaida terrorism is different yet again. The goal is to terrorize. It doesn’t care about the target, but it doesn’t have any pattern of tactics, either. Given that, the best way to spend our counterterrorism dollar is on intelligence, investigation and emergency response. And to refuse to be terrorized.

These measures are effective because they don’t assume any particular tactic, and they don’t assume any particular target. We should only apply specific countermeasures when the cost-benefit ratio makes sense (reinforcing airplane cockpit doors) or when a specific tactic is repeatedly observed (lions attacking people who don’t stare them down). Otherwise, general countermeasures are far more effective a defense.

This essay originally appeared on Wired.com.

EDITED TO ADD (6/14): Learning behavior in tigers.

Posted on May 31, 2007 at 6:11 AM

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him utilize whatever time, equipment and expertise he needs to break it. They’re difficult to secure because breaks are generally "class breaks." The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.

This means that the security system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
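
To illustrate the timing idea in miniature (this is a deliberately naive sketch, not the attack the researchers describe), here is textbook square-and-multiply modular exponentiation: the multiply step runs only for the 1-bits of the secret exponent, so the routine’s running time depends on the key—exactly the kind of data-dependent behavior a side-channel attacker measures.

    # Naive left-to-right square-and-multiply: its running time depends on
    # the secret exponent's bits, which is what a timing attack exploits.
    def modexp_leaky(base, exponent, modulus):
        result = 1
        for bit in bin(exponent)[2:]:
            result = (result * result) % modulus      # always: square
            if bit == "1":
                result = (result * base) % modulus    # only on 1-bits: multiply
        return result

    # Hardened implementations use constant-time code paths and blinding so
    # that timing (and power, and radiation) no longer track the key.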

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
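
A minimal sketch of that design difference, with invented account numbers and balances: in the stored-value design the balance is data on a device the attacker owns, so whoever can rewrite the card creates money; in the debit design the card carries only an identifier, and the authoritative balance sits in the bank’s database, out of the cardholder’s reach.

    # Stored-value design: the secret (the balance) lives on the attacker's device.
    stored_value_card = {"balance": 20.00}
    stored_value_card["balance"] = 1_000_000.00   # card owner "creates" money

    # Debit design: the card is just an identifier; the real balance and the
    # security live server-side, where the cardholder can't touch them.
    bank_ledger = {"4111-0000-0000-1234": 20.00}  # invented account number

    def authorize_debit(card_number, amount):
        """Approve a charge only if the bank's own ledger covers it."""
        balance = bank_ledger.get(card_number, 0.0)
        if amount > balance:
            return False
        bank_ledger[card_number] = balance - amount
        return True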

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like, who stole end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk, knowing what the winning numbers actually were, would palm a non-winning ticket into the machine, tell the customer "sorry, better luck next time," and claim the prize themselves at a later date.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM

