Entries Tagged "trust"

Basketball Referees and Single Points of Failure

Sports referees are supposed to be fair and impartial. They’re not supposed to favor one team over another. And they’re most certainly not supposed to have a financial interest in the outcome of a game.

Tim Donaghy, referee for the National Basketball Association, has been accused of both betting on basketball games and fixing games for the mob. He has confessed to far less—gambling in general, and selling inside information on players, referees and coaches to a big-time professional gambler named James “Sheep” Battista. But the investigation continues, and the whole scandal is an enormous black eye for the sport. Fans like to think that the game is fair and that the winning team really is the winning team.

The details of the story are fascinating and well worth reading. But what interests me more are its general lessons about risk and audit.

What sorts of systems—IT, financial, NBA games or whatever—are most at risk of being manipulated? The ones where the smallest change can have the greatest impact, and the ones where trusted insiders can make that change.

Of all major sports, basketball is the most vulnerable to manipulation. There are only five players on the court per team, fewer than in other professional team sports; thus, a single player can have a much greater effect on a basketball game than he can in the other sports. Star players like Michael Jordan, Kobe Bryant and LeBron James can carry an entire team on their shoulders. Even baseball great Alex Rodriguez can’t do that.

Because individual players matter so much, a single referee can affect a basketball game more than he can in any other sport. Referees call fouls. Contact occurs on nearly every play, and almost any of it could be called as a foul. Most of these “touch fouls” are ignored, but not all of them, and the refs get to decide which ones to call.

Even more drastically, a ref can put a star player in foul trouble immediately—and cause the coach to bench him longer throughout the game—if he wants the other side to win. He can set the pace of the game, low-scoring or high-scoring, based on how he calls fouls. He can decide to invalidate a basket by calling an offensive foul on the play, or give a team the potential for some extra points by calling a defensive foul. There’s no formal instant replay. There’s no second opinion. A ref’s word is law—there are only three of them—and a crooked ref has enormous power to control the game.

It’s not just that basketball referees are single points of failure, it’s that they’re both trusted insiders and single points of catastrophic failure.

These sorts of vulnerabilities exist in many systems. Consider what a terrorist-sympathizing Transportation Security Administration screener could do to airport security. Or what a criminal CFO could embezzle. Or what a dishonest computer-repair technician could do to your computer or network. The same goes for a corrupt judge, police officer, customs inspector, border-control officer, food-safety inspector and so on.

The best way to catch corrupt trusted insiders is through audit. The particular components of a system that have the greatest influence on the performance of that system need to be monitored and audited, even if the probability of compromise is low. It’s after the fact, but if the likelihood of detection is high and the penalties (fines, jail time, public disgrace) are severe, it’s a pretty strong deterrent. Of course, the counterattack is to target the auditing system. Hackers routinely try to erase audit logs that contain evidence of their intrusions.
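One standard defense against that counterattack is to make the audit log tamper-evident. Here is a minimal sketch in Python (the record format and event strings are invented for illustration): each entry is chained to a hash of the previous one, so erasing or altering any record breaks every link after it, and the tampering itself becomes visible.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def entry_hash(event: str, prev: str) -> str:
    """Hash an event together with its predecessor's hash."""
    payload = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log: list, event: str) -> None:
    """Append an event, chained to the hash of the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"event": event, "prev": prev,
                "hash": entry_hash(event, prev)})

def verify_chain(log: list) -> bool:
    """Recompute every link; any deletion or edit breaks the chain."""
    prev = GENESIS
    for record in log:
        if record["prev"] != prev or record["hash"] != entry_hash(record["event"], prev):
            return False
        prev = record["hash"]
    return True

log = []
append_entry(log, "admin login from 10.0.0.5")
append_entry(log, "payroll record modified")
assert verify_chain(log)
del log[0]                    # an intruder erases the first entry...
assert not verify_chain(log)  # ...and the broken chain gives it away
```

An insider with write access can still truncate the newest entries, which is why serious systems also ship each record to a separate log server under someone else's control.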

Even so, audit is the reason we want open-source code reviews and verifiable paper trails in voting machines; otherwise, a single crooked programmer could single-handedly change an election. It’s also why the Securities and Exchange Commission closely monitors trades by brokers: They are in an ideal position to get away with insider trading. The NBA claims it monitors referees for patterns that might indicate abuse; there’s still no answer to why it didn’t detect Donaghy.
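What might referee-pattern monitoring look like? A hedged sketch with fabricated numbers: compare each official's average foul calls per game against the league baseline and flag statistical outliers for human review. Real monitoring would model far more than raw counts, and the threshold here is purely illustrative.

```python
from statistics import mean, stdev

# Fouls called per game, per referee (fabricated data for illustration).
calls_per_game = {
    "ref_a": [38, 41, 37, 40, 39],
    "ref_b": [40, 38, 42, 39, 41],
    "ref_c": [52, 55, 49, 53, 56],  # consistently high: worth a look
}

averages = {ref: mean(games) for ref, games in calls_per_game.items()}
baseline = mean(averages.values())
spread = stdev(averages.values())

for ref, avg in averages.items():
    z = (avg - baseline) / spread
    if abs(z) > 1.0:  # illustrative cutoff; a real system would tune this
        print(f"{ref}: {avg:.1f} calls/game (z = {z:+.2f}) -- flag for audit")
```

The particular statistic matters less than the property that the audit runs continuously and independently of the insiders it watches.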

Most companies focus the bulk of their IT-security monitoring on external threats, but they should be paying more attention to internal threats. While a company may inherently trust its employees, those trusted employees have far greater power to affect corporate systems and are often single points of failure. And trusted employees can also be compromised by external elements, as Tim Donaghy was by Battista and possibly the Mafia.

All systems have trusted insiders. All systems have catastrophic points of failure. The key is recognizing them, and building monitoring and audit systems to secure them.

This is my 50th essay for Wired.com.

Posted on September 6, 2007 at 4:38 AM

Conversation with Kip Hawley, TSA Administrator (Part 5)

This is Part 5 of a five-part series. Link to whole thing.

BS: So far, we’ve only talked about passengers. What about airport workers? Nearly one million workers move in and out of airports every day without ever being screened. The JFK plot, as laughably unrealistic as it was, highlighted the security risks of airport workers. As with any security problem, we need to secure the weak links, rather than make already strong links stronger. What about airport employees, delivery vehicles, and so on?

KH: I totally agree with your point about a strong base level of security everywhere and not creating large gaps by over-focusing on one area. This is especially true with airport employees. We do background checks on all airport employees who have access to the sterile area. These employees are in the same places doing the same jobs day after day, so when someone does something out of the ordinary, it immediately stands out. They serve as an additional set of eyes and ears throughout the airport.

Even so, we should do more on airport employees and my House testimony of April 19 gives details of where we’re heading. The main point is that everything you need for an attack is already inside the perimeter of an airport. For example, why take lighters from people who work with blowtorches in facilities with millions of gallons of jet fuel?

You could perhaps feel better by setting up employee checkpoints at entry points, but you’d hassle a lot of people at great cost with minimal additional benefit, and a smart, patient terrorist could find a way to beat you. Today’s random, unpredictable screenings that can and do occur everywhere, all the time (including delivery vehicles, etc.) are harder to defeat. With the latter, you make it impossible to engineer an attack; with the former, you give the blueprint for exactly that.

BS: There’s another reason to screen pilots and flight attendants: they go through the same security lines as passengers. People have to remember that it’s not pilots being screened, it’s people dressed as pilots. You either have to implement a system to verify that people dressed as pilots are actual pilots, or just screen everybody. The latter choice is far easier.

I want to ask you about general philosophy. Basically, there are three broad ways of defending airplanes: preventing bad people from getting on them (ID checks), preventing bad objects from getting on them (passenger screening, baggage screening), and preventing bad things from happening on them (reinforcing the cockpit door, sky marshals). The first one seems to be a complete failure, the second one is spotty at best. I’ve always been a fan of the third. Any future developments in that area?

KH: You are too eager to discount the first—stopping bad people from getting on planes. That is the most effective! Don’t forget about all the intel work done partnering with other countries to stop plots before they get here (UK liquids, NY subway), all the work done to keep them out either through no-flys (at least several times a month) or by Customs & Border Protection on their way in, and law enforcement once they are here (Ft. Dix). Then, you add the behavior observation (both uniformed and not) and identity validation (as we take that on) and that’s all before they get to the checkpoint.

The screening-for-things part, we’ve discussed, so I’ll jump to in-air measures. Reinforced, locked cockpit doors and air marshals are indeed huge upgrades since 9/11. Along the same lines, you have to consider the role of the engaged flight crew and passengers—they are quick to give a heads-up about suspicious behavior and they can, and do, take decisive action when threatened. Also, there are thousands of flights covered by pilots who are qualified as law enforcement and are armed, as well as the agents from other government entities like the Secret Service and FBI who provide coverage as well. There is also a fair amount of communications with the flight deck during flights if anything comes up en route—either in the aircraft or if we get information that would be of interest to them. That allows “quiet” diversions or other preventive measures. Training is, of course, important too. Pilots need to know what to do in the event of a missile sighting or other event, and need to know what we are going to do in different situations. Other things coming: better air-to-ground communications for air marshals and flight information, including, possibly, video.

So, when you boil it down, keeping the bomb off the plane is the number one priority. A terrorist has to know that once that door closes, he or she is locked into a confined space with dozens, if not hundreds, of zero-tolerance people, some of whom may be armed with firearms, not to mention the memory of United Flight 93.

BS: I’ve read repeated calls to privatize airport security: to return it to the way it was pre-9/11. Personally, I think it’s a bad idea, but I’d like your opinion on the question. And regardless of what you think should happen, do you think it will happen?

KH: From an operational security point of view, I think it works both ways. So it is not a strategic issue for me.

SFO, our largest private airport, has excellent security and is on a par with its federalized counterparts (in fact, I am on a flight from there as I write this). One current federalized advantage is that we can surge resources around the system with no notice; essentially, the ability to move from anywhere to anywhere and mix TSOs with federal air marshals in different force packages. We would need to be sure we don’t lose that interchangeability if we were to expand privatized screening.

I don’t see a major security or economic driver that would push us to large-scale privatization. Economically, the current cost-plus model makes it a better deal for the government in smaller airports than in bigger. So, maybe more small airports will privatize. If Congress requires collective bargaining for our TSOs, that will impose an additional overhead cost of about $500 million, which would shift the economic balance significantly toward privatized screening. But unless that happens, I don’t see major change in this area.

BS: Last question. I regularly criticize overly specific security measures, because forcing the terrorists to make minor modifications in their tactics doesn’t make us any safer. We’ve talked about specific airline threats, but what about airplanes as a specific threat? On the one hand, if we secure our airlines and the terrorists all decide instead to bomb shopping malls, we haven’t improved our security very much. On the other hand, airplanes make particularly attractive targets for several reasons. One, they’re considered national symbols. Two, they’re a common and important travel vehicle, and are deeply embedded throughout our economy. Three, they travel to distant places where the terrorists are. And four, the failure mode is severe: a small bomb drops the plane out of the sky and kills everyone. I don’t expect you to give back any of your budget, but when do we have “enough” airplane security as compared with the rest of our nation’s infrastructure?

KH: Airplanes are a high-profile target for terrorists for all the reasons you cited. The reason we have the focus we do on aviation is because of the effect the airline system has on our country, both economically and psychologically. We do considerable work (through grants and voluntary agreements) to ensure the safety of surface transportation, but it’s less visible to the public because people other than ones in TSA uniforms are taking care of that responsibility.

We look at the aviation system as one component in a much larger network that also includes freight rail, mass transit, highways, etc. And that’s just in the U.S. Then you add the world’s transportation sectors—it’s all about the network.

The only components that require specific security measures are the critical points of failure—and they have to be protected at virtually any cost. It doesn’t matter which individual part of the network is attacked—what matters is that the network as a whole is resilient enough to operate even with losing one or more components.

The network approach allows various transportation modes to benefit from our layers of security. Take our first layer: intel. It is fundamental to our security program to catch terrorists long before they get to their target, and even better if we catch them before they get into our country. Our intel operation works closely with other international and domestic agencies, and that information and analysis benefits all transportation modes.

Dogs have proven very successful at detecting explosives. They work in airports and they work in mass transit venues as well. As we test and pilot technologies like millimeter wave in airports, we assess their viability in other transportation modes, and vice versa.

To get back to your question, we’re not at the point where we can say “enough” for aviation security. But we’re also aware of the attractiveness of other modes and continue to use the network to share resources and lessons learned.

BS: Thank you very much for your time. I appreciate both your time and your candor.

KH: I enjoyed the exchange and appreciated your insights. Thanks for the opportunity.

Posted on August 3, 2007 at 6:12 AM

Conversation with Kip Hawley, TSA Administrator (Part 4)

This is Part 4 of a five-part series. Link to whole thing.

BS: What about Registered Traveler? When TSA first started talking about the program, the plan was to divide people into two categories: more trusted people who get less screening, and less trusted people who get more screening. This opened an enormous security hole; whenever you create an easy way and a hard way through security, you invite the bad guys to take the easier way. Since then, it’s transformed into a way for people to pay for better screening equipment and faster processing—a great idea with no security downsides. Given that, why bother with the background checks at all? What else is it besides a way for a potential terrorist to spend $60 and find out if the government is on to them?

KH: Registered Traveler (RT) is a promising program but suffers from unrealistic expectations. The idea—that you and I aren’t really risks and we should be screened less so that TSA can apply scarce resources on the more likely terrorist—makes sense and got branded as RT. The problem is that with two million people a day, how can we tell them apart in an effective way? We know terrorists use people who are not on watch lists and who don’t have criminal convictions, so we can’t use those criteria alone. Right now, I’ve said that RT is behind Secure Flight in priority and that TSA is open to working with private sector entities to facilitate RT, but we will not fund it, reduce overall security, or inconvenience regular travelers. As private companies deploy extra security above what TSA does, we can change the screening process accordingly. It has to be more than a front-of-the-line pass, and I think there are some innovations coming out in the year ahead that will better define what RT can become.

BS: Let’s talk about behavioral profiling. I’ve long thought that most of airline security could be ditched in favor of well-trained guards, both in and out of uniform, wandering the crowds looking for suspicious behavior. Can you talk about some of the things you’re doing along those lines, and especially ways to prevent this from turning into just another form of racial profiling?

KH: Moving security out from behind the checkpoint is a big priority for us. First, it gives us the opportunity to pick up a threat a lot earlier. Taking away weapons or explosives at the checkpoint is stopping the plot at nearly the last possible moment. Obviously, a good security system aims at stopping attacks well before that. That’s why we have many layers of security (intel, law enforcement, behavior detection, etc.) to get to that person well before the security checkpoint. When a threat gets to the checkpoint, we’re operating on his/her terms—they pick when, where, and how they present themselves to us. We want to pick up the cues on our terms, before they’re ready, even if they’re just at the surveillance stage.

We use a system of behavior observation that is based on the science that demonstrates that there are certain involuntary, subconscious actions that can betray a person’s hostile intent. For instance, there are tiny—but noticeable to the trained person—movements in a person’s facial muscles when they have certain emotions. It is very different from the stress we all show when we’re anxious about missing the flight due to, say, a long security line. This is true across race, gender, age, ethnicity, etc. It is our way of not falling into the trap where we predict what a terrorist is going to look like. We know they use people who “look like” terrorists, but they also use people who do not, perhaps thinking that we cue only off of what the 9/11 hijackers looked like.

Our Behavior Detection teams routinely—and quietly—identify problem people just through observable behavior cues. More than 150 people have been identified by our teams, turned over to law enforcement, and subsequently arrested. This layer is invisible to the public, but don’t discount it, because it may be the most effective. We publicize non-terrorist-related successes like a murder suspect caught in Minneapolis and a bank robber caught in Philadelphia.

Most common are people showing phony documents, but we have even picked out undercover operatives—including our own. One individual, identified by a TSO in late May and not allowed to fly, was killed in a police shoot-out five days later. Additionally, several individuals have been of interest from the counter-terrorism perspective. With just this limited deployment of Behavior Detection Officers (BDOs), we have identified more people of counterterrorism interest than all the people combined caught with prohibited items. Look for us to continue to look at ways that highlight problem people rather than just problem objects.

BS: That’s really good news, and I think it’s the most promising new security measure you’ve got. Although, honestly, bragging about capturing a guy for wearing a fake military uniform just makes you look silly.

Part 5: Keeping the bomb off the plane

Posted on August 2, 2007 at 6:12 AM

Conversation with Kip Hawley, TSA Administrator (Part 1)

This is Part 1 of a five-part series. Link to whole thing.

In April, Kip Hawley, the head of the Transportation Security Administration (TSA), invited me to Washington for a meeting. Despite some serious trepidation, I accepted. And it was a good meeting. Most of it was off the record, but he asked me how the TSA could overcome its negative image. I told him to be more transparent, and stop ducking the hard questions. He said that he wanted to do that. He did enjoy writing a guest blog post for Aviation Daily, but having a blog himself didn’t work within the bureaucracy. What else could he do?

This interview, conducted in May and June via e-mail, was one of my suggestions.

Bruce Schneier: By today’s rules, I can carry on liquids in quantities of three ounces or less, unless they’re in larger bottles. But I can carry on multiple three-ounce bottles. Or a single larger bottle with a non-prescription medicine label, like contact lens fluid. It all has to fit inside a one-quart plastic bag, except for that large bottle of contact lens fluid. And if you confiscate my liquids, you’re going to toss them into a large pile right next to the screening station—which you would never do if anyone thought they were actually dangerous.

Can you please convince me there’s not an Office for Annoying Air Travelers making this sort of stuff up?

Kip Hawley: Screening ideas are indeed thought up by the Office for Annoying Air Travelers and vetted through the Directorate for Confusion and Complexity, and then we review them to ensure that there are sufficient unintended irritating consequences so that the blogosphere is constantly fueled. Imagine for a moment that TSA people are somewhat bright, and motivated to protect the public with the least intrusion into their lives, not to mention travel themselves. How might you engineer backwards from that premise to get to three ounces and a baggie?

We faced a different kind of liquid explosive, one that was engineered to evade then-existing technology and process. Not the old Bojinka formula or other well-understood ones—TSA already trains and tests on those. After August 10, we began testing different variants with the national labs, among others, and engaged with other countries that have sophisticated explosives capabilities to find out what is necessary to reliably bring down a plane.

We started with the premise that we should prohibit only what’s needed from a security perspective. Otherwise, we would have stuck with a total liquid ban. But we learned through testing that no matter what someone brought on, if it was in a small enough container, it wasn’t a serious threat. So what would the justification be for prohibiting lip gloss, nasal spray, etc.? There was none, other than for our own convenience and the sake of a simple explanation.

Based on the scientific findings and a don’t-intrude-unless-needed-for-security philosophy, we came up with a container size that eliminates an assembled bomb (without having to determine what exactly is inside the bottle labeled “shampoo”), limits the total liquid any one person can bring (without requiring Transportation Security Officers (TSOs) to count individual bottles), and allows for additional security measures relating to multiple people mixing a bomb post-checkpoint. Three ounces and a baggie in the bin gives us a way for people to safely bring on limited quantities of liquids, aerosols and gels.

BS: How will this foil a plot, given that there are no consequences to trying? Airplane contraband falls into two broad categories: stuff you get in trouble for trying to smuggle onboard, and stuff that just gets taken away from you. If I’m caught at a security checkpoint with a gun or a bomb, you’re going to call the police and really ruin my day. But if I have a large bottle of that liquid explosive, you confiscate it with a smile and let me through. So unless you’re 100% perfect in catching this stuff—which you’re not—I can just try again and again until I get it through.

This isn’t like contaminants in food, where if you remove 90% of the particles, you’re 90% safer. None of those false alarms—none of those innocuous liquids taken away from innocent travelers—improve security. We’re only safer if you catch the one explosive liquid amongst the millions of containers of water, shampoo, and toothpaste. I have described two ways to get large amounts of liquids onto airplanes—large bottles labeled “saline solution” and trying until the screeners miss the liquid—not to mention combining multiple little bottles of liquid into one big bottle after the security checkpoint.
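The arithmetic behind “try again and again” is worth making explicit. A short worked example, using a hypothetical per-attempt detection rate: if each attempt is caught with probability p and being caught costs nothing, the smuggler’s chance of at least one success in n tries is 1 - p^n, which climbs toward certainty.

```python
# Chance a smuggler succeeds at least once in n independent tries,
# when each try is detected with probability p_detect and a detected
# try carries no penalty (the liquid is simply confiscated).
def p_eventual_success(p_detect: float, n_tries: int) -> float:
    return 1 - p_detect ** n_tries

# Even a generous 90% detection rate erodes fast without consequences.
for n in (1, 5, 10, 20):
    print(f"{n:2d} tries: {p_eventual_success(0.90, n):.0%} chance of getting through")
```

Deterrence, not detection alone, is what turns each failed attempt into a cost.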

I want to assume the TSA is both intelligent and motivated to protect us. I’m taking your word for it that there is an actual threat—lots of chemists disagree—but your liquid ban isn’t mitigating it. Instead, I have the sinking feeling that you’re defending us against a terrorist smart enough to develop his own liquid explosive, yet too stupid to read the rules on TSA’s own website.

KH: I think your premise is wrong. There are consequences to coming to an airport with a bomb and having some of the materials taken away at the checkpoint. Putting aside our layers of security for the moment, there are things you can do to get a TSO’s attention at the checkpoint. If a TSO finds you or the contents of your bag suspicious, you might get interviewed and/or have your bags more closely examined. If the TSO throws your liquids in the trash, they don’t find you a threat.

I often read blog posts about how someone could just take all their three-ounce bottles—or take bottles from others on the plane—and combine them into a larger container to make a bomb. I can’t get into the specifics, but our explosives research shows this is not a viable option.

The current system is not the best we’ll ever come up with. In the near future, we’ll come up with an automated system to take care of liquids, and everyone will be happier.

In the meantime, we have begun using hand-held devices that can recognize threat liquids through factory-sealed containers (we will increase their number through the rest of the year) and we have different test strips that are effective when a bottle is opened. Right now, we’re using them on exempt items like medicines, as well as undeclared liquids TSOs find in bags. This will help close the vulnerability and strengthen the deterrent.

BS: People regularly point to security checkpoints missing a knife in their handbag as evidence that security screening isn’t working. But that’s wrong. Complete effectiveness is not the goal; the checkpoints just have to be effective enough so that the terrorists are worried their plan will be uncovered. But in Denver earlier this year, testers sneaked 90% of weapons through. And other tests aren’t much better. Why are these numbers so poor, and why didn’t they get better when the TSA took over airport security?

KH: Your first point is dead on and is the key to how we look at security. The stories about 90% failures are wrong or extremely misleading. We do many kinds of effectiveness tests at checkpoints daily. We use them to guide training and decisions on technology and operating procedures. We also do extensive and very sophisticated Red Team testing, and one of their jobs is to observe checkpoints and go back and figure out—based on inside knowledge of what we do—ways to beat the system. They isolate one particular thing: for example, a particular explosive, made and placed in a way that exploits a particular weakness in our technology, our procedures, or the way TSOs do things in practice. Then they will test that particular thing over and over until they identify what corrective action is needed. We then change technology or procedure, or plain old focus on execution. And we repeat the process—forever.

So without getting into specifics on the test results, of course there are times that our evaluations can generate high failure rate numbers on specific scenarios. Overall, though, our ability to detect bomb components is vastly improved and it will keep getting better. (Older scores you may have seen may be “feel good” numbers based on old, easy tests. Don’t go for the sound-bite; today’s TSOs are light-years ahead of even where they were two years ago.)

Part 2: When can we keep our shoes on?

Posted on July 30, 2007 at 6:12 AM

Computer Repair Technicians Accused of Copying Customer Files

We all know that it’s possible, but we assume the people who repair our computers don’t do this:

In recent months, allegations of agents copying pornography, music and alluring photos from customers’ computers have circulated on the Internet. Some bloggers now call it the “Peek Squad.”

“Any attractive young woman who drops off her computer with the Geek Squad should assume that her photos will be looked at,” said Brett Haddock, a former Geek Squad technician.

Just how much are these people paid? And how much money can you make with a few good identity thefts?

Posted on July 26, 2007 at 3:00 PM

MRI Lie Detectors

Long and interesting article on fMRI lie detectors.

I was particularly struck by this paragraph, about why people are bad at detecting lies:

Maureen O’Sullivan, a deception researcher at the University of San Francisco, studies why humans are so bad at recognizing lies. Many people, she says, base assessments of truthfulness on irrelevant factors, such as personality or appearance. “Baby-faced, non-weird, and extroverted people are more likely to be judged truthful,” she says. (Maybe this explains my trust in Steve Glass.) People are also blinkered by the “truthfulness bias”: the vast majority of questions we ask of other people—the time, the price of the breakfast special—are answered honestly, and truth is therefore our default expectation. Then, there’s the “learning-curve problem.” We don’t have a refined idea of what a successful lie looks and sounds like, since we almost never receive feedback on the fibs that we’ve been told; the co-worker who, at the corporate retreat, assured you that she loved your presentation doesn’t usually reveal later that she hated it. As O’Sullivan puts it, “By definition, the most convincing lies go undetected.”

EDITED TO ADD (8/28): The New York Times has an article on the topic.

Posted on July 25, 2007 at 6:26 AM

U.S. Government Contractor Injects Malicious Software into Critical Military Computers

This is just a frightening story. Basically, a contractor with a top secret security clearance was able to inject malicious code and sabotage computers used to track Navy submarines.

Yeah, it was annoying to find and fix the problem, but hang on. How is it possible for a single disgruntled idiot to damage a multi-billion-dollar weapons system? Why aren’t there any security systems in place to prevent this? I’ll bet anything that there was absolutely no control or review over who put what code in where. I’ll bet that if this guy had been just a little bit cleverer, he could have done a whole lot more damage without ever getting caught.

One of the ways to deal with the problem of trusted individuals is by making sure they’re trustworthy. The clearance process is supposed to handle that. But given the enormous damage that a single person can do here, it makes a lot of sense to add a second security mechanism: limiting the degree to which each individual must be trusted. A decent system of code reviews, or change auditing, would go a long way to reduce the risk of this sort of thing.
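A minimal sketch of what “limiting trust” can mean for code changes, assuming a hypothetical deployment pipeline (the Change record and rule are invented for illustration): nothing ships without an approval from someone other than its author, so a lone insider can no longer act unilaterally.

```python
from dataclasses import dataclass, field

@dataclass
class Change:
    author: str
    description: str
    approvals: list[str] = field(default_factory=list)

def may_deploy(change: Change) -> bool:
    """Two-person rule: require an approver who is not the author.

    A malicious insider can still write bad code, but can no longer
    push it into production without a second set of eyes.
    """
    return any(approver != change.author for approver in change.approvals)

patch = Change("insider", "update the sub-tracking module")
assert not may_deploy(patch)       # no review yet
patch.approvals.append("insider")
assert not may_deploy(patch)       # self-approval does not count
patch.approvals.append("reviewer")
assert may_deploy(patch)           # independent review unlocks deploy
```

The review does not have to catch every bug; it only has to turn solo sabotage into a conspiracy.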

I’ll also bet you anything that Microsoft has more security around its critical code than the U.S. military does.

Posted on April 13, 2007 at 12:33 PM

Social Engineering Diamond Theft

Nice story:

In what may be the biggest robbery committed by one person, the conman burgled safety deposit boxes at an ABN Amro bank in Antwerp’s diamond quarter, stealing gems weighing 120,000 carats. Posing as a successful businessman, the thief visited the bank frequently, befriending staff and gradually winning their confidence. He even brought them chocolates, according to one diamond industry official.

[…]

Mr Claes said of the thief: “He used no violence. He used one weapon—and that is his charm—to gain confidence. He bought chocolates for the personnel, he was a nice guy, he charmed them, got the original of keys to make copies and got information on where the diamonds were.

“You can have all the safety and security you want, but if someone uses their charm to mislead people it won’t help.”

People are the weakest security link, almost always.

Posted on March 19, 2007 at 3:42 PM

Privacy Law and Confidentiality

Interesting article: Neil M. Richards & Daniel J. Solove, “Privacy’s Other Path: Recovering the Law of Confidentiality,” 96 Georgetown Law Journal, 2007.

Abstract:

The familiar legend of privacy law holds that Samuel Warren and Louis Brandeis “invented” the right to privacy in 1890, and that William Prosser aided its development by recognizing four privacy torts in 1960. In this article, Professors Richards and Solove contend that Warren, Brandeis, and Prosser did not invent privacy law, but took it down a new path. Well before 1890, a considerable body of Anglo-American law protected confidentiality, which safeguards the information people share with others. Warren, Brandeis, and later Prosser turned away from the law of confidentiality to create a new conception of privacy based on the individual’s “inviolate personality.” English law, however, rejected Warren and Brandeis’s conception of privacy and developed a conception of privacy as confidentiality from the same sources used by Warren and Brandeis. Today, in contrast to the individualistic conception of privacy in American law, the English law of confidence recognizes and enforces expectations of trust within relationships. Richards and Solove explore how and why privacy law developed so differently in America and England. Understanding the origins and developments of privacy law’s divergent paths reveals that each body of law’s conception of privacy has much to teach the other.

Posted on March 19, 2007 at 6:39 AM
