Entries Tagged "trust"

Interview with a Nigerian Internet Scammer

Really interesting reading.

Scam-Detective: How did you find victims for your scams?

John: First you need to understand how the gangs work. At the bottom are the “foot soldiers”, kids who spend all of their time online to find email addresses and send out the first emails to get people interested. When they receive a reply, the victim is passed up the chain, to someone who has better English to get copies of ID from them like copies of their passport and driving licenses and build up trust. Then when they are ready to ask for money, they are passed further up again to someone who will pretend to be a barrister or shipping agent who will tell the victim that they need to pay charges or even a bribe to get the big cash amount out of the country. When they pay up, the gang master will collect the money from the Western Union office, using fake ID that they have taken from other scam victims.

[…]

Scam-Detective: Ok, I also want to talk more about how you managed to get your victims to trust you. I know it can be difficult for legitimate businesses to persuade customers to buy their products, yet you were able to convince people to part with their cash to get their hands on money that never existed in the first place, with at least one taking an international flight on top. That’s quite a skill, how did you learn to do it?

John: Once I had spent some time as a “foot soldier” (sending out initial approaches and passing serious victims to other scammers) I was promoted to act as either a barrister, shipping agent or bank official. In the early days I had a supervisor who would read my emails and suggest responses, then I was left to do it myself. I had lots of different documents that I would use to convince the victim that I was genuine, including photographs of an official looking man in an office, fake ID and storage manifests, bank statements showing the money, whatever would best convince the victim that I, and the money, was real. I think the English term is to “worm my way” into their trust, taking it slowly and carefully so I didn’t scare them away by asking for too much money too soon.

Scam-Detective: What would you do if a victim had sent money and couldn’t afford to send more, or got cold feet?

John: I would use whatever tactics were needed to get more money. I would send faked letters which stated that the money was about to be taken out of the account by the bank or seized by the government to make them think it was urgent, or tell them that this was definitely the last obstacle to the money being released. I would encourage them to take out loans or borrow money from friends to make the last payment, but tell them that it was important that they didn’t tell anyone what the money was for. I promised them that the expenses would be paid back on top of their share of the money.

[…]

John: We had something called the recovery approach. A few months after the original scam, we would approach the victim again, this time pretending to be from the FBI, or the Nigerian Authorities. The email would tell the victim that we had caught a scammer and had found all of the details of the original scam, and that the money could be recovered. Of course there would be fees involved as well. Victims would often pay up again to try and get their money back.

This sounds just like any other confidence game; in fact, it’s a modern variation on a classic con game called the Spanish Prisoner. The only difference is that this one uses the Internet.

Posted on February 11, 2010 at 7:19 AM

Virtual Mafia in Online Worlds

If you allow players in an online world to penalize each other, you open the door to extortion:

One of the features that supported user socialization in the game was the ability to declare that another user was a trusted friend. The feature involved a graphical display that showed the faces of users who had declared you trustworthy outlined in green, attached in a hub-and-spoke pattern to your face in the center.

[…]

That feature was fine as far as it went, but unlike other social networks, The Sims Online allowed users to declare other users untrustworthy too. The face of an untrustworthy user appeared circled in bright red among all the trustworthy faces in a user’s hub.

It didn’t take long for a group calling itself the Sims Mafia to figure out how to use this mechanic to shake down new users when they arrived in the game. The dialog would go something like this:

“Hi! I see from your hub that you’re new to the area. Give me all your Simoleans or my friends and I will make it impossible to rent a house.”

“What are you talking about?”

“I’m a member of the Sims Mafia, and we will all mark you as untrustworthy, turning your hub solid red (with no more room for green), and no one will play with you. You have five minutes to comply. If you think I’m kidding, look at your hub: three of us have already marked you red. Don’t worry, we’ll turn it green when you pay…”

If you think this is a fun game, think again: a typical response to this shakedown was for the user to decide that the game wasn’t worth $10 a month. Playing dollhouse doesn’t usually involve gangsters.
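The shakedown works because a handful of coordinated negative marks can dominate a new user's display. A minimal sketch of that mechanic (all names hypothetical, not The Sims Online's actual implementation):

```python
# A toy model of the trust-hub mechanic: each user accumulates green
# (trusted) and red (untrusted) marks, and a few colluding accounts can
# turn a new user's hub mostly red.

from collections import Counter

class TrustHub:
    def __init__(self):
        self.marks = {}  # marker name -> "green" or "red"

    def mark(self, marker, value):
        self.marks[marker] = value  # re-marking flips the color

    def summary(self):
        return Counter(self.marks.values())

hub = TrustHub()
# A new user has few or no green marks...
hub.mark("friendly_neighbor", "green")
# ...so three colluding accounts dominate the hub.
for gangster in ("mafia1", "mafia2", "mafia3"):
    hub.mark(gangster, "red")

print(hub.summary())  # Counter({'red': 3, 'green': 1})
```

Note the design flaw: negative marks cost the marker nothing and are indistinguishable from honest warnings, so the signal is trivially weaponized.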

EDITED TO ADD (12/12): SIM Mafia existed in 2004.

Posted on November 25, 2009 at 6:36 AM

A Taxonomy of Social Networking Data

At the Internet Governance Forum in Sharm El Sheikh this week, there was a conversation on social networking data. Someone made the point that there are several different types of data, and it would be useful to separate them. This is my taxonomy of social networking data.

  1. Service data. Service data is the data you need to give to a social networking site in order to use it. It might include your legal name, your age, and your credit card number.
  2. Disclosed data. This is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  3. Entrusted data. This is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data—someone else does.
  4. Incidental data. Incidental data is data the other people post about you. Again, it’s basically the same stuff as disclosed data, but the difference is that 1) you don’t have control over it, and 2) you didn’t create it in the first place.
  5. Behavioral data. This is data that the site collects about your habits by recording what you do and who you do it with.
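The taxonomy can be expressed as a small data structure. The rights matrix below is illustrative, not prescriptive; the two boolean dimensions (who created the data, who controls it) are one way to capture the distinctions drawn above:

```python
# A sketch of the five-part taxonomy, with the two dimensions that
# distinguish the types: whether the user created the data, and whether
# the user controls it.

from enum import Enum, auto

class DataType(Enum):
    SERVICE = auto()      # data required to use the site (name, age, card number)
    DISCLOSED = auto()    # what you post on your own pages
    ENTRUSTED = auto()    # what you post on other people's pages
    INCIDENTAL = auto()   # what other people post about you
    BEHAVIORAL = auto()   # what the site records about your habits

RIGHTS = {
    DataType.SERVICE:    {"created_by_user": True,  "controlled_by_user": False},
    DataType.DISCLOSED:  {"created_by_user": True,  "controlled_by_user": True},
    DataType.ENTRUSTED:  {"created_by_user": True,  "controlled_by_user": False},
    DataType.INCIDENTAL: {"created_by_user": False, "controlled_by_user": False},
    DataType.BEHAVIORAL: {"created_by_user": False, "controlled_by_user": False},
}

# Entrusted and incidental data share the control problem, but only
# entrusted data was authored by the user in question:
assert RIGHTS[DataType.ENTRUSTED]["created_by_user"]
assert not RIGHTS[DataType.INCIDENTAL]["created_by_user"]
```

Modeling it this way makes the later argument concrete: the types where `created_by_user` and `controlled_by_user` diverge are exactly the ones where rights are contested.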

Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted—I know one site that allows entrusted data to be edited or deleted within a 24-hour period—and some cannot. Some can be viewed and some cannot.

And people should have different rights with respect to each data type. It’s clear that people should be allowed to change and delete their disclosed data. It’s less clear what rights they have for their entrusted data. And far less clear for their incidental data. If you post pictures of a party with me in them, can I demand you remove those pictures—or at least blur out my face? And what about behavioral data? It’s often a critical part of a social networking site’s business model. We often don’t mind if they use it to target advertisements, but are probably less sanguine about them selling it to third parties.

As we continue our conversations about what sorts of fundamental rights people have with respect to their data, this taxonomy will be useful.

EDITED TO ADD (12/12): Another categorization centered on destination instead of trust level.

Posted on November 19, 2009 at 12:51 PM

Public Reactions to Terrorist Threats

Interesting research:

For the last five years we have researched the connection between times of terrorist threats and public opinion. In a series of tightly designed experiments, we expose subsets of research participants to a news story not unlike the type that aired last week. We argue that attitudes, evaluations, and behaviors change in at least three politically-relevant ways when terror threat is more prominent in the news. Some of these transformations are in accord with conventional wisdom concerning how we might expect the public to react. Others are more surprising, and more disconcerting in their implications for the quality of democracy.

One way that public opinion shifts is toward increased expressions of distrust. In some ways this strategy has been actively promoted by our political leaders. The Bush administration repeatedly reminded the public to keep eyes and ears open to help identify dangerous persons. A strategy of vigilance has also been endorsed by the new secretary of Homeland Security, Janet Napolitano.

Nonetheless, the breadth of increased distrust that the public puts into practice is striking. Individuals threatened by terrorism become less trusting of others, even their own neighbors. Other studies have shown that they become less supportive of the rights of Arab and Muslim Americans. In addition, we found that such effects extend to immigrants and, as well, to a group entirely remote from the subject of terrorism: gay Americans. The specter of terrorist threat creates ruptures in our social fabric, some of which may be justified as necessary tactics in the fight against terrorism and others of which simply cannot be.

Another way public opinion shifts under a terrorist threat is toward inflated evaluations of certain leaders. To look for strong leadership makes sense: crises should impel us toward leadership bold enough to confront the threat and strong enough to protect us from it. But the public does more than call for heroes in times of crisis. It projects leadership qualities onto political figures, with serious political consequences.

In studies conducted in 2004, we found that individuals threatened by terrorism perceived George W. Bush as more charismatic and stronger than did non-threatened individuals. This projection of leadership had important consequences for voting decisions. Individuals threatened by terrorism were more likely to base voting decisions on leadership qualities rather than on their own issue positions or partisanship. You did read that correctly. Threatened individuals responded with elevated evaluations of Bush’s capacity for leadership and then used those inflated evaluations as the primary determinant in their voting decision.

These findings did not just occur among Republicans, but also among Independents and Democrats. All partisan groups who perceived Bush as more charismatic were also less willing to blame him for policy failures such as faulty intelligence that led to the war in Iraq.

[…]

A third way public opinion shifts in response to terrorism is toward greater preferences for policies that protect the homeland, even at the expense of civil liberties, and active engagement against terrorists abroad. Such a strategy was advocated and implemented by the Bush administration. Again, however, we found that preferences shifted toward these objectives regardless of one’s partisan stripes and, as well, outside the U.S.

Nothing surprising here. Fear makes people deferential, docile, and distrustful, and both politicians and marketers have learned to take advantage of this.

Jennifer Merolla and Elizabeth Zechmeister have written a book, Democracy at Risk: How Terrorist Threats Affect the Public. I haven’t read it yet.

Posted on November 16, 2009 at 6:39 AM

Security in a Reputation Economy

In the past, our relationship with our computers was technical. We cared what CPU they had and what software they ran. We understood our networks and how they worked. We were experts, or we depended on someone else for expertise. And security was part of that expertise.

This is changing. We access our email via the web, from any computer or from our phones. We use Facebook, Google Docs, even our corporate networks, regardless of hardware or network. We, especially the younger of us, no longer care about the technical details. Computing is infrastructure; it’s a commodity. It’s less about products and more about services; we simply expect it to work, like telephone service or electricity or a transportation network.

Infrastructures can be spread on a broad continuum, ranging from generic to highly specialized. Power and water are generic; who supplies them doesn’t really matter. Mobile phone services, credit cards, ISPs, and airlines are mostly generic. More specialized infrastructure services are restaurant meals, haircuts, and social networking sites. Highly specialized services include tax preparation for complex businesses, management consulting, legal services, and medical services.

Sales for these services are driven by two things: price and trust. The more generic the service is, the more price dominates. The more specialized it is, the more trust dominates. IT is something of a special case because so much of it is free. So, for both specialized IT services where price is less important and for generic IT services—think Facebook—where there is no price, trust will grow in importance. IT is becoming a reputation-based economy, and this has interesting ramifications for security.

Some years ago, the major credit card companies became concerned about the plethora of credit-card-number thefts from sellers’ databases. They worried that these might undermine the public’s trust in credit cards as a secure payment system for the internet. They knew the sellers would only protect these databases up to the level of the threat to the seller, and not to the greater level of threat to the industry as a whole. So they banded together and produced a security standard called PCI. It’s wholly industry-enforced, by an industry that realized its reputation was more valuable than the sellers’ databases.

A reputation-based economy means that infrastructure providers care more about security than their customers do. I realized this 10 years ago with my own company. We provided network-monitoring services to large corporations, and our internal network security was much more extensive than our customers’. Our customers secured their networks—that’s why they hired us, after all—but only up to the value of their networks. If we mishandled any of our customers’ data, we would have lost the trust of all of our customers.

I heard the same story at an ENISA conference in London last June, when an IT consultant explained that he had begun encrypting his laptop years before his customers did. While his customers might decide that the risk of losing their data wasn’t worth the hassle of dealing with encryption, he knew that if he lost data from one customer, he risked losing all of his customers.

As IT becomes more like infrastructure, more like a commodity, expect service providers to improve security to levels greater than their customers would have done themselves.

In IT, customers learn about company reputation from many sources: magazine articles, analyst reviews, recommendations from colleagues, awards, certifications, and so on. Of course, this only works if customers have accurate information. In a reputation economy, companies have a motivation to hide their security problems.

You’ve all experienced a reputation economy: restaurants. Some restaurants have a good reputation, and are filled with regulars. When restaurants get a bad reputation, people stop coming and they close. Tourist restaurants—whose main attraction is their location, and whose customers frequently don’t know anything about their reputation—can thrive even if they aren’t any good. And sometimes a restaurant can keep its reputation—an award in a magazine, a special occasion restaurant that “everyone knows” is the place to go—long after its food and service have declined.

The reputation economy is far from perfect.

This essay originally appeared in The Guardian.

Posted on November 12, 2009 at 6:30 AM

Risks of Cloud Computing

Excellent essay by Jonathan Zittrain on the risks of cloud computing:

The cloud, however, comes with real dangers.

Some are in plain view. If you entrust your data to others, they can let you down or outright betray you. For example, if your favorite music is rented or authorized from an online subscription service rather than freely in your custody as a compact disc or an MP3 file on your hard drive, you can lose your music if you fall behind on your payments—or if the vendor goes bankrupt or loses interest in the service. Last week Amazon apparently conveyed a publisher’s change-of-heart to owners of its Kindle e-book reader: some purchasers of Orwell’s “1984” found it removed from their devices, with nothing to show for their purchase other than a refund. (Orwell would be amused.)

Worse, data stored online has less privacy protection both in practice and under the law. A hacker recently guessed the password to the personal e-mail account of a Twitter employee, and was thus able to extract the employee’s Google password. That in turn compromised a trove of Twitter’s corporate documents stored too conveniently in the cloud. Before, the bad guys usually needed to get their hands on people’s computers to see their secrets; in today’s cloud all you need is a password.

Thanks in part to the Patriot Act, the federal government has been able to demand some details of your online activities from service providers—and not to tell you about it. There have been thousands of such requests lodged since the law was passed, and the F.B.I.’s own audits have shown that there can be plenty of overreach—perhaps wholly inadvertent—in requests like these.

Here’s me on cloud computing.

Posted on July 30, 2009 at 7:06 AM

The Psychology of Being Scammed

Fascinating research on the psychology of con games. “The psychology of scams: Provoking and committing errors of judgement” was prepared for the UK Office of Fair Trading by the University of Exeter School of Psychology.

From the executive summary, here’s some stuff you may know:

Appeals to trust and authority: people tend to obey authorities so scammers use, and victims fall for, cues that make the offer look like a legitimate one being made by a reliable official institution or established reputable business.

Visceral triggers: scams exploit basic human desires and needs—such as greed, fear, avoidance of physical pain, or the desire to be liked—in order to provoke intuitive reactions and reduce the motivation of people to process the content of the scam message deeply. For example, scammers use triggers that make potential victims focus on the huge prizes or benefits on offer.

Scarcity cues. Scams are often personalised to create the impression that the offer is unique to the recipient. They also emphasise the urgency of a response to reduce the potential victim’s motivation to process the scam content objectively.

Induction of behavioural commitment. Scammers ask their potential victims to make small steps of compliance to draw them in, and thereby cause victims to feel committed to continue sending money.

The disproportionate relation between the size of the alleged reward and the cost of trying to obtain it. Scam victims are led to focus on the alleged big prize or reward in comparison to the relatively small amount of money they have to send in order to obtain their windfall; a phenomenon called ‘phantom fixation’. The high value reward (often life-changing, medically, financially, emotionally or physically) that scam victims thought they could get by responding, makes the money to be paid look rather small by comparison.

Lack of emotional control. Compared to non-victims, scam victims report being less able to regulate and resist emotions associated with scam offers. They seem to be unduly open to persuasion, or perhaps unduly undiscriminating about who they allow to persuade them. This creates an extra vulnerability in those who are socially isolated, because social networks often induce us to regulate our emotions when we otherwise might not.

And some stuff that surprised me:

…it was striking how some scam victims kept their decision to respond private and avoided speaking about it with family members or friends. It was almost as if with some part of their minds, they knew that what they were doing was unwise, and they feared the confirmation of that that another person would have offered. Indeed to some extent they hide their response to the scam from their more rational selves.

Another counter-intuitive finding is that scam victims often have better than average background knowledge in the area of the scam content. For example, it seems that people with experience of playing legitimate prize draws and lotteries are more likely to fall for a scam in this area than people with less knowledge and experience in this field. This also applies to those with some knowledge of investments. Such knowledge can increase rather than decrease the risk of becoming a victim.

…scam victims report that they put more cognitive effort into analysing scam content than non-victims. This contradicts the intuitive suggestion that people fall victim to scams because they invest too little cognitive energy in investigating their content, and thus overlook potential information that might betray the scam. This may, however, reflect the victim being ‘drawn in’ to the scam whilst non-victims include many people who discard scams without giving them a second glance.

Related: the psychology of con games.

Posted on June 17, 2009 at 2:05 PM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.
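The Stackelberg structure can be sketched in a few lines. This is a toy illustration of the idea, not the ARMOR system: the defender commits to a randomized coverage of targets (the hypothetical values below are made up), the attacker observes those probabilities and attacks the target with the highest expected payoff, and the defender chooses the coverage that minimizes that best response.

```python
# Toy Stackelberg security game: defender commits first to a mixed
# coverage strategy; attacker best-responds; defender grid-searches for
# the coverage that minimizes the attacker's best expected payoff.

from itertools import product

TARGET_VALUES = {"T1": 10, "T2": 6, "T3": 3}  # hypothetical target values
UNITS = 1.0  # total patrol probability mass to spread across targets

def attacker_best_response(coverage):
    # Attacker's expected payoff: target value if uncovered, 0 if covered.
    payoffs = {t: v * (1 - coverage[t]) for t, v in TARGET_VALUES.items()}
    target = max(payoffs, key=payoffs.get)
    return target, payoffs[target]

def best_coverage(step=0.05):
    best = None
    grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    for c1, c2 in product(grid, repeat=2):
        c3 = round(UNITS - c1 - c2, 2)
        if c3 < 0:
            continue
        coverage = {"T1": c1, "T2": c2, "T3": c3}
        _, payoff = attacker_best_response(coverage)
        if best is None or payoff < best[1]:
            best = (coverage, payoff)
    return best

coverage, attacker_payoff = best_coverage()
# High-value targets get proportionally more coverage, and randomization
# keeps the attacker's best expected payoff low whichever target is chosen.
```

The key property is exactly the one Pita described: because the defender randomizes, observing the schedule gives the attacker the probabilities but no exploitable pattern.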

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who were the victim of online fraud and 100 people who were not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money are much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

Second SHB Workshop Liveblogging (2)

The first session was about deception, moderated by David Clark.

Frank Stajano, Cambridge University (suggested reading: Understanding victims: Six principles for systems security), presented research with Paul Wilson, who films actual scams for “The Real Hustle.” His point is that we build security systems based on our “logic,” but users don’t always follow our logic. It’s fraudsters who really understand what people do, so we need to understand what the fraudsters understand. Things like distraction, greed, unknown accomplices, and social compliance are important.

David Livingstone Smith, University of New England (suggested reading: Less than human: self-deception in the imagining of others; Talk on Lying at La Ciudad de Las Ideas; a subsequent discussion; Why War?), is a philosopher by training, and goes back to basics: “What are we talking about?” A theoretical definition—”that which something has to have to fall under a term”—of deception is hard to pin down. “Cause to have a false belief,” from the Oxford English Dictionary, is inadequate. “To deceive is to intentionally cause someone to have a false belief” also doesn’t work. “Intentionally causing someone to have a false belief that the speaker knows to be false” still isn’t good enough. The fundamental problem is that these are anthropocentric definitions. Deception is not unique to humans; it gives organisms an evolutionary edge. For example, the mirror orchid fools a wasp into landing on it by looking like, and giving off chemicals that mimic, the female wasp. This example shows that we need a broader definition of “purpose.” His formal definition: “For systems A and B, A deceives B iff A possesses some character C with proper function F, and B possesses a mechanism C* with the proper function F* of producing representations, such that the proper function of C is to cause C* to fail to perform F* by causing C* to form false representations, and C does so in virtue of performing F, and B’s falsely representing enables some feature of A to perform its proper function.”

I spoke next, about the psychology of Conficker, how the human brain buys security, and why science fiction writers shouldn’t be hired to think about terrorism risks (to be published on Wired.com next week).

Dominic Johnson, University of Edinburgh (suggested reading: Paradigm Shifts in Security Strategy; Perceptions of victory and defeat), talked about his chapter in the book Natural Security: A Darwinian Approach to a Dangerous World. Life has 3.5 billion years of experience in security innovation; let’s look at how biology approaches security. Biomimicry, ecology, paleontology, animal behavior, evolutionary psychology, immunology, epidemiology, selection, and adaptation are all relevant. Redundancy is a very important survival tool for species. Here’s an adaptation example: The 9/11 threat was real and we knew about it, but we didn’t do anything. His thesis: Adaptation to novel security threats tends to occur after major disasters. There are many historical examples of this; Pearl Harbor, for example. Causes include sensory biases, psychological biases, leadership biases, organizational biases, and political biases—all pushing us towards maintaining the status quo. So it’s natural for us to poorly adapt to security threats in the modern world. A questioner from the audience asked whether control theory had any relevance to this model.

Jeff Hancock, Cornell University (suggested reading: On Lying and Being Lied To: A Linguistic Analysis of Deception in Computer-Mediated Communication; Separating Fact From Fiction: An Examination of Deceptive Self-Presentation in Online Dating Profiles), studies interpersonal deception: how the way we lie to each other intersects with communications technologies, how technologies change the way we lie, and whether technology can be used to detect lying. Despite new technology, people lie for traditional reasons. For example: on dating sites, men tend to lie about their height and women tend to lie about their weight. The recordability of the Internet also changes how we lie. The use of the first person singular tends to go down the more people lie. He verified this in many spheres, such as how people describe themselves in chat rooms, and true versus false statements that the Bush administration made about 9/11 and Iraq. The effect was more pronounced when administration officials were answering questions than when they were reading prepared remarks.
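The first-person-singular cue is simple enough to sketch. This is an illustration of the feature itself, not Hancock's actual analysis pipeline; the example sentences are invented:

```python
# Compute the rate of first-person-singular pronouns in a text, the
# linguistic cue that tends to drop when people lie.

import re

FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}

def first_person_rate(text):
    """Fraction of word tokens that are first-person-singular pronouns."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in FIRST_PERSON_SINGULAR)
    return hits / len(words)

direct = "I went to the store and I bought my groceries myself."
evasive = "The store was visited and the groceries were purchased."

assert first_person_rate(direct) > first_person_rate(evasive)
```

The intuition is psychological distancing: liars unconsciously avoid owning the statement, so passive and impersonal constructions crowd out "I" and "my."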

EDITED TO ADD (6/11): Adam Shostack liveblogged this session, too. And Ross’s liveblogging is in his blog post’s comments.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 9:37 AM

Cloud Computing

This year’s overhyped IT concept is cloud computing. Also called software as a service (SaaS), cloud computing is when you run software over the internet and access it via a browser. The Salesforce.com customer management software is an example of this. So is Google Docs. If you believe the hype, cloud computing is the future.

But, hype aside, cloud computing is nothing new. It’s the modern version of the timesharing model from the 1960s, which was eventually killed by the rise of the personal computer. It’s what Hotmail and Gmail have been doing all these years, and it’s what social networking sites, remote backup companies, and remote email filtering companies such as MessageLabs do. Any IT outsourcing—network infrastructure, security monitoring, remote hosting—is a form of cloud computing.

The old timesharing model arose because computers were expensive and hard to maintain. Modern computers and networks are drastically cheaper, but they’re still hard to maintain. As networks have become faster, it is again easier to have someone else do the hard work. Computing has become more of a utility; users are more concerned with results than technical details, so the tech fades into the background.

But what about security? Isn’t it more dangerous to have your email on Hotmail’s servers, your spreadsheets on Google’s, your personal conversations on Facebook’s, and your company’s sales prospects on salesforce.com’s? Well, yes and no.

IT security is about trust. You have to trust your CPU manufacturer, your hardware, operating system and software vendors—and your ISP. Any one of these can undermine your security: crash your systems, corrupt data, allow an attacker to get access to systems. We’ve spent decades dealing with worms and rootkits that target software vulnerabilities. We’ve worried about infected chips. But in the end, we have no choice but to blindly trust the security of the IT providers we use.

SaaS moves the trust boundary out one step further—you now have to also trust your software service vendors—but it doesn’t fundamentally change anything. It’s just another vendor we need to trust.

There is one critical difference. When a computer is within your network, you can protect it with other security systems such as firewalls and IDSs. You can build a resilient system that works even if the vendors you have to trust aren’t as trustworthy as you’d like. With any outsourcing model, whether it’s cloud computing or something else, you can’t. You have to trust your outsourcer completely: not only its security, but its reliability, its availability, and its business continuity.

You don’t want your critical data to be on some cloud computer that abruptly disappears because its owner goes bankrupt. You don’t want the company you’re using to be sold to your direct competitor. You don’t want the company to cut corners, without warning, because times are tight. Or raise its prices and then refuse to let you have your data back. These things can happen with software vendors, but the results aren’t as drastic.

There are two different types of cloud computing customers. The first pays little or nothing: a nominal fee, or free use in exchange for ads, as with Gmail and Facebook. These customers have no leverage with their outsourcers; you can lose everything, and companies like Google and Amazon won’t spend a lot of time caring. The second type pays considerably for these services: to Salesforce.com, MessageLabs, managed network companies, and so on. These customers have more leverage, provided they write their service contracts correctly. Still, nothing is guaranteed.

Trust is a concept as old as humanity, and the solutions are the same as they have always been. Be careful who you trust, be careful what you trust them with, and be careful how much you trust them. Outsourcing is the future of computing. Eventually we’ll get this right, but you don’t want to be a casualty along the way.

This essay originally appeared in The Guardian.

EDITED TO ADD (6/4): Another opinion.

EDITED TO ADD (6/5): A rebuttal. And an apology for the tone of the rebuttal. The reason I am talking so much about cloud computing is that reporters and interviewers keep asking me about it. I feel kind of dragged into this whole thing.

EDITED TO ADD (6/6): At the Computers, Freedom, and Privacy conference last week, Bob Gellman said (this, by him, is worth reading) that the nine most important words in cloud computing are: “terms of service,” “location, location, location,” and “provider, provider, provider”—basically making the same point I did. You need to make sure the terms of service you sign up to are ones you can live with. You need to make sure the location of the provider doesn’t subject you to any laws that you can’t live with. And you need to make sure your provider is someone you’re willing to work with. Basically, if you’re going to give someone else your data, you need to trust them.

Posted on June 4, 2009 at 6:14 AM