Entries Tagged "trust"


An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization—domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service only exacerbate the problem.

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were once cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation—or maybe a conversation inside Facebook.

Remember when Facebook changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, LexisNexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.

This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant—even though it occurred at the phone company switching office and not in the target’s home or office—the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Unfair and Deceptive Data Trade Practices

Do you know what your data did last night? Almost none of the more than 27 million people who took the RealAge quiz realized that their personal health data was being used by drug companies to develop targeted e-mail marketing campaigns.

There’s a basic consumer protection principle at work here, and it’s the concept of “unfair and deceptive” trade practices. Basically, a company shouldn’t be able to say one thing and do another: sell used goods as new, lie on ingredients lists, advertise prices that aren’t generally available, claim features that don’t exist, and so on.

Buried in RealAge’s 2,400-word privacy policy is this disclosure: “If you elect to say yes to becoming a free RealAge Member, we will periodically send you free newsletters and e-mails that directly promote the use of our site(s) or the purchase of our products or services and may contain, in whole or in part, advertisements for third parties which relate to marketed products of selected RealAge partners.”

They maintain that when you join the website, you consent to receiving pharmaceutical company spam. But since that isn’t spelled out, it’s not really informed consent. That’s deceptive.

Cloud computing is another technology where users entrust their data to service providers. Salesforce.com, Gmail, and Google Docs are examples; your data isn’t on your computer—it’s out in the “cloud” somewhere—and you access it from your web browser. Cloud computing has significant benefits for customers and huge profit potential for providers. It’s one of the fastest-growing IT market segments—69% of Americans now use some sort of cloud computing service—but the business is rife with shady, if not outright deceptive, advertising.

Take Google, for example. Last month, the Electronic Privacy Information Center (I’m on its board of directors) filed a complaint with the Federal Trade Commission concerning Google’s cloud computing services. On its website, Google repeatedly assures customers that their data is secure and private, while published vulnerabilities demonstrate that it is not. Google’s not foolish, though; its Terms of Service explicitly disavow any warranty or any liability for harm that might result from Google’s negligence, recklessness, malevolent intent, or even purposeful disregard of existing legal obligations to protect the privacy and security of user data. EPIC claims that’s deceptive.

Facebook isn’t much better. Its plainly written (and not legally binding) Statement of Principles contains an admirable set of goals, but its denser and more legalistic Statement of Rights and Responsibilities undermines a lot of it. One research group that studies these documents called it “democracy theater”: Facebook wants the appearance of involving users in governance, without the messiness of actually having to do so. Deceptive.

These issues are not identical. RealAge is hiding what it does with your data. Google is trying to both assure you that your data is safe and duck any responsibility when it’s not. Facebook wants to market a democracy but run a dictatorship. But they all involve trying to deceive the customer.

Cloud computing services like Google Docs, and social networking sites like RealAge and Facebook, bring with them significant privacy and security risks over and above traditional computing models. Unlike data on my own computer, which I can protect to whatever level I believe prudent, I have no control over any of these sites, nor any real knowledge of how these companies protect my privacy and security. I have to trust them.

This may be fine—the advantages might very well outweigh the risks—but users often can’t weigh the trade-offs because these companies are going out of their way to hide the risks.

Of course, companies don’t want people to make informed decisions about where to leave their personal data. RealAge wouldn’t get 27 million members if its webpage clearly stated “you are signing up to receive e-mails containing advertising from pharmaceutical companies,” and Google Docs wouldn’t get five million users if its webpage said “We’ll take some steps to protect your privacy, but you can’t blame us if something goes wrong.”

And of course, trust isn’t black and white. If, for example, Amazon tried to use customer credit card info to buy itself office supplies, we’d all agree that that was wrong. If it used customer names to solicit new business from their friends, most of us would consider this wrong. When it uses buying history to try to sell customers new books, many of us appreciate the targeted marketing. Similarly, no one expects Google’s security to be perfect. But if it didn’t fix known vulnerabilities, most of us would consider that a problem.

This is why understanding is so important. For markets to work, consumers need to be able to make informed buying decisions. They need to understand both the costs and benefits of the products and services they buy. Allowing sellers to manipulate the market by outright lying, or even by hiding vital information, about their products breaks capitalism—and that’s why the government has to step in to ensure markets work smoothly.

Last month, Mary K. Engle, Acting Deputy Director of the FTC’s Bureau of Consumer Protection, said: “a company’s marketing materials must be consistent with the nature of the product being offered. It’s not enough to disclose the information only in the fine print of a lengthy online user agreement.” She was speaking about Digital Rights Management and, specifically, an incident where Sony used a music copy protection scheme without disclosing that it secretly installed software on customers’ computers. DRM is different from cloud computing or even online surveys and quizzes, but the principle is the same.

Engle again: “if your advertising giveth and your EULA [license agreement] taketh away, don’t be surprised if the FTC comes calling.” That’s the right response from government.

A version of this article originally appeared on the Wall Street Journal website.

EDITED TO ADD (4/29): Two rebuttals.

Posted on April 27, 2009 at 6:16 AM

Tweenbots

Tweenbots:

Tweenbots are human-dependent robots that navigate the city with the help of pedestrians they encounter. Rolling at a constant speed, in a straight line, Tweenbots have a destination displayed on a flag, and rely on people they meet to read this flag and to aim them in the right direction to reach their goal.

Given their extreme vulnerability, the vastness of city space, the dangers posed by traffic, suspicion of terrorism, and the possibility that no one would be interested in helping a lost little robot, I initially conceived the Tweenbots as disposable creatures which were more likely to struggle and die in the city than to reach their destination. Because I built them with minimal technology, I had no way of tracking the Tweenbot’s progress, and so I set out on the first test with a video camera hidden in my purse. I placed the Tweenbot down on the sidewalk, and walked far enough away that I would not be observed as the Tweenbot—a smiling 10-inch tall cardboard missionary—bumped along towards his inevitable fate.

The results were unexpected. Over the course of the following months, throughout numerous missions, the Tweenbots were successful in rolling from their start point to their far-away destination assisted only by strangers. Every time the robot got caught under a park bench, ground futilely against a curb, or became trapped in a pothole, some passerby would always rescue it and send it toward its goal. Never once was a Tweenbot lost or damaged. Often, people would ignore the instructions to aim the Tweenbot in the “right” direction, if that direction meant sending the robot into a perilous situation. One man turned the robot back in the direction from which it had just come, saying out loud to the Tweenbot, “You can’t go that way, it’s toward the road.”

It’s a measure of our restored sanity that no one called the TSA. Or maybe it’s just that no one has tried this in Boston yet. Or maybe it’s a lesson for terrorists: paint smiley faces on your bombs.

Posted on April 13, 2009 at 6:14 AM

Social Networking Identity Theft Scams

Clever:

I’m going to tell you exactly how someone can trick you into thinking they’re your friend. Now, before you send me hate mail for revealing this deep, dark secret, let me assure you that the scammers, crooks, predators, stalkers and identity thieves are already aware of this trick. It works only because the public is not aware of it. If you’re scamming someone, here’s what you’d do:

Step 1: Request to be “friends” with a dozen strangers on MySpace. Let’s say half of them accept. Collect a list of all their friends.

Step 2: Go to Facebook and search for those six people. Let’s say you find four of them also on Facebook. Request to be their friends on Facebook. All accept because you’re already an established friend.

Step 3: Now compare the MySpace friends against the Facebook friends. Generate a list of people that are on MySpace but are not on Facebook. Grab the photos and profile data on those people from MySpace and use it to create false but convincing profiles on Facebook. Send “friend” requests to your victims on Facebook.

As a bonus, others who are friends of both your victims and your fake self will contact you to be friends and, of course, you’ll accept. In fact, Facebook itself will suggest you as a friend to those people.

(Think about the trust factor here. These secondary victims not only feel they know you; they actually request “friend” status. They sought you out.)

Step 4: Now, you’re in business. You can ask things of these people that only friends dare ask.

Like what? Lend me $500. When are you going out of town? Etc.

The author has no evidence that anyone has actually done this, but certainly someone will, sooner or later.

We have seen attacks by people hijacking existing social networking accounts:

Rutberg was the victim of a new, targeted version of a very old scam—the “Nigerian,” or “419,” ploy. The first reports of such scams emerged back in November, part of a new trend in the computer underground—rather than sending out millions of spam messages in the hopes of trapping a tiny fraction of recipients, Web criminals are getting much more personal in their attacks, using social networking sites and other databases to make their story lines much more believable.

In Rutberg’s case, criminals managed to steal his Facebook login password, steal his Facebook identity, and change his page to make it appear he was in trouble. Next, the criminals sent e-mails to dozens of friends, begging them for help.

“Can you just get some money to us,” the imposter implored one of Rutberg’s friends. “I tried Amex and it’s not going through. … I’ll refund you as soon as am back home. Let me know please.”

Posted on April 8, 2009 at 6:43 AM

The Kindness of Strangers

When I was growing up, children were commonly taught: “don’t talk to strangers.” Strangers might be bad, we were told, so it’s prudent to steer clear of them.

And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.

These two pieces of advice may seem to contradict each other, but they don’t. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it’s not a random choice. It’s more likely, although still unlikely, that the stranger is up to no good.

As a species, we tend to help each other, and a surprising amount of our security and safety comes from the kindness of strangers. During disasters: floods, earthquakes, hurricanes, bridge collapses. In times of personal tragedy. And even in normal times.

If you’re sitting in a café working on your laptop and need to get up for a minute, ask the person sitting next to you to watch your stuff. He’s very unlikely to steal anything. Or, if you’re nervous about that, ask the three people sitting around you. Those three people don’t know each other, and will not only watch your stuff, but they’ll also watch each other to make sure no one steals anything.

Again, this works because you’re selecting the people. If three people walk up to you in the café and offer to watch your computer while you go to the bathroom, don’t take them up on that offer. Your odds of getting three honest people are much lower.

Some computer systems rely on the kindness of strangers, too. The Internet works because nodes benevolently forward packets to each other without any recompense from either the sender or receiver of those packets. Wikipedia works because strangers are willing to write for, and edit, an encyclopedia—with no recompense.

Collaborative spam filtering is another example. Basically, once someone notices a particular e-mail is spam, he marks it, and everyone else in the network is alerted that it’s spam. Marking the e-mail is a completely altruistic task; the person doing it gets no benefit from the action. But he receives benefit from everyone else doing it for other e-mails.
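
To make the mechanism concrete, here is a minimal sketch of one way such a shared blacklist could work, assuming an exact-hash fingerprint and a single shared store; real systems such as DCC or Vipul’s Razor use fuzzy fingerprints and distributed servers, so treat the names and structure here as illustrative only:

    # Minimal sketch of collaborative spam filtering via a shared hash list.
    # Illustration only; not any particular product's protocol.
    import hashlib

    shared_spam_hashes = set()   # stands in for the network's shared database

    def fingerprint(message):
        # Reduce a message to a fingerprint all participants can compare.
        return hashlib.sha256(message.strip().lower().encode()).hexdigest()

    def report_spam(message):
        # One user marks a message as spam; everyone benefits.
        shared_spam_hashes.add(fingerprint(message))

    def is_spam(message):
        # Any other user can now check incoming mail against the shared list.
        return fingerprint(message) in shared_spam_hashes

    report_spam("Cheap meds!!! Click here")
    print(is_spam("cheap meds!!! click here"))   # True: someone else's report protects you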

Tor is a system for anonymous Web browsing. The details are complicated, but basically, a network of Tor servers passes Web traffic among each other in such a way as to anonymize where it came from. Think of it as a giant shell game. As a Web surfer, I put my Web query inside a shell and send it to a random Tor server. That server knows who I am but not what I am doing. It passes that shell to another Tor server, which passes it to a third. That third server—which knows what I am doing but not who I am—processes the Web query. When the Web page comes back to that third server, the process reverses itself and I get my Web page. Assuming enough Web surfers are sending enough shells through the system, even someone eavesdropping on the entire network can’t figure out what I’m doing.
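
Here is a toy sketch of that layering idea, with plain dictionaries standing in for the encryption; real Tor negotiates a key with each relay and encrypts every layer, none of which is shown here:

    # Toy illustration of onion-style layering -- NOT real cryptography or real Tor.
    # Each "layer" is just a dict naming the next hop; Tor encrypts each layer
    # with a key shared with that relay.

    def wrap(request, relays):
        # The client wraps the request once per relay, innermost layer first.
        onion = {"payload": request}
        for relay in reversed(relays):
            onion = {"for": relay, "inner": onion}
        return onion

    def peel(onion, relay):
        # A relay can remove only the layer addressed to it.
        assert onion["for"] == relay, "this layer is not for this relay"
        return onion["inner"]

    onion = wrap("GET https://example.com/", ["relay1", "relay2", "relay3"])
    onion = peel(onion, "relay1")    # relay1 knows the sender, sees only another layer
    onion = peel(onion, "relay2")    # relay2 knows neither sender nor request
    exit_layer = peel(onion, "relay3")
    print(exit_layer["payload"])     # only the last relay sees the actual request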

It’s a very clever system, and it protects a lot of people, including journalists, human rights activists, whistleblowers, and ordinary people living in repressive regimes around the world. But it only works because of the kindness of strangers. No one gets any benefit from being a Tor server; it uses up bandwidth to forward other people’s packets around. It’s more efficient to be a Tor client and use the forwarding capabilities of others. But if there are no Tor servers, then there’s no Tor. Tor works because people are willing to set themselves up as servers, at no benefit to them.

Alibi clubs work along similar lines. You can find them on the Internet, and they’re loose collections of people willing to help each other out with alibis. Sign up, and you’re in. You can ask someone to pretend to be your doctor and call your boss. Or someone to pretend to be your boss and call your spouse. Or maybe someone to pretend to be your spouse and call your boss. Whatever you want, just ask and some anonymous stranger will come to your rescue. And because your accomplice is an anonymous stranger, it’s safer than asking a friend to participate in your ruse.

There are risks in these sorts of systems. Regularly, marketers and other people with agendas try to manipulate Wikipedia entries to suit their interests. Intelligence agencies can, and almost certainly have, set themselves up as Tor servers to better eavesdrop on traffic. And a do-gooder could join an alibi club just to expose other members. But for the most part, strangers are willing to help each other, and systems that harvest this kindness work very well on the Internet.

This essay originally appeared on the Wall Street Journal website.

Posted on March 13, 2009 at 7:41 AM

Insiders

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization’s network. The bomb would have “detonated” on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.

Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they’re known by the system. They know how the system and its security work, and where its weak points are. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana’s attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.

Insiders are especially pernicious attackers because they’re trusted. They have access because they’re supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They’re already inside the security system, making them much harder to defend against.

It’s not possible to design a system without trusted people. They’re everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company’s name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn’t operate without trusted people.

Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.

Of course, this problem is much, much older than computers. And the solutions haven’t changed much throughout history, either. There are five basic techniques to deal with trusted people:

1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.

2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA’s no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.

3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as “need to know” and other levels of security clearance.

4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It’s why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It’s the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It’s why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It’s why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud. (A short code sketch of this two-person idea follows the list.)

5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrent effect and increase the overall level of security in society. This is why audit is so vital.
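
Here is a minimal code sketch of technique 4’s two-signature idea, with hypothetical names and amounts; it illustrates the principle, not any real institution’s system:

    # Minimal sketch of two-person control: a high-value action executes only
    # after two *different* trusted people approve it. Hypothetical example.

    class DualControlPayment:
        def __init__(self, amount, payee):
            self.amount = amount
            self.payee = payee
            self.approvals = set()

        def approve(self, officer):
            self.approvals.add(officer)   # approving twice yourself adds nothing

        def execute(self):
            if len(self.approvals) < 2:
                raise PermissionError("needs signatures from two distinct officers")
            return "paid %s to %s" % (self.amount, self.payee)

    payment = DualControlPayment(250000, "Acme Corp")
    payment.approve("alice")
    payment.approve("alice")      # a single dishonest insider cannot proceed alone
    payment.approve("bob")
    print(payment.execute())      # two separate people now share the trust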

These security techniques don’t only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren’t perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.

Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn’t have an audit process by which a change one person makes on the servers is checked by someone else; I’m sure that would be prohibitively expensive. Certainly the company’s IT department should have terminated Makwana’s network access as soon as he was fired, and not at the end of the day.

In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

This essay originally appeared on the Wall Street Journal website.

Posted on February 16, 2009 at 12:20 PM

Audit

As the first digital president, Barack Obama is learning the hard way how difficult it can be to maintain privacy in the information age. Earlier this year, his passport file was snooped by contract workers in the State Department. In October, someone at Immigration and Customs Enforcement leaked information about his aunt’s immigration status. And in November, Verizon employees peeked at his cell phone records.

What these three incidents illustrate is not that computerized databases are vulnerable to hacking—we already knew that, and anyway the perpetrators all had legitimate access to the systems they used—but how important audit is as a security measure.

When we think about security, we commonly think about preventive measures: locks to keep burglars out of our homes, bank safes to keep thieves from our money, and airport screeners to keep guns and bombs off airplanes. We might also think of detection and response measures: alarms that go off when burglars pick our locks or dynamite open bank safes, sky marshals on airplanes who respond when a hijacker manages to sneak a gun through airport security. But audit, figuring out who did what after the fact, is often far more important than any of those other three.

Most security against crime comes from audit. Of course we use locks and alarms, but we don’t wear bulletproof vests. The police provide for our safety by investigating crimes after the fact and prosecuting the guilty: that’s audit.

Audit helps ensure that people don’t abuse positions of trust. The cash register, for example, is basically an audit system. Cashiers have to handle the store’s money. To ensure they don’t skim from the till, the cash register keeps an audit trail of every transaction. The store owner can look at the register totals at the end of the day and make sure the amount of money in the register is the amount that should be there.

The same idea secures us from police abuse, too. The police have enormous power, including the ability to intrude into very intimate aspects of our life in order to solve crimes and keep the peace. This is generally a good thing, but to ensure that the police don’t abuse this power, we put in place systems of audit like the warrant process.

The whole NSA warrantless eavesdropping scandal was about this. Some misleadingly painted it as allowing the government to eavesdrop on foreign terrorists, but the government always had that authority. What the government wanted was to not have to submit a warrant, even after the fact, to a secret FISA court. What they wanted was to not be subject to audit.

That would be an incredibly bad idea. Law enforcement systems that don’t have good audit features designed in, or are exempt from this sort of audit-based oversight, are much more prone to abuse by those in power—because they can abuse the system without the risk of getting caught. Audit is essential as the NSA increases its domestic spying. And large police databases, like the FBI Next Generation Identification System, need to have strong audit features built in.

For computerized database systems like that—systems entrusted with other people’s information—audit is a very important security mechanism. Hospitals need to keep databases of very personal health information, and doctors and nurses need to be able to access that information quickly and easily. A good audit record of who accessed what when is the best way to ensure that those trusted with our medical information don’t abuse that trust. It’s the same with IRS records, credit reports, police databases, telephone records—anything personal that someone might want to peek at during the course of his job.
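
As a minimal sketch of what such an audit record looks like in code, assuming an in-memory append-only list and made-up record names (a real system would also need tamper-evident storage and independent review):

    # Minimal sketch of an access audit trail: who looked at what, and when.
    # Illustration only; real systems need tamper-evident, append-only storage.
    import datetime

    audit_log = []   # in practice: write-once storage, reviewed by someone else

    patient_records = {"patient-17": {"name": "J. Doe", "diagnosis": "..."}}

    def read_record(user, record_id):
        # Every access is logged before the data is returned.
        audit_log.append({
            "who": user,
            "what": record_id,
            "when": datetime.datetime.utcnow().isoformat(),
        })
        return patient_records[record_id]

    read_record("nurse_smith", "patient-17")
    read_record("dr_jones", "patient-17")

    # The after-the-fact question audit answers: who looked at patient-17?
    print([entry["who"] for entry in audit_log if entry["what"] == "patient-17"])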

Which brings us back to President Obama. In each of those three examples, someone in a position of trust inappropriately accessed personal information. The differences in how they played out are due to differences in audit. The State Department’s audit worked best; they had alarm systems in place that alerted superiors when Obama’s passport files were accessed and who accessed them. Verizon’s audit mechanisms worked less well; they discovered the inappropriate account access and have narrowed the culprits down to a few people. Audit at Immigration and Customs Enforcement was far less effective; they still don’t know who accessed the information.

Large databases filled with personal information, whether managed by governments or corporations, are an essential aspect of the information age. And they each need to be accessed, for legitimate purposes, by thousands or tens of thousands of people. The only way to ensure those people don’t abuse the power they’re entrusted with is through audit. Without it, we will simply never know who’s peeking at what.

This essay first appeared on the Wall Street Journal website.

Posted on December 10, 2008 at 2:21 PM

The Neuroscience of Cons

Fascinating:

The key to a con is not that you trust the conman, but that he shows he trusts you. Conmen ply their trade by appearing fragile or needing help, by seeming vulnerable. Because of THOMAS [The Human Oxytocin Mediated Attachment System], the human brain makes us feel good when we help others—this is the basis for attachment to family and friends and cooperation with strangers. “I need your help” is a potent stimulus for action.

This is interesting. Conventional wisdom says that all cons rely on the mark’s greed to work, but this short essay implies that greed is only a secondary factor.

Posted on November 18, 2008 at 6:32 AM

