Entries Tagged "privacy"


Garbage Cans that Spy on You

From The Guardian:

Though he foresaw many ways in which Big Brother might watch us, even George Orwell never imagined that the authorities would keep a keen eye on your bin.

Residents of Croydon, south London, have been told that the microchips being inserted into their new wheely bins may well be adapted so that the council can judge whether they are producing too much rubbish.

I call this kind of thing “embedded government”: hardware and/or software technology put inside a device to make sure that we conform to the law.

And there are security risks.

If, for example, computer hackers broke into the system, they could see sudden reductions in waste in specific households, suggesting the owners were on holiday and the house vacant.

To me, this is just another example of those implementing policy not being the ones who bear the costs. How long would the policy last if it were made clear to those implementing it that they would be held personally liable, even if only via their departmental budgets or careers, for any losses to residents if the database did get hacked?

Posted on March 4, 2005 at 10:32 AM

Sensitive Information on Used Hard Drives

A research team bought over a hundred used hard drives for about a thousand dollars, and found more than half still contained personal and commercially sensitive information — some of it blackmail material.

People have repeated this experiment again and again, in a variety of countries, and the results have been pretty much the same. People don’t understand the risks of throwing away hard drives containing sensitive information.

What struck me about this story was the wide range of dirt they were able to dig up: insurance company records, a school’s file on its children, evidence of an affair, and so on. And although it cost them a grand to get this, they still had a grand’s worth of salable computer hardware at the end of their experiment.
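The reason this keeps happening is that deleting a file doesn’t destroy its contents: unlinking removes only the directory entry, and the data blocks stay on disk until something overwrites them. As a minimal sketch of doing better (the function name is mine, not from any standard tool), here is an overwrite-before-delete pass — with the caveat that SSD wear leveling, journaling filesystems, and remapped bad sectors can all retain copies that this approach never touches:

```python
import os
import tempfile

def wipe_file(path: str, passes: int = 1) -> None:
    """Overwrite a file's bytes in place, then unlink it.

    A plain os.remove() only drops the directory entry; the data
    blocks remain on disk until reused, which is exactly what the
    researchers recovered from their hundred used drives.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * length)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)

# Demo: create a "sensitive" file, then wipe it before disposal.
fd, path = tempfile.mkstemp()
os.write(fd, b"insurance company records")
os.close(fd)
wipe_file(path)
```

For a whole drive rather than a single file, the same principle applies at the block-device level, which is what purpose-built disk-wiping tools do.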

Posted on March 2, 2005 at 9:40 AM

ChoicePoint

The ChoicePoint fiasco has been news for over a week now, and there are only a few things I can add. For those who haven’t been following along, ChoicePoint mistakenly sold personal credit reports for about 145,000 Americans to criminals.

This story would have never been made public if it were not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.

ChoicePoint’s behavior is a textbook example of how to be a bad corporate citizen. The information leakage occurred in October, and it didn’t tell any victims until February. First, ChoicePoint notified 30,000 Californians and said that it would not notify anyone who lived outside California (since the law didn’t require it). Finally, after public outcry, it announced that it would notify everyone affected.

The clear moral here is that first, SB 1386 needs to be a national law, since without it ChoicePoint would have covered up its mistakes forever. And second, the national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the “ongoing FBI investigation” shield.

More is required. Compare the difference in ChoicePoint’s public marketing slogans with its private reality.

From “Identity Theft Puts Pressure on Data Sellers,” by Evan Perez, in the 18 Feb 2005 Wall Street Journal:

The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and social security numbers.

From ChoicePoint Chairman and CEO Derek V. Smith:

ChoicePoint’s core competency is verifying and authenticating individuals and their credentials.

The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It’s expensive — both in money and time — to the victims. And there’s not much people can do to stop it, as much of their personal identifying information is not under their control: it’s in the computers of companies like ChoicePoint.

ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint’s databases are not ChoicePoint’s customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company “NoChoicePoint.”

The upshot of this is that ChoicePoint doesn’t bear the costs of identity theft, so ChoicePoint doesn’t take those costs into account when figuring out how much money to spend on data security. In economic terms, it’s an “externality.”

The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint now must figure the cost of public humiliation when they decide how much money to spend on security. But the actual cost of ChoicePoint’s security failure is much, much greater.

Until ChoicePoint feels those costs — whether through regulation or liability — it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.

Posted on February 23, 2005 at 3:19 PM

Security Risks of Frequent-Shopper Cards

This is from Richard M. Smith:

Tukwila, Washington firefighter Philip Scott Lyons found out the hard way that supermarket loyalty cards come with a huge price. Lyons was arrested last August and charged with attempted arson. Police alleged at the time that Lyons tried to set fire to his own house while his wife and children were inside. According to KOMO-TV and the Seattle Times, a major piece of evidence used against Lyons in his arrest was the record of the supermarket purchases that he made with his Safeway Club Card. Police investigators had discovered that his Club Card was used to buy fire starters of the same type used in the arson attempt.

For Lyons, the story did have a happy ending. All charges were dropped against him in January 2005 because another person stepped forward saying he set the fire and not Lyons. Lyons is now back at work after more than 5 months of being on administrative leave from his firefighter job.

The moral of this story is that even the most innocent database can be used against a person in a criminal investigation, turning their life completely upside down.

Safeway needs to be more up-front with customers about the potential downsides of shopper cards. It should also provide the details of its role in the arrest of Mr. Lyons and other criminal cases in which the company provided Club Card purchase information to police investigators.

Here is how Safeway currently describes their Club Card program in the Club Card application:

We respect your privacy. Safeway does not sell or lease personally identifying information (i.e., your name, address, telephone number, and bank and credit card account numbers) to non-affiliated companies or entities. We do record information regarding the purchases made with your Safeway Club Card to help us provide you with special offers and other information. Safeway also may use this information to provide you with personally tailored coupons, offers or other information that may be provided to Safeway by other companies. If you do not wish to receive personally tailored coupons, offers or other information, please check the box below. Must be at least 18 years of age.

Links:

Firefighter Arrested For Attempted Arson

Fireman attempted to set fire to house, charges say

Tukwila Firefighter Cleared Of Arson Charges

Posted on February 18, 2005 at 8:00 AM

T-Mobile Hack

For at least seven months last year, a hacker had access to T-Mobile’s customer network. He’s known to have accessed information belonging to 400 customers — names, Social Security numbers, voicemail messages, SMS messages, photos — and probably had the ability to access data belonging to any of T-Mobile’s 16.3 million U.S. customers. But in its fervor to report on the security of cell phones, and T-Mobile in particular, the media missed the most important point of the story: The security of much of our data is not under our control.

This is new. A dozen years ago, if someone wanted to look through your mail, they would have to break into your house. Now they can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your house; now it’s on a computer owned by a telephone company. Your financial data is on Websites protected only by passwords. The list of books you browse, and the books you buy, is stored in the computers of some online bookseller. Your affinity card allows your supermarket to know what food you like. Data that used to be under your direct control is now controlled by others.

We have no choice but to trust these companies with our privacy, even though the companies have little incentive to protect that privacy. T-Mobile suffered some bad press for its lousy security, nothing more. It’ll spend some money improving its security, but it’ll be security designed to protect its reputation from bad PR, not security designed to protect the privacy of its customers.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as that data is held by others. The police need a warrant to read the e-mail on your computer; but they don’t need one to read it off the backup tapes at your ISP. According to the Supreme Court, that’s not a search as defined by the Fourth Amendment.

This isn’t a technology problem, it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant — even though it occurred at the phone company switching office — the Supreme Court must recognize that reading e-mail at an ISP is no different.

This essay appeared in eWeek.

Posted on February 14, 2005 at 4:26 PM

Authentication and Expiration

There’s a security problem with many Internet authentication systems that’s never talked about: there’s no way to terminate the authentication.

A couple of months ago, I bought something from an e-commerce site. At the checkout page, I wasn’t able to just type in my credit-card number and make my purchase. Instead, I had to choose a username and password. Usually I don’t like doing that, but in this case I wanted to be able to access my account at a later date. In fact, the password was useful because I needed to return an item I purchased.

Months have passed, and I no longer want an ongoing relationship with the e-commerce site. I don’t want a username and password. I don’t want them to have my credit-card number on file. I’ve received my purchase, I’m happy, and I’m done. But because that username and password have no expiration date associated with them, they never end. It’s not a subscription service, so there’s no mechanism to sever the relationship. I will have access to that e-commerce site for as long as it remembers that username and password.

In other words, I am liable for that account forever.

Traditionally, passwords have indicated an ongoing relationship between a user and some computer service. Sometimes it’s a company employee and the company’s servers. Sometimes it’s an account holder and an ISP. In both cases, both parties want to continue the relationship, so expiring a password and then forcing the user to choose another is a matter of security.

In cases with this ongoing relationship, the security consideration is damage minimization. Nobody wants some bad guy to learn the password, and everyone wants to minimize the amount of damage he can do if he does. Regularly changing your password is a solution to that problem.
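The mechanics of that damage-minimization idea are simple: a credential carries its own maximum age, and the service refuses it once the age is exceeded. A minimal sketch (all names here are hypothetical, not any real site’s API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class Credential:
    """A username with a built-in lifetime, so a stolen password
    is only useful until the next forced reset."""
    username: str
    created: datetime
    max_age: timedelta  # site policy: how long before a forced reset

    def is_expired(self, now: Optional[datetime] = None) -> bool:
        now = now or datetime.now(timezone.utc)
        return now - self.created > self.max_age

# Demo: a 90-day policy applied to an old and a recent credential.
now = datetime.now(timezone.utc)
stale = Credential("alice", now - timedelta(days=120), timedelta(days=90))
fresh = Credential("bob", now - timedelta(days=10), timedelta(days=90))
```

The point of the essay is that the e-commerce site in question implements no such policy at all: its `max_age` is effectively infinite.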

This approach works because both sides want it to; they both want to keep the authentication system working correctly, and minimize attacks.

In the case of the e-commerce site, the interests are much more one-sided. The e-commerce site wants me to live in their database forever. They want to market to me, and entice me to come back. They want to sell my information. (This is the kind of information that might be buried in the privacy policy or terms of service, but no one reads those because they’re unreadable. And all bets are off if the company changes hands.)

There’s nothing I can do about this, but a username and password that never expire is another matter entirely. The e-commerce site wants me to establish an account because it increases the chances that I’ll use them again. But I want a way to terminate the business relationship, a way to say: “I am no longer taking responsibility for items purchased using that username and password.”

Near as I can tell, the username and password I typed into that e-commerce site puts my credit card at risk until it expires. If the e-commerce site uses a system that debits amounts from my checking account whenever I place an order, I could be at risk forever. (The US has legal liability limits, but they’re not that useful. According to Regulation E, the electronic transfers regulation, a fraudulent transaction must be reported within two days to cap liability at US$50; within 60 days, it’s capped at $500. Beyond that, you’re out of luck.)
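The Regulation E schedule described above is easy to state as a function of reporting delay — this is a sketch of the caps as given in the text, not legal advice, and it models “out of luck” as an unbounded cap:

```python
def regulation_e_cap(days_to_report: int) -> float:
    """Consumer liability cap for a fraudulent electronic transfer,
    per the schedule described in the text: report within 2 days
    for a $50 cap, within 60 days for $500, and beyond that there
    is no cap (modeled here as infinity)."""
    if days_to_report <= 2:
        return 50.0
    if days_to_report <= 60:
        return 500.0
    return float("inf")
```

The shape of the function is the problem: a credential with no expiration means the clock can start running on a transaction you have no reason to be watching for.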

This is wrong. Every e-commerce site should have a way to purchase items without establishing a username and password. I like sites that allow me to make a purchase as a “guest,” without setting up an account.

But just as importantly, every e-commerce site should have a way for customers to terminate their accounts and should allow them to delete their usernames and passwords from the system. It’s okay to market to previous customers. It’s not okay to needlessly put them at financial risk.

This essay also appeared in the Jan/Feb 05 issue of IEEE Security & Privacy.

Posted on February 10, 2005 at 7:55 AM

Implanting Chips in People at a Distance

I have no idea if this is real or not. But even if it’s not real, it’s just a matter of time before it becomes real. How long before people can surreptitiously have RFID tags injected into them?

What is the ID SNIPER rifle?

It is used to implant a GPS-microchip in the body of a human being, using a high powered sniper rifle as the long distance injector. The microchip will enter the body and stay there, causing no internal damage, and only a very small amount of physical pain to the target. It will feel like a mosquito-bite lasting a fraction of a second. At the same time a digital camcorder with a zoom lens fitted within the scope will take a high-resolution picture of the target. This picture will be stored on a memory card for later image-analysis.

Edited to add: This is a hoax.

Posted on February 4, 2005 at 8:00 AM

GovCon

There’s a conference in Washington, DC, in March that explores technologies for intelligence and terrorism prevention.

The 4th Annual Government Convention on Emerging Technologies will focus on the impact of the Intelligence Reform and Terrorism Prevention Act signed into law by President Bush in December 2004.

The departments and agencies of the National Security Community are currently engaged in the most comprehensive transformation of policy, structure, doctrine, and capabilities since the National Security Act of 1947.

Many of the legal, policy, organizational, and cultural challenges to manage the National Security Community as an enterprise and provide a framework for fielding new capabilities are being addressed. However, there are many emerging technologies and commercial best practices available to help the National Security Community achieve its critical mission of keeping America safe and secure.

There’s a lot of interesting stuff on the agenda, including some classified sessions. I’m especially interested in this track:

Track Two: Attaining Tailored Persistence

Explore the technologies required to attain persistent surveillance and tailored persistence.

What does “persistent surveillance” mean, anyway?

Posted on February 3, 2005 at 9:07 AM

TSA's Secure Flight

As I wrote previously, I am participating in a working group to study the security and privacy of Secure Flight, the U.S. government’s program to match airline passengers with a terrorist watch list. In the end, I signed the NDA allowing me access to SSI (Sensitive Security Information) documents, but managed to avoid filling out the paperwork for a SECRET security clearance.

Last week the group had its second meeting.

So far, I have four general conclusions. One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement — in almost every way — over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc.

Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.

Unfortunately, Congress has mandated that Secure Flight be implemented, so it is unlikely that the program will be killed. And analyzing the effectiveness of the program in general, potential mission creep, and whether the general idea is a worthwhile one, is beyond the scope of our little group. In other words, my first conclusion is basically all that they’re interested in hearing.

But that means I can write about everything else.

To speak to my fourth conclusion: Imagine for a minute that Secure Flight is perfect. That is, we can ensure that no one can fly under a false identity, that the watch lists have perfect identity information, and that Secure Flight can perfectly determine if a passenger is on the watch list: no false positives and no false negatives. Even if we could do all that, Secure Flight wouldn’t be worth it.

Secure Flight is a passive system. It waits for the bad guys to buy an airplane ticket and try to board. If the bad guys don’t fly, it’s a waste of money. If the bad guys try to blow up shopping malls instead of airplanes, it’s a waste of money.

If I had some millions of dollars to spend on terrorism security, and I had a watch list of potential terrorists, I would spend that money investigating those people. I would try to determine whether or not they were a terrorism threat before they got to the airport, or even if they had no intention of visiting an airport. I would try to prevent their plot regardless of whether it involved airplanes. I would clear the innocent people, and I would go after the guilty. I wouldn’t build a complex computerized infrastructure and wait until one of them happened to wander into an airport. It just doesn’t make security sense.

That’s my usual metric when I think about a terrorism security measure: Would it be more effective than taking that money and funding intelligence, investigation, or emergency response — things that protect us regardless of what the terrorists are planning next? Money spent on security measures that only work against a particular terrorist tactic, forgetting that terrorists are adaptable, is largely wasted.

Posted on January 31, 2005 at 9:26 AM
