Entries Tagged "privacy"

RFID Passport Security Revisited

I’ve written previously (including this op-ed in the International Herald Tribune) about RFID chips in passports. An article in today’s USA Today (the paper version has a really good graphic) summarizes the latest State Department proposal, and it looks pretty good. They’re addressing privacy concerns, and they’re doing it right.

The most important feature they’ve included is an access-control system for the RFID chip. The data on the chip is encrypted, and the key is printed on the passport. The officer swipes the passport through an optical reader to get the key, and then the RFID reader uses the key to communicate with the RFID chip. This means that the passport-holder can control who has access to the information on the chip; someone cannot skim information from the passport without first opening it up and reading the information inside. Good security.
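To make that concrete, here’s a sketch of the kind of key derivation such a scheme implies. It follows the ICAO Doc 9303 “basic access control” proposal; the State Department hasn’t finalized its design, so treat this as illustrative rather than the actual mechanism:

```python
import hashlib

# ICAO 9303 basic access control: the reader derives its keys from data it
# can only get by optically scanning the machine-readable zone (MRZ) --
# document number, date of birth, and expiry date, each with its check digit.
def bac_keys(mrz_info):
    k_seed = hashlib.sha1(mrz_info.encode("ascii")).digest()[:16]
    k_enc = hashlib.sha1(k_seed + b"\x00\x00\x00\x01").digest()[:16]
    k_mac = hashlib.sha1(k_seed + b"\x00\x00\x00\x02").digest()[:16]
    return k_enc, k_mac  # 3DES encryption and MAC keys (parity fix-up omitted)

# ICAO's published worked example: document "L898902C<" (check digit 3),
# born 690806 (check digit 1), expires 940623 (check digit 6).
k_enc, k_mac = bac_keys("L898902C<3" + "6908061" + "9406236")
print(k_enc.hex(), k_mac.hex())
```

The chip refuses to talk to any reader that hasn’t seen the printed data inside the passport, which is exactly the access-control property we want.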

The new design also includes a thin radio shield in the cover, protecting the chip when the passport is closed. More good security.

Assuming that the RFID passport works as advertised (a big “if,” I grant you), then I am no longer opposed to the idea. And, more importantly, we have an example of an RFID identification system with good privacy safeguards. We should demand that any other RFID identification cards have similar privacy safeguards.

EDITED TO ADD: There’s more information in a Wired story:

The 64-KB chips store a copy of the information from a passport’s data page, including name, date of birth and a digitized version of the passport photo. To prevent counterfeiting or alterations, the chips are digitally signed….

“We are seriously considering the adoption of basic access control,” [Frank] Moss [the State Department’s deputy assistant secretary for passport services] said, referring to a process where chips remain locked until a code on the data page is first read by an optical scanner. The chip would then also transmit only encrypted data in order to prevent eavesdropping.

So it sounds like this access-control mechanism is not definite. In any case, I believe the system described in the USA Today article is a good one.

Posted on August 9, 2005 at 1:27 PM

Wireless Interception Distance Records

Don’t believe wireless distance limitations. Again and again they’re proven wrong.

At DefCon earlier this month, a group was able to set up an unamplified 802.11 network at a distance of 124.9 miles.

The record holders relied on more than just a pair of wireless laptops. The equipment required for the feat, according to the event website, included a “collection of homemade antennas, surplus 12 foot satellite dishes, home-welded support structures, scaffolds, ropes and computers”.

Bad news for those of us who rely on physical distance to secure our wireless networks.

Even more important, the world record for communicating with a passive RFID device was set at 69 feet. (Pictures here.) Remember that the next time someone tells you that it’s impossible to read RFID identity cards at a distance.

Whenever you hear a manufacturer talk about a distance limitation for any wireless technology—wireless LANs, RFID, Bluetooth, anything—assume he’s wrong. If he’s not wrong today, he will be in a couple of years. Assume that someone who spends some money and effort building more sensitive technology can do much better, and that it will take less money and effort over the years. Technology always gets better; it never gets worse. If something is difficult and expensive now, it will get easier and cheaper in the future.
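The physics backs this up. Here’s a quick link-budget calculation, with assumed but plausible numbers (an unamplified card, dishes roughly the size the record-setters used), showing why 125 miles is unremarkable:

```python
import math

def fspl_db(distance_km, freq_ghz):
    """Free-space path loss in dB (Friis formula)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Assumed numbers: ~15 dBm from an unamplified 802.11b card, 12-foot dishes
# worth roughly 36 dBi each at 2.4 GHz, and 125 miles (~201 km).
tx_dbm, dish_gain_dbi, dist_km, freq_ghz = 15.0, 36.0, 201.0, 2.4

rx_dbm = tx_dbm + 2 * dish_gain_dbi - fspl_db(dist_km, freq_ghz)
print(f"path loss:      {fspl_db(dist_km, freq_ghz):.1f} dB")  # ~146 dB
print(f"received power: {rx_dbm:.1f} dBm")                     # ~-59 dBm

# A typical 802.11b radio decodes 1 Mbps down to roughly -90 dBm, so this
# link closes with about 30 dB of margin. The "distance limit" was never
# physics; it was antenna gain and receiver sensitivity, and money buys both.
```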

Posted on August 8, 2005 at 1:37 PM

Technological Parenting

Salon has an interesting article about parents turning to technology to monitor their children, instead of to other people in their community.

“What is happening is that parents now assume the worst possible outcome, rather than seeing other adults as their allies,” says Frank Furedi, a professor of sociology at England’s University of Kent and the author of “Paranoid Parenting.” “You never hear stories about asking neighbors to care for kids or coming together as community. Instead we become insular, privatized communities, and look for technological solutions to what are really social problems.” Indeed, while our parents’ generation was taught to “honor thy neighbor,” the mantra for today’s kids is “stranger danger,” and the message is clear—expect the worst of anyone unfamiliar—anywhere, and at any time.

This is security based on fear, not reason. And I think people who act this way make their families less safe.

EDITED TO ADD: Here’s a link to the book Paranoid Parenting.

Posted on August 3, 2005 at 8:38 AM

Eavesdropping on Bluetooth Automobiles

This is impressive:

This new tool is called The Car Whisperer and allows people equipped with a Linux laptop and a directional antenna to inject audio to, and record audio from, passing cars that have an unconnected Bluetooth hands-free unit running. This works because many manufacturers use a standard passkey, which is often the only authentication needed to connect.

This tool lets you interact with other drivers while traveling, or maybe talk to that pushy Audi driver right behind you 😉. It also allows eavesdropping on conversations inside the car by accessing the microphone.
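To give a sense of how little effort the first step takes, here’s a sketch using the PyBluez library that merely finds nearby devices advertising hands-free or headset services. The actual Car Whisperer goes much further: it pairs using a manufacturer-default PIN (often “0000” or “1234”, supplied via the OS pairing agent) and pushes audio over a SCO link; none of that is shown here:

```python
import bluetooth  # pip install pybluez

# Reconnaissance only: enumerate nearby devices and flag any that advertise
# a hands-free or headset service over RFCOMM.
for addr, name in bluetooth.discover_devices(duration=8, lookup_names=True):
    for svc in bluetooth.find_service(address=addr):
        svc_name = str(svc.get("name") or "")
        if "hands" in svc_name.lower() or "headset" in svc_name.lower():
            print(f"{addr} ({name}): {svc_name!r} on RFCOMM port {svc['port']}")
```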

EDITED TO ADD: Another article.

Posted on August 2, 2005 at 1:41 PM

Hacking Hotel Infrared Systems

From Wired:

A vulnerability in many hotel television infrared systems can allow a hacker to obtain guests’ names and their room numbers from the billing system.

It can also let someone read the e-mail of guests who use web mail through the TV, putting business travelers at risk of corporate espionage. And it can allow an intruder to add or delete charges on a hotel guest’s bill or watch pornographic films and other premium content on their hotel TV without paying for it….

“No one thinks about the security risks of infrared because they think it’s used for minor things like garage doors and TV remotes,” Laurie said. “But infrared uses really simple codes, and they don’t put any kind of authentication (in it)…. If the system was designed properly, I shouldn’t be able to do what I can do.”
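To see why “really simple codes” with no authentication is fatal, consider a toy frame format. (It’s invented for illustration; it is not the actual protocol Laurie attacked.) With a two-byte room address, a command byte, and a checksum but no secret, enumerating every room in a hotel takes seconds of IR transmission:

```python
# Hypothetical, unauthenticated IR frame: sync byte, two-byte room address,
# command byte, additive checksum. Integrity, but nothing secret to know.
def make_frame(room, command):
    payload = room.to_bytes(2, "big") + bytes([command])
    checksum = sum(payload) & 0xFF
    return bytes([0xA5]) + payload + bytes([checksum])

CMD_SHOW_BILL = 0x10  # hypothetical command code
frames = [make_frame(room, CMD_SHOW_BILL) for room in range(1, 1000)]
print(f"{len(frames)} frames cover rooms 1-999")
```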

Posted on August 1, 2005 at 1:21 PM

Dog Poop Girl

Here’s the basic story: A woman and her dog are riding the Seoul subways. The dog poops on the floor. The woman refuses to clean it up, despite being told to by other passengers. Someone takes a picture of her, posts it on the Internet, and she is publicly shamed—and the story will live on the Internet forever. Then the blogosphere debates the notion of the Internet as a social enforcement tool.

The Internet is changing our notions of personal privacy, and how the public enforces social norms.

Daniel Solove writes:

The dog-shit-girl case involves a norm that most people would seemingly agree to—clean up after your dog. Who could argue with that one? But what about when norm enforcement becomes too extreme? Most norm enforcement involves angry scowls or just telling a person off. But having a permanent record of one’s norm violations is upping the sanction to a whole new level. The blogosphere can be a very powerful norm-enforcing tool, allowing bloggers to act as a cyber-posse, tracking down norm violators and branding them with digital scarlet letters.

And that is why the law might be necessary—to modulate the harmful effects when the norm enforcement system gets out of whack. In the United States, privacy law is often the legal tool called in to address the situation. Suppose the dog poop incident occurred in the United States. Should the woman have legal redress under the privacy torts?

If this incident is any guide, then anyone acting outside the accepted norms of whatever segment of humanity surrounds him had better tread lightly. The question we need to answer is: is this the sort of society we want to live in? And if not, what technological or legal controls do we need to put in place to ensure that we don’t?

Solove again:

I believe that, as complicated as it might be, the law must play a role here. The stakes are too important. While entering law into the picture could indeed stifle freedom of discussion on the Internet, allowing excessive norm enforcement can be stifling to freedom as well.

All the more reason why we need to rethink old notions of privacy. Under existing notions, privacy is often thought of in a binary way—something either is private or public. According to the general rule, if something occurs in a public place, it is not private. But a more nuanced view of privacy would suggest that this case involved taking an event that occurred in one context and significantly altering its nature—by making it permanent and widespread. The dog-shit-girl would have been just a vague image in a few people’s memory if it hadn’t been for the photo entering cyberspace and spreading around faster than an epidemic. Despite the fact that the event occurred in public, there was no need for her image and identity to be spread across the Internet.

Could the law provide redress? This is a complicated question; certainly under existing doctrine, making a case would have many hurdles. And some will point to practical problems. Bloggers often don’t have deep pockets. But perhaps the possibility of lawsuits might help shape the norms of the Internet. In the end, I strongly doubt that the law alone can address this problem; but its greatest contribution might be to help along the development of blogging norms that will hopefully prevent more cases such as this one from having crappy endings.

Posted on July 29, 2005 at 4:21 PM

Automatic Surveillance Via Cell Phone

Your cell phone company knows where you are all the time. (Well, it knows where your phone is whenever it’s on.) Turns out there’s a lot of information to be mined in that data.

Eagle’s Reality Mining project logged 350,000 hours of data over nine months about the location, proximity, activity and communication of volunteers, and was quickly able to guess whether two people were friends or just co-workers….

He and his team were able to create detailed views of life at the Media Lab, by observing how late people stayed at the lab, when they called one another and how much sleep students got.

Given enough data, Eagle’s algorithms were able to predict what people—especially professors and Media Lab employees—would do next and be right up to 85 percent of the time.
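Predictions like that don’t require sophisticated algorithms. Here’s a toy first-order Markov model over (time-of-day, location) states; the trace is fabricated and far tidier than real data, but it shows how routine dominates:

```python
from collections import Counter, defaultdict

# A fabricated daily routine, repeated. Real traces are messier, but the
# same regularity is what makes them predictable.
day = [("am", "home"), ("am", "bus"), ("pm", "lab"), ("pm", "cafe"),
       ("pm", "lab"), ("pm", "bus"), ("pm", "home")]
trace = day * 50

# Count observed transitions between states.
transitions = defaultdict(Counter)
for here, there in zip(trace, trace[1:]):
    transitions[here][there] += 1

def predict(state):
    """Predict the most common successor of a state."""
    return transitions[state].most_common(1)[0][0]

# Evaluated in-sample, which is fine for a toy demonstration.
hits = sum(predict(here) == there for here, there in zip(trace, trace[1:]))
print(f"accuracy: {hits / (len(trace) - 1):.0%}")  # ~86% on this toy routine
```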

This is worrisome from a number of angles: government surveillance, corporate surveillance for marketing purposes, criminal surveillance. I am not mollified by this comment:

People should not be too concerned about the data trails left by their phone, according to Chris Hoofnagle, associate director of the Electronic Privacy Information Center.

“The location data and billing records is protected by statute, and carriers are under a duty of confidentiality to protect it,” Hoofnagle said.

We’re building an infrastructure of surveillance as a side effect of the convenience of carrying our cell phones everywhere.

Posted on July 28, 2005 at 4:09 PM

Risks of Losing Portable Devices

As PDAs become more powerful, and memory becomes cheaper, more people are carrying around a lot of personal information in an easy-to-lose format. The Washington Post has a story about this:

Personal devices “are carrying incredibly sensitive information,” said Joel Yarmon, who, as technology director for the staff of Sen. Ted Stevens (R-Alaska), had to scramble over a weekend last month after a colleague lost one of the office’s wireless messaging devices. In this case, the data included “personal phone numbers of leaders of Congress. . . . If that were to leak, that would be very embarrassing,” Yarmon said.

I’ve noticed this in my own life. If I didn’t make a special effort to limit the amount of information on my Treo, it would include detailed scheduling information from the past six years. My small laptop would include every e-mail I’ve sent and received in the past dozen years. And so on. A lot of us are carrying around an enormous amount of very personal data.

And some of us are carrying around personal data about other people, too:

Companies are seeking to avoid becoming the latest example of compromised security. Earlier this year, a laptop computer containing the names and Social Security numbers of 16,500 current and former MCI Inc. employees was stolen from the car of an MCI financial analyst in Colorado. In another case, a former Morgan Stanley employee sold a used BlackBerry on the online auction site eBay with confidential information still stored on the device. And in yet another incident, personal information for 665 families in Japan was recently stolen along with a handheld device belonging to a Japanese power-company employee.

There are several ways to deal with this—password protection and encryption, of course. More recently, some communications devices can be remotely erased if lost.
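For the encryption option, here’s a minimal sketch using Python’s cryptography package. The passphrase-to-key step is deliberately simplified; a real design would use a salted, slow key-derivation function rather than a bare hash:

```python
import base64
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

def key_from_passphrase(passphrase):
    # Illustration only: use PBKDF2 or scrypt with a salt in real systems.
    return base64.urlsafe_b64encode(hashlib.sha256(passphrase.encode()).digest())

vault = Fernet(key_from_passphrase("correct horse battery staple"))
token = vault.encrypt(b"Senator X, home: 202-555-0100")  # fabricated datum
print(vault.decrypt(token))  # readable only with the passphrase;
                             # a thief who grabs the device sees ciphertext
```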

Posted on July 28, 2005 at 11:40 AM

The Sorting Door Project

From The Register:

A former CIA intelligence analyst and researchers from SAP plan to study how RFID tags might be used to profile and track individuals and consumer goods.

“I believe that tags will be readily used for surveillance, given the interests of various parties able to deploy readers,” said Ross Stapleton-Gray, former CIA analyst and manager of the study, called the Sorting Door Project.

Sorting Door will be a test-bed for studying the massive databases that will be created by RFID tags and readers, once they become ubiquitous. The project will help legislators, regulators and businesses make policies that balance the interests of industry, national security and civil liberties, said Stapleton-Gray.

In Sorting Door, RFID readers (whether in doorways, walls or floors, or the hands of workers) will collect data from RFID tags and feed them into databases.

Sorting Door participants will then investigate how the RFID tag’s unique serial numbers, called EPCs, can be merged with other data to identify dangerous people and gather intelligence in a particular location.
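At bottom, the concern is a simple database join. Here’s a toy “sorting door” with fabricated data, showing how a unique EPC read at a doorway links back to a purchase record, and hence to a person. (The URN format is real and 0614141 is EPCglobal’s example company prefix; everything else is made up.)

```python
# EPC serial -> (item, buyer), e.g. harvested from loyalty-card records.
purchases = {
    "urn:epc:id:sgtin:0614141.107346.2017": ("razor pack", "alice"),
    "urn:epc:id:sgtin:0614141.107346.2018": ("razor pack", "bob"),
}
items_of_interest = {"razor pack"}  # whatever someone decides to watch for

def door_event(epc, door):
    """Resolve a tag read at a doorway to an item and a person."""
    item, buyer = purchases.get(epc, ("unknown item", "unknown person"))
    flag = "  <-- flagged" if item in items_of_interest else ""
    print(f"{door}: {buyer} carrying {item} [{epc}]{flag}")

door_event("urn:epc:id:sgtin:0614141.107346.2018", "lobby door")
```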

Posted on July 26, 2005 at 9:31 AM

Secure Flight

Last Friday the GAO issued a new report on Secure Flight. It’s couched in friendly language, but it’s not good:

During the course of our ongoing review of the Secure Flight program, we found that TSA did not fully disclose to the public its use of personal information in its fall 2004 privacy notices as required by the Privacy Act. In particular, the public was not made fully aware of, nor had the opportunity to comment on, TSA’s use of personal information drawn from commercial sources to test aspects of the Secure Flight program. In September 2004 and November 2004, TSA issued privacy notices in the Federal Register that included descriptions of how such information would be used. However, these notices did not fully inform the public before testing began about the procedures that TSA and its contractors would follow for collecting, using, and storing commercial data. In addition, the scope of the data used during commercial data testing was not fully disclosed in the notices. Specifically, a TSA contractor, acting on behalf of the agency, collected more than 100 million commercial data records containing personal information such as name, date of birth, and telephone number without informing the public. As a result of TSA’s actions, the public did not receive the full protections of the Privacy Act.

Get that? The TSA violated federal law when it secretly expanded Secure Flight’s use of commercial data about passengers. It also lied to Congress and the public about it.

Much of this isn’t new. Last month we learned that:

The federal agency in charge of aviation security revealed that it bought and is storing commercial data about some passengers—even though officials said they wouldn’t do it and Congress told them not to.

Secure Flight is a disaster in every way. The TSA has been operating with complete disregard for the law or Congress. It has lied to pretty much everyone. And it is turning Secure Flight from a simple program to match airline passengers against terrorist watch lists into a complex program that compiles dossiers on passengers in order to give them some kind of score indicating the likelihood that they are a terrorist.

Which is exactly what it was not supposed to do in the first place.

Let’s review:

For those who have not been following along, Secure Flight is the follow-on to CAPPS-I. (CAPPS stands for Computer Assisted Passenger Pre-Screening.) CAPPS-I has been in place since 1997, and is a simple system to match airplane passengers to a terrorist watch list. A follow-on system, CAPPS-II, was proposed last year. That complicated system would have given every traveler a risk score based on information in government and commercial databases. There was a huge public outcry over the invasiveness of the system, and it was cancelled over the summer. Secure Flight is the new follow-on system to CAPPS-I.

EPIC has more background information.

Back in January, Secure Flight was intended to be just a more efficient system of matching airline passengers with terrorist watch lists.

I am on a working group that is looking at the security and privacy implications of Secure Flight. Before joining the group I signed an NDA agreeing not to disclose any information learned within the group, and to not talk about deliberations within the group. But there’s no reason to believe that the TSA is lying to us any less than they’re lying to Congress, and there’s nothing I learned within the working group that I wish I could talk about. Everything I say here comes from public documents.

In January I gave some general conclusions about Secure Flight. These have not changed.

One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc.

Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.

What has changed is the scope of Secure Flight. First, it started using data from commercial sources, like Acxiom. (The details are even worse.) Technically, they’re testing the use of commercial data, but it’s still a violation. Even the DHS started investigating:

The Department of Homeland Security’s top privacy official said Wednesday that she is investigating whether the agency’s airline passenger screening program has violated federal privacy laws by failing to properly disclose its mission.

The privacy officer, Nuala O’Connor Kelly, said the review will focus on whether the program’s use of commercial databases and other details were properly disclosed to the public.

The TSA’s response to being caught violating their own Privacy Act statements? Revise them:

According to previous official notices, TSA had said it would not store commercial data about airline passengers.

The Privacy Act of 1974 prohibits the government from keeping a secret database. It also requires agencies to make official statements on the impact of their record keeping on privacy.

The TSA revealed its use of commercial data in a revised Privacy Act statement to be published in the Federal Register on Wednesday.

TSA spokesman Mark Hatfield said the program was being developed with a commitment to privacy, and that it was routine to change Privacy Act statements during testing.

Actually, it’s not. And it’s better to change the Privacy Act statement before violating the old one. Changing it after the fact just looks bad.

The point of Secure Flight is to match airline passengers against lists of suspected terrorists. But the vast majority of people flagged by this list simply have the same name, or a similar name, as a suspected terrorist: Ted Kennedy and Cat Stevens are two famous examples. The question is whether combining commercial data with the PNR (Passenger Name Record) supplied by the airline could reduce this false-positive problem. Maybe knowing the passenger’s address, or phone number, or date of birth could reduce false positives. Or maybe not; it depends what data is on the terrorist lists. In any case, it’s certainly a smart thing to test.
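Here’s a sketch of what such a test might look like, with fabricated names and dates. Fuzzy name matching alone flags everyone with the same or a similar name; requiring a second PNR field to agree (date of birth, assuming the watch list even has it) clears most of the false positives:

```python
import difflib

watch_list = [("yusuf islam", "1948-07-21")]       # fabricated entry
passengers = [("yusuf islam",  "1948-07-21"),      # the listed person
              ("yusuf islam",  "1979-03-02"),      # same name, different person
              ("yussuf islam", "1962-11-05")]      # spelling variant

def similar(a, b, threshold=0.85):
    """Crude fuzzy match on names."""
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

for name, dob in passengers:
    for listed_name, listed_dob in watch_list:
        if similar(name, listed_name):
            # Name alone flags all three; the DOB check clears two of them.
            verdict = "match" if dob == listed_dob else "likely false positive"
            print(f"{name} ({dob}): name hit on {listed_name!r} -> {verdict}")
```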

But using commercial data has serious privacy implications, which is why Congress mandated all sorts of rules surrounding the TSA testing of commercial data—and more rules before it could deploy a final system—rules that the TSA has decided it can ignore completely.

Commercial data had another use under CAPPS-II. In that now-dead program, every passenger would be subjected to a computerized background check to determine their “risk” to airline safety. The system would assign a risk score based on commercial data: their credit rating, how recently they moved, what kind of job they had, etc. This capability was removed from Secure Flight, but now it’s back:

The government will try to determine whether commercial data can be used to detect terrorist “sleeper cells” when it checks airline passengers against watch lists, the official running the project says….

Justin Oberman, in charge of Secure Flight at TSA, said the agency intends to do more testing of commercial data to see if it will help identify known or suspected terrorists not on the watch lists.

“We are trying to use commercial data to verify the identities of people who fly because we are not going to rely on the watch list,” he said. “If we just rise and fall on the watch list, it’s not adequate.”

Also this Congressional hearing (emphasis mine):

THOMPSON: There are a couple of questions I’d like to get answered in my mind about Secure Flight. Would Secure Flight pick up a person with strong community roots but who is in a terrorist sleeper cell or would a person have to be a known terrorist in order for Secure Flight to pick him up?

OBERMAN: Let me answer that this way: It will identify people who are known or suspected terrorists contained in the terrorist screening database, and it ought to be able to identify people who may not be on the watch list. It ought to be able to do that. We’re not in a position today to say that it does, but we think it’s absolutely critical that it be able to do that.

And so we are conducting this test of commercially available data to get at that exact issue. Very difficult to do, generally. It’s particularly difficult to do when you have a system that transports 1.8 million people a day on 30,000 flights at 450 airports. That is a very high bar to get over.

It’s also very difficult to do with a threat described just like you described it, which is somebody who has sort of burrowed themselves into society and is not readily apparent to us when they’re walking through the airport. And so I cannot stress enough how important we think it is that it be able to have that functionality. And that’s precisely the reason we have been conducting this commercial data test, why we’ve extended the testing period and why we’re very hopeful that the results will prove fruitful to us so that we can then come up here, brief them to you and explain to you why we need to include that in the system.

My fear is that TSA has already decided that they’re going to use commercial data, regardless of any test results. And once you have commercial data, why not build a dossier on every passenger and give them a risk score? So we’re back to CAPPS-II, the very system Congress killed last summer. Actually, we’re very close to TIA (Total/Terrorism Information Awareness), that vast spy-on-everyone data-mining program that Congress killed in 2003 because it was just too invasive.

Secure Flight is a mess in lots of other ways, too. A March GAO report said that Secure Flight had not met nine out of the ten conditions mandated by Congress before TSA could spend money on implementing the program. (If you haven’t read this report, it’s pretty scathing.) The redress problem—helping people who cannot fly because they share a name with a terrorist—is not getting any better. And Secure Flight is behind schedule and over budget.

It’s also a rogue program that is operating in flagrant disregard for the law. It can’t be killed completely; the Intelligence Reform and Terrorism Prevention Act of 2004 mandates that TSA implement a program of passenger prescreening. And until we have Secure Flight, airlines will still be matching passenger names with terrorist watch lists under the CAPPS-I program. But it needs some serious public scrutiny.

EDITED TO ADD: Anita Ramasastry’s commentary is worth reading.

Posted on July 24, 2005 at 9:10 PM
