Entries Tagged "privacy"


Secure Flight

Last Friday the GAO issued a new report on Secure Flight. It’s couched in friendly language, but it’s not good:

During the course of our ongoing review of the Secure Flight program, we found that TSA did not fully disclose to the public its use of personal information in its fall 2004 privacy notices as required by the Privacy Act. In particular, the public was not made fully aware of, nor had the opportunity to comment on, TSA’s use of personal information drawn from commercial sources to test aspects of the Secure Flight program. In September 2004 and November 2004, TSA issued privacy notices in the Federal Register that included descriptions of how such information would be used. However, these notices did not fully inform the public before testing began about the procedures that TSA and its contractors would follow for collecting, using, and storing commercial data. In addition, the scope of the data used during commercial data testing was not fully disclosed in the notices. Specifically, a TSA contractor, acting on behalf of the agency, collected more than 100 million commercial data records containing personal information such as name, date of birth, and telephone number without informing the public. As a result of TSA’s actions, the public did not receive the full protections of the Privacy Act.

Get that? The TSA violated federal law when it secretly expanded Secure Flight’s use of commercial data about passengers. It also lied to Congress and the public about it.

Much of this isn’t new. Last month we learned that:

The federal agency in charge of aviation security revealed that it bought and is storing commercial data about some passengers—even though officials said they wouldn’t do it and Congress told them not to.

Secure Flight is a disaster in every way. The TSA has been operating with complete disregard for the law or Congress. It has lied to pretty much everyone. And it is turning Secure Flight from a simple program to match airline passengers against terrorist watch lists into a complex program that compiles dossiers on passengers in order to give them some kind of score indicating the likelihood that they are a terrorist.

Which is exactly what it was not supposed to do in the first place.

Let’s review:

For those who have not been following along, Secure Flight is the follow-on to CAPPS-I. (CAPPS stands for Computer Assisted Passenger Pre-Screening.) CAPPS-I has been in place since 1997, and is a simple system to match airplane passengers to a terrorist watch list. A follow-on system, CAPPS-II, was proposed last year. That complicated system would have given every traveler a risk score based on information in government and commercial databases. There was a huge public outcry over the invasiveness of the system, and it was cancelled over the summer. Secure Flight is the new follow-on system to CAPPS-I.

EPIC has more background information.

Back in January, Secure Flight was intended to just be a more efficient system of matching airline passengers with terrorist watch lists.

I am on a working group that is looking at the security and privacy implications of Secure Flight. Before joining the group I signed an NDA agreeing not to disclose any information learned within the group, and to not talk about deliberations within the group. But there’s no reason to believe that the TSA is lying to us any less than they’re lying to Congress, and there’s nothing I learned within the working group that I wish I could talk about. Everything I say here comes from public documents.

In January I gave some general conclusions about Secure Flight. These have not changed.

One, assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place. (And by this I mean the matching program, not any potential uses of commercial or other third-party data.)

Two, the security system surrounding Secure Flight is riddled with security holes. There are security problems with false IDs, ID verification, the ability to fly on someone else’s ticket, airline procedures, etc.

Three, the urge to use this system for other things will be irresistible. It’s just too easy to say: “As long as you’ve got this system that watches out for terrorists, how about also looking for this list of drug dealers…and by the way, we’ve got the Super Bowl to worry about too.” Once Secure Flight gets built, all it’ll take is a new law and we’ll have a nationwide security checkpoint system.

And four, a program of matching airline passengers with names on terrorism watch lists is not making us appreciably safer, and is a lousy way to spend our security dollars.
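Point four is at heart a base-rate argument, and a back-of-the-envelope sketch makes it concrete. The 1.8 million passengers per day is TSA's own figure (quoted later in this post); the false-positive rate is purely an assumption for illustration:

```python
# Back-of-the-envelope false-positive arithmetic for watch-list matching.
# The 1.8 million passengers/day figure is TSA's own; the 1-in-10,000
# false-positive rate is an assumed number, chosen only for illustration.
passengers_per_day = 1_800_000
false_positive_odds = 10_000          # assume 1 in 10,000 passengers misflagged

flagged_per_day = passengers_per_day // false_positive_odds
print(flagged_per_day)                # prints 180

flagged_per_year = flagged_per_day * 365
print(flagged_per_year)               # prints 65700
```

Even at that optimistic error rate, the system hassles tens of thousands of innocent travelers a year while the number of actual terrorists flying is, by any estimate, vanishingly small.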

What has changed is the scope of Secure Flight. First, it started using data from commercial sources, like Acxiom. (The details are even worse.) Technically, they’re testing the use of commercial data, but it’s still a violation. Even the DHS started investigating:

The Department of Homeland Security’s top privacy official said Wednesday that she is investigating whether the agency’s airline passenger screening program has violated federal privacy laws by failing to properly disclose its mission.

The privacy officer, Nuala O’Connor Kelly, said the review will focus on whether the program’s use of commercial databases and other details were properly disclosed to the public.

The TSA’s response to being caught violating their own Privacy Act statements? Revise them:

According to previous official notices, TSA had said it would not store commercial data about airline passengers.

The Privacy Act of 1974 prohibits the government from keeping a secret database. It also requires agencies to make official statements on the impact of their record keeping on privacy.

The TSA revealed its use of commercial data in a revised Privacy Act statement to be published in the Federal Register on Wednesday.

TSA spokesman Mark Hatfield said the program was being developed with a commitment to privacy, and that it was routine to change Privacy Act statements during testing.

Actually, it’s not. And it’s better to change the Privacy Act statement before violating the old one. Changing it after the fact just looks bad.

The point of Secure Flight is to match airline passengers against lists of suspected terrorists. But the vast majority of people flagged by this list simply have the same name, or a similar name, as a suspected terrorist: Ted Kennedy and Cat Stevens are two famous examples. The question is whether combining commercial data with the PNR (Passenger Name Record) supplied by the airline could reduce this false-positive problem. Maybe knowing the passenger’s address, or phone number, or date of birth, could reduce false positives. Or maybe not; it depends what data is on the terrorist lists. In any case, it’s certainly a smart thing to test.
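To illustrate how a second field can clear a false positive, here is a minimal sketch. The records are entirely hypothetical, and real systems use fuzzy name comparison rather than exact string equality; this only shows the principle:

```python
# Illustrative sketch: watch-list matching on name alone vs. name plus
# date of birth. All records here are hypothetical.

watch_list = [
    {"name": "edward kennedy", "dob": "1955-03-02"},
]

passengers = [
    {"name": "edward kennedy", "dob": "1932-02-22"},  # same name, different person
    {"name": "edward kennedy", "dob": "1955-03-02"},  # the listed individual
]

def match_name_only(passenger, entries):
    """Flag anyone whose name appears on the list."""
    return any(passenger["name"] == e["name"] for e in entries)

def match_name_and_dob(passenger, entries):
    """Flag only when both name and date of birth agree."""
    return any(
        passenger["name"] == e["name"] and passenger["dob"] == e["dob"]
        for e in entries
    )

# Name-only matching flags both passengers; adding DOB clears the false positive.
print([match_name_only(p, watch_list) for p in passengers])     # prints [True, True]
print([match_name_and_dob(p, watch_list) for p in passengers])  # prints [False, True]
```

Of course, this only helps if the watch lists themselves carry dates of birth or similar fields, which is exactly the open question the testing is supposed to answer.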

But using commercial data has serious privacy implications, which is why Congress mandated all sorts of rules surrounding the TSA testing of commercial data—and more rules before it could deploy a final system—rules that the TSA has decided it can ignore completely.

Commercial data had another use under CAPPS-II. In that now-dead program, every passenger would be subjected to a computerized background check to determine their “risk” to airline safety. The system would assign a risk score based on commercial data: their credit rating, how recently they moved, what kind of job they had, etc. This capability was removed from Secure Flight, but now it’s back:

The government will try to determine whether commercial data can be used to detect terrorist “sleeper cells” when it checks airline passengers against watch lists, the official running the project says….

Justin Oberman, in charge of Secure Flight at TSA, said the agency intends to do more testing of commercial data to see if it will help identify known or suspected terrorists not on the watch lists.

“We are trying to use commercial data to verify the identities of people who fly because we are not going to rely on the watch list,” he said. “If we just rise and fall on the watch list, it’s not adequate.”

Also this Congressional hearing (emphasis mine):

THOMPSON: There are a couple of questions I’d like to get answered in my mind about Secure Flight. Would Secure Flight pick up a person with strong community roots but who is in a terrorist sleeper cell or would a person have to be a known terrorist in order for Secure Flight to pick him up?

OBERMAN: Let me answer that this way: It will identify people who are known or suspected terrorists contained in the terrorist screening database, and it ought to be able to identify people who may not be on the watch list. It ought to be able to do that. We’re not in a position today to say that it does, but we think it’s absolutely critical that it be able to do that.

And so we are conducting this test of commercially available data to get at that exact issue. Very difficult to do, generally. It’s particularly difficult to do when you have a system that transports 1.8 million people a day on 30,000 flights at 450 airports. That is a very high bar to get over.

It’s also very difficult to do with a threat described just like you described it, which is somebody who has sort of burrowed themselves into society and is not readily apparent to us when they’re walking through the airport. And so I cannot stress enough how important we think it is that it be able to have that functionality. And that’s precisely the reason we have been conducting this commercial data test, why we’ve extended the testing period and why we’re very hopeful that the results will prove fruitful to us so that we can then come up here, brief them to you and explain to you why we need to include that in the system.

My fear is that TSA has already decided that they’re going to use commercial data, regardless of any test results. And once you have commercial data, why not build a dossier on every passenger and give them a risk score? So we’re back to CAPPS-II, the very system Congress killed last summer. Actually, we’re very close to TIA (Total/Terrorism Information Awareness), that vast spy-on-everyone data-mining program that Congress killed in 2003 because it was just too invasive.

Secure Flight is a mess in lots of other ways, too. A March GAO report said that Secure Flight had not met nine out of the ten conditions mandated by Congress before TSA could spend money on implementing the program. (If you haven’t read this report, it’s pretty scathing.) The redress problem—helping people who cannot fly because they share a name with a terrorist—is not getting any better. And Secure Flight is behind schedule and over budget.

It’s also a rogue program that is operating in flagrant disregard for the law. It can’t be killed completely; the Intelligence Reform and Terrorism Prevention Act of 2004 mandates that TSA implement a program of passenger prescreening. And until we have Secure Flight, airlines will still be matching passenger names with terrorist watch lists under the CAPPS-I program. But it needs some serious public scrutiny.

EDITED TO ADD: Anita Ramasastry’s commentary is worth reading.

Posted on July 24, 2005 at 9:10 PM

Visa and Amex Drop CardSystems

Remember CardSystems Solutions, the company that exposed over 40 million identities to potential fraud? (The actual number of identities that will be the victims of fraud is almost certainly much, much lower.)

Both Visa and American Express are dropping them as a payment processor:

Within hours of the disclosure that Visa was seeking a replacement for CardSystems Solutions, American Express said Tuesday it would no longer do business with the company beginning in October.

The biggest problem with CardSystems’ actions wasn’t that it had bad computer security practices, but that it had bad business practices. It was holding exception files with personal information even though it was not supposed to. It was not for marketing, as I originally surmised, but to find out why transactions were not being authorized. It was disregarding the rules it agreed to follow.

Technical problems can be remediated. A dishonest corporate culture is much harder to fix. This is what I sense reading between the lines:

Visa had been weighing the decision for a few weeks but as recently as mid-June said that it was working with CardSystems to correct the problem. CardSystems hired an outside security assessor this month to review its policies and practices, and it promised to make any necessary upgrades by the end of August. CardSystems, in its statement yesterday, said the company’s executives had been “in almost daily contact” with Visa since the problems were discovered in May.

Visa, however, said that despite “some remediation efforts” since the incident was reported, the actions by CardSystems were not enough.

And this:

CardSystems Solutions Inc. “has not corrected, and cannot at this point correct, the failure to provide proper data security for Visa accounts,” said Rosetta Jones, a spokeswoman for Foster City, Calif.-based Visa….

Visa said that while CardSystems has taken some remediating actions since the breach was disclosed, those could not overcome the fact that it was inappropriately holding on to account information—purportedly for “research purposes”—when the breach occurred, in violation of Visa’s security rules.

At this point, it is unclear what MasterCard and Discover will do.

MasterCard International Inc. is taking a different tack with CardSystems. The credit card company expects CardSystems to develop a plan for improving its security by Aug. 31, “and as of today, we are not aware of any deficiencies in its systems that are incapable of being remediated,” spokeswoman Sharon Gamsin said.

“However, if CardSystems cannot demonstrate that they are in compliance by that date, their ability to provide services to MasterCard members will be at risk,” she said.

Jennifer Born, a spokeswoman for Discover Financial Services Inc., which also has a relationship with CardSystems, said the Riverwoods, Ill.-based company was “doing our due diligence and will make our decision once that process is completed.”

I think this is a positive development. I have long said that companies like CardSystems won’t clean up their acts unless there are consequences for not doing so. Credit card companies dropping CardSystems sends a strong message to the other payment processors: improve your security if you want to stay in business.

(Some interesting legal opinions on the larger issue of disclosure are here.)

Posted on July 21, 2005 at 11:49 AM

Security Risks of Airplane WiFi

I’ve already written about the stupidity of worrying about cell phones on airplanes. Now the Department of Homeland Security is worried about broadband Internet.

Federal law enforcement officials, fearful that terrorists will exploit emerging in-flight broadband services to remotely activate bombs or coordinate hijackings, are asking regulators for the power to begin eavesdropping on any passenger’s internet use within 10 minutes of obtaining court authorization.

In joint comments filed with the FCC last Tuesday, the Justice Department, the FBI and the Department of Homeland Security warned that a terrorist could use on-board internet access to communicate with confederates on other planes, on the ground or in different sections of the same plane—all from the comfort of an aisle seat.

“There is a short window of opportunity in which action can be taken to thwart a suicidal terrorist hijacking or remedy other crisis situations on board an aircraft, and law enforcement needs to maximize its ability to respond to these potentially lethal situations,” the filing reads.

Terrorists never use SSH, after all. (I suppose that’s the next thing the DHS is going to try to ban.)

Posted on July 14, 2005 at 12:02 PM

Surveillance Cameras and Terrorism

I was going to write something about the foolishness of adding cameras to public spaces as a response to terrorism threats, but Scott Henson said it already:

Homeland Security Ubermeister Michael Chertoff just told NBC’s Tim Russert on Meet the Press this morning that the United States should invest in “cameras and dogs” to protect subway, rail and bus transit systems from terrorist attacks.

B.S.

Surveillance cameras didn’t deter the terrorist attacks in London. They didn’t stop the courthouse killing spree in Atlanta. But they’re prone to abuse. And at the end of the day they don’t reduce crime.

Posted on July 12, 2005 at 8:13 AM

The Doghouse: Privacy.li

This company has a heartwarming description on its website:

PRIVACY.LI – Privacy from the Principality of Liechtenstein, in the heart of the Alps, nestled between Switzerland and Austria. In times of turmoil and insecurity, witch hunt and suspicions, expropriations and diminishing credibility of our world leaders it’s always good to have a place you can turn to. This is the humble effort to provide a place to the privacy and freedom concerned world citizens to meet, discuss, help each other and foster ones desire for liberty and freedom.

But they have no intention of letting their customers know anything about themselves.

Company Profile

Actually, this is not to be published here:-) A privacy service like ours is best if not too many details are known, we hope you fully understand and support this. The makers of this page are veterans at the chosen subject, and will under no circumstances jeopardize your privacy.

Oh yeah, and their “DriveCrypt” product includes “real Time, 1344 bit – Military Strength encryption.”

Somehow, my heart is no longer warm.

Posted on July 8, 2005 at 8:36 AM

Russia's Black-Market Data Trade

Interesting story on the market for data in Moscow:

This Gorbushka vendor offers a hard drive with cash transfer records from Russia’s central bank for $1,500 (Canadian).

And:

At the Gorbushka kiosk, sales are so brisk that the vendor excuses himself to help other customers while the foreigner considers his options: $43 for a mobile phone company’s list of subscribers? Or $100 for a database of vehicles registered in the Moscow region?

The vehicle database proves irresistible. It appears to contain names, birthdays, passport numbers, addresses, telephone numbers, descriptions of vehicles, and vehicle identification (VIN) numbers for every driver in Moscow.

I don’t know whether you can buy data about people in other countries, but it is certainly plausible.

Posted on July 6, 2005 at 6:10 AM

Noticing Data Misuse

Everyone seems to be looking at their databases for personal information leakages.

Tax liens, mortgage papers, deeds, and other real estate-related documents are publicly available in on-line databases run by registries of deeds across the state. The Globe found documents in free databases of all but three Massachusetts counties containing the names and Social Security numbers of Massachusetts residents….

Although registers of deeds said that they are unaware of cases in which criminals used information from their databases maliciously, the information contained in the documents would be more than enough to steal an identity and open new lines of credit….

Isn’t that part of the problem, though? It’s easy to say “we haven’t seen any cases of fraud using our information,” because there’s rarely a way to tell where information comes from. The recent epidemic of public leaks comes from people noticing the leak process, not the effects of the leaks. So everyone thinks their data practices are good, because there have never been any documented abuses stemming from leaks of their data, and everyone is fooling themselves.

Posted on July 5, 2005 at 8:47 AM

Wired on Identity Theft

This is a good editorial from Wired on identity theft.

Following are the fixes we think Congress should make:

Require businesses to secure data and levy fines against those who don’t. Congress has mandated tough privacy and security standards for companies that handle health and financial data. But the rules for credit agencies are woefully inadequate. And they don’t cover other businesses and organizations that handle sensitive personal information, such as employers, academic institutions and data brokers. Congress should mandate strict privacy and security standards for anyone who handles sensitive information, and apply tough financial penalties against companies that fail to comply.

Require companies to encrypt all sensitive customer data. Any standard created to protect data should include technical requirements to scramble the data—both in storage and during transit when data is transferred from one place to another. Recent incidents involving unencrypted Bank of America and CitiFinancial data tapes that went missing while being transferred to backup centers make it clear that companies think encryption is necessary only in certain circumstances.

Keep the plan simple and provide authority and funds to the FTC to ensure legislation is enforced. Efforts to secure sensitive data in the health and financial industries led to laws so complicated and confusing that few have been able to follow them faithfully. And efforts to monitor compliance have been inadequate. Congress should develop simpler rules tailored to each specific industry segment, and give the FTC the necessary funding to enforce them.

Keep Social Security numbers for Social Security. Social Security numbers appear on medical and voter-registration forms as well as on public records that are available through a simple internet search. This makes it all too easy for a thief to obtain the single identifying number that can lead to financial ruin for victims. Americans need a different unique identifying number specifically for credit records, with guarantees that it will never be used for authentication purposes.

Force credit agencies to scrutinize credit-card applications and verify the identity of credit-card applicants. Giving Americans easy access to credit has superseded all other considerations in the cutthroat credit-card business, helping thieves open accounts in victims’ names. Congress needs to bring sane safeguards back into the process of approving credit—even if it means adding costs and inconveniencing powerful banking and financial interests.

Extend fraud alerts beyond 90 days. The Fair Credit Reporting Act allows anyone who suspects that their personal information has been stolen to place a fraud alert on their credit record. This currently requires a creditor to take “reasonable” steps to verify the identity of anyone who applies for credit in the individual’s name. It also requires the creditor to contact the individual who placed the fraud alert on the account if they’ve provided their phone number. Both conditions apply for 90 days. Of course, nothing prevents identity thieves from waiting until the short-lived alert period expires before taking advantage of stolen information. Congress should extend the default window for credit alerts to a minimum of one year.

Allow individuals to freeze their credit records so that no one can access the records without the individuals’ approval. The current credit system opens credit reports to almost anyone who requests them. Individuals should be able to “freeze” their records and have them opened to others only when the individual contacts a credit agency and requests that it release a report to a specific entity.

Require opt-in rather than opt-out permission before companies can share or sell data. Many businesses currently allow people to decline inclusion in marketing lists, but only if customers actively request it. This system, known as opt-out, inherently favors companies by making it more difficult for consumers to escape abusive data-sharing practices. In many cases, consumers need to wade through confusing instructions, and send a mail-in form in order to be removed from pre-established marketing lists. The United States should follow an opt-in model, where companies would be forced to collect permission from individuals before they can traffic in personal data.

Require companies to notify consumers of any privacy breaches, without preventing states from enacting even tougher local laws. Some 37 states have enacted or are considering legislation requiring businesses to notify consumers of data breaches that affect them. A similar federal measure has also been introduced in the Senate. These are steps in the right direction. But the federal bill has a major flaw: It gives companies an easy out in the case of massive data breaches, where the number of people affected exceeds 500,000, or the cost of notification would exceed $250,000. In those cases, companies would not be required to notify individuals, but could comply simply by posting a notice on their websites. Congress should close these loopholes. In addition, any federal law should be written to ensure that it does not pre-empt state notification laws that take a tougher stance.

As I’ve written previously, this won’t solve identity theft. But it will make it harder and protect the privacy of everyone. These are good recommendations.

Posted on June 29, 2005 at 7:18 AM

Your ISP May Be Spying on You

From News.com:

The U.S. Department of Justice is quietly shopping around the explosive idea of requiring Internet service providers to retain records of their customers’ online activities.

Data retention rules could permit police to obtain records of e-mail chatter, Web browsing or chat-room activity months after Internet providers ordinarily would have deleted the logs—that is, if logs were ever kept in the first place. No U.S. law currently mandates that such logs be kept.

I think the big idea here is that the Internet makes a massive surveillance society so easy. And data storage will only get cheaper.

Posted on June 28, 2005 at 8:16 AM

Interview with Marcus Ranum

There’s some good stuff in this interview.

There’s enough blame for everyone.

Blame the users who don’t secure their systems and applications.

Blame the vendors who write and distribute insecure shovel-ware.

Blame the sleazebags who make their living infecting innocent people with spyware, or sending spam.

Blame Microsoft for producing an operating system that is bloated and has an ineffective permissions model and poor default configurations.

Blame the IT managers who overrule their security practitioners’ advice and put their systems at risk in the interest of convenience. Etc.

Truly, the only people who deserve a complete helping of blame are the hackers. Let’s not forget that they’re the ones doing this to us. They’re the ones who are annoying an entire planet. They’re the ones who are costing us billions of dollars a year to secure our systems against them. They’re the ones who place their desire for fun ahead of everyone on earth’s desire for peace and [the] right to privacy.

Posted on June 27, 2005 at 1:14 PM
