Entries Tagged "privacy"
On Sunday I blogged about the Transportation Security Administration’s Secure Flight program, and said that the Government Accountability Office would be issuing a report this week.
Here it is.
The AP says:
The government’s latest computerized airline passenger screening program doesn’t adequately protect travelers’ privacy, according to a congressional report that could further delay a project considered a priority after the Sept. 11 attacks.
Congress last year passed a law that said the Transportation Security Administration could spend no money to implement the program, called Secure Flight, until the Government Accountability Office reported that it met 10 conditions. Those include privacy protections, accuracy of data, oversight, cost and safeguards to ensure the system won’t be abused or accessed by unauthorized people.
The GAO found nine of the 10 conditions hadn’t yet been met and questioned whether Secure Flight would ultimately work.
- TSA plans to include the capability for criminal checks within Secure Flight (p. 12).
- The timetable has slipped by four months (p. 17).
- TSA might not be able to get personally identifiable passenger data in PNRs because of costs to the industry and lack of money (p. 18).
- TSA plans to staff intelligence analysts within the agency to identify false positives (p. 33).
- The DHS Investment Review Board has withheld approval from the “Transportation Vetting Platform” (p. 39).
- TSA doesn’t know how much the program will cost (p. 51).
- The final privacy rule is to be issued in April (p. 56).
Any of you who read the report, please post other interesting tidbits as comments.
As you all probably know, I am a member of a working group to help evaluate the privacy of Secure Flight. While I believe that a program to match airline passengers against terrorist watch lists is a colossal waste of money that isn’t going to make us any safer, I said “…assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place.” I still believe that, but unfortunately I am prohibited by NDA from describing the improvements. I wish someone at TSA would get himself in front of reporters and do so.
According to the AP:
The Transportation Security Administration misled the public about its role in obtaining personal information about 12 million airline passengers to test a new computerized system that screens for terrorists, according to a government investigation.
The report, released Friday by Homeland Security Department Acting Inspector General Richard Skinner, said the agency misinformed individuals, the press and Congress in 2003 and 2004. It stopped short of saying TSA lied.
I’ll say it: the TSA lied.
Here’s the report. It’s worth reading. And when you read it, keep in mind that it’s written by the DHS’s own Inspector General. I presume a more independent investigator would be even more severe. Not that the report isn’t severe, mind you.
Another AP article has more details:
The report cites several occasions where TSA officials made inaccurate statements about passenger data:
- In September 2003, the agency’s Freedom of Information Act staff received hundreds of requests from JetBlue passengers asking if the TSA had their records. After a cursory search, the FOIA staff posted a notice on the TSA Web site that it had no JetBlue passenger data. Though the FOIA staff found JetBlue passenger records in TSA’s possession in May, the notice stayed on the Web site for more than a year.
- In November 2003, TSA chief James Loy incorrectly told the Governmental Affairs Committee that certain kinds of passenger data were not being used to test passenger prescreening.
- In September 2003, a technology magazine reporter asked a TSA spokesman whether real data were used to test the passenger prescreening system. The spokesman said only fake data were used; the responses “were not accurate,” the report said.
There’s much more. The report reveals that TSA ordered Delta Air Lines to turn over passenger data in February 2002 to help the Secret Service determine whether terrorists or their associates were traveling in the vicinity of the Salt Lake City Olympics.
It also reveals that TSA used passenger data from JetBlue in the spring of 2003 to figure out how to change the number of people who would be selected for more screening under the existing system.
The report says that one of the TSA’s contractors working on passenger prescreening, Lockheed Martin, used a data sample from ChoicePoint.
The report also details how outside contractors used the data for their own purposes. And that “the agency neglected to inquire whether airline passenger data used by the vendors had been returned or destroyed.” And that “TSA did not consistently apply privacy protections in the course of its involvement in airline passenger data transfers.”
This is major stuff. It shows that the TSA lied to the public about its use of personal data again and again and again.
Right now the TSA is in a bit of a bind. It is prohibited by Congress from fielding Secure Flight until it meets a series of criteria. The Government Accountability Office is expected to release a report this week that details how the TSA has not met these criteria.
I’m not sure the TSA cares. It’s already announced plans to roll out Secure Flight.
With little fanfare, the Transportation Security Administration late last month announced plans to roll out in August its highly contentious Secure Flight program. Considered by some travel industry experts a foray into operational testing, rather than a viable implementation, the program will begin, in limited release, with two airlines not yet named by TSA.
My own opinions of Secure Flight are well-known. I am participating in a Working Group to help evaluate the privacy of Secure Flight. (I’ve blogged about it here and here.) We’ve met three times, and it’s unclear if we’ll ever meet again or if we’ll ever produce the report we’re supposed to. Near as I can tell, it’s all a big mess right now.
Edited to add: The GAO report is online (PDF format).
The chance to win theatre tickets is enough to make people give away their identity, reveals a survey.
Of those taking part, 92% revealed details such as mother’s maiden name, first school and birth date.
Fraud due to impersonation—commonly called “identity theft”—works for two reasons. One, identity information is easy to obtain. And two, identity information is easy to use to commit fraud.
Studies like this show why attacking the first reason is futile; there are just too many ways to get the information. If we want to reduce the risks associated with identity theft, we have to make identity information less valuable. Too much of our security is based on identity, and it’s not working.
In this story, someone took control of a webcam using the SubSeven Trojan.
In other cases, it’s even easier. There are lots of webcams out there that are completely open to anyone who logs into them. You can even search for them using Google.
EPIC just published a very good paper by Daniel Solove and Chris Hoofnagle that proposes privacy reforms in the wake of all the recent privacy breaches (ChoicePoint, LexisNexis, Bank of America, DSW, etc.).
According to ChoicePoint’s most recent 8-K filing:
Based on information currently available, we estimate that approximately 145,000 consumers from 50 states and other territories may have had their personal information improperly accessed as a result of the recent Los Angeles incident and certain other instances of unauthorized access to our information products. Approximately 35,000 of these consumers are California residents, and approximately 110,000 are residents of other states. These numbers were determined by conducting searches of our databases that matched searches conducted by customers who we believe may have had unauthorized access to our information products on or after July 1, 2003, the effective date of the California notification law. Because our databases are constantly updated, our search results will never be identical to the search results of these customers.
Catch that? ChoicePoint actually has no idea whether only 145,000 consumers were affected by its recent security debacle. But it’s not doing any work to determine whether more than 145,000 consumers were affected, or whether any consumers were affected before July 1, 2003, because there’s no law compelling it to do so.
I have no idea why ChoicePoint has decided to tape a huge “Please Regulate My Industry” sign to its back, but it’s increasingly obvious that it has. There’s a class-action shareholders’ lawsuit, but I don’t think that will be enough.
And, by the way, ChoicePoint’s database is riddled with errors.
Here’s the abstract:
We introduce the area of remote physical device fingerprinting, or fingerprinting a physical device, as opposed to an operating system or class of devices, remotely, and without the fingerprinted device’s known cooperation. We accomplish this goal by exploiting small, microscopic deviations in device hardware: clock skews. Our techniques do not require any modification to the fingerprinted devices. Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall, and also when the device’s system time is maintained via NTP or SNTP. One can use our techniques to obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device. Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces.
And an article. Really nice work.
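The core idea is simple enough to illustrate: a device whose clock runs slightly fast or slow drifts away from the observer’s clock at a constant rate, and that rate (the skew) is the fingerprint. The sketch below is my own illustration, not code from the paper: it fits a least-squares line to the offset between remote timestamps and local receive times, using synthetic samples from an assumed device with a 50 ppm skew. The paper derives its timestamps from real TCP timestamp options and uses more robust estimators, so treat this as a toy model of the measurement.

```python
# Sketch of clock-skew estimation, the measurement underlying remote
# physical device fingerprinting. Samples are synthetic, not real packets.

def estimate_skew_ppm(samples):
    """Least-squares slope of remote-clock offset versus local time, in ppm.

    samples: list of (local_time_s, remote_time_s) pairs.
    """
    n = len(samples)
    xs = [t for t, _ in samples]
    ys = [r - t for t, r in samples]  # offset grows linearly with skew
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return 1e6 * num / den  # slope in parts per million

# Hypothetical device whose clock runs 50 ppm fast, sampled once a minute
# for an hour:
skew = 50e-6
samples = [(t, t * (1 + skew)) for t in range(0, 3600, 60)]
print(round(estimate_skew_ppm(samples), 1))  # prints 50.0
```

Because the skew is a property of the hardware oscillator, the same slope shows up no matter where the device connects from, which is what makes it usable as a fingerprint across IP addresses and access networks.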
From The Guardian:
Though he foresaw many ways in which Big Brother might watch us, even George Orwell never imagined that the authorities would keep a keen eye on your bin.
Residents of Croydon, south London, have been told that the microchips being inserted into their new wheely bins may well be adapted so that the council can judge whether they are producing too much rubbish.
I call this kind of thing “embedded government”: hardware and/or software technology put inside of a device to make sure that we conform to the law.
And there are security risks.
If, for example, computer hackers broke in to the system, they could see sudden reductions in waste in specific households, suggesting the owners were on holiday and the house vacant.
To me, this is just another example of those implementing policy not being the ones who bear the costs. How long would the policy last if it were made clear to those implementing it that they would be held personally liable, even if only via their departmental budgets or careers, for any losses to residents if the database did get hacked?