Entries Tagged "accountability"

Hiding Behind Terrorism Law

The Bayer company is refusing to talk about a fatal accident at a West Virginia plant, citing a 2002 terrorism law.

The U.S. Chemical Safety and Hazard Investigation Board (CSB) had intended to hear community concerns, gather more information on the accident, and inform residents of the status of its investigation. However, Bayer attorneys contacted CSB Chairman John Bresland and set up a Feb. 12 conference at the board’s Washington, D.C., headquarters. There, they warned CSB not to reveal details of the accident or the facility’s layout at the community meeting.

“This is where it gets a little strange,” Bresland tells C&EN. To justify their request, Bayer attorneys cited the Maritime Transportation Security Act of 2002, an antiterrorism law that requires companies with plants on waterways to develop security plans to minimize the threat of a terrorist attack. Parts of those plans can be designated as “sensitive security information” that can be disseminated only on a “need-to-know basis.” The Coast Guard oversees enforcement of the act, which covers some 3,200 facilities, including 320 chemical and petrochemical plants. Among them is the Bayer plant.

Bayer argued that CSB’s planned public meeting could reveal sensitive plant-specific security information, Bresland says, and therefore would be a violation of the maritime transportation law. The board got cold feet and canceled the meeting.

Bresland contends that CSB wasn’t agreeing with Bayer, but says it was better to put off the meeting than to hold it and be unable to answer questions posed by the public.

The board then met with Coast Guard officials, Bresland says, and formally canceled the community meeting. The outcome of the Coast Guard meeting remains murky. It is unclear what role the Coast Guard might have in editing or restricting release of future CSB reports of accidents at covered facilities, the board says. “This could really cause difficulties for us,” Bresland says. “We could find ourselves hemming and hawing about what actually happened in an accident.”

This isn’t the first time that the specter of terrorism has been used to keep embarrassing information secret.

EDITED TO ADD (3/20): The meeting has been rescheduled. No word on how forthcoming Bayer will be.

Posted on March 18, 2009 at 12:45 PM

HIPAA Accountability in Stimulus Bill

On page 379 of the current stimulus bill, there’s a bit about requiring HHS to post a public list of companies that lost patient information:

(4) POSTING ON HHS PUBLIC WEBSITE—The Secretary shall make available to the public on the Internet website of the Department of Health and Human Services a list that identifies each covered entity involved in a breach described in subsection (a) in which the unsecured protected health information of more than 500 individuals is acquired or disclosed.

I’m not sure whether this passage survived into the final bill, but it will be interesting if it is now law.

EDITED TO ADD (3/13): It’s law.

Posted on February 18, 2009 at 12:28 PM

Unisys Blamed for DHS Data Breaches

This story has been percolating around for a few days. Basically, Unisys was hired by the U.S. Department of Homeland Security to manage and monitor the department’s network security. After data breaches were discovered, DHS blamed Unisys—and I figured that everyone would be in serious CYA mode and that we’d never know what really happened. But it seems that there was a cover-up at Unisys, and that’s a big deal:

As part of the contract, Unisys, based in Blue Bell, Pa., was to install network-intrusion detection devices on the unclassified computer systems for the TSA and DHS headquarters and monitor the networks. But according to evidence gathered by the House Homeland Security Committee, Unisys’s failure to properly install and monitor the devices meant that DHS was not aware for at least three months of cyber-intrusions that began in June 2006. Through October of that year, [committee chairman Bennie] Thompson said, 150 DHS computers—including one in the Office of Procurement Operations, which handles contract data—were compromised by hackers, who sent an unknown quantity of information to a Chinese-language Web site that appeared to host hacking tools.

The contractor also allegedly falsely certified that the network had been protected, in order to cover up its lax oversight, according to the committee.

What interests me the most (as someone with a company that does network security management and monitoring) is that there might be some liability here:

“For the hundreds of millions of dollars that have been spent on building this system within Homeland, we should demand accountability by the contractor,” [Congressman] Thompson said in an interview. “If, in fact, fraud can be proven, those individuals guilty of it should be prosecuted.”

And, as an aside, we see how useless certifications can be:

She [a Unisys spokeswoman] said that Unisys has provided DHS “with government-certified and accredited security programs and systems, which were in place throughout 2006 and remain so today.”

Posted on October 3, 2007 at 6:50 AM

Identification Technology in Personal-Use Tasers

Taser—yep, that’s the company’s name as well as the product’s name—is now selling a personal-use version of their product. It’s called the Taser C2, and it has an interesting embedded identification technology. Whenever the weapon is fired, it also sprays some serial-number bar-coded confetti, so a firing can be traced to a weapon and—presumably—the owner.

Anti-Felon Identification (AFID)

A system to deter misuse through enhanced accountability, AFID includes bar-coded serialization of each cartridge and disperses confetti-like ID tags upon activation.
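
Mechanically, the scheme is just a registry lookup: every cartridge serial is tied to a sale record, and a recovered confetti tag points back into that registry. Here is a minimal Python sketch of the idea; the data model and field names are my own invention, not Taser’s:

```python
# Hypothetical sketch of an AFID-style traceability registry.
# The record structure is invented for illustration; Taser's actual
# database and fields are not described here.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CartridgeRecord:
    serial: str     # bar-coded serial printed on every confetti tag
    weapon_id: str  # weapon the cartridge was registered with
    owner: str      # registered purchaser

# Populated at point of sale or registration.
REGISTRY = {
    "C2-0001942": CartridgeRecord("C2-0001942", "TASER-C2-77813", "J. Doe"),
}

def trace_firing(confetti_serial: str) -> Optional[CartridgeRecord]:
    """Map a serial read off a recovered ID tag back to a weapon and owner."""
    return REGISTRY.get(confetti_serial)

print(trace_firing("C2-0001942"))  # -> CartridgeRecord(..., owner='J. Doe')
```

The deterrent is only as strong as the registry behind it: if sales aren’t tied to identities, a tag traces a firing to a cartridge but not to a person.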

Posted on August 22, 2007 at 6:57 AM

Conversation with Kip Hawley, TSA Administrator (Part 3)

This is Part 3 of a five-part series. Link to whole thing.

BS: Let’s talk about ID checks. I’ve called the no-fly list a list of people so dangerous they cannot be allowed to fly under any circumstance, yet so innocent we can’t arrest them even under the Patriot Act. Except that’s not even true; anyone, no matter how dangerous they are, can fly without an ID, or by using someone else’s boarding pass. And the list itself is filled with people who shouldn’t be on it—dead people, people in jail, and so on—and primarily catches innocents with similar names. Why are you bothering?

KH: Because it works. We just completed a scrub of every name on the no-fly list and cut it in half—essentially cleaning out people who were no longer an active terror threat. We do not publicize how often the no-fly system stops people you would not want on your flight. Several times a week would low-ball it.

Your point about the no-ID and false boarding pass people is a great one. We are moving people who have tools and training to get at that problem. The bigger issue is that TSA is moving in the direction of security that picks up on behavior versus just keying on what we see in your bag. It really would be security theater if all we did was try to find possible weapons in that crunched fifteen seconds and fifteen feet after you anonymously walk through the magnetometer. We do a better job, with less aggravation of ordinary passengers, if we put people-based layers further ahead in the process—behavior observation based on involuntary, observable muscle behavior, canine teams, document verification, etc.

BS: We’ll talk about behavioral profiling later; no fair defending one security measure by pointing to another, completely separate, one. How can you claim ID cards work? Like the liquid ban, all it does is annoy innocent travelers without doing more than inconveniencing any future terrorists. Is it really good enough for you to defend me from terrorists too dumb to Google “print your own boarding pass”?

KH: We are getting at the fake boarding pass and ID issues with our proposal to Congress that would allow us to replace existing document checkers with more highly trained people with tools that would close those gaps. Without effective identity verification, watch lists don’t do much, so this is a top priority.

Having highly trained TSOs performing the document checking function closes a security gap, adds another security layer, and pushes TSA’s security program out in front of the checkpoint.

BS: Let’s move on. Air travelers think you’re capricious. Remember in April when the story went around about the Princeton professor being on a no-fly list because he spoke out against President Bush? His claims were easily debunked, but the real story is that so many people believed it. People believe political activity puts them on the list. People are afraid to complain about being mistreated at checkpoints because they’re afraid it puts them on a list. Is there anything you can do to make this process more transparent?

KH: We need some help on this one. This is the biggest public pain point, dwarfing shoes and baggies.

First off, TSA does not add people to the watch-lists, no matter how cranky you are at a checkpoint. Second, political views have nothing to do with no-flys or selectees. These myths have taken on urban legend status. There are very strict criteria and they are reviewed by lots of separate people in separate agencies: it is for live terror concerns only. The problem comes from random selectees (literally mathematically random) or people who have the same name and birth date as real no-flys. If you can get a boarding pass, you are not on the no-fly list. This problem will go away when Secure Flight starts in 2008, but we can’t seem to shake the false impression that ordinary Americans get put on a “list.” I am open for suggestions on how to make the public “get it.”

BS: It’s hard to believe that there could be hundreds of thousands of people meeting those very strict criteria, and that’s after the list was cut in half! I know the TSA does not control the no-fly and watch lists, but you’re the public face of those lists. You’re the aspect of homeland security that people come into direct contact with. Some people might find out they’re on the list by being arrested, or being shipped off to Syria for torture, but most people find out they’re on the list by being repeatedly searched and questioned for hours at airports.

The main problem with the list is that it’s secret. Who is on the list is secret. Why someone is on it is secret. How someone can get off it is secret. There’s no accountability and there’s no transparency. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states.

The best thing you can do to improve the problem is redress. People need the ability to see the evidence against them, challenge their accuser, and have a hearing in a neutral court. If they’re guilty of something, arrest them. And if they’re innocent, stop harassing them. It’s basic liberty.

I don’t actually expect you to fix this; the problem is larger than the TSA. But can you tell us something about redress? It’s been promised to us for years now.

KH: Redress issues are divided into two categories: people on the no-fly list and people who have names similar to them.

In our experience, the first group is not a heavy user of the redress process. They typically don’t want anything to do with the U.S. government. Still, if someone is either wrongly put on or kept on, the Terrorist Screening Center (TSC) removes him or her immediately. In fact, TSA worked with the TSC to review every name, and that review cut the no-fly list in half. Having said that, once someone is really on the no-fly list, I totally agree with what you said about appeal rights. This is true across the board, not just with no-flys. DHS has recently consolidated redress for all DHS activities into one process called DHS TRIP. If you are mistaken for a real no-fly, you can let TSA know and we provide your information to the airlines, who right now are responsible for identifying no-flys trying to fly. Each airline uses its own system, so some can get you cleared to use kiosks, while others still require a visit to the ticket agent. When Secure Flight is operating, we’ll take that in-house at TSA and the problem should go away.

BS: I still don’t see how that will work, as long as the TSA doesn’t have control over who gets on or off the list.

Part 4: Registered Traveler and behavioral profiling

Posted on August 1, 2007 at 6:12 AM

Anonymity and Accountability

Last week I blogged Kevin Kelly’s rant against anonymity. Today I wrote about it for Wired.com:

And that’s precisely where Kelly makes his mistake. The problem isn’t anonymity; it’s accountability. If someone isn’t accountable, then knowing his name doesn’t help. If you have someone who is completely anonymous, yet just as completely accountable, then—heck, just call him Fred.

History is filled with bandits and pirates who amass reputations without anyone knowing their real names.

EBay’s feedback system doesn’t work because there’s a traceable identity behind that anonymous nickname. EBay’s feedback system works because each anonymous nickname comes with a record of previous transactions attached, and if someone cheats someone else then everybody knows it.

Similarly, Wikipedia’s veracity problems are not a result of anonymous authors adding fabrications to entries. They’re an inherent property of an information system with distributed accountability. People think of Wikipedia as an encyclopedia, but it’s not. We all trust Britannica entries to be correct because we know the reputation of that company, and by extension its editors and writers. On the other hand, we all should know that Wikipedia will contain a small amount of false information because no particular person is accountable for accuracy—and that would be true even if you could mouse over each sentence and see the name of the person who wrote it.
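
To make the eBay point concrete: the accountability lives in a transaction history attached to a persistent pseudonym, not in a legal name. A toy Python sketch (the scoring scheme is invented for illustration):

```python
# Toy model of pseudonymous accountability, eBay-style. The scoring is
# invented for illustration; the point is that no real name appears anywhere.
from collections import defaultdict

feedback = defaultdict(list)  # nickname -> list of ratings (+1 good, -1 bad)

def record_transaction(nickname, rating):
    """Attach one more public outcome to the pseudonym's track record."""
    feedback[nickname].append(rating)

def reputation(nickname):
    """The visible history is the accountability, not the identity."""
    return sum(feedback[nickname])

record_transaction("fred", +1)
record_transaction("fred", +1)
record_transaction("fred", -1)
print(reputation("fred"))  # 1 -- and everyone can see Fred cheated once
```

Nothing in that sketch needs a real name; the history is the identity.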

Please read the whole thing before you comment.

Posted on January 12, 2006 at 4:36 AM

More Erosion of Police Oversight in the U.S.

From EPIC:

Documents obtained by EPIC in a Freedom of Information Act lawsuit reveal FBI agents expressing frustration that the Office of Intelligence Policy and Review, an office that reviews FBI search requests, had not approved applications for orders under Section 215 of the Patriot Act. A subsequent memo refers to “recent changes” allowing the FBI to “bypass” the office. EPIC is expecting to receive further information about this matter.

Some background:

Under Section 215, the FBI must show only “relevance” to a foreign intelligence or terrorism investigation to obtain vast amounts of personal information. It is unclear why the Office of Intelligence Policy and Review did not approve these applications. The FBI has not revealed this information, nor has it explained whether other search methods had failed.

Remember, the issue here is not whether or not the FBI can engage in counterterrorism. The issue is the erosion of judicial oversight—the only check we have on police power. And this power grab is dangerous regardless of which party is in the White House at the moment.

Posted on December 16, 2005 at 10:03 AM

Surveillance and Oversight

Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year’s Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database—probably close to a million people overall—that the FBI’s computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.

The first is the FISA warrant (sometimes called a Section 215 warrant, after the section of the Patriot Act that expanded its scope), issued in secret by a secret court. The second is the national security letter, less well known but much more powerful, which FBI field supervisors can issue all by themselves. The exact numbers are secret, but a recent Washington Post article estimated that 30,000 letters are issued each year, demanding telephone records, banking data, customer data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline requiring that this information be destroyed if it is not relevant to the investigation for which it was collected. Now it can be saved indefinitely, and disseminated freely.

September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April; most of the rest were unidentified, though captured on video. In an effort to identify them, the police sent text messages to 17,000 phones known to have been in the vicinity of the riots, asking that anyone with information contact the police. The result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. They worked with federal justice officials to ensure that they complied with the country’s strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.

On the other hand, the FBI had no judicial oversight. With only a vague hint that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First it tried asking for the data; then it turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.

These differences illustrate four principles that should guide the police’s use of personal information. The first is oversight: In order to obtain personal information, the police should be required to show probable cause and convince a judge to issue a warrant for the specific information needed. The second is minimization: The police should get only the specific information they need, and no more; they should not be allowed to collect large blocks of information in order to go on “fishing expeditions” for suspicious behavior. The third is transparency: The public should know, if not immediately then eventually, what information the police are getting and how it is being used. The fourth is destruction: Any data the police obtain should be destroyed immediately after its court-authorized purpose is achieved; the police should not be able to hold on to it just in case it might become useful at some future date.

This isn’t about our ability to combat terrorism; it’s about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value—not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.

This essay originally appeared in the Minneapolis Star-Tribune.

Posted on November 22, 2005 at 6:06 AM

Taser Cam

Here’s an excellent use for cameras:

Now, to help better examine how Tasers are used, manufacturer Taser International Inc. has developed a Taser Cam, which company executives hope will illuminate why Tasers are needed—and add another layer of accountability for any officer who would abuse the weapon.

The Taser Cam is an audio and video recorder that attaches to the butt of the gun and starts taping when the weapon is turned on. It continues recording until the weapon is turned off. The Taser doesn’t have to be fired to use the camera.

It’s the same idea as having cameras record all police interrogations, or record all police-car stops. It helps protect the populace against police abuse, and helps protect the police against accusations of abuse.

This is where cameras do good: when they lessen a power imbalance. Imagine if they were continuously recording the actions of elected officials—when they were acting in their official capacity, that is.

Of course, cameras are only as useful as their data. If critical recordings are “lost,” then there’s no accountability. The system is pretty kludgy:

The Taser Cam records in black and white but is equipped with infrared technology to record images in very low light. The camera will have at least one hour of recording time, the company said, and the video can be downloaded to a computer over a USB cable.

How soon before the cameras simply upload their recordings, in real time, to some trusted vault somewhere?
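
Here’s a minimal sketch of what that could look like, assuming a hypothetical vault service: hash-chain the chunks as they are captured, so that a quietly “lost” or altered recording is detectable after the fact.

```python
# Sketch of real-time upload to a hypothetical "trusted vault": each chunk's
# digest covers all previous chunks, so a missing or altered chunk breaks
# the chain and the loss is detectable. No real vault API is assumed here.
import hashlib

def chain_chunks(chunks):
    """Yield (seq, digest, chunk); each digest commits to all prior chunks."""
    prev = b""
    for seq, chunk in enumerate(chunks):
        digest = hashlib.sha256(prev + chunk).hexdigest()
        yield seq, digest, chunk
        prev = digest.encode()

recording = [b"frame-block-1", b"frame-block-2", b"frame-block-3"]
for seq, digest, chunk in chain_chunks(recording):
    # On a real device, each tuple would be POSTed to the vault as it is
    # produced; a gap in the sequence or a chain mismatch shows tampering.
    print(seq, digest[:16], len(chunk))
```

Chaining the digests is what closes the “lost recording” loophole: the vault can show exactly which chunks it received, and in what order.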

EDITED TO ADD: CNN has a story.

Posted on November 9, 2005 at 8:46 AM

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old-fashioned: the developers need to be accountable to their employers and provided with incentives, better tools and proper training.
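
To make “common exploit techniques” concrete, here is the classic one, SQL injection, in a few lines of Python. The example is mine, not Schmidt’s; it is exactly the kind of bug that secure-coding training and automated scanners are meant to catch:

```python
# SQL injection in miniature (my example, not from Schmidt's essay).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: attacker-controlled text is pasted directly into the query.
vulnerable = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())  # dumps every row

# Fixed: a parameterized query treats the input as data, never as SQL.
print(conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)
).fetchall())  # returns nothing
```

It’s a bug that is trivial to avoid once you’ve been trained to see it, and invisible if you haven’t.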

He’s against making vendors liable for defects in their products, unlike every other industry:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing that it didn’t exist.

Posted on November 8, 2005 at 7:34 AM
