Entries Tagged "transparency"


Who Should Be in Charge of U.S. Cybersecurity?

U.S. government cybersecurity is an insecure mess, and fixing it is going to take considerable attention and resources. Trying to make sense of this, President Barack Obama ordered a 60-day review of government cybersecurity initiatives. Meanwhile, the U.S. House Subcommittee on Emerging Threats, Cybersecurity, Science and Technology is holding hearings on the same topic.

One of the areas of contention is who should be in charge. The FBI, DHS and DoD—specifically, the NSA—all have interests here. Earlier this month, Rod Beckström resigned from his position as director of the DHS’s National Cybersecurity Center, warning of a power grab by the NSA.

Putting national cybersecurity in the hands of the NSA is an incredibly bad idea. An entire parade of people, ranging from former FBI director Louis Freeh to Microsoft’s Trustworthy Computing Vice President and former Justice Department computer crime chief Scott Charney, has told Congress the same thing at this month’s hearings.

Cybersecurity isn’t a military problem, or even a government problem—it’s a universal problem. All networks, military, government, civilian and commercial, use the same computers, the same networking hardware, the same Internet protocols and the same software packages. We all are the targets of the same attack tools and tactics. It’s not even that government targets are somehow more important; these days, most of our nation’s critical IT infrastructure is in commercial hands. Government-sponsored Chinese hackers go after both military and civilian targets.

Some have said that the NSA should be in charge because it has specialized knowledge. Earlier this month, Director of National Intelligence Admiral Dennis Blair made this point, saying “There are some wizards out there at Ft. Meade who can do stuff.” That’s probably not true, but if it is, we’d better get them out of Ft. Meade as soon as possible—they’re doing the nation little good where they are now.

Not that government cybersecurity failings require any specialized wizardry to fix. GAO reports indicate that government problems include insufficient access controls, a lack of encryption where necessary, poor network management, failure to install patches, inadequate audit procedures, and incomplete or ineffective information security programs. These aren’t super-secret NSA-level security issues; these are the same managerial problems that every corporate CIO wrestles with.

We’ve all got the same problems, so solutions must be shared. If the government has any clever ideas to solve its cybersecurity problems, certainly a lot of us could benefit from those solutions. If it has an idea for improving network security, it should tell everyone. The best thing the government can do for cybersecurity world-wide is to use its buying power to improve the security of the IT products everyone uses. If it imposes significant security requirements on its IT vendors, those vendors will modify their products to meet those requirements. And those same products, now with improved security, will become available to all of us as the new standard.

Moreover, the NSA’s dual mission of providing security and conducting surveillance means it has an inherent conflict of interest in cybersecurity. Inside the NSA, this is called the “equities issue.” During the Cold War, it was easy; the NSA used its expertise to protect American military information and communications, and eavesdropped on Soviet information and communications. But what happens when both the good guys the NSA wants to protect, and the bad guys the NSA wants to eavesdrop on, use the same systems? They all use Microsoft Windows, Oracle databases, Internet email, and Skype. When the NSA finds a vulnerability in one of those systems, does it alert the manufacturer and fix it—making both the good guys and the bad guys more secure? Or does it keep quiet about the vulnerability and not tell anyone—making it easier to spy on the bad guys but also keeping the good guys insecure? Programs like the NSA’s warrantless wiretapping program have created additional vulnerabilities in our domestic telephone networks.

Testifying before Congress earlier this month, former DHS National Cyber Security division head Amit Yoran said “the intelligence community has always and will always prioritize its own collection efforts over the defensive and protection mission of our government’s and nation’s digital systems.”

Maybe the NSA could convince us that it’s putting cybersecurity first, but its culture of secrecy will mean that any decisions it makes will be suspect. Under current law, extended by the Bush administration’s extravagant invocation of the “state secrets” privilege when charged with statutory and constitutional violations, the NSA’s activities are not subject to any meaningful public oversight. And the NSA’s tradition of military secrecy makes it harder for it to coordinate with other government IT departments, most of which don’t have clearances, let alone coordinate with local law enforcement or the commercial sector.

We need transparent and accountable government processes, using commercial security products. We need government cybersecurity programs that improve security for everyone. The NSA certainly has an advisory and a coordination role in national cybersecurity, and perhaps a more supervisory role in DoD cybersecurity—both offensive and defensive—but it should not be in charge.

A version of this essay appeared on The Wall Street Journal website.

Posted on April 2, 2009 at 6:09 AM

Hiding Behind Terrorism Law

The Bayer company is refusing to talk about a fatal accident at a West Virginia plant, citing a 2002 terrorism law.

CSB had intended to hear community concerns, gather more information on the accident, and inform residents of the status of its investigation. However, Bayer attorneys contacted CSB Chairman John Bresland and set up a Feb. 12 conference at the board’s Washington, D.C., headquarters. There, they warned CSB not to reveal details of the accident or the facility’s layout at the community meeting.

“This is where it gets a little strange,” Bresland tells C&EN. To justify their request, Bayer attorneys cited the Maritime Transportation Security Act of 2002, an antiterrorism law that requires companies with plants on waterways to develop security plans to minimize the threat of a terrorist attack. Part of the plans can be designated as “sensitive security information” that can be disseminated only on a “need-to-know basis.” Enforcement of the act is overseen by the Coast Guard and covers some 3,200 facilities, including 320 chemical and petrochemical facilities. Among those facilities is the Bayer plant.

Bayer argued that CSB’s planned public meeting could reveal sensitive plant-specific security information, Bresland says, and therefore would be a violation of the maritime transportation law. The board got cold feet and canceled the meeting.

Bresland contends that CSB wasn’t agreeing with Bayer, but says it was better to put off the meeting than to hold it and be unable to answer questions posed by the public.

The board then met with Coast Guard officials, Bresland says, and formally canceled the community meeting. The outcome of the Coast Guard meeting remains murky. It is unclear what role the Coast Guard might have in editing or restricting release of future CSB reports of accidents at covered facilities, the board says. “This could really cause difficulties for us,” Bresland says. “We could find ourselves hemming and hawing about what actually happened in an accident.”

This isn’t the first time that the specter of terrorism has been used to keep embarrassing information secret.

EDITED TO ADD (3/20): The meeting has been rescheduled. No word on how forthcoming Bayer will be.

Posted on March 18, 2009 at 12:45 PM

Privacy and Power

When I write and speak about privacy, I am regularly confronted with the mutual disclosure argument. Explained in books like David Brin’s The Transparent Society, the argument goes something like this: In a world of ubiquitous surveillance, you’ll know all about me, but I will also know all about you. The government will be watching us, but we’ll also be watching the government. This is different than before, but it’s not automatically worse. And because I know your secrets, you can’t use my secrets as a weapon against me.

This might not be everybody’s idea of utopia—and it certainly doesn’t address the inherent value of privacy—but this theory has a glossy appeal, and could easily be mistaken for a way out of the problem of technology’s continuing erosion of privacy. Except it doesn’t work, because it ignores the crucial dissimilarity of power.

You cannot evaluate the value of privacy and disclosure unless you account for the relative power levels of the discloser and the disclosee.

If I disclose information to you, your power with respect to me increases. One way to address this power imbalance is for you to similarly disclose information to me. We both have less privacy, but the balance of power is maintained. But this mechanism fails utterly if you and I have different power levels to begin with.

An example will make this clearer. You’re stopped by a police officer, who demands to see identification. Divulging your identity will give the officer enormous power over you: He or she can search police databases using the information on your ID; he or she can create a police record attached to your name; he or she can put you on this or that secret terrorist watch list. Asking to see the officer’s ID in return gives you no comparable power over him or her. The power imbalance is too great, and mutual disclosure does not make it OK.

You can think of your existing power as the exponent in an equation that determines the value, to you, of more information. The more power you have, the more additional power you derive from the new data.
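To make that metaphor concrete (my own illustration, not a formal model): let I > 1 stand for the amount of newly disclosed information and P for the existing power of the party receiving it, and take the value of the disclosure to the recipient to be roughly

V(I, P) = I^P

The same disclosure is then worth far more to the more powerful party. If an officer with P = 10 and a citizen with P = 2 each learn the same I = 2 about the other, the officer gains 2^10 = 1,024 while the citizen gains 2^2 = 4. The exchange is mutual, but the gap widens.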

Another example: When your doctor says “take off your clothes,” it makes no sense for you to say, “You first, doc.” The two of you are not engaging in an interaction of equals.

This is the principle that should guide decision-makers when they consider installing surveillance cameras or launching data-mining programs. It’s not enough to open the efforts to public scrutiny. All aspects of government work best when the relative power between the governors and the governed remains as small as possible—when liberty is high and control is low. Forced openness in government reduces the relative power differential between the two, and is generally good. Forced openness in laypeople increases the relative power, and is generally bad.

Seventeen-year-old Erik Crespo was arrested in 2005 in connection with a shooting in a New York City elevator. There’s no question that he committed the shooting; it was captured on surveillance-camera videotape. But he claimed that while being interrogated, Detective Christopher Perino tried to talk him out of getting a lawyer, and told him that he had to sign a confession before he could see a judge.

Perino denied, under oath, that he ever questioned Crespo. But Crespo had received an MP3 player as a Christmas gift, and surreptitiously recorded the questioning. The defense brought a transcript and CD into evidence. Shortly thereafter, the prosecution offered Crespo a better deal than originally proffered (seven years rather than 15). Crespo took the deal, and Perino was separately indicted on charges of perjury.

Without that recording, it was the detective’s word against Crespo’s. And who would believe a murder suspect over a New York City detective? That power imbalance was reduced only because Crespo was smart enough to press the “record” button on his MP3 player. Why aren’t all interrogations recorded? Why don’t defendants have the right to those recordings, just as they have the right to an attorney? Police routinely record traffic stops from their squad cars for their own protection; that video record shouldn’t stop once the suspect is no longer a threat.

Cameras make sense when trained on police, and in offices where lawmakers meet with lobbyists, and wherever government officials wield power over the people. Open-government laws, giving the public access to government records and meetings of governmental bodies, also make sense. These all foster liberty.

Ubiquitous surveillance programs that affect everyone without probable cause or warrant, like the National Security Agency’s warrantless eavesdropping programs or various proposals to monitor everything on the internet, foster control. And no one is safer in a political system of control.

This essay originally appeared on Wired.com.

Commentary by David Brin.

Posted on March 11, 2008 at 6:09 AM

Interview with National Intelligence Director Mike McConnell

Mike McConnell, U.S. National Intelligence Director, gave an interesting interview to the El Paso Times.

I don’t think he’s ever been so candid before. For example, he admitted that the nation’s telcos assisted the NSA in their massive eavesdropping efforts. We already knew this, of course, but the government has steadfastly maintained that either confirming or denying this would compromise national security.

There are, of course, moments of surreality. He said that it takes 200 hours to prepare a FISA warrant. Ryan Singel calculated that since there were 2,167 such warrants in 2006, there must be “218 government employees with top secret clearances sitting in rooms, writing only FISA warrants.” Seems unlikely.
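Singel’s figure is simple arithmetic (assuming a standard work year of roughly 2,000 hours, which appears to be the assumption behind his number):

2,167 warrants × 200 hours per warrant = 433,400 hours ≈ 217 person-years

or, give or take how you count a work year, about 218 people doing nothing but writing FISA warrants.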

But most notable is this bit:

Q. So you’re saying that the reporting and the debate in Congress means that some Americans are going to die?

A. That’s what I mean. Because we have made it so public. We used to do these things very differently, but for whatever reason, you know, it’s a democratic process and sunshine’s a good thing. We need to have the debate.

Ah, the politics of fear. I don’t care if it’s the terrorists or the politicians, refuse to be terrorized. (More interesting discussions on the interview here, here, here, here, here, and here.)

Posted on August 24, 2007 at 6:30 AM

Conversation with Kip Hawley, TSA Administrator (Part 3)

This is Part 3 of a five-part series. Link to whole thing.

BS: Let’s talk about ID checks. I’ve called the no-fly list a list of people so dangerous they cannot be allowed to fly under any circumstance, yet so innocent we can’t arrest them even under the Patriot Act. Except that’s not even true; anyone, no matter how dangerous they are, can fly without an ID, or by using someone else’s boarding pass. And the list itself is filled with people who shouldn’t be on it—dead people, people in jail, and so on—and primarily catches innocents with similar names. Why are you bothering?

KH: Because it works. We just completed a scrub of every name on the no-fly list and cut it in half—essentially cleaning out people who were no longer an active terror threat. We do not publicize how often the no-fly system stops people you would not want on your flight. Several times a week would low-ball it.

Your point about the no-ID and false boarding pass people is a great one. We are moving people who have tools and training to get at that problem. The bigger issue is that TSA is moving in the direction of security that picks up on behavior versus just keying on what we see in your bag. It really would be security theater if all we did was try to find possible weapons in that crunched fifteen seconds and fifteen feet after you anonymously walk through the magnetometer. We do a better job, with less aggravation of ordinary passengers, if we put people-based layers further ahead in the process—behavior observation based on involuntary, observable muscle behavior, canine teams, document verification, etc.

BS: We’ll talk about behavioral profiling later; no fair defending one security measure by pointing to another, completely separate, one. How can you claim ID cards work? Like the liquid ban, all it does is annoy innocent travelers without doing more than inconveniencing any future terrorists. Is it really good enough for you to defend me from terrorists too dumb to Google “print your own boarding pass”?

KH: We are getting at the fake boarding pass and ID issues with our proposal to Congress that would allow us to replace existing document checkers with more highly trained people with tools that would close those gaps. Without effective identity verification, watch lists don’t do much, so this is a top priority.

Having highly trained TSOs performing the document checking function closes a security gap, adds another security layer, and pushes TSA’s security program out in front of the checkpoint.

BS: Let’s move on. Air travelers think you’re capricious. Remember in April when the story went around about the Princeton professor being on a no-fly list because he spoke out against President Bush? His claims were easily debunked, but the real story is that so many people believed it. People believe political activity puts them on the list. People are afraid to complain about being mistreated at checkpoints because they’re afraid it puts them on a list. Is there anything you can do to make this process more transparent?

KH: We need some help on this one. This is the biggest public pain point, dwarfing shoes and baggies.

First off, TSA does not add people to the watch-lists, no matter how cranky you are at a checkpoint. Second, political views have nothing to do with no-flys or selectees. These myths have taken on urban legend status. There are very strict criteria and they are reviewed by lots of separate people in separate agencies: it is for live terror concerns only. The problem comes from random selectees (literally mathematically random) or people who have the same name and birth date as real no-flys. If you can get a boarding pass, you are not on the no-fly list. This problem will go away when Secure Flight starts in 2008, but we can’t seem to shake the false impression that ordinary Americans get put on a “list.” I am open for suggestions on how to make the public “get it.”

BS: It’s hard to believe that there could be hundreds of thousands of people meeting those very strict criteria, and that’s after the list was cut in half! I know the TSA does not control the no-fly and watch lists, but you’re the public face of those lists. You’re the aspect of homeland security that people come into direct contact with. Some people might find out they’re on the list by being arrested, or being shipped off to Syria for torture, but most people find out they’re on the list by being repeatedly searched and questioned for hours at airports.

The main problem with the list is that it’s secret. Who is on the list is secret. Why someone’s on is secret. How someone can get off is secret. There’s no accountability and there’s no transparency. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states.

The best thing you can do to improve the problem is redress. People need the ability to see the evidence against them, challenge their accuser, and have a hearing in a neutral court. If they’re guilty of something, arrest them. And if they’re innocent, stop harassing them. It’s basic liberty.

I don’t actually expect you to fix this; the problem is larger than the TSA. But can you tell us something about redress? It’s been promised to us for years now.

KH: Redress issues are divided into two categories: people on the no-fly list and people who have names similar to them.

In our experience, the first group is not a heavy user of the redress process. They typically don’t want anything to do with the U.S. government. Still, if someone is either wrongly put on or kept on, the Terrorist Screening Center (TSC) removes him or her immediately. In fact, TSA worked with the TSC to review every name, and that review cut the no-fly list in half. Having said that, once someone is really on the no-fly list, I totally agree with what you said about appeal rights. This is true across the board, not just with no-flys. DHS has recently consolidated redress for all DHS activities into one process called DHS TRIP. If you are mistaken for a real no-fly, you can let TSA know and we provide your information to the airlines, who right now are responsible for identifying no-flys trying to fly. Each airline uses its own system, so some can get you cleared to use kiosks, while others still require a visit to the ticket agent. When Secure Flight is operating, we’ll take that in-house at TSA and the problem should go away.

BS: I still don’t see how that will work, as long as the TSA doesn’t have control over who gets on or off the list.

Part 4: Registered Traveler and behavioral profiling

Posted on August 1, 2007 at 6:12 AM

Buildings You Can't Photograph

Very Kafkaesque:

The bottom line is that McCammon was caught in a classic logical trap. If he had only known the building was off-limits to photographers, he would have avoided it. But he was not allowed to know that fact. “Reasonable, law-abiding people tend to avoid these types of things when it can be helped,” McCammon wrote. “Thus, my request for a list of locations within Arlington County that are unmarked, but at which photography is either prohibited or discouraged according to some (public or private) policy. Of course, such a list does not exist. Catch-22.”

The only antidote to this security mania is sunshine. Only when more and more Americans do as McCammon has done and take the time and effort to chronicle these excesses and insist on answers from authorities will we stand a chance of restoring balance and sanity to the blend of liberty and security that we are madly remixing in these confused times.

Here’s the relevant map. It’s the building on the NW/upper-left side of the intersection.

Posted on July 19, 2007 at 2:25 PM

Security Plus Privacy

The Royal Academy of Engineering (in the UK) has just published a report, “Dilemmas of Privacy and Surveillance: Challenges of Technological Change” (press release here), in which it argues that security and privacy are not in opposition, and that we can have both if we’re sensible about it.

Recommendations

R1 Systems that involve the collection, checking and processing of personal information should be designed in order to diminish the risk of failure as far as reasonably practicable. Development of such systems should make the best use of engineering expertise in assessing and managing vulnerabilities and risks. Public sector organisations should take the lead in this area, as they collect and process a great deal of sensitive personal data, often on a non-voluntary basis.

R2 Many failures can be foreseen. It is essential to have procedures in place to deal with the consequences of failure in systems used to collect, store or process personal information. These should include processes for aiding and compensating individuals who are affected.

R3 Human rights law already requires that everyone should have their reasonable expectation of privacy respected and protected. Clarification of what counts as a reasonable expectation of privacy is necessary in order to protect this right and a public debate, including the legal, technical and political communities, should be encouraged in order to work towards a consensus on the definition of what is a ‘reasonable expectation’. This debate should take into account the effect of an easily searchable Internet when deciding what counts as a reasonable expectation of privacy.

R4 The powers of the Information Commissioner should be extended. Significant penalties—including custodial sentences—should be imposed on individuals or organisations that misuse data. The Information Commissioner should also have the power to perform audits and to direct that audits be performed by approved auditors in order to encourage organisations to always process data in accordance with the Data Protection Act. A public debate should be held on whether the primary control should be on the collection of data, or whether it is the processing and use of data that should be controlled, with penalties for improper use.

R5 Organisations should not seek to identify the individuals with whom they have dealings if all they require is authentication of rightful access to goods or services. Systems that allow automated access to a service such as public transport should be developed to use only the minimal authenticating information necessary. When organisations do desire identification, they should be required to justify why identification, rather than authentication, is needed. In such circumstances, a minimum of identifying information should be expected.

R6 Research into the effectiveness of camera surveillance is necessary, to judge whether its potential intrusion into people’s privacy is outweighed by its benefits. Effort should be put into researching ways of monitoring public spaces that minimise the impact on privacy—for example, pursuing engineering research into developing effective means of automated surveillance which ignore law-abiding activities.

R7 Information technology services should be designed to maintain privacy. Research should be pursued into the possibility of ‘designing for privacy’ and a concern for privacy should be encouraged amongst practising engineers and engineering teachers. Possibilities include designing methods of payment for travel and other goods and services without revealing identity and protecting electronic personal information by using similar methods to those used for protecting copyrighted electronic material.

R8 There is need for clarity on the rights and expectations that individuals have over their personal information. A digital charter outlining an individual’s rights and expectations over how their data are managed, shared and protected would deliver that clarity. Access by individuals to their personal data should also be made easier; for example, by automatically providing free copies of credit reports annually. There should be debate on how personal data are protected—how it can be ensured that the data are accurate, secure and private. Companies, or other trusted, third-party organisations, could have the role of data banks—trusted guardians of personal data. Research into innovative business models for such companies should be encouraged.

R9 Commercial organisations that select their customers or vary their offers to individuals on the basis of profiling should be required, on request, to divulge to the data subjects that profiling has been used. Profiling will always be used to differentiate between customers, but unfair or excessively discriminating profiling systems should not be permitted.

R10 Data collection and use systems should be designed so that there is reciprocity between data subjects and owners of the system. This includes transparency about the kinds of data collected and the uses intended for it; and data subjects having the right to receive clear explanations and justifications for data requests. In the case of camera surveillance, there should be debate on and research into ways to allow the public some level of access to the images captured by surveillance cameras.
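Recommendation R5’s distinction between identification and authentication is easy to make concrete. As a minimal sketch (my own illustration using a hypothetical transit-pass scheme, not anything proposed in the report), the operator could issue an anonymous, signed, time-limited token, and the gate would verify only that the token is genuine and unexpired:

import hmac
import hashlib
import secrets
import time

# Key held by the transit operator; the gate needs it to verify passes.
OPERATOR_KEY = secrets.token_bytes(32)

def issue_pass(operator_key: bytes, valid_seconds: int) -> dict:
    """Issue an anonymous pass: a random serial, an expiry time, and a MAC.
    Nothing about the holder's identity is collected or encoded."""
    serial = secrets.token_hex(16)              # random, not derived from any ID
    expires = int(time.time()) + valid_seconds
    msg = f"{serial}:{expires}".encode()
    tag = hmac.new(operator_key, msg, hashlib.sha256).hexdigest()
    return {"serial": serial, "expires": expires, "tag": tag}

def gate_accepts(operator_key: bytes, p: dict) -> bool:
    """Authenticate the pass (genuine and unexpired) without identifying the holder."""
    msg = f"{p['serial']}:{p['expires']}".encode()
    expected = hmac.new(operator_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, p["tag"]) and time.time() < p["expires"]

if __name__ == "__main__":
    day_pass = issue_pass(OPERATOR_KEY, valid_seconds=24 * 3600)
    print(gate_accepts(OPERATOR_KEY, day_pass))  # True: valid, yet anonymous

The gate learns that the pass is genuine and still valid, and nothing else; that is the “minimal authenticating information” the recommendation asks for.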

The whole thing is worth reading, as is this article from The Register.

Posted on March 29, 2007 at 11:11 AM

Money Laundering Inside the U.S.

With all the attention on foreign money laundering, we’re ignoring the problem in our own country.

How widespread is the problem? No one really knows for sure because the states “have no idea who is behind the companies they have incorporated,” says Senator Carl Levin (D-Mich.), who is trying to force the states to insist on greater transparency. “The United States should never be the situs of choice for international crime, but that is exactly what the lax regulatory regimes in some of our states are inviting.” The Financial Crimes Enforcement Network, the U.S. Treasury bureau investigating money laundering, says roughly $14 billion worth of suspicious transactions involving private U.S. shells and overseas bank accounts came in from banks from 2004 to 2005, the latest Treasury data available. That’s up from $4 billion for the long stretch between April 1996 and January 2004. Now, estimates the FBI, anonymously held U.S. shell companies have laundered $36 billion to date just from the former Soviet Union.

State governments provide plenty of cover for bad guys. Every year they incorporate 1.9 million or so private companies, but no state verifies or records the identities of owners, much less screens ownership information against criminal watch lists, according to a study by the Government Accountability Office. “You have to supply more information to get a driver’s license than you do to form one of these nonpublicly traded corporations,” says Senator Levin.

Posted on February 28, 2007 at 7:59 AM

Debating Full Disclosure

Full disclosure—the practice of making the details of security vulnerabilities public—is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you—the user—much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are online-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM
