Entries Tagged "secrecy"

America's Newfound Love of Secrecy

Really good Washington Post article on secrecy:

But the notion that information is more credible because it’s secret is increasingly unfounded. In fact, secret information is often more suspect because it hasn’t been subjected to open debate. Those with their own agendas can game the system, over-classifying or stove-piping self-serving intelligence to shield it from scrutiny. Those who cherry-picked intelligence in the run-up to the Iraq war could ignore anything that contradicted it. Even now, some members of Congress tell me that they avoid reading classified reports for fear that if they do, the edicts of secrecy will bar them from discussing vital public issues.

Real secrets—blueprints for nuclear weapons, specific troop movements, the identities of covert operatives in the field—deserve to be safeguarded. But when secrecy is abused, the result is a dangerous disdain that leads to officials exploiting secrecy for short-term advantage (think of the Valerie Plame affair or the White House leaking selected portions of National Intelligence Estimates to bolster flagging support for the Iraq war). Then disregard for the real need for secrecy spreads to the public. WhosaRat.com reveals the names of government witnesses in criminal cases. Other Web sites seek to out covert operatives or to post sensitive security documents online.

Back in 2002 I wrote about the relationship between secrecy and security.

Posted on June 27, 2007 at 6:58 AM

Seventh Harry Potter Hacked?

Someone claims to have hacked the Bloomsbury Publishing network, and has posted what he says is the ending to the last Harry Potter book.

I don’t believe it, actually. Sure, it’s possible—probably even easy. But the posting just doesn’t read right to me.

The attack strategy was the easiest one. The usual milw0rm downloaded exploit delivered by email/click-on-the-link/open-browser/click-on-this-animated-icon/back-connect to some employee of Bloomsbury Publishing, the company that’s behind the Harry crap.

And I would expect someone who really got their hands on a copy of the manuscript to post the choice bits of text, not just a plot summary. It’s easier, and it’s better proof.

Sorry; I don’t buy it.

EDITED TO ADD (7/25): I was right; none of his “predictions” were correct.

Posted on June 21, 2007 at 2:30 PM

Does Secrecy Help Protect Personal Information?

Personal information protection is an economic problem, not a security problem. And the problem can be easily explained: The organizations we trust to protect our personal information do not suffer when information gets exposed. On the other hand, individuals who suffer when personal information is exposed don’t have the capability to protect that information.

There are actually two problems here: Personal information is easy to steal, and it’s valuable once stolen. We can’t solve one problem without solving the other. The solutions aren’t easy, and you’re not going to like them.

First, fix the economic problem. Credit card companies make more money extending easy credit and making it trivial for customers to use their cards than they lose from fraud. They won’t improve their security as long as you (and not they) are the one who suffers from identity theft. It’s the same for banks and brokerages: As long as you’re the one who suffers when your account is hacked, they don’t have any incentive to fix the problem. And data brokers like ChoicePoint are worse; they don’t suffer if they reveal your information. You don’t have a business relationship with them; you can’t even switch to a competitor in disgust.

Credit card security works as well as it does because the Truth in Lending Act of 1968 limits consumer liability for fraud to $50. If the credit card companies could pass fraud losses on to the consumers, they would be spending far less money to stop those losses. But once Congress forced them to suffer the costs of fraud, they invented all sorts of security measures—real-time transaction verification, expert systems patrolling the transaction database and so on—to prevent fraud. The lesson is clear: Make the party in the best position to mitigate the risk responsible for the risk. This will enable the capitalist innovation engine. Once it’s in the financial interest of financial institutions to protect us from identity theft, they will.

Second, stop using personal information to authenticate people. Watch how credit cards work. Notice that the store clerk barely looks at your signature, or how you can use credit cards remotely where no one can check your signature. The credit card industry learned decades ago that authenticating people has only limited value. Instead, they put most of their effort into authenticating the transaction, and they’re much more secure because of it.
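
The distinction is easy to make concrete. Below is a minimal sketch of transaction-level authentication: instead of trying to verify who the cardholder is, the system scores each transaction against the account’s own history and holds outliers for review. The rules, thresholds, and field names are hypothetical illustrations, not any card network’s actual system.

    from dataclasses import dataclass, field

    @dataclass
    class Account:
        home_country: str
        avg_amount: float                    # running average purchase size
        recent_merchants: set = field(default_factory=set)

    def transaction_risk(account: Account, amount: float,
                         country: str, merchant: str) -> int:
        """Score one transaction; a higher score is more suspicious.
        Hypothetical rules -- real systems use much richer models."""
        score = 0
        if country != account.home_country:
            score += 2                       # unusual location for this account
        if amount > 5 * account.avg_amount:
            score += 2                       # far larger than this account's norm
        if merchant not in account.recent_merchants:
            score += 1                       # merchant never seen on this account
        return score

    # No attempt to authenticate the person: the transaction itself is vetted.
    acct = Account(home_country="US", avg_amount=40.0,
                   recent_merchants={"grocer", "gas station"})
    if transaction_risk(acct, amount=900.0, country="RO",
                        merchant="electronics") >= 3:
        print("hold transaction for out-of-band verification")

The point of the sketch is that nothing in it depends on a secret about the cardholder; a stolen card number or mother’s maiden name doesn’t defeat it.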

This won’t solve the problem of securing our personal information, but it will greatly reduce the threat. Once the information is no longer of value, you only have to worry about securing the information from voyeurs rather than the more common—and more financially motivated—fraudsters.

And third, fix the other economic problem: Organizations that expose our personal information aren’t hurt by that exposure. We need a comprehensive privacy law that gives individuals ownership of their personal information and allows them to take action against organizations that don’t care for it properly.

“Passwords” like credit card numbers and mother’s maiden name used to work, but we’ve forever left the world where our privacy comes from the obscurity of our personal information and the difficulty others have in accessing it. We need to abandon security systems that are based on obscurity and difficulty, and build legal protections to take over where technological advances have left us exposed.

This essay appeared in the January issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 14, 2007 at 12:24 PM

Least Risk Bomb Location

This fascinating tidbit is from Aviation Week and Space Technology (April 9, 2007, p. 21), in David Bond’s “Washington Outlook” column (unfortunately, not online).

Need to Know

Security and society’s litigious bent combine to make airlines unsuited for figuring out the best place to put a suspected explosive device discovered during a flight, AirTran Airways tells the FAA (Federal Aviation Administration). Commenting on a proposed rule that would require, among other things, designation of a “least risk bomb location” (LRBL)—the place on an aircraft where a bomb would do the least damage if it exploded—AirTran engineering director Rick Shideler says it’s hard for airlines to get aircraft design information related to such a location because of agreements between manufacturers and the Homeland Security Department. The carrier got LRBL information for its 717s and 737s from Boeing but can’t find out why the locations were chosen, “or even who specifically picked them,” because of liability laws.

I’d never heard of an LRBL before, but the FAA has published proposed guidelines on them. Apparently flight crews are trained to stash suspicious objects there.

But liability seems to be getting in the way of security and common sense here. It seems reasonable that an airline’s engineering director should be allowed to understand the technical reasoning behind the choice of LRBL, and maybe even give the manufacturer feedback on it.

EDITED TO ADD (4/21): Comment (below) from a pilot: The designation of a “least risk bomb location” is nothing new. All planes have a designated area where potentially dangerous packages should be placed. Usually it’s in the back, adjacent to a door. There are a slew of procedures to be followed if an explosive device is found on board: depressurizing the plane, moving the item to the LRBL, and bracing/smothering it with luggage and other dense materials so that the force of the blast is directed outward, through the door.

Posted on April 20, 2007 at 1:39 PM

Stealing Data from Disk Drives in Photocopiers

This is a threat I hadn’t thought of before:

Now, experts are warning that photocopiers could be a culprit as well.

That’s because most digital copiers manufactured in the past five years have disk drives—the same kind of data-storage mechanism found in computers—to reproduce documents.

As a result, the seemingly innocuous machines that are commonly used to spit out copies of tax returns for millions of Americans can retain the data being scanned.

If the data on the copier’s disk aren’t protected with encryption or an overwrite mechanism, and if someone with malicious motives gets access to the machine, industry experts say sensitive information from original documents could get into the wrong hands.
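
For a sense of what the overwrite mechanism mentioned above involves, here is a minimal sketch that scrubs a single spool file before deleting it. The filename and pass count are illustrative assumptions, and real copier firmware (and wear-leveled flash storage) complicates matters considerably.

    import os

    def overwrite_and_delete(path: str, passes: int = 3) -> None:
        """Overwrite a file's contents in place before unlinking it,
        so the old bytes can't simply be read back off the disk."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # replace contents with random bytes
                f.flush()
                os.fsync(f.fileno())        # force the write out to the disk
        os.remove(path)

    overwrite_and_delete("scan_spool_0001.tmp")   # hypothetical scan spool file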

Posted on March 21, 2007 at 12:10 PM

Canadian Anti-Terrorism Law News

Big news:

The court said the men, who are accused of having ties to al-Qaeda, have the right to see and respond to evidence against them. It pointed to a law in Britain that allows special advocates or lawyers to see sensitive intelligence material, but not share details with their clients.

In its ruling, the court said while it’s important to protect Canada’s national security, the government can do more to protect individual rights.

But the court suspended the judgment from taking legal effect for a year, giving Parliament time to write a new law complying with constitutional principles.

Critics have long denounced the certificates, which can lead to deportation of non-citizens on the basis of secret intelligence presented to a Federal Court judge at closed-door hearings.

Those who fight the allegations can spend years in jail while the case works its way through the legal system. In the end, they can sometimes face removal to countries with a track record of torture, say critics.

And that’s not the only piece of good news from Canada. Two provisions from an anti-terrorism law passed at the end of 2001 were due to expire at the end of February. The House of Commons has voted against extending them:

One of the anti-terrorism measures allows police to arrest suspects without a warrant and detain them for three days without charges, provided police believe a terrorist act may be committed. The other measure allows judges to compel witnesses to testify in secret about past associations or pending acts. The witnesses could go to jail if they don’t comply.

The two measures, introduced by a previous Liberal government in 2001, have never been used.

“These two provisions especially have done nothing to fight against terrorism,” Dion said Tuesday. “[They] have not been helpful and have continued to create some risk for civil liberties.”

Another article here.

Posted on March 2, 2007 at 6:54 AM

Corsham Bunker

Fascinating article on the Corsham bunker, the secret underground UK site the government was to retreat to in the event of a nuclear war.

Until two years ago, the existence of this complex, variously codenamed Burlington, Stockwell, Turnstile or 3-Site, was classified. It was a huge yet very secret complex, where the government and 6,000 apparatchiks would have taken refuge for 90 days during all-out thermonuclear war. Solid yet cavernous, surrounded by 100ft-deep reinforced concrete walls within a subterranean 240-acre limestone quarry just outside Corsham, it drives one to imagine the ghosts of people who, thank God, never took refuge here.

Posted on February 7, 2007 at 2:40 PM

Excessive Secrecy and Security Help Terrorists

I’ve said it, and now so has the director of the Canadian Security Intelligence Service:

Canada’s spy master, of all people, is warning that excessive government secrecy and draconian counterterrorism measures will only play into the hands of terrorists.

“The response to the terrorist threat, whether now or in the future, should follow the long-standing principle of ‘in all things moderation,’” Jim Judd, director of the Canadian Security Intelligence Service, said in a recent Toronto speech.

Posted on February 2, 2007 at 7:25 AM

Debating Full Disclosure

Full disclosure—the practice of making the details of security vulnerabilities public—is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you—the user—much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are online-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM
