Entries Tagged "secrecy"

The NSA Teams Up with the Chinese Government to Limit Internet Anonymity

Definitely strange bedfellows:

A United Nations agency is quietly drafting technical standards, proposed by the Chinese government, to define methods of tracing the original source of Internet communications and potentially curbing the ability of users to remain anonymous.

The U.S. National Security Agency is also participating in the “IP Traceback” drafting group, named Q6/17, which is meeting next week in Geneva to work on the traceback proposal. Members of Q6/17 have declined to release key documents, and meetings are closed to the public.

[…]

A second, apparently leaked ITU document offers surveillance and monitoring justifications that seem well-suited to repressive regimes:

A political opponent to a government publishes articles putting the government in an unfavorable light. The government, having a law against any opposition, tries to identify the source of the negative articles but the articles having been published via a proxy server, is unable to do so protecting the anonymity of the author.

This is being sold as a way to go after the bad guys, but it won’t help. Here’s Steve Bellovin on that issue:

First, very few attacks these days use spoofed source addresses; the real IP address already tells you where the attack is coming from. Second, in case of a DDoS attack, there are too many sources; you can’t do anything with the information. Third, the machine attacking you is almost certainly someone else’s hacked machine and tracking them down (and getting them to clean it up) is itself time-consuming.

Traceback is most useful in monitoring the activities of large masses of people. But of course, that’s why the Chinese and the NSA are so interested in this proposal in the first place.

It’s hard to figure out what the endgame is; the U.N. doesn’t have the authority to impose Internet standards on anyone. In any case, this idea is counter to the U.N. Universal Declaration of Human Rights, Article 19: “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” In the U.S., it’s counter to the First Amendment, which has long permitted anonymous speech. On the other hand, basic human and constitutional rights have been jettisoned left and right in the years after 9/11; why should this be any different?

But when the Chinese government and the NSA get together to enhance their ability to spy on us all, you have to wonder what’s gone wrong with the world.

Posted on September 18, 2008 at 6:34 AM

UK Ministry of Defense Loses Memory Stick with Military Secrets

Oops:

The USB stick, outlining training for 70 soldiers from the 3rd Battalion, Yorkshire Regiment, was found on the floor of The Beach in Newquay in May.

Times, locations and travel and accommodation details for the troops were included in files on the device.

It’s not the first time:

More than 120 USB memory sticks, some containing secret information, have been lost or stolen from the Ministry of Defence since 2004, it was reported earlier this year.

Some 26 of those disappeared this year, including three which contained information classified as “secret”, and 19 which were “restricted”.

I’ve written about this general problem before: we’re storing ever more data in ever smaller devices.

The point is that it’s now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I’d never know it.

The solution? Encrypt them.
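
What does that look like in practice? Here’s a minimal sketch in Python, using the third-party cryptography package; the filenames are hypothetical, and key management—keeping the key somewhere other than the stick itself—is the part that actually matters. (In practice you’d more likely use whole-disk encryption on the device, but the principle is the same.)

```python
# Encrypt a file before it ever touches removable media.
# Requires: pip install cryptography. Filenames are hypothetical.
from cryptography.fernet import Fernet

# Generate the key once and store it somewhere that is NOT the USB stick.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("troop_schedule.xls", "rb") as f:
    ciphertext = fernet.encrypt(f.read())  # AES-128-CBC plus an HMAC tag

with open("/media/usb/troop_schedule.enc", "wb") as f:
    f.write(ciphertext)

# A lost stick now leaks nothing useful; recovery requires the key:
# plaintext = Fernet(key).decrypt(ciphertext)
```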

Posted on September 16, 2008 at 6:21 AM

DNA Matching and the Birthday Paradox

Nice essay:

Is it possible that the F.B.I. is right about the statistics it cites, and that there could be 122 nine-out-of-13 matches in Arizona’s database?

Perhaps surprisingly, the answer turns out to be yes. Let’s say that the chance of any two individuals matching at any one locus is 7.5 percent. In reality, the frequency of a match varies from locus to locus, but I think 7.5 percent is pretty reasonable. For instance, with a 7.5 percent chance of matching at each locus, the chance that any 2 random people would match at all 13 loci is about 1 in 400 trillion. If you choose exactly 9 loci for 2 random people, the chance that they will match all 9 is 1 in 13 billion. Those are the sorts of numbers the F.B.I. tosses around, I think.

So under these same assumptions, how many pairs would we expect to find matching on at least 9 of 13 loci in the Arizona database? Remarkably, about 100. If you start with 65,000 people and do a pairwise match of all of them, you are actually making over 2 billion separate comparisons (65,000 * 64,999/2). And if you aren’t just looking for a match on 9 specific loci, but rather on any 9 of 13 loci, then for each of those pairs of people there are over 700 different combinations that are being searched.

So all told, you end up doing about 1.4 trillion searches! If 1 in 13 billion searches yields a positive match as noted above, this leads to roughly 100 expected matches on 9 of 13 loci in a database the size of Arizona’s. (The way I did the calculations, I am allowing for 2 individuals to match on different sets of loci; so to get 100 different pairs of people who match, I need a match rate of slightly higher than 7.5 percent per locus.)
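
The arithmetic checks out. Here’s the same back-of-the-envelope calculation as a short Python sketch, under the essay’s stated assumptions (independent loci, a flat 7.5 percent per-locus match probability). As in the essay, this counts a pair once for each 9-locus subset on which it matches.

```python
# Expected 9-of-13 locus matches in a 65,000-person database,
# assuming independent loci and a 7.5% per-locus match probability.
from math import comb

p_locus = 0.075
people = 65_000

pairs = people * (people - 1) // 2    # ~2.1 billion pairwise comparisons
subsets = comb(13, 9)                 # 715 ways to choose 9 of 13 loci
p_nine = p_locus ** 9                 # ~1 in 13 billion per chosen subset

searches = pairs * subsets            # ~1.5 trillion total "searches"
print(round(searches * p_nine))       # ~113 expected matches -- "roughly 100"
```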

EDITED TO ADD (9/14): The FBI is trying to suppress the analysis.

Posted on September 11, 2008 at 6:21 AM

Secret Military Technology

On 60 Minutes, in an interview with Scott Pelley, reporter Bob Woodward claimed that the U.S. military has a new secret technique that’s so revolutionary, it’s on par with the tank and the airplane:

Woodward: This is very sensitive and very top secret, but there are secret operational capabilities that have been developed by the military to locate, target, and kill leaders of al Qaeda in Iraq, insurgent leaders, renegade militia leaders, that is one of the true breakthroughs.

Pelley: What are we talking about here? Some kind of surveillance, some kind of targeted way of taking out just the people that you’re looking for, the leadership of the enemy?

[…]

Woodward: It is the stuff of which military novels are written.

Pelley: Do you mean to say that this special capability is such an advance in military technique and technology that it reminds you of the advent of the tank and the airplane?

Woodward: Yeah.

It’s here, 7 minutes and 55 seconds in.

Anyone have any ideas?

EDITED TO ADD (9/11): One idea:

I’m going to make a wager about what I think Woodward is talking about, and I’ll be curious to see what Danger Room readers have to say. I believe he is talking about the much ballyhooed (in defense geek circles) "Tagging, Tracking and Locating" program; here’s a briefing on it from Special Operations Command. These are newfangled technologies designed to track people from long distances, without the targeted people realizing they are being tracked. That can theoretically include thermal signatures, or some sort of "taggant" placed on a person. Think Will Smith in Enemy of the State. Well, not so many cameras, maybe.

Posted on September 10, 2008 at 11:35 AM

Full Disclosure and the Boston Farecard Hack

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.

The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was at the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start—despite facing an open-and-shut case of First Amendment prior restraint.

The U.S. court has since seen the error of its ways—but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.

The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors—who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.

Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems and many other security devices.

Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers—sometimes young students—who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.

This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.

The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.

If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone—a spouse, a parent, an employer—with a good reason why, in this one case, we should make an exception.

The reason we want researchers to publish vulnerabilities is because that’s how security improves. But in every case there’s someone—the Massachusetts Bay Transportation Authority, the locksmiths, a voting machine manufacturer—who argues that, in this one case, we should make an exception.

We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/26): Matt Blaze has a good essay on the topic.

EDITED TO ADD (9/12): A good legal analysis.

Posted on August 26, 2008 at 6:04 AM

Hacking Mifare Transport Cards

London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well—Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro—and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.

The security of Mifare Classic is terrible. This is not an exaggeration; it’s kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.
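
To give a sense of scale: Mifare Classic’s proprietary Crypto1 cipher uses a 48-bit key. Setting aside the cipher’s structural flaws, the keyspace alone is within brute-force reach. A back-of-the-envelope sketch in Python—the keys-per-second rate is an assumed figure, not a measurement:

```python
# Why a 48-bit keyspace is hopeless, independent of Crypto1's other flaws.
keyspace = 2 ** 48              # ~2.8e14 possible keys
keys_per_second = 1e9           # assumption: a modest FPGA/GPU key-search rig

seconds = keyspace / keys_per_second
print(f"{seconds / 86_400:.1f} days")  # ~3.3 days to sweep the entire keyspace
```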

The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.

The second paper is the one that NXP sued over. They called disclosure of the attack “irresponsible,” warned that it will cause “immense damages,” and claimed that it “will jeopardize the security of assets protected with systems incorporating the Mifare IC.” The Dutch court would have none of it: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security—in ID cards, in voting machines, in airport security—it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have designed Mifare’s security with an open and public design.

Secrecy is fragile. Mifare’s security was based on the belief that no one would discover how it worked; that’s why NXP had to muzzle the Dutch researchers. But that’s just wrong. Reverse-engineering isn’t hard. Other researchers had already exposed Mifare’s lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?

Publication of this attack might be expensive for NXP and its customers, but it’s good for security overall. Companies will only design security as good as their customers know to ask for. NXP’s security was so bad because customers didn’t know how to evaluate security: either they didn’t know what questions to ask, or didn’t know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.

It’s unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software which, while feasible for a techie, are too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.

The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system’s security to them.

And while these attacks pertain only to the Mifare Classic chip, they make me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the “more secure” versions will be sufficiently so.

This essay originally appeared in the Guardian.

Posted on August 7, 2008 at 6:07 AM

The Case of the Stolen BlackBerry and the Awesome Chinese Hacking Skills

A high-level British government employee had his BlackBerry stolen by Chinese intelligence:

The aide, a senior Downing Street adviser who was with the prime minister on a trip to China earlier this year, had his BlackBerry phone stolen after being picked up by a Chinese woman who had approached him in a Shanghai hotel disco.

The aide agreed to return to his hotel with the woman. He reported the BlackBerry missing the next morning.

That can’t look good on your annual employee review.

But it’s this part of the article that has me confused:

Experts say that even if the aide’s device did not contain anything top secret, it might enable a hostile intelligence service to hack into the Downing Street server, potentially gaining access to No 10’s e-mail traffic and text messages.

Um, what? I assume the IT department just turned off the guy’s password. Was this nonsense peddled to the press by the UK government, or is some “expert” trying to sell us something? The article doesn’t say.

EDITED TO ADD (7/22): The first commenter makes a good point, which I didn’t think of. The article says that it’s Chinese intelligence:

A senior official said yesterday that the incident had all the hallmarks of a suspected honeytrap by Chinese intelligence.

But Chinese intelligence would be far more likely to clone the BlackBerry and then return it. Much better information that way. This is much more likely to be petty theft.

EDITED TO ADD (7/23): The more I think about this story, the less sense it makes. If you’re a Chinese intelligence officer and you manage to get an aide to the British Prime Minister to have sex with one of your agents, you’re not going to immediately burn him by stealing his BlackBerry. That’s just stupid.

Posted on July 22, 2008 at 10:05 AM

Chertoff Says Fingerprints Aren't Personal Data

Homeland Security Secretary Michael Chertoff says:

QUESTION: Some are raising that the privacy aspects of this thing, you know, sharing of that kind of data, very personal data, among four countries is quite a scary thing.

SECRETARY CHERTOFF: Well, first of all, a fingerprint is hardly personal data because you leave it on glasses and silverware and articles all over the world, they’re like footprints. They’re not particularly private.

Sounds like he’s confusing “secret” data with “personal” data. Lots of personal data isn’t particularly secret.

Posted on April 21, 2008 at 6:54 AM
