Entries Tagged "disclosure"


Lockpicking and the Internet

Physical locks aren’t very good. They keep the honest out, but any burglar worth his salt can pick the common door lock pretty quickly.

It used to be that most people didn’t know this. Sure, we all watched television criminals and private detectives pick locks with an ease only found on television and thought it realistic, but somehow we still held onto the belief that our own locks kept us safe from intruders.

The Internet changed that.

First was the MIT Guide to Lockpicking, written by the late Bob (“Ted the Tool”) Baldwin. Then came Matt Blaze’s 2003 paper on breaking master key systems. After that came a flood of lock picking information on the Net: opening a bicycle lock with a Bic pen, key bumping, and more. Many of these techniques were already known in both the criminal and locksmith communities. The locksmiths tried to suppress the knowledge, believing their guildlike secrecy was better than openness. But they’ve lost: Never has there been more public information about lock picking—or safecracking, for that matter.

Lock companies have responded with more complicated locks, and more complicated disinformation campaigns.

There seems to be a limit to how secure you can make a wholly mechanical lock, as well as a limit to how large and unwieldy a key the public will accept. As a result, there is increasing interest in other lock technologies.

As a security technologist, I worry that if we don’t fully understand these technologies and the new sorts of vulnerabilities they bring, we may be trading a flawed technology for an even worse one. Electronic locks are vulnerable to attack, often in new and surprising ways.

Start with keypads, more and more common on house doors. These have the benefit that you don’t have to carry a physical key around, but the problem is that you can’t give someone the code for a day and then take it back when that day is over. As such, the security decays over time—the longer the keypad is in use, the more people know how to get in. More complicated electronic keypads have a variety of options for dealing with this, but electronic keypads work only when the power is on, and battery-powered locks have their own failure modes. Plus, far too many people never bother to change the default entry code.

Keypads have other security failures, as well. I regularly see keypads where four of the 10 buttons are more worn than the other six. They’re worn from use, of course, and instead of 10,000 possible entry codes, I now have to try only 24.
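As a sanity check on that arithmetic: if the four-digit code uses each of the four worn digits exactly once, only the 4! orderings of those digits remain. A minimal sketch, with a made-up set of worn keys:

```python
from itertools import permutations

worn_digits = "2580"  # hypothetical worn keys observed on the pad

# Every ordering of the four worn digits is a candidate code.
candidates = ["".join(p) for p in permutations(worn_digits)]

print(len(candidates))  # 24, versus 10**4 = 10000 for a fully unknown 4-digit code
```

If only three buttons were worn, one digit must repeat, and the count rises to 3 × 4!/2! = 36 candidates, still a trivial search.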

Fingerprint readers are another technology, but there are many known security problems with those. And there are operational problems, too: They’re hard to use in the cold or with sweaty hands; and leaving a key with a neighbor to let the plumber in starts having a spy-versus-spy feel.

Some companies are going even further. Earlier this year, Schlage launched a series of locks that can be opened with a key, a four-digit code, or over the Internet. That’s right: The lock is online. You can send the lock SMS messages or talk to it via a Website, and the lock can send you messages when someone opens it—or even when someone tries to open it and fails.

Sounds nifty, but putting a lock on the Internet opens up a whole new set of problems, none of which we fully understand. Even worse: Security is only as strong as the weakest link. Schlage’s system combines the inherent “pickability” of a physical lock, the new vulnerabilities of electronic keypads, and the hacking risk of being online. For most applications, that’s simply too much risk.

This essay previously appeared on DarkReading.com.

Posted on August 12, 2009 at 5:48 AM

The ATM Vulnerability You Won't Hear About

The talk has been pulled from the Black Hat conference:

Barnaby Jack, a researcher with Juniper Networks, was to present a demonstration showing how he could jackpot a popular ATM brand by exploiting a vulnerability in its software.

Jack was scheduled to present his talk at the upcoming Black Hat security conference being held in Las Vegas at the end of July.

But on Monday evening, his employer released a statement saying it was canceling the talk due to the vendor’s intervention.

More:

“The vulnerability Barnaby was to discuss has far reaching consequences, not only to the affected ATM vendor, but to other ATM vendors and—ultimately—the public,” wrote Brendan Lewis, director of corporate social media relations for Juniper in a statement posted to the company’s official blog last week. “To publicly disclose the research findings before the affected vendor could properly mitigate the exposure would have potentially placed their customers at risk. That is something we don’t want to see happen.”

More news articles: 1, 2, 3, 4, and 5.

Posted on July 9, 2009 at 12:56 PM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who were victims of online fraud and 100 people who were not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money is much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

Software Problems with a Breath Alcohol Detector

This is an excellent lesson in the security problems inherent in trusting proprietary software:

After two years of attempting to get the computer based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc.

Draeger, the manufacturer, maintained that the system was perfect, and that revealing the source code would be damaging to its business. They were right about the second part, of course, because it turned out that the code was terrible.

2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings. Nonetheless, the comments say that the values should be averaged, and they are not.

3. Results Limited to Small, Discrete Values: The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.

4. Catastrophic Error Detection Is Disabled: An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Properly (a watchdog timer), and the Software Interrupt.

Basically, the system was designed to return some sort of result regardless.
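For illustration only, here is a minimal Python sketch of the running-average scheme the quoted analysis describes, next to a true arithmetic mean. The reading values are invented, and this is a reconstruction of the described behavior, not Draeger’s actual code:

```python
def alcotest_style_average(readings):
    """Running scheme described above: average the first two readings,
    then repeatedly average each new reading with the running result."""
    avg = (readings[0] + readings[1]) / 2
    for r in readings[2:]:
        avg = (avg + r) / 2
    return avg

def true_mean(readings):
    return sum(readings) / len(readings)

# Hypothetical raw readings; the real units and values are not public here.
readings = [100, 100, 100, 100, 160]

print(alcotest_style_average(readings))  # 130.0 -- the final reading alone carries half the weight
print(true_mean(readings))               # 112.0 -- what a proper average would report

# The quoted report also notes the precision loss: a 12-bit A/D value (0..4095)
# integer-divided by 256 collapses to only 16 possible results.
print(4095 // 256)  # 15, i.e. outputs range over just 0..15
```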

This is important. As we become more and more dependent on software for evidentiary and other legal applications, we need to be able to carefully examine that software for accuracy, reliability, etc. Every government contract for breath alcohol detectors needs to include the requirement for public source code. “You can’t look at our code because we don’t want you to” simply isn’t good enough.

Posted on May 13, 2009 at 2:07 PM

HIPAA Accountability in Stimulus Bill

On page 379 of the current stimulus bill, there’s a bit about establishing a website of companies that lost patient information:

(4) POSTING ON HHS PUBLIC WEBSITE—The Secretary shall make available to the public on the Internet website of the Department of Health and Human Services a list that identifies each covered entity involved in a breach described in subsection (a) in which the unsecured protected health information of more than 500 individuals is acquired or disclosed.

I’m not sure if this passage survived the final bill, but it will be interesting to see whether it is now law.

EDITED TO ADD (3/13): It’s law.

Posted on February 18, 2009 at 12:28 PM

Breach Notification Laws

There are three reasons for breach notification laws. One, it’s common politeness that when you lose something of someone else’s, you tell him. The prevailing corporate attitude before the law—”They won’t notice, and if they do notice they won’t know it’s us, so we are better off keeping quiet about the whole thing”—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.

That last point needs a bit of explanation. The problem with companies protecting your data is that it isn’t in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company’s security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

So how has it worked?

Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience fewer incidents of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.

I think there’s a combination of things going on. Identity theft is being reported far more today than five years ago, so it’s difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone’s home or work computer is compromised, not from theft of large corporate databases, so the effect of these laws is small. Most of the security improvements companies made didn’t make much of a difference, reducing the effect of these laws.

The laws rely on public shaming. It’s embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there’s an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint’s stock tanked, and it was shamed into improving its security.

Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like “crying wolf,” and soon data breaches were no longer news.

Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.

I’m still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it’s to make it difficult to use.

Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.

This is the second half of a point/counterpoint with Marcus Ranum. Marcus’s essay is here.

Posted on January 21, 2009 at 6:59 AM

DNA Matching and the Birthday Paradox

Nice essay:

Is it possible that the F.B.I. is right about the statistics it cites, and that there could be 122 nine-out-of-13 matches in Arizona’s database?

Perhaps surprisingly, the answer turns out to be yes. Let’s say that the chance of any two individuals matching at any one locus is 7.5 percent. In reality, the frequency of a match varies from locus to locus, but I think 7.5 percent is pretty reasonable. For instance, with a 7.5 percent chance of matching at each locus, the chance that any 2 random people would match at all 13 loci is about 1 in 400 trillion. If you choose exactly 9 loci for 2 random people, the chance that they will match all 9 is 1 in 13 billion. Those are the sorts of numbers the F.B.I. tosses around, I think.

So under these same assumptions, how many pairs would we expect to find matching on at least 9 of 13 loci in the Arizona database? Remarkably, about 100. If you start with 65,000 people and do a pairwise match of all of them, you are actually making over 2 billion separate comparisons (65,000 * 64,999/2). And if you aren’t just looking for a match on 9 specific loci, but rather on any 9 of 13 loci, then for each of those pairs of people there are over 700 different combinations that are being searched.

So all told, you end up doing about 1.4 trillion searches! If 1 in 13 billion searches yields a positive match as noted above, this leads to roughly 100 expected matches on 9 of 13 loci in a database the size of Arizona’s. (The way I did the calculations, I am allowing for 2 individuals to match on different sets of loci; so to get 100 different pairs of people who match, I need a match rate of slightly higher than 7.5 percent per locus.)
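The arithmetic above is easy to reproduce. Here is a short sketch under the quoted essay’s assumptions (a 7.5 percent per-locus match probability, 13 loci treated as independent, and 65,000 profiles); the independence assumption is the essay’s simplification, not an established fact about these loci:

```python
from math import comb

p = 0.075          # assumed per-locus match probability
loci = 13
people = 65_000

pairs = people * (people - 1) // 2    # ~2.1 billion pairwise comparisons
p_all_13 = p ** loci                  # ~1 in 400 trillion
p_9_specific = p ** 9                 # ~1 in 13 billion

# Probability a random pair matches on at least 9 of the 13 loci.
p_at_least_9 = sum(comb(loci, k) * p**k * (1 - p)**(loci - k)
                   for k in range(9, loci + 1))

expected_matches = pairs * p_at_least_9

print(f"pairs compared:        {pairs:,}")
print(f"match all 13 loci:     1 in {1 / p_all_13:,.0f}")
print(f"match 9 specific loci: 1 in {1 / p_9_specific:,.0f}")
print(f"expected pairs matching on 9+ of 13 loci: {expected_matches:.0f}")
```

At exactly 7.5 percent this lands in the mid-80s, which is consistent with the essay’s note that a per-locus rate slightly above 7.5 percent is needed to reach roughly 100 matching pairs.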

EDITED TO ADD (9/14): The FBI is trying to suppress the analysis.

Posted on September 11, 2008 at 6:21 AM

Full Disclosure and the Boston Farecard Hack

In eerily similar cases in the Netherlands and the United States, courts have recently grappled with the computer-security norm of “full disclosure,” asking whether researchers should be permitted to disclose details of a fare-card vulnerability that allows people to ride the subway for free.

The “Oyster card” used on the London Tube was at issue in the Dutch case, and a similar fare card used on the Boston “T” was at the center of the U.S. case. The Dutch court got it right, and the American court, in Boston, got it wrong from the start—despite facing an open-and-shut case of First Amendment prior restraint.

The U.S. court has since seen the error of its ways—but the damage is done. The MIT security researchers who were prepared to discuss their Boston findings at the DefCon security conference were prevented from giving their talk.

The ethics of full disclosure are intimately familiar to those of us in the computer-security field. Before full disclosure became the norm, researchers would quietly disclose vulnerabilities to the vendors—who would routinely ignore them. Sometimes vendors would even threaten researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers started disclosing the existence of a vulnerability but not the details. Vendors responded by denying the security holes’ existence, or calling them just theoretical. It wasn’t until full disclosure became the norm that vendors began consistently fixing vulnerabilities quickly. Now that vendors routinely patch vulnerabilities, researchers generally give them advance notice to allow them to patch their systems before the vulnerability is published. But even with this “responsible disclosure” protocol, it’s the threat of disclosure that motivates them to patch their systems. Full disclosure is the mechanism by which computer security improves.

Outside of computer security, secrecy is much more the norm. Some security communities, like locksmiths, behave much like medieval guilds, divulging the secrets of their profession only to those within it. These communities hate open research, and have responded with surprising vitriol to researchers who have found serious vulnerabilities in bicycle locks, combination safes, master-key systems and many other security devices.

Researchers have received a similar reaction from other communities more used to secrecy than openness. Researchers—sometimes young students—who discovered and published flaws in copyright-protection schemes, voting-machine security and now wireless access cards have all suffered recriminations and sometimes lawsuits for not keeping the vulnerabilities secret. When Christopher Soghoian created a website allowing people to print fake airline boarding passes, he got several unpleasant visits from the FBI.

This preference for secrecy comes from confusing a vulnerability with information about that vulnerability. Using secrecy as a security measure is fundamentally fragile. It assumes that the bad guys don’t do their own security research. It assumes that no one else will find the same vulnerability. It assumes that information won’t leak out even if the research results are suppressed. These assumptions are all incorrect.

The problem isn’t the researchers; it’s the products themselves. Companies will only design security as good as what their customers know to ask for. Full disclosure helps customers evaluate the security of the products they buy, and educates them in how to ask for better security. The Dutch court got it exactly right when it wrote: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

In a world of forced secrecy, vendors make inflated claims about their products, vulnerabilities don’t get fixed, and customers are no wiser. Security research is stifled, and security technology doesn’t improve. The only beneficiaries are the bad guys.

If you’ll forgive the analogy, the ethics of full disclosure parallel the ethics of not paying kidnapping ransoms. We all know why we don’t pay kidnappers: It encourages more kidnappings. Yet in every kidnapping case, there’s someone—a spouse, a parent, an employer—with a good reason why, in this one case, we should make an exception.

The reason we want researchers to publish vulnerabilities is because that’s how security improves. But in every case there’s someone—the Massachusetts Bay Transit Authority, the locksmiths, an election machine manufacturer—who argues that, in this one case, we should make an exception.

We shouldn’t. The benefits of responsibly publishing attacks greatly outweigh the potential harm. Disclosure encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers. It’s how we learn about security, and how we improve future security.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/26): Matt Blaze has a good essay on the topic.

EDITED TO ADD (9/12): A good legal analysis.

Posted on August 26, 2008 at 6:04 AM

Hacking Mifare Transport Cards

London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well—Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro—and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.

The security of Mifare Classic is terrible. This is not an exaggeration; it’s kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.

The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.

The second paper is the one that NXP sued over. They called disclosure of the attack “irresponsible,” warned that it will cause “immense damages,” and claimed that it “will jeopardize the security of assets protected with systems incorporating the Mifare IC.” The Dutch court would have none of it: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security—in ID cards, in voting machines, in airport security—it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have designed Mifare’s security with an open and public design.

Secrecy is fragile. Mifare’s security was based on the belief that no one would discover how it worked; that’s why NXP had to muzzle the Dutch researchers. But that’s just wrong. Reverse-engineering isn’t hard. Other researchers had already exposed Mifare’s lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?

Publication of this attack might be expensive for NXP and its customers, but it’s good for security overall. Companies will only design security as good as their customers know to ask for. NXP’s security was so bad because customers didn’t know how to evaluate security: either they didn’t know what questions to ask, or didn’t know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.

It’s unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software which, while feasible for a techie, are too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.

The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system’s security to them.

And while these attacks only pertain to the Mifare Classic chip, it makes me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the “more secure” versions will be sufficiently so.

This essay originally appeared in the Guardian.

Posted on August 7, 2008 at 6:07 AM

