Entries Tagged "disclosure"

Recent Developments in Full Disclosure

Last week, I had a long conversation with Robert Lemos about an article he was writing on full disclosure. He had noticed that companies have recently been reacting more negatively to security researchers publishing vulnerabilities in their products.

The debate over full disclosure is as old as computing, and I’ve written about it before. Disclosing security vulnerabilities is good for security and good for society, but vendors really hate it. It results in bad press, forces them to spend money fixing vulnerabilities, and arrives without warning. Over the past decade or so, we’ve had an uneasy truce between security researchers and product vendors. That truce seems to be breaking down.

Lemos believes the problem is that because today’s research targets aren’t traditional computer companies—they’re phone companies, or embedded system companies, or whatnot—they’re not aware of the history of the debate or the truce, and are responding more viscerally. For example, Carrier IQ threatened legal action against the researcher who outed it, and only backed down after the EFF got involved. I am reminded of the reaction of locksmiths to Matt Blaze’s vulnerability disclosures about lock security; they thought he was evil incarnate for publicizing hundred-year-old security vulnerabilities in lock systems. And just last week, I posted about a full-disclosure debate in the virology community.

I think Lemos has put his finger on part of what’s going on, but there’s more to it. I think that companies, both computer and non-computer, are trying to retain control over the situation. Apple’s heavy-handed retaliation against researcher Charlie Miller is an example of that. On one hand, Apple should know better than to do this. On the other hand, it’s acting in the best interest of its brand: the fewer researchers looking for vulnerabilities, the fewer vulnerabilities it has to deal with.

It’s easy to believe that if only people wouldn’t disclose problems, we could pretend they didn’t exist, and everything would be better. Certainly this is the position taken by the DHS over terrorism: public information about the problem is worse than the problem itself. It’s similar to Americans’ willingness to give both Bush and Obama the power to arrest and indefinitely detain any American without any trial whatsoever. It largely explains the common public backlash against whistle-blowers. What we don’t know can’t hurt us, and what we do know will also be known by those who want to hurt us.

There’s some profound psychological denial going on here, and I’m not sure of the implications of it all. It’s worth paying attention to, though. Security requires transparency and disclosure, and if we willingly give that up, we’re a lot less safe as a society.

Posted on December 6, 2011 at 7:31 AM

Full Disclosure in Biology

The debate over full disclosure in computer security has been going on for the better part of two decades now. The stakes are much higher in biology:

The virus is an H5N1 avian influenza strain that has been genetically altered and is now easily transmissible between ferrets, the animals that most closely mimic the human response to flu. Scientists believe it’s likely that the pathogen, if it emerged in nature or were released, would trigger an influenza pandemic, quite possibly with many millions of deaths.

In a 17th floor office in the same building, virologist Ron Fouchier of Erasmus Medical Center calmly explains why his team created what he says is “probably one of the most dangerous viruses you can make”—and why he wants to publish a paper describing how they did it. Fouchier is also bracing for a media storm. After he talked to ScienceInsider yesterday, he had an appointment with an institutional press officer to chart a communication strategy.

Of course, there’s value to the research:

“These studies are very important,” says biodefense and flu expert Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota, Twin Cities. The researchers “have the full support of the influenza community,” Osterholm says, because there are potential benefits for public health. For instance, the results show that those downplaying the risks of an H5N1 pandemic should think again, he says.

Knowing the exact mutations that make the virus transmissible also enables scientists to look for them in the field and take more aggressive control measures when one or more show up, adds Fouchier. The study also enables researchers to test whether H5N1 vaccines and antiviral drugs would work against the new strain.

And we know how badly this sort of security works:

Osterholm says he can’t discuss details of the papers because he’s an NSABB member. But he says it should be possible to omit certain key details from controversial papers and make them available to people who really need to know. “We don’t want to give bad guys a road map on how to make bad bugs really bad,” he says.

Posted on November 30, 2011 at 12:28 PM

Open-Source Software Feels Insecure

At first glance, this seems like a particularly dumb opening line of an article:

Open-source software may not sound compatible with the idea of strong cybersecurity, but….

But it’s not. Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll—in extreme cases—sneak back-doors into the code when no one is looking.

Of course, these statements rely on the erroneous assumptions that security vulnerabilities are easy to find, and that proprietary source code makes them harder to find. And that secrecy is somehow aligned with security. I’ve written about this several times in the past, and there’s no need to rewrite the arguments again.

Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn’t sound compatible with the idea of strong cybersecurity.

Posted on June 2, 2011 at 12:11 PM

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems—computer systems that control industrial processes—are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it’s scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,’” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

34 SCADA Vulnerabilities Published

It’s hard to tell how serious this is.

Computer security experts who examined the code say the vulnerabilities are not highly dangerous on their own, because they would mostly just allow an attacker to crash a system or siphon sensitive data, and are targeted at operator viewing platforms, not the backend systems that directly control critical processes. But experts caution that the vulnerabilities could still allow an attacker to gain a foothold on a system to find additional security holes that could affect core processes.

Posted on April 1, 2011 at 6:58 AM

Lockpicking and the Internet

Physical locks aren’t very good. They keep the honest out, but any burglar worth his salt can pick the common door lock pretty quickly.

It used to be that most people didn’t know this. Sure, we all watched television criminals and private detectives pick locks with an ease only found on television and thought it realistic, but somehow we still held onto the belief that our own locks kept us safe from intruders.

The Internet changed that.

First was the MIT Guide to Lockpicking, written by the late Bob (“Ted the Tool”) Baldwin. Then came Matt Blaze’s 2003 paper on breaking master key systems. After that, came a flood of lock picking information on the Net: opening a bicycle lock with a Bic pen, key bumping, and more. Many of these techniques were already known in both the criminal and locksmith communities. The locksmiths tried to suppress the knowledge, believing their guildlike secrecy was better than openness. But they’ve lost: Never has there been more public information about lock picking—or safecracking, for that matter.

Lock companies have responded with more complicated locks, and more complicated disinformation campaigns.

There seems to be a limit to how secure you can make a wholly mechanical lock, as well as a limit to how large and unwieldy a key the public will accept. As a result, there is increasing interest in other lock technologies.

As a security technologist, I worry that if we don’t fully understand these technologies and the new sorts of vulnerabilities they bring, we may be trading a flawed technology for an even worse one. Electronic locks are vulnerable to attack, often in new and surprising ways.

Start with keypads, more and more common on house doors. These have the benefit that you don’t have to carry a physical key around, but there’s the problem that you can’t give someone the code for a day and then take it away when that day is over. As such, the security decays over time—the longer the keypad is in use, the more people know how to get in. More complicated electronic keypads have a variety of options for dealing with this, but electronic keypads work only when the power is on, and battery-powered locks have their own failure modes. Plus, far too many people never bother to change the default entry code.

Keypads have other security failures, as well. I regularly see keypads where four of the 10 buttons are more worn than the other six. They’re worn from use, of course, and instead of 10,000 possible entry codes, I now have to try only 24.
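
To make that arithmetic concrete, here is a minimal sketch (my own illustration, using a hypothetical set of worn digits): if a four-digit code uses each of the four worn digits exactly once, the only candidates are the orderings of those digits.

from itertools import permutations

# Hypothetical worn digits; any four distinct digits behave the same way.
worn = "1379"

# If the code uses each worn digit exactly once, the candidates are just
# the 4! = 24 orderings of those digits, versus 10^4 = 10,000 otherwise.
candidates = ["".join(p) for p in permutations(worn)]

print(len(candidates))  # 24
print(candidates[:3])   # ['1379', '1397', '1739']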

Fingerprint readers are another technology, but there are many known security problems with those. And there are operational problems, too: They’re hard to use in the cold or with sweaty hands; and leaving a key with a neighbor to let the plumber in starts having a spy-versus-spy feel.

Some companies are going even further. Earlier this year, Schlage launched a series of locks that can be opened with a physical key, a four-digit code, or over the Internet. That’s right: The lock is online. You can send the lock SMS messages or talk to it via a Website, and the lock can send you messages when someone opens it—or even when someone tries to open it and fails.

Sounds nifty, but putting a lock on the Internet opens up a whole new set of problems, none of which we fully understand. Even worse: Security is only as strong as the weakest link. Schlage’s system combines the inherent “pickability” of a physical lock, the new vulnerabilities of electronic keypads, and the hacking risk of an Internet-connected device. For most applications, that’s simply too much risk.
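
One way to see the weakest-link point numerically (a back-of-the-envelope sketch with made-up probabilities, not data about any particular product): the attacker needs only one entry path to succeed, so every additional path can only increase overall exposure.

# Made-up per-path compromise probabilities, for illustration only.
p_pick   = 0.30  # the physical lock is picked
p_keypad = 0.10  # the keypad code is guessed or observed
p_online = 0.05  # the Internet interface is attacked

def any_path_succeeds(*probs):
    """Probability that at least one independent attack path works."""
    p_all_fail = 1.0
    for p in probs:
        p_all_fail *= (1.0 - p)
    return 1.0 - p_all_fail

print(round(any_path_succeeds(p_pick), 2))                      # 0.3, key-only lock
print(round(any_path_succeeds(p_pick, p_keypad, p_online), 2))  # 0.4, all three paths

Whatever the individual numbers, the combined probability is never lower than the largest of them: adding mechanisms adds attack surface.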

This essay previously appeared on DarkReading.com.

Posted on August 12, 2009 at 5:48 AM

The ATM Vulnerability You Won't Hear About

The talk has been pulled from the BlackHat conference:

Barnaby Jack, a researcher with Juniper Networks, was to present a demonstration showing how he could jackpot a popular ATM brand by exploiting a vulnerability in its software.

Jack was scheduled to present his talk at the upcoming Black Hat security conference being held in Las Vegas at the end of July.

But on Monday evening, his employer released a statement saying it was canceling the talk due to the vendor’s intervention.

More:

“The vulnerability Barnaby was to discuss has far reaching consequences, not only to the affected ATM vendor, but to other ATM vendors and—ultimately—the public,” wrote Brendan Lewis, director of corporate social media relations for Juniper in a statement posted to the company’s official blog last week. “To publicly disclose the research findings before the affected vendor could properly mitigate the exposure would have potentially placed their customers at risk. That is something we don’t want to see happen.”

More news articles: 1, 2, 3, 4, and 5.

Posted on July 9, 2009 at 12:56 PM

Second SHB Workshop Liveblogging (5)

David Livingstone Smith moderated the fourth session, about (more or less) methodology.

Angela Sasse, University College London (suggested reading: The Compliance Budget: Managing Security Behaviour in Organisations; Human Vulnerabilities in Security Systems), has been working on usable security for over a dozen years. As part of a project called “Trust Economics,” she looked at whether people comply with security policies and why they either do or do not. She found that there is a limit to the amount of effort people will make to comply—this is less actual cost and more perceived cost. Strict and simple policies will be complied with more than permissive but complex policies. Compliance detection, and reward or punishment, also affect compliance. People justify noncompliance by “frequently made excuses.”

Bashar Nuseibeh, Open University (suggested reading: A Multi-Pronged Empirical Approach to Mobile Privacy Investigation; Security Requirements Engineering: A Framework for Representation and Analysis), talked about mobile phone security; specifically, Facebook privacy on mobile phones. He did something clever in his experiments. Because he wasn’t able to interview people at the moment they did something—he worked with mobile users—he asked them to provide a “memory phrase” that allowed him to effectively conduct detailed interviews at a later time. This worked very well, and resulted in all sorts of information about why people made privacy decisions at that earlier time.

James Pita, University of Southern California (suggested reading: Deployed ARMOR Protection: The Application of a Game Theoretic Model for Security at the Los Angeles International Airport), studies security personnel who have to guard a physical location. In his analysis, there are limited resources—guards, cameras, etc.—and a set of locations that need to be guarded. An example would be the Los Angeles airport, where a finite number of K-9 units need to guard eight terminals. His model uses a Stackelberg game to minimize predictability (otherwise, the adversary will learn it and exploit it) while maximizing security. There are complications—observational uncertainty and bounded rationality on the part of the attackers—which he tried to capture in his model.

Markus Jakobsson, Palo Alto Research Center (suggested reading: Male, late with your credit card payment, and like to speed? You will be phished!; Social Phishing; Love and Authentication; Quantifying the Security of Preference-Based Authentication), pointed out that auto insurers ask people if they smoke in order to get a feeling for whether they engage in high-risk behaviors. In his experiment, he selected 100 people who were victims of online fraud and 100 people who were not. He then asked them to complete a survey about different physical risks such as mountain climbing and parachute jumping, financial risks such as buying stocks and real estate, and Internet risks such as visiting porn sites and using public wi-fi networks. He found significant correlation between different risks, but I didn’t see an overall pattern emerge. And in the discussion phase, several people had questions about the data. More analysis, and probably more data, is required. To be fair, he was still in the middle of his analysis.

Rachel Greenstadt, Drexel University (suggested reading: Practical Attacks Against Authorship Recognition Techniques (pre-print); Reinterpreting the Disclosure Debate for Web Infections), discussed ways in which humans and machines can collaborate in making security decisions. These decisions are hard for several reasons: because they are context dependent, require specialized knowledge, are dynamic, and require complex risk analysis. And humans and machines are good at different sorts of tasks. Machine-style authentication: This guy I’m standing next to knows Jake’s private key, so he must be Jake. Human-style authentication: This guy I’m standing next to looks like Jake and sounds like Jake, so he must be Jake. The trick is to design systems that get the best of these two authentication styles and not the worst. She described two experiments examining two decisions: should I log into this website (the phishing problem), and should I publish this anonymous essay or will my linguistic style betray me?

Mike Roe, Microsoft, talked about crime in online games, particularly in Second Life and Metaplace. There are four classes of people in online games: explorers, socializers, achievers, and griefers. Griefers try to annoy socializers in social worlds like Second Life, or annoy achievers in competitive worlds like World of Warcraft. Crime is not necessarily economic; criminals trying to steal money is much less of a problem in these games than people just trying to be annoying. In the question session, Dave Clark said that griefers are a constant, but economic fraud grows over time. I responded that the two types of attackers are different people, with different personality profiles. I also pointed out that there is another kind of attacker: achievers who use illegal mechanisms to assist themselves.

In the discussion, Peter Neumann pointed out that safety is an emergent property, and requires security, reliability, and survivability. Others weren’t so sure.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Conference dinner tonight at Legal Seafoods. And four more sessions tomorrow.

Posted on June 11, 2009 at 4:50 PM

Software Problems with a Breath Alcohol Detector

This is an excellent lesson in the security problems inherent in trusting proprietary software:

After two years of attempting to get the computer based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc.

Draeger, the manufacturer, maintained that the system was perfect, and that revealing the source code would be damaging to its business. They were right about the second part, of course, because it turned out that the code was terrible.

2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings. Nonetheless, the comments say that the values should be averaged, and they are not.

3. Results Limited to Small, Discrete Values: The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.

4. Catastrophic Error Detection Is Disabled: An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.

Basically, the system was designed to return some sort of result regardless.
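
To make the first two findings concrete, here is a small sketch, reconstructed from the report’s description rather than from Draeger’s actual code, of the running-average scheme and the divide-by-256 truncation:

# Reconstruction of the scheme described in the report; not Draeger's code.
def running_average(readings):
    # Fold each new reading into the previous average at weight 1/2.
    # The readings are therefore not weighted equally; this is not a true mean.
    avg = (readings[0] + readings[1]) / 2.0
    for r in readings[2:]:
        avg = (avg + r) / 2.0
    return avg

def true_mean(readings):
    return sum(readings) / len(readings)

samples = [1000, 1000, 1000, 2000]   # hypothetical A/D readings in the 0-4095 range
print(running_average(samples))      # 1500.0
print(true_mean(samples))            # 1250.0

# Precision loss: dividing a 12-bit value (0-4095) by 256 leaves only 16
# possible results -- four bits of the original twelve.
print(len({value // 256 for value in range(4096)}))  # 16

The point is not the particular numbers but that the arithmetic described neither averages the readings as the code comments promise nor preserves the precision the hardware measures.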

This is important. As we become more and more dependent on software for evidentiary and other legal applications, we need to be able to carefully examine that software for accuracy, reliability, etc. Every government contract for breath alcohol detectors needs to include the requirement for public source code. “You can’t look at our code because we don’t want you to” simply isn’t good enough.

Posted on May 13, 2009 at 2:07 PM
