Entries Tagged "vulnerabilities"


Browser Insecurity

This excellent paper measures insecurity in the global population of browsers, using Google’s web server logs. Why is this important? Because browsers are an increasingly popular attack vector.

The results aren’t good.

…at least 45.2%, or 637 million users, were not using the most secure Web browser version on any working day from January 2007 to June 2008. These browsers are an easy target for drive-by download attacks as they are potentially vulnerable to known exploits.

That number breaks down as 577 million users of Internet Explorer, 38 million of Firefox, 17 million of Safari, and 5 million of Opera. Lots more detail in the paper, including some ideas for technical solutions.

EDITED TO ADD (7/2): More commentary.

Posted on July 3, 2008 at 7:02 AM

Dan Wallach on Electronic Voting Machines

It’s been a while since I’ve written about electronic voting machines, but Dan Wallach has an excellent blog post about the current line of argument from the voting machine companies and why it’s wrong.

Unsurprisingly, the vendors and their trade organization are spinning the results of these studies, as best they can, in an attempt to downplay their significance. Hopefully, legislators and election administrators are smart enough to grasp the vendors’ behavior for what it actually is and take appropriate steps to bolster our election integrity.

Until then, the bottom line is that many jurisdictions in Texas and elsewhere in the country will be using e-voting equipment this November with known security vulnerabilities, and the procedures and controls they are using will not be sufficient to either prevent or detect sophisticated attacks on their e-voting equipment. While there are procedures with the capability to detect many of these attacks (e.g., post-election auditing of voter-verified paper records), Texas has not certified such equipment for use in the state. Texas’s DREs are simply vulnerable to and undefended against attacks.

Posted on July 2, 2008 at 6:15 AM

Random Number Bug in Debian Linux

This is a big deal:

On May 13th, 2008 the Debian project announced that Luciano Bello found an interesting vulnerability in the OpenSSL package they were distributing. The bug in question was caused by the removal of the following line of code from md_rand.c:

	MD_Update(&m,buf,j);
	[ .. ]
	MD_Update(&m,buf,j); /* purify complains */

These lines were removed because they caused the Valgrind and Purify tools to produce warnings about the use of uninitialized data in any code that was linked to OpenSSL. You can see one such report to the OpenSSL team here. Removing this code has the side effect of crippling the seeding process for the OpenSSL PRNG. Instead of mixing in random data for the initial seed, the only “random” value that was used was the current process ID. On the Linux platform, the default maximum process ID is 32,768, resulting in a very small number of seed values being used for all PRNG operations.

More info, from Debian, here. And from the hacker community here. Seems that the bug was introduced in September 2006.

More analysis here. And a cartoon.

Random numbers are used everywhere in cryptography, for both short- and long-term security. And, as we’ve seen here, security flaws in random number generators are really easy to accidentally create and really hard to discover after the fact. Back when the NSA was routinely weakening commercial cryptography, their favorite technique was reducing the entropy of the random number generator.

Posted on May 19, 2008 at 6:07 AM

The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes—mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent—or protect against—those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society—in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities—again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible—and more and less legal—ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.

Posted on May 14, 2008 at 11:29 AM

Reverse-Engineering Exploits from Patches

This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it?

Turns out you can.

What does this mean?

Attackers can simply wait for a patch to be released, use these techniques, and with reasonable chance, produce a working exploit within seconds. Coupled with a worm, all vulnerable hosts could be compromised before most are even aware a patch is available, let alone download it. Thus, Microsoft should redesign Windows Update. We propose solutions which prevent several possible schemes, some of which could be done with existing technology.

Full paper here.

Posted on April 23, 2008 at 1:35 PM

London Tube Smartcard Cracked

Looks like lousy cryptography.

Details here. When will people learn not to invent their own crypto?

Note that this is the same card—maybe a different version—that was used in the Dutch transit system, and was hacked back in January. There’s another hack of that system (press release here, and a video demo), and many companies—and government agencies—are scrambling in the wake of all these revelations.

Seems like the Mifare system (especially the version called Mifare Classic—and there are billions out there) was really badly designed, in all sorts of ways. I’m sure there are many more serious security vulnerabilities waiting to be discovered.

Posted on March 14, 2008 at 7:27 AM

Chip and PIN Vulnerable

This both is and isn’t news. In the security world, we knew that replacing credit card signatures with chip and PIN created new vulnerabilities. In this paper (see also the press release and FAQ), researchers demonstrated some pretty basic attacks against the system—one using a paper clip, a needle, and a small recording device. This BBC article is a good summary of the research.

There’s also this leaked chip and PIN report from APACS, the UK trade association that has been pushing chip and PIN.

Posted on March 12, 2008 at 2:12 PM

Hacking Medical Devices

Okay, so this could be big news:

But a team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker.

They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal—if the device had been in a person. In this case, the researchers were hacking into a device in a laboratory.

The researchers said they had also been able to glean personal patient data by eavesdropping on signals from the tiny wireless radio that Medtronic, the device’s maker, had embedded in the implant as a way to let doctors monitor and adjust it without surgery.

There’s only a little bit of hyperbole in the New York Times article. The research is being conducted by the Medical Device Security Center, with researchers from Beth Israel Deaconess Medical Center, Harvard Medical School, the University of Massachusetts Amherst, and the University of Washington. They have two published papers:

This is from the FAQ for the second paper (an ICD is an implantable cardiac defibrillator):

As part of our research we evaluated the security and privacy properties of a common ICD. We investigate whether a malicious party could create his or her own equipment capable of wirelessly communicating with this ICD.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could violate the privacy of patient information and medical telemetry. The ICD wirelessly transmits patient information and telemetry without observable encryption. The adversary’s computer could intercept wireless signals from the ICD and learn information including: the patient’s name, the patient’s medical history, the patient’s date of birth, and so on.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could also turn off or modify therapy settings stored on the ICD. Such a person could render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia.

Of course, we all know how this happened. It’s a story we’ve seen a zillion times before: the designers didn’t think about security, so the design wasn’t secure.

The researchers are making it very clear that this doesn’t mean people shouldn’t get pacemakers and ICDs. Again, from the FAQ:

We strongly believe that nothing in our report should deter patients from receiving these devices if recommended by their physician. The implantable cardiac defibrillator is a proven, life-saving technology. We believe that the risk to patients is low and that patients should not be alarmed. We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack. To carry out the attacks we discuss in our paper would require: malicious intent, technical sophistication, and the ability to place electronic equipment close to the patient. Our goal in performing this study is to improve the security, privacy, safety, and effectiveness of future IMDs.

For all our experiments our antenna, radio hardware, and PC were near the ICD. Our experiments were conducted in a computer laboratory and utilized simulated patient data. We did not experiment with extending the distance between the antenna and the ICD.

I agree with this answer. The risks are there, but the benefits of these devices are much greater. The point of this research isn’t to help people hack into pacemakers and commit murder, but to enable medical device companies to design better implantable equipment in the future. I think it’s great work.

Of course, that will only happen if the medical device companies don’t react like idiots:

Medtronic, the industry leader in cardiac regulating implants, said Tuesday that it welcomed the chance to look at security issues with doctors, regulators and researchers, adding that it had never encountered illegal or unauthorized hacking of its devices that have telemetry, or wireless control, capabilities.

“To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide,” a Medtronic spokesman, Robert Clark, said. Mr. Clark added that newer implants with longer transmission ranges than Maximo also had enhanced security.

[…]

St. Jude Medical, the third major defibrillator company, said it used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them.

Just because you have no knowledge of something happening does not mean it’s not a risk.

Another article.

The general moral here: more and more, computer technology is becoming intimately embedded into our lives. And with each new application comes new security risks. And we have to take those risks seriously.

Posted on March 12, 2008 at 10:39 AM

Hacking the Boeing 787

The news articles are pretty sensational:

The computer network in the Dreamliner’s passenger compartment, designed to give passengers in-flight internet access, is connected to the plane’s control, navigation and communication systems, an FAA report reveals.

And:

According to the U.S. Federal Aviation Administration, the new Boeing 787 Dreamliner aeroplane may have a serious security vulnerability in its on-board computer networks that could allow passengers to access the plane’s control systems.

More press.

If this is true, this is a very serious security vulnerability. The threat isn’t just terrorists trying to take control of the airplane; it’s also the more common software flaw that causes some unforeseen interaction with another system and cascades into a bigger problem. However, the FAA document in the Federal Register is not as clear as all that. It does say:

The proposed architecture of the 787 is different from that of existing production (and retrofitted) airplanes. It allows new kinds of passenger connectivity to previously isolated data networks connected to systems that perform functions required for the safe operation of the airplane. Because of this new passenger connectivity, the proposed data network design and integration may result in security vulnerabilities from intentional or unintentional corruption of data and systems critical to the safety and maintenance of the airplane. The existing regulations and guidance material did not anticipate this type of system architecture or electronic access to aircraft systems that provide flight critical functions. Furthermore, 14 CFR regulations and current system safety assessment policy and techniques do not address potential security vulnerabilities that could be caused by unauthorized access to aircraft data buses and servers. Therefore, special conditions are imposed to ensure that security, integrity, and availability of the aircraft systems and data networks are not compromised by certain wired or wireless electronic connections between airplane data buses and networks.

But, honestly, this isn’t nearly enough information to work with. Normally, the aviation industry is really good about this sort of thing, and it doesn’t make sense that they’d do something as risky as this. I’d like more definitive information.

EDITED TO ADD (1/16): The FAA responds. Seems like there’s more hype than story here. Still, it’s worth paying attention to.

Posted on January 7, 2008 at 12:38 PM

