Entries Tagged "vulnerabilities"


Reverse-Engineering Exploits from Patches

This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it?

Turns out you can.

What does this mean?

Attackers can simply wait for a patch to be released, use these techniques, and with reasonable chance, produce a working exploit within seconds. Coupled with a worm, all vulnerable hosts could be compromised before most are even aware a patch is available, let alone download it. Thus, Microsoft should redesign Windows Update. We propose solutions which prevent several possible schemes, some of which could be done with existing technology.
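The starting point for this kind of attack is simply diffing the pre- and post-patch code to see which input check changed. A minimal sketch of the idea, using invented before/after source for a hypothetical function (the paper itself works on binaries, not source):

```python
import difflib

# Hypothetical pre- and post-patch versions of the same function.
before = [
    "void copy_name(char *dst, const char *src) {",
    "    strcpy(dst, src);",
    "}",
]
after = [
    "void copy_name(char *dst, const char *src) {",
    "    strncpy(dst, src, NAME_MAX - 1);",
    "    dst[NAME_MAX - 1] = '\\0';",
    "}",
]

# The diff immediately flags the input the patch newly constrains -- a
# strong hint that unpatched systems mishandle overlong 'src' values.
changed = [line for line in difflib.unified_diff(before, after, lineterm="")
           if line.startswith(("-", "+"))
           and not line.startswith(("---", "+++"))]
for line in changed:
    print(line)
```

From there, the attacker crafts an input that violates the newly added constraint and feeds it to an unpatched system.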

Full paper here.

Posted on April 23, 2008 at 1:35 PM

London Tube Smartcard Cracked

Looks like lousy cryptography.

Details here. When will people learn not to invent their own crypto?

Note that this is the same card—maybe a different version—that was used in the Dutch transit system, and was hacked back in January. There’s another hack of that system (press release here, and a video demo), and many companies—and government agencies—are scrambling in the wake of all these revelations.

Seems like the Mifare system (especially the version called Mifare Classic—and there are billions out there) was really badly designed, in all sorts of ways. I’m sure there are many more serious security vulnerabilities waiting to be discovered.
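Part of why Mifare Classic fell so hard is plain arithmetic: its proprietary Crypto-1 cipher uses only a 48-bit key, so even exhaustive search is within reach of modest hardware, before any of the cipher's structural weaknesses come into play. A back-of-the-envelope sketch (the search rate is an assumption, purely for illustration):

```python
# Crypto-1, the proprietary stream cipher in Mifare Classic, has a 48-bit key.
KEY_BITS = 48
keyspace = 2 ** KEY_BITS                  # ~2.8e14 candidate keys

# Assumed rate for a single FPGA-class cracker; purely illustrative.
keys_per_second = 10_000_000

worst_case_days = keyspace / keys_per_second / 86_400
print(f"exhaustive search: at most {worst_case_days:.0f} days")

# The published attacks do far better than this, because weaknesses in
# the cipher and protocol let key recovery skip most of the keyspace.
```

A standard cipher with a 128-bit key would put the same calculation at around 10^24 times the age of the universe, which is the whole argument for not inventing your own crypto.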

Posted on March 14, 2008 at 7:27 AM

Chip and PIN Vulnerable

This both is and isn’t news. In the security world, we knew that replacing credit card signatures with chip and PIN created new vulnerabilities. In this paper (see also the press release and FAQ), researchers demonstrated some pretty basic attacks against the system—one using a paper clip, a needle, and a small recording device. This BBC article is a good summary of the research.

There’s also this leaked chip and PIN report from APACS, the UK trade association that has been pushing chip and PIN.

Posted on March 12, 2008 at 2:12 PM

Hacking Medical Devices

Okay, so this could be big news:

But a team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker.

They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal—if the device had been in a person. In this case, the researchers were hacking into a device in a laboratory.

The researchers said they had also been able to glean personal patient data by eavesdropping on signals from the tiny wireless radio that Medtronic, the device’s maker, had embedded in the implant as a way to let doctors monitor and adjust it without surgery.

There’s only a little bit of hyperbole in the New York Times article. The research is being conducted by the Medical Device Security Center, with researchers from Beth Israel Deaconess Medical Center, Harvard Medical School, the University of Massachusetts Amherst, and the University of Washington. They have two published papers.

This is from the FAQ for the second paper (an ICD is an implantable cardiac defibrillator):

As part of our research we evaluated the security and privacy properties of a common ICD. We investigated whether a malicious party could create his or her own equipment capable of wirelessly communicating with this ICD.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could violate the privacy of patient information and medical telemetry. The ICD wirelessly transmits patient information and telemetry without observable encryption. The adversary’s computer could intercept wireless signals from the ICD and learn information including: the patient’s name, the patient’s medical history, the patient’s date of birth, and so on.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could also turn off or modify therapy settings stored on the ICD. Such a person could render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia.
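The privacy half of this comes down to broadcasting structured records with no encryption: any receiver in radio range can parse the fields as easily as the doctor's programmer can. A toy sketch; the frame layout here is invented, since the real Medtronic telemetry format is proprietary:

```python
import struct

# Invented plaintext frame layout (name, date of birth, heart rate).
# The real ICD telemetry format is proprietary; the point modeled here
# is only the absence of encryption, not the actual wire format.
FRAME = struct.Struct("<16s10sB")

frame = FRAME.pack(b"DOE, JANE", b"1948-03-02", 72)   # what the ICD radiates

# Any eavesdropper who captures the frame decodes it directly:
name, dob, rate = FRAME.unpack(frame)
print(name.rstrip(b"\x00").decode(), dob.decode(), rate)
```

Encrypting and authenticating the link would turn the same capture into noise, which is the design change the researchers are arguing for.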

Of course, we all know how this happened. It’s a story we’ve seen a zillion times before: the designers didn’t think about security, so the design wasn’t secure.

The researchers are making it very clear that this doesn’t mean people shouldn’t get pacemakers and ICDs. Again, from the FAQ:

We strongly believe that nothing in our report should deter patients from receiving these devices if recommended by their physician. The implantable cardiac defibrillator is a proven, life-saving technology. We believe that the risk to patients is low and that patients should not be alarmed. We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack. To carry out the attacks we discuss in our paper would require: malicious intent, technical sophistication, and the ability to place electronic equipment close to the patient. Our goal in performing this study is to improve the security, privacy, safety, and effectiveness of future IMDs.

For all our experiments our antenna, radio hardware, and PC were near the ICD. Our experiments were conducted in a computer laboratory and utilized simulated patient data. We did not experiment with extending the distance between the antenna and the ICD.

I agree with this answer. The risks are there, but the benefits of these devices are much greater. The point of this research isn’t to help people hack into pacemakers and commit murder, but to enable medical device companies to design better implantable equipment in the future. I think it’s great work.

Of course, that will only happen if the medical device companies don’t react like idiots:

Medtronic, the industry leader in cardiac regulating implants, said Tuesday that it welcomed the chance to look at security issues with doctors, regulators and researchers, adding that it had never encountered illegal or unauthorized hacking of its devices that have telemetry, or wireless control, capabilities.

“To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide,” a Medtronic spokesman, Robert Clark, said. Mr. Clark added that newer implants with longer transmission ranges than Maximo also had enhanced security.

[…]

St. Jude Medical, the third major defibrillator company, said it used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them.

Just because you have no knowledge of something happening does not mean it’s not a risk.

Another article.

The general moral here: more and more, computer technology is becoming intimately embedded into our lives. And with each new application comes new security risks. And we have to take those risks seriously.

Posted on March 12, 2008 at 10:39 AM

Hacking the Boeing 787

The news articles are pretty sensational:

The computer network in the Dreamliner’s passenger compartment, designed to give passengers in-flight internet access, is connected to the plane’s control, navigation and communication systems, an FAA report reveals.

And:

According to the U.S. Federal Aviation Administration, the new Boeing 787 Dreamliner aeroplane may have a serious security vulnerability in its on-board computer networks that could allow passengers to access the plane’s control systems.

More press.

If this is true, this is a very serious security vulnerability. And it’s not just terrorists trying to control the airplane, but the more common software flaw that causes some unforeseen interaction with something else and cascades into a bigger problem. However, the FAA document in the Federal Register is not as clear as all that. It does say:

The proposed architecture of the 787 is different from that of existing production (and retrofitted) airplanes. It allows new kinds of passenger connectivity to previously isolated data networks connected to systems that perform functions required for the safe operation of the airplane. Because of this new passenger connectivity, the proposed data network design and integration may result in security vulnerabilities from intentional or unintentional corruption of data and systems critical to the safety and maintenance of the airplane. The existing regulations and guidance material did not anticipate this type of system architecture or electronic access to aircraft systems that provide flight critical functions. Furthermore, 14 CFR regulations and current system safety assessment policy and techniques do not address potential security vulnerabilities that could be caused by unauthorized access to aircraft data buses and servers. Therefore, special conditions are imposed to ensure that security, integrity, and availability of the aircraft systems and data networks are not compromised by certain wired or wireless electronic connections between airplane data buses and networks.

But, honestly, this isn’t nearly enough information to work with. Normally, the aviation industry is really good about this sort of thing, and it doesn’t make sense that they’d do something as risky as this. I’d like more definitive information.
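The isolation property the FAA's special conditions demand is easy to state in code: data may flow out of the avionics domain (say, flight data for a passenger moving map) but nothing from the cabin network may reach it. A toy policy check; the domain names and the single allowed flow are assumptions for illustration, not Boeing's actual architecture:

```python
# Toy one-way flow policy between network domains. Allowing only
# avionics-to-cabin traffic models a unidirectional gateway; the names
# and policy are invented for illustration.
ALLOWED_FLOWS = {("avionics", "cabin")}

def forward(src: str, dst: str, message: bytes) -> bytes:
    """Pass a message across the gateway only if the flow is allowed."""
    if (src, dst) not in ALLOWED_FLOWS:
        raise PermissionError(f"blocked flow: {src} -> {dst}")
    return message

print(forward("avionics", "cabin", b"altitude=35000"))   # permitted

try:
    forward("cabin", "avionics", b"set_heading=090")     # must be blocked
except PermissionError as err:
    print(err)
```

The worry in the FAA filing is precisely that the 787's shared network might not enforce anything this strict, leaving the policy to software that could fail or be subverted.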

EDITED TO ADD (1/16): The FAA responds. Seems like there’s more hype than story here. Still, it’s worth paying attention to.

Posted on January 7, 2008 at 12:38 PM

Security of Adult Websites Compromised

This article claims the software that runs the back end of either 35% or 80%-95% of adult websites (depending on which part of the article you read) has been compromised, and that the adult industry is hushing this up. Like many of these sorts of stories, there’s no evidence that the bad guys have the personal information database. The vulnerability only means that they could have it.

Does anyone know about this?

Slashdot thread.

Posted on December 28, 2007 at 7:54 AM

More Voting Machine News

Ohio just completed a major study of voting machines. (Here’s the report, a gigantic pdf.) And, like the California study earlier this year, they found all sorts of problems:

While some tests to compromise voting systems took higher levels of sophistication, fairly simple techniques were often successfully deployed.

“To put it in every-day terms, the tools needed to compromise an accurate vote count could be as simple as tampering with the paper audit trail connector or using a magnet and a personal digital assistant,” Brunner said.

The New York Times writes:

“It was worse than I anticipated,” the official, Secretary of State Jennifer Brunner, said of the report. “I had hoped that perhaps one system would test superior to the others.”

At polling stations, teams working on the study were able to pick locks to access memory cards and use hand-held devices to plug false vote counts into machines. At boards of election, they were able to introduce malignant software into servers.

Note the lame defense from one voting machine manufacturer:

Chris Riggall, a Premier spokesman, said hardware and software problems had been corrected in his company’s new products, which will be available for installation in 2008.

“It is important to note,” he said, “that there has not been a single documented case of a successful attack against an electronic voting system, in Ohio or anywhere in the United States.”

I guess he didn’t read the part of the report that talked about how these attacks would be undetectable. Like this one:

They found that the ES&S tabulation system and the voting machine firmware were rife with basic buffer overflow vulnerabilities that would allow an attacker to easily take control of the systems and “exercise complete control over the results reported by the entire county election system.”

They also found serious security vulnerabilities involving the magnetically switched bidirectional infrared (IrDA) port on the front of the machines and the memory devices that are used to communicate with the machine through the port. With nothing more than a magnet and an infrared-enabled Palm Pilot or cell phone they could easily read and alter a memory device that is used to perform important functions on the ES&S iVotronic touch-screen machine—such as loading the ballot definition file and programming the machine to allow a voter to cast a ballot. They could also use a Palm Pilot to emulate the memory device and hack a voting machine through the infrared port.

They found that a voter or poll worker with a Palm Pilot and no more than a minute’s access to a voting machine could surreptitiously re-calibrate the touch-screen so that it would prevent voters from voting for specific candidates or cause the machine to secretly record a voter’s vote for a different candidate than the one the voter chose. Access to the screen calibration function requires no password, and the attacker’s actions, the researchers say, would be indistinguishable from the normal behavior of a voter in front of a machine or of a pollworker starting up a machine in the morning.
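The buffer-overflow class cited in the report can be modeled in a few lines: a copy routine that trusts an externally supplied record's length lets that record spill into whatever sits next to the buffer, vote totals included. A toy memory model, not actual ES&S code:

```python
# Flat bytearray standing in for device memory: a 16-byte input buffer
# followed by a 4-byte "vote total" field. Entirely illustrative.
memory = bytearray(20)
memory[16:20] = (1234).to_bytes(4, "little")

def load_record(record: bytes) -> None:
    """Copy an external record into the buffer with no length check,
    the same mistake as an unchecked strcpy/memcpy in C."""
    for i, b in enumerate(record):
        memory[i] = b

load_record(b"A" * 16)                    # well-formed: the total survives
assert int.from_bytes(memory[16:20], "little") == 1234

# An oversized record silently overwrites the adjacent count:
load_record(b"A" * 16 + (9999).to_bytes(4, "little"))
print(int.from_bytes(memory[16:20], "little"))
```

Note that nothing crashes and nothing is logged, which is why the report could call the resulting manipulation undetectable.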

Elsewhere in the country, Colorado has decertified most of its electronic voting machines:

The decertification decision, which cited problems with accuracy and security, affects electronic voting machines in Denver and five other counties. A number of electronic scanners used to count ballots were also decertified.

Coffman would not comment Monday on what his findings mean for past elections, despite his conclusion that some equipment had accuracy issues.

“I can only report,” he said. “The voters in those respective counties are going to have to interpret” the results.

Coffman announced in March that he had adopted new rules for testing electronic voting machines. He required the four systems used in Colorado to apply for recertification.

The systems are manufactured by Premier Election Solutions, formerly known as Diebold Election Systems; Hart InterCivic; Sequoia Voting Systems; and Election Systems and Software. Only Premier had all its equipment pass the recertification.

California is about to give up on electronic voting machines, too. This probably didn’t help:

More than a hundred computer chips containing voting machine software were lost or stolen during transit in California this week.

EDITED TO ADD (1/2): More news.

Posted on December 24, 2007 at 1:02 PM

Defeating the Shoe Scanning Machine at Heathrow Airport

For a while now, Heathrow Airport has had a unique setup for scanning shoes. Instead of taking your shoes off during the normal screening process, as you do in U.S. airports, you go through the metal detector with your shoes on. Then, later, there is a special shoe scanning X-ray machine. You take your shoes off, send them through the machine, and put them on at the other end.

It’s definitely faster, but it’s an easy system to defeat. The vulnerability is that no one verifies that the shoes you walked through the metal detector with are the same shoes you put on the scanning machine.

Here’s how the attack works. Assume that you have two pairs of shoes: a clean pair that passes all levels of screening, and a dangerous pair that doesn’t. (Ignore for a moment the ridiculousness of screening shoes in the first place, and assume that an X-ray machine can detect the dangerous pair.) Put the dangerous shoes on your feet and the clean shoes in your carry-on bag. Walk through the metal detector. Then, at the shoe X-ray machine, take the dangerous shoes off and put them in your bag, and take the clean shoes out of your bag and place them on the X-ray machine. You’ve now managed to get through security without having your shoes screened.

This works because the two security systems are decoupled. And the shoe screening machine is so crowded and chaotic, and so poorly manned, that no one notices the switch.

U.S. airports force people to put their shoes through the X-ray machine and walk through the metal detector shoeless, ensuring that all shoes get screened. That might be slower, but it works.
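The fix doesn't require merging the two checkpoints, only coupling them: bind the shoes to the passenger at the metal detector and verify the same binding at the X-ray. A sketch of the idea, with invented tags and shoe fingerprints standing in for whatever binding mechanism an airport might actually deploy:

```python
import secrets

# Records issued at the metal detector: tag -> fingerprint of the shoes
# the passenger was actually wearing when screened.
issued: dict[str, str] = {}

def metal_detector(shoe_fingerprint: str) -> str:
    """Passenger walks through wearing their shoes; issue a tag."""
    tag = secrets.token_hex(4)
    issued[tag] = shoe_fingerprint
    return tag

def shoe_xray(tag: str, shoe_fingerprint: str) -> str:
    """Only accept the same shoes that were recorded upstream."""
    if issued.get(tag) != shoe_fingerprint:
        raise ValueError("shoe switch detected")
    return "screened"

tag = metal_detector("dangerous-pair")
print(shoe_xray(tag, "dangerous-pair"))   # same shoes: screened
# shoe_xray(tag, "clean-pair") would raise: the swap is caught
```

The attack in the post works exactly because Heathrow's two checkpoints share no such state.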

EDITED TO ADD (12/14): Heathrow Terminal 3, that is. The system wasn’t in place in Terminal 4, and I don’t know about Terminals 1 and 2.

Posted on December 14, 2007 at 5:43 AM

