Entries Tagged "disclosure"


Suckfly

Suckfly seems to be another Chinese nation-state espionage tool, first stealing South Korean certificates and now attacking Indian networks.

Symantec has done a good job of explaining how Suckfly works, and there’s a lot of good detail in the blog posts. My only complaint is its reluctance to disclose who the targets are. It doesn’t name the South Korean companies whose certificates were stolen, and it doesn’t name the Indian companies that were hacked:

Many of the targets we identified were well known commercial organizations located in India. These organizations included:

  • One of India’s largest financial organizations
  • A large e-commerce company
  • The e-commerce company’s primary shipping vendor
  • One of India’s top five IT firms
  • A United States healthcare provider’s Indian business unit
  • Two government organizations

Suckfly spent more time attacking the government networks compared to all but one of the commercial targets. Additionally, one of the two government organizations had the highest infection rate of the Indian targets.

My guess is that Symantec can’t disclose those names, because those are all customers and Symantec has confidentiality obligations towards them. But by leaving this information out, Symantec is harming us all. We have to make decisions on the Internet all the time about who to trust and who to rely on. The more information we have, the better we can make those decisions. And the more companies are publicly called out when their security fails, the more they will try to make security better.

Symantec’s motivation in releasing information about Suckfly is marketing, and that’s fine. There, its interests and the interests of the research community are aligned. But here, the interests diverge, and this is the value of mandatory disclosure laws.

Posted on May 26, 2016 at 6:31 AM

Hacking Team, Computer Vulnerabilities, and the NSA

When the National Security Agency (NSA) — or any government agency — discovers a vulnerability in a popular computer system, should it disclose it or not? The debate exists because vulnerabilities have both offensive and defensive uses. Offensively, vulnerabilities can be exploited to penetrate others’ computers and networks, either for espionage or destructive purposes. Defensively, publicly revealing security flaws can be used to make our own systems less vulnerable to those same attacks. The two options are mutually exclusive: either we can help to secure both our own networks and the systems we might want to attack, or we can keep both networks vulnerable. Many, myself included, have long argued that defense is more important than offense, and that we should patch almost every vulnerability we find. Even the President’s Review Group on Intelligence and Communications Technologies recommended in 2013 that “U.S. policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on U.S. Government and other networks.”

Both the NSA and the White House have talked about a secret “vulnerability equities process” they go through when they find a security flaw. Both maintain that the process is heavily weighted in favor of disclosing vulnerabilities to the vendors and having them patched.

An undated document — declassified last week with heavy redactions after a year-long Freedom of Information Act lawsuit — shines some light on the process but still leaves many questions unanswered. An important question is: which vulnerabilities go through the equities process, and which don’t?

A real-world example of the ambiguity surrounding the equities process emerged from the recent hacking of the cyberweapons arms manufacturer Hacking Team. The corporation sells Internet attack and espionage software to countries around the world, including many reprehensible governments, allowing them to eavesdrop on their citizens, sometimes as a prelude to arrest and torture. Its tools were used against U.S. journalists.

In July, unidentified hackers penetrated Hacking Team’s corporate network and stole almost everything of value, including corporate documents, e-mails, and source code. The hackers proceeded to post it all online.

The NSA was most likely able to penetrate Hacking Team’s network and steal the same data, and probably did so years ago. The agency would have learned the same things about Hacking Team’s software that we did in July: how it worked, which vulnerabilities it was exploiting, and which countries were using its cyber weapons. Armed with that knowledge, the NSA could have quietly neutralized many of the company’s products. The United States could have alerted software vendors about the zero-day exploits and had them patched. It could have told the antivirus companies how to detect and remove Hacking Team’s malware. It could have done a lot. Assuming that the NSA did infiltrate Hacking Team’s network, the fact that the United States chose not to reveal the vulnerabilities it uncovered is both revealing and interesting, and the decision provides a window into the vulnerability equities process.

The first question to ask is: why? There are three possible reasons. One, the software was also being used by the United States, and the government did not want to lose its benefits. Two, the NSA was able to eavesdrop on other entities using Hacking Team’s software, and wanted to continue benefiting from the intelligence. And three, the agency did not want to expose its own hacking capabilities by demonstrating that it had compromised Hacking Team’s network. In reality, the decision may have been due to a combination of the three.

How was this decision made? More explicitly, did any of the vulnerabilities that Hacking Team exploited, and that the NSA was aware of, go through the vulnerability equities process? It is unclear. The NSA plays fast and loose when deciding which security flaws go through the procedure. The process document states that it applies to vulnerabilities that are “newly discovered and not publicly known.” Does that refer only to vulnerabilities discovered by the NSA, or does the process also apply to zero-day vulnerabilities that the NSA discovers others are using? If vulnerabilities used in others’ cyber weapons are excluded, it is very difficult to talk about the process as it is currently formulated.

The U.S. government should close the vulnerabilities that foreign governments are using to attack people and networks. If taking action is as easy as plugging security vulnerabilities in products and making everyone in the world more secure, that should be standard procedure. The fact that the NSA — we assume — chose not to suggests that the United States has its priorities wrong.

Undoubtedly, there would be blowback from closing vulnerabilities utilized in others’ cyber weapons. Several companies sell information about vulnerabilities to different countries, and if they found that those security gaps were regularly closed soon after they started trying to sell them, they would quickly suspect espionage and take more defensive precautions. The new wariness of sellers and decrease in available security flaws would also raise the price of vulnerabilities worldwide. The United States is one of the biggest buyers, meaning that we benefit from greater availability and lower prices.

If we assume the NSA has penetrated these companies’ networks, we should also assume that the intelligence agencies of countries like Russia and China have done the same. Are those countries using Hacking Team’s vulnerabilities in their cyber weapons? We are all embroiled in a cyber arms race — finding, buying, stockpiling, using, and exposing vulnerabilities — and our actions will affect the actions of all the other players.

It seems foolish that we would not take every opportunity to neutralize the cyberweapons of those countries that would attack the United States or use them against their own people for totalitarian gain. Is it truly possible that when the NSA intercepts and reverse-engineers a cyberweapon used by one of our enemies — whether a Hacking Team customer or a country like China — we don’t close the vulnerabilities that that weapon uses? Does the NSA use knowledge of the weapon to defend the U.S. government networks whose security it maintains, at the expense of everyone else in the country and the world? That seems incredibly dangerous.

In my book Data and Goliath, I suggested breaking apart the NSA’s offensive and defensive components, in part to resolve the agency’s internal conflict between attack and defense. One part would be focused on foreign espionage, and another on cyberdefense. This Hacking Team discussion demonstrates that even separating the agency would not be enough. The espionage-focused organization that penetrates and analyzes the products of cyberweapons arms manufacturers would regularly learn about vulnerabilities used to attack systems and networks worldwide. Thus, that section of the agency would still have to transfer that knowledge to the defense-focused organization. That is not going to happen as long as the United States prioritizes surveillance over security and attack over defense. The norms governing actions in cyberspace need to be changed, a task far more difficult than any reform of the NSA.

This essay previously appeared in the Georgetown Journal of International Affairs.

EDITED TO ADD: Hacker News thread.

Posted on September 15, 2015 at 6:38 AM

Oracle CSO Rant Against Security Experts

Oracle’s CSO Mary Ann Davidson wrote a blog post ranting against security experts finding vulnerabilities in her company’s products. The blog post has been taken down by the company, but was saved for posterity by others. There’s been lots of commentary.

It’s easy to just mock Davidson’s stance, but that stance is dangerous to our community. Yes, if researchers don’t find vulnerabilities in Oracle products, then the company won’t look bad and won’t have to patch things. But the real attackers — whether they be governments, criminals, or cyberweapons arms manufacturers who sell to governments and criminals — will continue to find vulnerabilities in her products. And while they won’t make a press splash and embarrass her, they will exploit them.

Posted on August 17, 2015 at 6:45 AM

Vulnerabilities in Brink's Smart Safe

Brink’s sells an Internet-enabled smart safe called the CompuSafe Galileo. Despite being sold as a more secure safe, it’s wildly insecure:

Vulnerabilities found in CompuSafe Galileo safes, smart safes made by the ever-reliable Brinks company that are used by retailers, restaurants, and convenience stores, would allow a rogue employee or anyone else with physical access to them to command their doors to open and relinquish their cash….

The hack has the makings of the perfect crime, because a thief could also erase any evidence that the theft occurred simply by altering data in a back-end database where the smartsafe logs how much money is inside and who accessed it.

Nothing about these vulnerabilities is a surprise to anyone who works in computer security:

But the safes have an external USB port on the side of the touchscreens that allows service technicians to troubleshoot and obtain a backup of the database. This, unfortunately, creates an easy entrypoint for thieves to take complete, administrative control of the devices.

“Once you’re able to plug into that USB port, you’re able to access lots of things that you shouldn’t normally be able to access,” Petro told WIRED. “There is a full operating system…that you’re able to…fully take over…and make [the safe] do whatever you want it to do.”

The researchers created a malicious script that, once inserted into a safe on a USB stick, lets a thief automatically open the safe doors by emulating certain mouse and keyboard actions and bypassing standard application controls. “You plug in this little gizmo, wait about 60 seconds, and the door just pops open,” says Petro.

If it sounds like the people who designed this e-safe ignored all of the things we’ve learned about computer security in the last few decades, you’re right. And that’s the problem with Internet-of-Things security: it’s often designed by people who don’t know computer or Internet security.

They also haven’t learned the lessons of full disclosure or rapid patching:

They notified Brinks about the vulnerabilities more than a year ago, but say the company appears to have done nothing to resolve the issues. Brinks could disable driver software associated with the USB port to prevent someone from controlling the safes in this way, or lock down the system and database so it’s not running in administrative mode and the database can’t be changed, but so far the company appears to have done none of these things.
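
For illustration, here is a minimal sketch of the first of those mitigations in Python, under the assumption (not stated in the article) that the safe’s touchscreen runs Windows. Disabling the USB mass-storage driver blocks payloads delivered on a USB stick, though not devices that masquerade as keyboards or mice.

    # Hypothetical hardening sketch, not anything Brinks ships: disable the
    # Windows USB mass-storage driver so a stick plugged into the service
    # port is never mounted. Requires administrative privileges.
    import winreg

    USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        # Start = 4 (SERVICE_DISABLED) tells Windows never to load the driver.
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)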


Again, this all sounds familiar. The computer industry learned these lessons over a decade ago. Before then, companies ignored security vulnerabilities, threatened researchers, and generally behaved very badly. I expect the same things to happen with Internet-of-Things companies.

Posted on August 3, 2015 at 1:27 PM

Race Condition Exploit in Starbucks Gift Cards

A researcher was able to steal money from Starbucks by exploiting a race condition in its gift card value-transfer protocol. Basically, by initiating two identical web transfers at once, he was able to trick the system into recording them both. Normally, you could take a $5 gift card and move that money to another $5 gift card, leaving you with an empty gift card and a $10 gift card. He was able to duplicate the transfer, giving him an empty gift card and a $15 gift card.

Race-condition attacks are unreliable and it took him a bunch of tries to get it right, but there’s no reason to believe that he couldn’t have kept doing this forever.
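
To make the mechanism concrete, here is a minimal sketch of this class of attack in Python. The endpoint, parameters, and helper names are all hypothetical (the researcher’s actual requests aren’t reproduced here); the idea is that two identical transfer requests fired at nearly the same instant can both pass the server’s balance check before either one is recorded.

    # Hypothetical race-condition sketch, not Homakov's actual exploit.
    # Two threads submit the same gift-card transfer simultaneously; if the
    # server checks the source card's balance and records the transfer in
    # separate, non-atomic steps, both requests can read the original balance
    # and both credits get recorded.
    import threading

    import requests  # third-party HTTP library: pip install requests

    TRANSFER_URL = "https://example.com/api/giftcard/transfer"  # made-up endpoint

    def transfer(src, dst, amount):
        requests.post(TRANSFER_URL, data={"from": src, "to": dst, "amount": amount})

    def race(src, dst, amount, attempts=2):
        threads = [threading.Thread(target=transfer, args=(src, dst, amount))
                   for _ in range(attempts)]
        for t in threads:
            t.start()  # launch the requests as close together as possible
        for t in threads:
            t.join()

The server-side fix is to make the balance check and the debit a single atomic operation, for example a conditional UPDATE inside one database transaction; presumably that is the kind of safeguard Starbucks eventually put in place.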

Unfortunately, there was really no one at Starbucks he could tell this to:

The hardest part — responsible disclosure. Support guy honestly answered there’s absolutely no way to get in touch with technical department and he’s sorry I feel this way. Emailing InformationSecurityServices@starbucks.com on March 23 was futile (and it only was answered on Apr 29). After trying really hard to find anyone who cares, I managed to get this bug fixed in like 10 days.

The unpleasant part is a guy from Starbucks calling me with nothing like “thanks” but mentioning “fraud” and “malicious actions” instead. Sweet!

A little more from BBC News:

A spokeswoman for Starbucks told BBC News: “After this individual reported he was able to commit fraudulent activity against Starbucks, we put safeguards in place to prevent replication.”

The company did not answer questions about its response to Mr Homakov.

More info.

Posted on May 26, 2015 at 4:51 PM

Regin Malware

Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It’s more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there’s substantial evidence that it was built and operated by the United States.

This isn’t the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.

I dislike the “cyberwar” metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.

For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.

This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files since 2008 or 2009.

So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?

To get an answer, we have to disentangle two things. Near as we can tell, all the companies had added signatures for Regin to their detection databases long before last month. The VirusTotal website has a signature for Regin as of 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it certainly added the VirusTotal signature in 2011.
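
As a side note, “adding a signature” is conceptually simple. Here is a toy sketch in Python; the byte pattern is invented, and real engines layer heuristics and behavioral analysis on top of pattern matching.

    # Toy illustration of signature-based scanning, not any vendor's engine.
    # A "signature" here is just a byte pattern known to occur in a malware
    # family.
    SIGNATURES = {
        b"\xde\xad\xbe\xef\x13\x37": "Example.Malware.A",  # invented pattern
    }

    def scan_file(path):
        """Return the names of any known signatures found in the file."""
        with open(path, "rb") as f:
            data = f.read()
        return [name for pattern, name in SIGNATURES.items() if pattern in data]

Once a vendor has such a signature, detecting and removing the malware for its customers is routine; whether to tell the public is the separate decision at issue here.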

Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin’s existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hypponen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox IT, which was hired to remove Regin from the Belgian phone company Belgacom’s network, didn’t say anything about what it discovered because it “didn’t want to interfere with NSA/GCHQ operations.”

My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It’s much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules — Fox IT called it “a full framework of a lot of species of malware” — making it even harder to figure out what’s going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.

That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.

Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn’t be. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they know about them, and not wait until the release of a political story makes it impossible for them to remain silent.

This essay previously appeared in the MIT Technology Review.

Posted on December 8, 2014 at 7:19 AM

FOXACID Operations Manual

A few days ago, I saw this tweet: “Just a reminder that it is now *a full year* since Schneier cited it, and the FOXACID ops manual remains unpublished.” It’s true.

The citation is this:

According to a top-secret operational procedures manual provided by Edward Snowden, an exploit named Validator might be the default, but the NSA has a variety of options. The documentation mentions United Rake, Peddle Cheap, Packet Wrench, and Beach Head, all delivered from a FOXACID subsystem called Ferret Cannon.

Back when I broke the QUANTUM and FOXACID programs, I talked with the Guardian editors about publishing the manual. In the end, we decided not to, because the information in it wasn’t useful to understanding the story. It’s been a year since I’ve seen it, but I remember it being just what I called it: an operational procedures manual. It talked about what to type into which screens, and how to deal with error conditions. It didn’t talk about capabilities, either technical or operational. I found it interesting, but it was hard to argue that it was necessary in order to understand the story.

It will probably never be published. I lost access to the Snowden documents soon after writing that essay — Greenwald broke with the Guardian, and I have never been invited back by the Intercept — and there’s no one looking at the documents with an eye to writing about the NSA’s technical capabilities and how to securely design systems to protect against government surveillance. Even though we now know that the same capabilities are being used by other governments and cyber criminals, there’s much more interest in stories with political ramifications.

Posted on October 15, 2014 at 6:29 AM
