Last July, a still-anonymous hacker broke into the network belonging to the cyberweapons arms manufacturer Hacking Team, and dumped an enormous amount of its proprietary documents online. Kaspersky Lab was able to reverse-engineer one of its zero-day exploits from that data.
Entries Tagged "cyberweapons"
Page 3 of 4
Oracle’s CSO Mary Ann Davidson wrote a blog post ranting against security experts finding vulnerabilities in her company’s products. The blog post has been taken down by the company, but was saved for posterity by others. There’s been lots of commentary.
It’s easy to just mock Davidson’s stance, but it’s dangerous to our community. Yes, if researchers don’t find vulnerabilities in Oracle products, then the company won’t look bad and won’t have to patch things. But the real attackers — whether they be governments, criminals, or cyberweapons arms manufacturers who sell to governments and criminals — will continue to find vulnerabilities in her products. And while they won’t make a press splash and embarrass her, they will exploit them.
Hacking Team asked its customers to shut down operations, but according to one of the leaked files, as part of Hacking Team’s “crisis procedure,” it could have killed their operations remotely. The company, in fact, has “a backdoor” into every customer’s software, giving it the ability to suspend it or shut it down — something that even customers aren’t told about.
To make matters worse, every copy of Hacking Team’s Galileo software is watermarked, according to the source, which means Hacking Team, and now everyone with access to this data dump, can find out who operates it and who they’re targeting with it.
It’s one thing to have dissatisfied customers. It’s another to have dissatisfied customers with death squads. I don’t think the company is going to survive this.
Hacking Team is a pretty sleazy company, selling surveillance software to all sorts of authoritarian governments around the world. Reporters Without Borders calls it one of the enemies of the Internet. Citizen Lab has published many reports about its activities.
It’s a huge trove of data, including a spreadsheet listing every government client, when they first bought the surveillance software, and how much money they have paid the company to date. Not surprisingly, the company has been lying about who its customers are. Chris Soghoian has been going through the data and tweeting about it. More Twitter comments on the data here. Here are articles from Wired and The Guardian.
I expect we’ll be sifting through all the data for a while.
EDITED TO ADD: The Hacking Team CEO, David Vincenzetti, doesn’t like me:
In another [e-mail], the Hacking Team CEO on 15 May claimed renowned cryptographer Bruce Schneier was “exploiting the Big Brother is Watching You FUD (Fear, Uncertainty and Doubt) phenomenon in order to sell his books, write quite self-promoting essays, give interviews, do consulting etc. and earn his hefty money.”
Meanwhile, Hacking Team has told all of its customers to shut down all uses of its software. They are in “full on emergency mode,” which is perfectly understandable.
EDITED TO ADD: Hacking Team had no exploits for an un-jail-broken iPhone. Seems like the platform of choice if you want to stay secure.
EDITED TO ADD (7/14): WikiLeaks has published a huge trove of e-mails.
The Citizen Lab at the University of Toronto published a new report on the use of spyware from the Italian cyberweapons arms manufacturer Hacking Team by the Ethiopian intelligence service. We previously learned that the government used this software to target US-based Ethiopian journalists.
No one has admitted taking down North Korea’s Internet. It could have been an act of retaliation by the US government, but it could just as well have been an ordinary DDoS attack. The follow-on attack against Sony PlayStation definitely seems to be the work of hackers unaffiliated with a government.
Not knowing who did what isn’t new. It’s called the “attribution problem,” and it plagues Internet security. But as governments increasingly get involved in cyberspace attacks, it has policy implications as well. Last year, I wrote:
Ordinarily, you could determine who the attacker was by the weaponry. When you saw a tank driving down your street, you knew the military was involved because only the military could afford tanks. Cyberspace is different. In cyberspace, technology is broadly spreading its capability, and everyone is using the same weaponry: hackers, criminals, politically motivated hacktivists, national spies, militaries, even the potential cyberterrorist. They are all exploiting the same vulnerabilities, using the same sort of hacking tools, engaging in the same attack tactics, and leaving the same traces behind. They all eavesdrop or steal data. They all engage in denial-of-service attacks. They all probe cyberdefenses and do their best to cover their tracks.
Despite this, knowing the attacker is vitally important. As members of society, we have several different types of organizations that can defend us from an attack. We can call the police or the military. We can call on our national anti-terrorist agency and our corporate lawyers. Or we can defend ourselves with a variety of commercial products and services. Depending on the situation, all of these are reasonable choices.
The legal regime in which any defense operates depends on two things: who is attacking you and why. Unfortunately, when you are being attacked in cyberspace, the two things you often do not know are who is attacking you and why. It is not that everything can be defined as cyberwar; it is that we are increasingly seeing warlike tactics used in broader cyberconflicts. This makes defense and national cyberdefense policy difficult.
In 2007, the Israeli Air Force bombed and destroyed the al-Kibar nuclear facility in Syria. The Syrian government immediately knew who did it, because airplanes are hard to disguise. In 2010, the US and Israel jointly damaged Iran’s Natanz nuclear facility. But this time they used a cyberweapon, Stuxnet, and no one knew who did it until details were leaked years later. China routinely denies its cyberespionage activities. And a 2009 cyberattack against the United States and South Korea was blamed on North Korea even though it may have originated from either London or Miami.
When it’s possible to identify the origins of cyberattacks — as forensic experts were able to do with many of the Chinese attacks against US networks — it’s as a result of months of detailed analysis and investigation. That kind of time frame doesn’t help at the moment of attack, when you have to decide within milliseconds how your network is going to react and within days how your country is going to react. This, in part, explains the relative disarray within the Obama administration over what to do about North Korea. Officials in the US government and international institutions simply don’t have the legal or even the conceptual framework to deal with these types of scenarios.
The blurring of lines between individual actors and national governments has been happening more and more in cyberspace. What has been called the first cyberwar, Russia vs. Estonia in 2007, was partly the work of a 20-year-old ethnic Russian living in Tallinn, and partly the work of a pro-Kremlin youth group associated with the Russian government. Many of the Chinese hackers targeting Western networks seem to be unaffiliated with the Chinese government. And in 2011, the hacker group Anonymous threatened NATO.
It’s a strange future we live in when we can’t tell the difference between random hackers and major governments, or when those same random hackers can credibly threaten international military organizations.
This is why people around the world should care about the Sony hack. In this future, we’re going to see an even greater blurring of traditional lines between police, military, and private actions as technology broadly distributes attack capabilities across a variety of actors. This attribution difficulty is here to stay, at least for the foreseeable future.
If North Korea is responsible for the cyberattack, how is the situation different than a North Korean agent breaking into Sony’s office, photocopying a lot of papers, and making them available to the public? Is Chinese corporate espionage a problem for governments to solve, or should we let corporations defend themselves? Should the National Security Agency defend US corporate networks, or only US military networks? How much should we allow organizations like the NSA to insist that we trust them without proof when they claim to have classified evidence that they don’t want to disclose? How should we react to one government imposing sanctions on another based on this secret evidence? More importantly, when we don’t know who is launching an attack or why, who is in charge of the response and under what legal system should those in charge operate?
We need to figure all of this out. We need national guidelines to determine when the military should get involved and when it’s a police matter, as well as what sorts of proportional responses are available in each instance. We need international agreements defining what counts as cyberwar and what does not. And, most of all right now, we need to tone down all the cyberwar rhetoric. Breaking into the offices of a company and photocopying their paperwork is not an act of war, no matter who did it. Neither is doing the same thing over the Internet. Let’s save the big words for when it matters.
This essay previously appeared on TheAtlantic.com.
Jack Goldsmith responded to this essay.
An analysis of the timestamps on some of the leaked documents shows that they were downloaded at USB 2.0 speeds — which implies an insider.
Our Gotnews.com investigation into the data that has been released by the “hackers” shows that someone at Sony was copying 182GB at minimum the night of the 21st — the very same day that Sony Pictures’ head of corporate communications, Charles Sipkins, publicly resigned from a $600,000 job. This could be a coincidence but it seems unlikely. Sipkins’s former client was NewsCorp and Sipkins was officially fired by Pascal’s husband over a snub by the Hollywood Reporter.
Two days later a malware bomb occurred.
We are left with several conclusions about the malware incident:
- The “hackers” did this leak physically at a Sony LAN workstation. Remember, Sony’s internal security is hard on the outside, squishy in the center, and so it wouldn’t be difficult for an insider to harm Sony by downloading the material in much the same way Bradley Manning or Edward Snowden did at their respective posts.
- If the “hackers” already had copies, then it’s possible they made a local copy the night of the 21st to prepare for publishing them as a link in the malware screens on the 24th.
Sony CEO Michael Lynton’s released emails go up to November 21, 2014. Lynton got the “God’sApstls” email demand for money on the 21st at 12:44pm.
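The “USB 2.0 speeds” inference is simple arithmetic: divide the bytes copied by the elapsed time between file timestamps and compare the implied rate against known transfer media. A minimal sketch of that calculation (the 90-minute window and the ~35 MB/s practical USB 2.0 ceiling are illustrative assumptions; only the 182 GB figure comes from the quoted analysis):

```python
def implied_rate_mbps(total_bytes: int, elapsed_seconds: float) -> float:
    """Average transfer rate in megabytes per second."""
    return total_bytes / elapsed_seconds / 1e6

# 182 GB copied in a hypothetical 90-minute window
rate = implied_rate_mbps(182 * 10**9, 90 * 60)

# Practical USB 2.0 throughput tops out around 35 MB/s; typical 2014
# internet uplinks were far slower. A rate near that ceiling is what
# suggests a local copy rather than remote exfiltration.
print(f"{rate:.1f} MB/s")
```

The point of the exercise is only that the implied rate is far too high for a remote download over a typical connection of the time, not proof of any particular medium.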
Other evidence implies insiders as well:
Working on the premise that it would take an insider with detailed knowledge of the Sony systems in order to gain access and navigate the breadth of the network to selectively exfiltrate the most sensitive of data, researchers from Norse Corporation are focusing on this group based in part on leaked human resources documents that included data on a series of layoffs at Sony that took place in the Spring of 2014.
The researchers tracked the activities of the ex-employee on underground forums where individuals in the U.S., Europe and Asia may have communicated prior to the attack.
The investigators believe the disgruntled former employee or employees may have joined forces with pro-piracy hacktivists, who have long resented Sony’s anti-piracy stance, to infiltrate the company’s networks.
I have been skeptical of the insider theory. It requires us to postulate the existence of a single person who has both insider knowledge and the requisite hacking skill. And since I don’t believe that insider knowledge was required, it seemed unlikely that the hackers had it. But these results point in that direction.
Pointing in a completely different direction, a linguistic analysis of the grammatical errors in the hacker communications implies that they are Russian speakers:
Taia Global, Inc. has examined the written evidence left by the attackers in an attempt to scientifically determine nationality through Native Language Identification (NLI). We tested for Korean, Mandarin Chinese, Russian, and German using an analysis of L1 interference. Our preliminary results show that Sony’s attackers were most likely Russian, possibly but not likely Korean and definitely not Mandarin Chinese or German.
The FBI still blames North Korea:
The FBI said Monday it was standing behind its assessment, adding that evidence doesn’t support any other explanations.
“The FBI has concluded the government of North Korea is responsible for the theft and destruction of data on the network of Sony Pictures Entertainment. Attribution to North Korea is based on intelligence from the FBI, the U.S. intelligence community, DHS, foreign partners and the private sector,” a spokeswoman said in a statement. “There is no credible information to indicate that any other individual is responsible for this cyber incident.”
Although it is now thinking that the North Koreans hired outside hackers:
U.S. investigators believe that North Korea likely hired hackers from outside the country to help with last month’s massive cyberattack against Sony Pictures, an official close to the investigation said on Monday.
As North Korea lacks the capability to conduct some elements of the sophisticated campaign by itself, the official said, U.S. investigators are looking at the possibility that Pyongyang “contracted out” some of the cyber work.
Even so, lots of security experts don’t believe that it’s North Korea. Marc Rogers picks the FBI’s evidence apart pretty well.
So in conclusion, there is NOTHING here that directly implicates the North Koreans. In fact, what we have is one single set of evidence that has been stretched out into 3 separate sections, each section being cited as evidence that the other section is clear proof of North Korean involvement. As soon as you discredit one of these pieces of evidence, the whole house of cards will come tumbling down.
But, as I wrote earlier this month:
Tellingly, the FBI’s press release says that the bureau’s conclusion is only based “in part” on these clues. This leaves open the possibility that the government has classified evidence that North Korea is behind the attack. The NSA has been trying to eavesdrop on North Korea’s government communications since the Korean War, and it’s reasonable to assume that its analysts are in pretty deep. The agency might have intelligence on the planning process for the hack. It might, say, have phone calls discussing the project, weekly PowerPoint status reports, or even Kim Jong Un’s sign-off on the plan.
On the other hand, maybe not. I could have written the same thing about Iraq’s weapons of mass destruction program in the run-up to the 2003 invasion of that country, and we all know how wrong the government was about that.
I also wrote that bluffing about this is a smart strategy for the US government:
…from a diplomatic perspective, it’s a smart strategy for the US to be overconfident in assigning blame for the cyberattacks. Beyond the politics of this particular attack, the long-term US interest is to discourage other nations from engaging in similar behavior. If the North Korean government continues denying its involvement, no matter what the truth is, and the real attackers have gone underground, then the US decision to claim omnipotent powers of attribution serves as a warning to others that they will get caught if they try something like this.
Of course, this strategy completely backfires if the attackers can be definitively shown not to be from North Korea. Stay tuned for more.
EDITED TO ADD (12/31): Lots of people in the comments are doubting the USB claim.
Last week, we learned about a striking piece of malware called Regin that has been infecting computer networks worldwide since 2008. It’s more sophisticated than any known criminal malware, and everyone believes a government is behind it. No country has taken credit for Regin, but there’s substantial evidence that it was built and operated by the United States.
This isn’t the first government malware discovered. GhostNet is believed to be Chinese. Red October and Turla are believed to be Russian. The Mask is probably Spanish. Stuxnet and Flame are probably from the U.S. All these were discovered in the past five years, and named by researchers who inferred their creators from clues such as who the malware targeted.
I dislike the “cyberwar” metaphor for espionage and hacking, but there is a war of sorts going on in cyberspace. Countries are using these weapons against each other. This affects all of us not just because we might be citizens of one of these countries, but because we are all potentially collateral damage. Most of the varieties of malware listed above have been used against nongovernment targets, such as national infrastructure, corporations, and NGOs. Sometimes these attacks are accidental, but often they are deliberate.
For their defense, civilian networks must rely on commercial security products and services. We largely rely on antivirus products from companies such as Symantec, Kaspersky, and F-Secure. These products continuously scan our computers, looking for malware, deleting it, and alerting us as they find it. We expect these companies to act in our interests, and never deliberately fail to protect us from a known threat.
This is why the recent disclosure of Regin is so disquieting. The first public announcement of Regin was from Symantec, on November 23. The company said that its researchers had been studying it for about a year, and announced its existence because they knew of another source that was going to announce it. That source was a news site, the Intercept, which described Regin and its U.S. connections the following day. Both Kaspersky and F-Secure soon published their own findings. Both stated that they had been tracking Regin for years. All three of the antivirus companies were able to find samples of it in their files since 2008 or 2009.
So why did these companies all keep Regin a secret for so long? And why did they leave us vulnerable for all this time?
To get an answer, we have to disentangle two things. Near as we can tell, all the companies had added signatures for Regin to their detection database long before last month. The VirusTotal website has a signature for Regin as of 2011. Both Microsoft security and F-Secure started detecting and removing it that year as well. Symantec has protected its users against Regin since 2013, although it certainly added the VirusTotal signature in 2011.
Entirely separately and seemingly independently, all of these companies decided not to publicly discuss Regin’s existence until after Symantec and the Intercept did so. Reasons given vary. Mikko Hypponen of F-Secure said that specific customers asked him not to discuss the malware that had been found on their networks. Fox IT, which was hired to remove Regin from the Belgian phone company Belgacom’s network, didn’t say anything about what it discovered because it “didn’t want to interfere with NSA/GCHQ operations.”
My guess is that none of the companies wanted to go public with an incomplete picture. Unlike criminal malware, government-grade malware can be hard to figure out. It’s much more elusive and complicated. It is constantly updated. Regin is made up of multiple modules — Fox IT called it “a full framework of a lot of species of malware” — making it even harder to figure out what’s going on. Regin has also been used sparingly, against only a select few targets, making it hard to get samples. When you make a press splash by identifying a piece of malware, you want to have the whole story. Apparently, no one felt they had that with Regin.
That is not a good enough excuse, though. As nation-state malware becomes more common, we will often lack the whole story. And as long as countries are battling it out in cyberspace, some of us will be targets and the rest of us might be unlucky enough to be sitting in the blast radius. Military-grade malware will continue to be elusive.
Right now, antivirus companies are probably sitting on incomplete stories about a dozen more varieties of government-grade malware. But they shouldn’t be. We want, and need, our antivirus companies to tell us everything they can about these threats as soon as they learn about them, and not wait until the release of a political story makes it impossible for them to remain silent.
This essay previously appeared in the MIT Technology Review.
The US Air Force is focusing on cyber deception next year:
Background: Deception is a deliberate act to conceal activity on our networks, create uncertainty and confusion against the adversary’s efforts to establish situational awareness and to influence and misdirect adversary perceptions and decision processes. Military deception is defined as “those actions executed to deliberately mislead adversary decision makers as to friendly military capabilities, intentions, and operations, thereby causing the adversary to take specific actions (or inactions) that will contribute to the accomplishment of the friendly mission.” Military forces have historically used techniques such as camouflage, feints, chaff, jammers, fake equipment, false messages or traffic to alter an enemy’s perception of reality. Modern day military planners need a capability that goes beyond the current state-of-the-art in cyber deception to provide a system or systems that can be employed by a commander when needed to enable deception to be inserted into defensive cyber operations.
Relevance and realism are the grand technical challenges to cyber deception. The application of the proposed technology must be relevant to operational and support systems within the DoD. The DoD operates within a highly standardized environment. Any technology that significantly disrupts or increases the cost to the standard of practice will not be adopted. If the technology is adopted, the defense system must appear legitimate to the adversary trying to exploit it.
Objective: To provide cyber-deception capabilities that could be employed by commanders to provide false information, confuse, delay, or otherwise impede cyber attackers to the benefit of friendly forces. Deception mechanisms must be incorporated in such a way that they are transparent to authorized users, and must introduce minimal functional and performance impacts, in order to disrupt our adversaries and not ourselves. As such, proposed techniques must consider how challenges relating to transparency and impact will be addressed. The security of such mechanisms is also paramount, so that their power is not co-opted by attackers against us for their own purposes. These techniques are intended to be employed for defensive purposes only on networks and systems controlled by the DoD.
Advanced techniques are needed with a focus on introducing varying deception dynamics in network protocols and services which can severely impede, confound, and degrade an attacker’s methods of exploitation and attack, thereby increasing the costs and limiting the benefits gained from the attack. The emphasis is on techniques that delay the attacker in the reconnaissance through weaponization stages of an attack and also aid defenses by forcing an attacker to move and act in a more observable manner. Techniques across the host and network layers or a hybrid thereof are of interest in order to provide AF cyber operations with effective, flexible, and rapid deployment options.
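One concrete example of the “delay the attacker during reconnaissance” idea is a tarpit: a service that answers scanners but then feeds them data as slowly as the protocol allows. A minimal sketch in the spirit of tools like endlessh (the port and 10-second delay are arbitrary choices for illustration, not anything from the solicitation):

```python
import socket
import threading
import time

def tarpit_client(conn: socket.socket) -> None:
    """Dribble an endless, bogus pre-banner to a connected scanner.

    RFC 4253 allows an SSH server to send arbitrary text lines before
    the real 'SSH-2.0-...' banner, so a scanner's SSH client will keep
    waiting, tying it up for minutes per connection."""
    try:
        while True:
            conn.sendall(b"%x\r\n" % int(time.time()))
            time.sleep(10)  # the delay is what makes it a tarpit
    except OSError:
        pass  # scanner gave up and disconnected
    finally:
        conn.close()

def serve(host: str = "0.0.0.0", port: int = 2222) -> None:
    """Accept connections forever, handing each one to the tarpit."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        threading.Thread(target=tarpit_client, args=(conn,), daemon=True).start()
```

Beyond wasting the attacker’s time, every connection to a port like this is a high-signal alert for defenders, which matches the solicitation’s goal of forcing attackers to act in a more observable manner.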
More discussion here.