Entries Tagged "network security"


How the SolarWinds Hackers Bypassed Duo’s Multi-Factor Authentication

This is interesting:

Toward the end of the second incident that Volexity worked involving Dark Halo, the actor was observed accessing the e-mail account of a user via OWA. This was unexpected for a few reasons, not least of which was that the targeted mailbox was protected by MFA. Logs from the Exchange server showed that the attacker provided username and password authentication as normal but was not challenged for a second factor through Duo. The logs from the Duo authentication server further showed that no attempts had been made to log into the account in question. Volexity was able to confirm that session hijacking was not involved and, through a memory dump of the OWA server, could also confirm that the attacker had presented a cookie tied to a Duo MFA session named duo-sid.

Volexity’s investigation into this incident determined the attacker had accessed the Duo integration secret key (akey) from the OWA server. This key then allowed the attacker to derive a pre-computed value to be set in the duo-sid cookie. After successful password authentication, the server evaluated the duo-sid cookie and determined it to be valid. This allowed an attacker with knowledge of a user account and password to completely bypass the MFA set on the account. It should be noted that this is not a vulnerability with the MFA provider; it underscores the need to ensure that all secrets associated with key integrations, such as those with an MFA provider, are changed following a breach.

Again, this is not a Duo vulnerability. From ArsTechnica:

While the MFA provider in this case was Duo, it just as easily could have involved any of its competitors. MFA threat modeling generally doesn’t include a complete system compromise of an OWA server. The level of access the hacker achieved was enough to neuter just about any defense.
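To make the mechanism concrete, here is a minimal sketch of the general pattern, assuming purely for illustration that the "MFA satisfied" cookie is an HMAC over a username and expiry keyed with the integration secret. This is not Duo's actual duo-sid format; it simply shows why stealing the integration secret from the server is equivalent to bypassing the second factor for every account.

```python
import hashlib
import hmac
import time

# Hypothetical sketch: if an "MFA already satisfied" cookie is simply an HMAC
# over session fields keyed with the integration secret, then anyone who steals
# that secret from the server can mint a valid cookie offline, with no second
# factor ever presented. The field layout here is illustrative only.

def forge_mfa_cookie(integration_secret: bytes, username: str, lifetime: int = 3600) -> str:
    """Build a cookie the server will accept, using only the stolen secret."""
    expiry = int(time.time()) + lifetime
    payload = f"{username}|{expiry}"
    sig = hmac.new(integration_secret, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def server_accepts(integration_secret: bytes, cookie: str) -> bool:
    """Server-side check: recompute the HMAC and compare. A cookie forged with
    the stolen secret is indistinguishable from a legitimate one."""
    username, expiry, sig = cookie.rsplit("|", 2)
    expected = hmac.new(integration_secret, f"{username}|{expiry}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()

# With the secret in hand, MFA is bypassed for any known username:
# secret = open("stolen_akey.bin", "rb").read()   # hypothetical exfiltrated key
# assert server_accepts(secret, forge_mfa_cookie(secret, "victim@example.com"))
```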

Posted on December 15, 2020 at 2:13 PM

FireEye Hacked

FireEye was hacked by—they believe—“a nation with top-tier offensive capabilities”:

During our investigation to date, we have found that the attacker targeted and accessed certain Red Team assessment tools that we use to test our customers’ security. These tools mimic the behavior of many cyber threat actors and enable FireEye to provide essential diagnostic security services to our customers. None of the tools contain zero-day exploits. Consistent with our goal to protect the community, we are proactively releasing methods and means to detect the use of our stolen Red Team tools.

We are not sure if the attacker intends to use our Red Team tools or to publicly disclose them. Nevertheless, out of an abundance of caution, we have developed more than 300 countermeasures for our customers, and the community at large, to use in order to minimize the potential impact of the theft of these tools.

We have seen no evidence to date that any attacker has used the stolen Red Team tools. We, as well as others in the security community, will continue to monitor for any such activity. At this time, we want to ensure that the entire security community is both aware and protected against the attempted use of these Red Team tools. Specifically, here is what we are doing:

  • We have prepared countermeasures that can detect or block the use of our stolen Red Team tools.
  • We have implemented countermeasures into our security products.
  • We are sharing these countermeasures with our colleagues in the security community so that they can update their security tools.
  • We are making the countermeasures publicly available on our GitHub.
  • We will continue to share and refine any additional mitigations for the Red Team tools as they become available, both publicly and directly with our security partners.

Consistent with a nation-state cyber-espionage effort, the attacker primarily sought information related to certain government customers. While the attacker was able to access some of our internal systems, at this point in our investigation, we have seen no evidence that the attacker exfiltrated data from our primary systems that store customer information from our incident response or consulting engagements, or the metadata collected by our products in our dynamic threat intelligence systems. If we discover that customer information was taken, we will contact them directly.

From the New York Times:

The hack was the biggest known theft of cybersecurity tools since those of the National Security Agency were purloined in 2016 by a still-unidentified group that calls itself the ShadowBrokers. That group dumped the N.S.A.’s hacking tools online over several months, handing nation-states and hackers the “keys to the digital kingdom,” as one former N.S.A. operator put it. North Korea and Russia ultimately used the N.S.A.’s stolen weaponry in destructive attacks on government agencies, hospitals and the world’s biggest conglomerates, at a cost of more than $10 billion.

The N.S.A.’s tools were most likely more useful than FireEye’s since the U.S. government builds purpose-made digital weapons. FireEye’s Red Team tools are essentially built from malware that the company has seen used in a wide range of attacks.

Russia is presumed to be the attacker.

Reuters article. Boing Boing post. Slashdot thread. Wired article.

Posted on December 9, 2020 at 6:36 AM

Router Vulnerability and the VPNFilter Botnet

On May 25, the FBI asked us all to reboot our routers. The story behind this request is one of sophisticated malware and unsophisticated home-network security, and it’s a harbinger of the sorts of pervasive threats (from nation-states, criminals and hackers) that we should expect in coming years.

VPNFilter is a sophisticated piece of malware that infects mostly older home and small-office routers made by Linksys, MikroTik, Netgear, QNAP and TP-Link. (For a list of specific models, click here.) It’s an impressive piece of work. It can eavesdrop on traffic passing through the router (specifically, log-in credentials and SCADA traffic, which is a networking protocol that controls power plants, chemical plants and industrial systems), attack other targets on the Internet, and destructively “kill” its infected device. It is one of a very few pieces of malware that can survive a reboot, even though that’s what the FBI has requested. It has a number of other capabilities, and it can be remotely updated to provide still others. More than 500,000 routers in at least 54 countries have been infected since 2016.

Because of the malware’s sophistication, VPNFilter is believed to be the work of a government. The FBI suggested the Russian government was involved for two circumstantial reasons. One, a piece of the code is identical to one found in another piece of malware, called BlackEnergy, that was used in the December 2015 attack against Ukraine’s power grid. Russia is believed to be behind that attack. And two, the majority of those 500,000 infections are in Ukraine and controlled by a separate command-and-control server. There might also be classified evidence, as an FBI affidavit in this matter identifies the group behind VPNFilter as Sofacy, also known as APT28 and Fancy Bear. That’s the group behind a long list of attacks, including the 2016 hack of the Democratic National Committee.

Two companies, Cisco and Symantec, seem to have been working with the FBI during the past two years to track this malware as it infected ever more routers. The infection mechanism isn’t known, but we believe it targets known vulnerabilities in these older routers. Pretty much no one patches their routers, so the vulnerabilities have remained, even if they were fixed in new models from the same manufacturers.

On May 30, the FBI seized control of toknowall.com, a critical VPNFilter command-and-control server. This is called “sinkholing,” and serves to disrupt a critical part of this system. When infected routers contact toknowall.com, they will no longer be contacting a server owned by the malware’s creators; instead, they’ll be contacting a server owned by the FBI. This doesn’t entirely neutralize the malware, though. It will stay on the infected routers through reboot, and the underlying vulnerabilities remain, making the routers susceptible to reinfection with a variant controlled by a different server.

If you want to make sure your router is no longer infected, you need to do more than reboot it, the FBI’s warning notwithstanding. You need to reset the router to its factory settings. That means you need to reconfigure it for your network, which can be a pain if you’re not sophisticated in these matters. If you want to make sure your router cannot be reinfected, you need to update the firmware with any security patches from the manufacturer. This is harder to do and may strain your technical capabilities, though it’s ridiculous that routers don’t automatically download and install firmware updates on their own. Some of these models probably do not even have security patches available. Honestly, the best thing to do if you have one of the vulnerable models is to throw it away and get a new one. (Your ISP will probably send you a new one free if you claim that it’s not working properly. And you should have a new one, because if your current one is on the list, it’s at least 10 years old.)

So if it won’t clear out the malware, why is the FBI asking us to reboot our routers? It’s mostly just to get a sense of how bad the problem is. The FBI now controls toknowall.com. When an infected router gets rebooted, it connects to that server to get fully reinfected, and when it does, the FBI will know. Rebooting will give it a better idea of how many devices out there are infected.
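A sinkhole of this kind is conceptually simple. Below is a minimal sketch, assuming hypothetically that the malware phones home over plain HTTP; it illustrates the idea only and is not the FBI's actual setup. Once the seized domain points at a server the defenders control, every callback reveals one more infected address.

```python
# Minimal sinkhole sketch: log which addresses phone home to the seized domain
# so the scale of the infection can be measured. Illustrative only.
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

infected = set()  # unique source addresses seen phoning home

class SinkholeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]
        infected.add(ip)
        print(f"{datetime.now(timezone.utc).isoformat()} callback from {ip} "
              f"path={self.path} (unique infected so far: {len(infected)})")
        self.send_response(404)  # give the malware nothing useful back
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # callbacks are logged above; suppress the default access log

if __name__ == "__main__":
    # Listening on 8080 for illustration; a real sinkhole would sit behind the
    # seized domain's DNS record.
    HTTPServer(("0.0.0.0", 8080), SinkholeHandler).serve_forever()
```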

Should you do it? It can’t hurt.

Internet of Things malware isn’t new. The 2016 Mirai botnet, for example, created by a lone hacker and not a government, targeted vulnerabilities in Internet-connected digital video recorders and webcams. Other malware has targeted Internet-connected thermostats. Lots of malware targets home routers. These devices are particularly vulnerable because they are often designed by ad hoc teams without a lot of security expertise, stay around in networks far longer than our computers and phones, and have no easy way to patch them.

It wouldn’t be surprising if the Russians targeted routers to build a network of infected computers for follow-on cyber operations. I’m sure many governments are doing the same. As long as we allow these insecure devices on the Internet (and short of security regulations, there’s no way to stop them), we’re going to be vulnerable to this kind of malware.

And next time, the command-and-control server won’t be so easy to disrupt.

This essay previously appeared in the Washington Post.

EDITED TO ADD: The malware is more capable than we previously thought.

Posted on June 11, 2018 at 6:19 AM

Inmates Secretly Build and Network Computers while in Prison

This is kind of amazing:

Inmates at a medium-security Ohio prison secretly assembled two functioning computers, hid them in the ceiling, and connected them to the Marion Correctional Institution’s network. The hard drives were loaded with pornography, a Windows proxy server, VPN, VOIP and anti-virus software, the Tor browser, password hacking and e-mail spamming tools, and the open source packet analyzer Wireshark.

Another article.

Clearly there’s a lot about prison security, or the lack thereof, that I don’t know. This article reveals some of it.

Posted on May 30, 2017 at 12:47 PM

NSA/GCHQ Exploits against Juniper Networking Equipment

The Intercept just published, as part of this article, a 2011 GCHQ document outlining its exploit capabilities against Juniper networking equipment, including routers and NetScreen firewalls.

GCHQ currently has capabilities against:

  • Juniper NetScreen Firewalls models Ns5gt, N25, NS50, NS500, NS204, NS208, NS5200, NS5000, SSG5, SSG20, SSG140, ISG 1000, ISG 2000. Some reverse engineering maybe required depending on firmware revisions.
  • Juniper Routers: M320 is currently being worked on and we would expect to have full support by the end of 2010.
  • No other models are currently supported.
  • Juniper technology sharing with NSA improved dramatically during CY2010 to exploit several target networks where GCHQ had access primacy.

Yes, the document said “end of 2010” even though the document is dated February 3, 2011.

This doesn’t have much to do with the Juniper backdoor currently in the news, but the document does provide even more evidence that (despite what the government says) the NSA hoards vulnerabilities in commonly used software for attack purposes instead of improving security for everyone by disclosing them.

Note: In case anyone is researching this issue, here is my complete list of useful links on various different aspects of the ongoing debate.

EDITED TO ADD: In thinking about the equities process, it’s worth differentiating among three different things: bugs, vulnerabilities, and exploits. Bugs are plentiful in code, but not all bugs can be turned into vulnerabilities. And not all vulnerabilities can be turned into exploits. Exploits are what matter; they’re what everyone uses to compromise our security. Fixing bugs and vulnerabilities is important because they could potentially be turned into exploits.

I think the US government deliberately clouds the issue when they say that they disclose almost all bugs they discover, ignoring the much more important question of how often they disclose exploits they discover. What this document shows is that—despite their insistence that they prioritize security over surveillance—they like to hoard exploits against commonly used network equipment.

Posted on December 28, 2015 at 6:54 AM

Backdoors Won't Solve Comey's Going Dark Problem

At the Aspen Security Forum two weeks ago, James Comey (and others) explicitly talked about the “going dark” problem, describing the specific scenario they are concerned about. Maybe others have heard the scenario before, but it was a first for me. It centers around ISIL operatives abroad and ISIL-inspired terrorists here in the US. The FBI knows who the Americans are, can get a court order to carry out surveillance on their communications, but cannot eavesdrop on the conversations, because they are encrypted. They can get the metadata, so they know who is talking to whom, but they can’t find out what’s being said.

“ISIL’s M.O. is to broadcast on Twitter, get people to follow them, then move them to Twitter Direct Messaging” to evaluate if they are a legitimate recruit, he said. “Then they’ll move them to an encrypted mobile-messaging app so they go dark to us.”

[…]

The FBI can get court-approved access to Twitter exchanges, but not to encrypted communication, Comey said. Even when the FBI demonstrates probable cause and gets a judicial order to intercept that communication, it cannot break the encryption for technological reasons, according to Comey.

If this is what Comey and the FBI are actually concerned about, they’re getting bad advice—because their proposed solution won’t solve the problem. Comey wants communications companies to give them the capability to eavesdrop on conversations without the conversants’ knowledge or consent; that’s the “backdoor” we’re all talking about. But the problem isn’t that most encrypted communications platforms are securely encrypted, or even that some are—the problem is that there exists at least one securely encrypted communications platform on the planet that ISIL can use.

Imagine that Comey got what he wanted. Imagine that iMessage and Facebook and Skype and everything else US-made had his backdoor. The ISIL operative would tell his potential recruit to use something else, something secure and non-US-made. Maybe an encryption program from Finland, or Switzerland, or Brazil. Maybe Mujahedeen Secrets. Maybe anything. (Sure, some of these will have flaws, and they’ll be identifiable by their metadata, but the FBI already has the metadata, and the better software will rise to the top.) As long as there is something that the ISIL operative can move them to, some software that the American can download and install on their phone or computer, or hardware that they can buy from abroad, the FBI still won’t be able to eavesdrop.

And by pushing these ISIL operatives to non-US platforms, the FBI loses access to the metadata it would otherwise have.

Convincing US companies to install backdoors isn’t enough; in order to solve this going dark problem, the FBI has to ensure that an American can only use backdoored software. And the only way to do that is to prohibit the use of non-backdoored software, which is the sort of thing that the UK’s David Cameron said he wanted for his country in January:

But the question is are we going to allow a means of communications which it simply isn’t possible to read. My answer to that question is: no, we must not.

And that, of course, is impossible. Jonathan Zittrain explained why. And Cory Doctorow outlined what trying would entail:

For David Cameron’s proposal to work, he will need to stop Britons from installing software that comes from software creators who are out of his jurisdiction. The very best in secure communications are already free/open source projects, maintained by thousands of independent programmers around the world. They are widely available, and thanks to things like cryptographic signing, it is possible to download these packages from any server in the world (not just big ones like Github) and verify, with a very high degree of confidence, that the software you’ve downloaded hasn’t been tampered with.

[…]

This, then, is what David Cameron is proposing:

* All Britons’ communications must be easy for criminals, voyeurs and foreign spies to intercept.

* Any firms within reach of the UK government must be banned from producing secure software.

* All major code repositories, such as Github and Sourceforge, must be blocked.

* Search engines must not answer queries about web-pages that carry secure software.

* Virtually all academic security work in the UK must cease—security research must only take place in proprietary research environments where there is no onus to publish one’s findings, such as industry R&D and the security services.

* All packets in and out of the country, and within the country, must be subject to Chinese-style deep-packet inspection and any packets that appear to originate from secure software must be dropped.

* Existing walled gardens (like iOS and games consoles) must be ordered to ban their users from installing secure software.

* Anyone visiting the country from abroad must have their smartphones held at the border until they leave.

* Proprietary operating system vendors (Microsoft and Apple) must be ordered to redesign their operating systems as walled gardens that only allow users to run software from an app store, which will not sell or give secure software to Britons.

* Free/open source operating systems—that power the energy, banking, ecommerce, and infrastructure sectors—must be banned outright.

As extreme as it reads, without all of that, the ISIL operative would be able to communicate securely with his potential American recruit. And all of this is not going to happen.
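Doctorow's point about cryptographic signing is worth making concrete. The sketch below verifies a detached Ed25519 signature on a downloaded file using the Python cryptography package; the key and file names are hypothetical. Because verification needs only the publisher's public key, it works no matter which server delivered the file, which is why blocking the big repositories would accomplish so little.

```python
# Sketch of detached-signature verification: with the publisher's public key,
# anyone can check a download from any mirror. Uses the third-party
# "cryptography" package; file and key names are hypothetical.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_download(file_path: str, sig_path: str, pubkey_path: str) -> bool:
    """Return True only if the signature over the file checks out."""
    with open(pubkey_path, "rb") as f:
        pub = Ed25519PublicKey.from_public_bytes(f.read())  # 32 raw key bytes
    with open(file_path, "rb") as f:
        data = f.read()
    with open(sig_path, "rb") as f:
        signature = f.read()
    try:
        pub.verify(signature, data)  # raises InvalidSignature if tampered with
        return True
    except InvalidSignature:
        return False

# Example (hypothetical names):
# ok = verify_download("secure-messenger.tar.gz",
#                      "secure-messenger.tar.gz.sig",
#                      "publisher_ed25519.pub")
```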

Last week, former NSA director Mike McConnell, former DHS secretary Michael Chertoff, and former deputy defense secretary William Lynn published a Washington Post op-ed opposing backdoors in encryption software. They wrote:

Today, with almost everyone carrying a networked device on his or her person, ubiquitous encryption provides essential security. If law enforcement and intelligence organizations face a future without assured access to encrypted communications, they will develop technologies and techniques to meet their legitimate mission goals.

I believe this is true. Already one is being talked about in the academic literature: lawful hacking.

Perhaps the FBI’s reluctance to accept this is based on their belief that all encryption software comes from the US, and therefore is under their influence. Back in the 1990s, during the first Crypto Wars, the US government had a similar belief. To convince them otherwise, George Washington University surveyed the cryptography market in 1999 and found that there were over 500 companies in 70 countries manufacturing or distributing non-US cryptography products. Maybe we need a similar study today.

This essay previously appeared on Lawfare.

Posted on July 31, 2015 at 6:08 AM

Duqu 2.0

Kaspersky Labs has discovered and publicized details of a new nation-state surveillance malware system, called Duqu 2.0. It’s being attributed to Israel.

There are a lot of details, and I recommend reading them. There was probably a Kerberos zero-day vulnerability involved, allowing the attackers to send updates to Kaspersky’s clients. There’s code specifically targeting anti-virus software, both Kaspersky and others. The system includes anti-sniffer defenses and packet-injection code. It’s designed to reside in RAM so that it better avoids detection. This is all very sophisticated.

Eugene Kaspersky wrote an op-ed condemning the attack—and making his company look good—and almost, but not quite, comparing attacking his company to attacking the Red Cross:

Historically companies like mine have always played an important role in the development of IT. When the number of Internet users exploded, cybercrime skyrocketed and became a serious threat to the security of billions of Internet users and connected devices. Law enforcement agencies were not prepared for the advent of the digital era, and private security companies were alone in providing protection against cybercrime, both to individuals and to businesses. The security community has been something like a group of doctors for the Internet; we even share some vocabulary with the medical profession: we talk about ‘viruses’, ‘disinfection’, etc. And obviously we’re helping law enforcement develop its skills to fight cybercrime more effectively.

One thing that struck me from a very good Wired article on Duqu 2.0:

Raiu says each of the infections began within three weeks before the P5+1 meetings occurred at that particular location. “It cannot be coincidental,” he says. “Obviously the intention was to spy on these meetings.”

Initially Kaspersky was unsure all of these infections were related, because one of the victims appeared not to be part of the nuclear negotiations. But three weeks after discovering the infection, Raiu says, news outlets began reporting that negotiations were already taking place at the site. “Somehow the attackers knew in advance that this was one of the [negotiation] locations,” Raiu says.

Exactly how the attackers spied on the negotiations is unclear, but the malware contained modules for sniffing WiFi networks and hijacking email communications. But Raiu believes the attackers were more sophisticated than this. “I don’t think their style is to infect people connecting to the WiFi. I think they were after some kind of room surveillance—to hijack the audio through the teleconference or hotel phone systems.”

Those meetings are talks about Iran’s nuclear program, which we previously believed Israel spied on. Look at the details of the attack, though: hack the hotel’s Internet, get into the phone system, and turn the hotel phones into room bugs. Very clever.

Posted on June 12, 2015 at 6:18 AM

Lessons from the Sony Hack

Earlier this month, a mysterious group that calls itself Guardians of Peace hacked into Sony Pictures Entertainment’s computer systems and began revealing many of the Hollywood studio’s best-kept secrets, from details about unreleased movies to embarrassing emails (notably some racist notes from Sony bigwigs about President Barack Obama’s presumed movie-watching preferences) to the personnel data of employees, including salaries and performance reviews. The Federal Bureau of Investigation now says it has evidence that North Korea was behind the attack, and Sony Pictures pulled its planned release of “The Interview,” a satire targeting that country’s dictator, after the hackers made some ridiculous threats about terrorist violence.

Your reaction to the massive hacking of such a prominent company will depend on whether you’re fluent in information-technology security. If you’re not, you’re probably wondering how in the world this could happen. If you are, you’re aware that this could happen to any company (though it is still amazing that Sony made it so easy).

To understand any given episode of hacking, you need to understand who your adversary is. I’ve spent decades dealing with Internet hackers (as I do now at my current firm), and I’ve learned to separate opportunistic attacks from targeted ones.

You can characterize attackers along two axes: skill and focus. Most attacks are low-skill and low-focus—people using common hacking tools against thousands of networks world-wide. These low-end attacks include sending spam out to millions of email addresses, hoping that someone will fall for it and click on a poisoned link. I think of them as the background radiation of the Internet.

High-skill, low-focus attacks are more serious. These include the more sophisticated attacks using newly discovered “zero-day” vulnerabilities in software, systems and networks. This is the sort of attack that affected Target, J.P. Morgan Chase and most of the other commercial networks that you’ve heard about in the past year or so.

But even scarier are the high-skill, high-focus attacks—the type that hit Sony. This includes sophisticated attacks seemingly run by national intelligence agencies, using such spying tools as Regin and Flame, which many in the IT world suspect were created by the U.S.; Turla, a piece of malware that many blame on the Russian government; and a huge snooping effort called GhostNet, which spied on the Dalai Lama and Asian governments, leading many of my colleagues to blame China. (We’re mostly guessing about the origins of these attacks; governments refuse to comment on such issues.) China has also been accused of trying to hack into the New York Times in 2010, and in May, Attorney General Eric Holder announced the indictment of five Chinese military officials for cyberattacks against U.S. corporations.

This category also includes private actors, including the hacker group known as Anonymous, which mounted a Sony-style attack against the Internet-security firm HBGary Federal, and the unknown hackers who stole racy celebrity photos from Apple’s iCloud and posted them. If you’ve heard the IT-security buzz phrase “advanced persistent threat,” this is it.

There is a key difference among these kinds of hacking. In the first two categories, the attacker is an opportunist. The hackers who penetrated Home Depot’s networks didn’t seem to care much about Home Depot; they just wanted a large database of credit-card numbers. Any large retailer would do.

But a skilled, determined attacker wants to attack a specific victim. The reasons may be political: to hurt a government or leader enmeshed in a geopolitical battle. Or ethical: to punish an industry that the hacker abhors, like big oil or big pharma. Or maybe the victim is just a company that hackers love to hate. (Sony falls into this category: It has been infuriating hackers since 2005, when the company put malicious software on its CDs in a failed attempt to prevent copying.)

Low-focus attacks are easier to defend against: If Home Depot’s systems had been better protected, the hackers would have just moved on to an easier target. With attackers who are highly skilled and highly focused, however, what matters is whether a targeted company’s security is superior to the attacker’s skills, not just to the security measures of other companies. Often, it isn’t. We’re much better at such relative security than we are at absolute security.

That is why security experts aren’t surprised by the Sony story. We know people who do penetration testing for a living—real, no-holds-barred attacks that mimic a full-on assault by a dogged, expert attacker—and we know that the expert always gets in. Against a sufficiently skilled, funded and motivated attacker, all networks are vulnerable. But good security makes many kinds of attack harder, costlier and riskier. Against attackers who aren’t sufficiently skilled, good security may protect you completely.

It is hard to put a dollar value on security that is strong enough to assure you that your embarrassing emails and personnel information won’t end up posted online somewhere, but Sony clearly failed here. Its security turned out to be subpar. They didn’t have to leave so much information exposed. And they didn’t have to be so slow detecting the breach, giving the attackers free rein to wander about and take so much stuff.

For those worried that what happened to Sony could happen to you, I have two pieces of advice. The first is for organizations: take this stuff seriously. Security is a combination of protection, detection and response. You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.

The time to start is before the attack hits: Sony would have fared much better if its executives simply hadn’t made racist jokes about Mr. Obama or insulted its stars—or if their response systems had been agile enough to kick the hackers out before they grabbed everything.

My second piece of advice is for individuals. The worst invasion of privacy from the Sony hack didn’t happen to the executives or the stars; it happened to the blameless random employees who were just using their company’s email system. Because of that, they’ve had their most personal conversations—gossip, medical conditions, love lives—exposed. The press may not have divulged this information, but their friends and relatives peeked at it. Hundreds of personal tragedies must be unfolding right now.

This could be any of us. We have no choice but to entrust companies with our intimate conversations: on email, on Facebook, by text and so on. We have no choice but to entrust the retailers that we use with our financial details. And we have little choice but to use cloud services such as iCloud and Google Docs.

So be smart: Understand the risks. Know that your data are vulnerable. Opt out when you can. And agitate for government intervention to ensure that organizations protect your data as well as you would. Like many areas of our hyper-technical world, this isn’t something markets can fix.

This essay previously appeared on the Wall Street Journal CIO Journal.

EDITED TO ADD (12/21): Slashdot thread.

EDITED TO ADD (1/14): Sony has had more than 50 security breaches in the past fifteen years.

Posted on December 19, 2014 at 12:44 PM

Chinese Hacking of the US

Chinese hacking of American computer networks is old news. For years we’ve known about their attacks against U.S. government and corporate targets. We’ve seen detailed reports of how they hacked The New York Times. Google has detected them going after Gmail accounts of dissidents. They’ve built sophisticated worldwide eavesdropping networks. These hacks target both military secrets and corporate intellectual property. They’re perpetrated by a combination of state, state-sponsored and state-tolerated hackers. It’s been going on for years.

On Monday, the Justice Department indicted five Chinese hackers in absentia, all associated with the Chinese military, for stealing corporate secrets from U.S. energy, metals and manufacturing companies. It’s entirely for show; the odds that the Chinese are going to send these people to the U.S. to stand trial are zero. But it does move what had been mostly a technical security problem into the world of diplomacy and foreign policy. By doing this, the U.S. government is taking a very public stand and saying “enough.”

The problem with that stand is that we’ve been doing much the same thing to China. Documents revealed by the whistleblower Edward Snowden show that the NSA has penetrated Chinese government and commercial networks, and is exfiltrating—that’s NSA talk for stealing—an enormous amount of secret data. We’ve hacked the networking hardware of one of their own companies, Huawei. We’ve intercepted networking equipment being sent there and installed monitoring devices. We’ve been listening in on their private communications channels.

The only difference between the U.S. and China’s actions is that the U.S. doesn’t engage in direct industrial espionage. That is, we don’t steal secrets from Chinese companies and pass them directly to U.S. competitors. But we do engage in economic espionage; we steal secrets from Chinese companies for an advantage in government trade negotiations, which directly benefits U.S. competitors. We might think this difference is important, but other countries are not as impressed with our nuance.

Already the Chinese are retaliating against the U.S. actions with rhetoric of their own. I don’t know the Chinese expression for ‘pot calling the kettle black,’ but it certainly fits in this case.

Again, none of this is new. The U.S. and the Chinese have been conducting electronic espionage on each other throughout the Cold War, and there’s no reason to think it’s going to change anytime soon. What’s different now is the ease with which the two countries can do this safely and remotely, over the Internet, as well as the massive amount of information that can be stolen with a few computer commands.

On the Internet today, it is much easier to attack systems and break into them than it is to defend those systems against attack, so the advantage is to the attacker. This is true for a combination of reasons: the ability of an attacker to concentrate his attack, the nature of vulnerabilities in computer systems, poor software quality and the enormous complexity of computer systems.

The computer security industry is used to coping with criminal attacks. In general, such attacks are untargeted. Criminals might have broken into Target’s network last year and stolen 40 million credit and debit card numbers, but they would have been happy with any retailer’s large credit card database. If Target’s security had been better than its competitors’, the criminals would have gone elsewhere. In this way, security is relative.

The Chinese attacks are different. For whatever reason, the government hackers wanted certain information inside the networks of Alcoa World Alumina, Westinghouse Electric, Allegheny Technologies, U.S. Steel, United Steelworkers Union and SolarWorld. It wouldn’t have mattered how those companies’ security compared with other companies; all that mattered was whether it was better than the ability of the attackers.

This is a fundamentally different security model—often called APT or Advanced Persistent Threat—and one that is much more difficult to defend against.

In a sense, American corporations are collateral damage in this battle of espionage between the U.S. and China. Taking the battle from the technical sphere into the foreign policy sphere might be a good idea, but it will work only if we have some moral high ground from which to demand that others not spy on us. As long as we run the largest surveillance network in the world and hack computer networks in foreign countries, we’re going to have trouble convincing others not to attempt the same on us.

This essay previously appeared on Time.com.

Posted on June 2, 2014 at 6:37 AM
