Entries Tagged "Cisco"


Cisco Can’t Stop Using Hard-Coded Passwords

There’s a new Cisco vulnerability in its Emergency Responder product:

This vulnerability is due to the presence of static user credentials for the root account that are typically reserved for use during development. An attacker could exploit this vulnerability by using the account to log in to an affected system. A successful exploit could allow the attacker to log in to the affected system and execute arbitrary commands as the root user.

This is not the first time Cisco products have had hard-coded passwords made public. You’d think it would learn.
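For readers outside product security, here is a minimal sketch of the anti-pattern the advisory describes, and one safer alternative. The credential string, environment variable, and function names are invented for illustration; none of them come from the Cisco advisory or product.

    import hashlib
    import hmac
    import os

    # Anti-pattern: a static "development" root credential shipped in the code.
    # Anyone who extracts this string from the firmware can log in as root.
    DEV_ROOT_PASSWORD = "changeme-dev"  # hypothetical value

    def login_insecure(username: str, password: str) -> bool:
        return username == "root" and password == DEV_ROOT_PASSWORD

    # Safer pattern: compare against a per-device secret provisioned at install
    # time and stored only as a hash, never as a constant baked into the build.
    def login_safer(username: str, password: str) -> bool:
        expected_hash = os.environ.get("ROOT_PASSWORD_SHA256", "")
        supplied_hash = hashlib.sha256(password.encode()).hexdigest()
        return username == "root" and hmac.compare_digest(supplied_hash, expected_hash)

The point is not the particular mechanism but the property: once there is no shipped constant, there is no single secret that unlocks every deployed device.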

Posted on October 11, 2023 at 7:04 AM

Thrangrycat: A Serious Cisco Vulnerability

Summary:

Thrangrycat is caused by a series of hardware design flaws within Cisco’s Trust Anchor module. First commercially introduced in 2013, Cisco Trust Anchor module (TAm) is a proprietary hardware security module used in a wide range of Cisco products, including enterprise routers, switches and firewalls. TAm is the root of trust that underpins all other Cisco security and trustworthy computing mechanisms in these devices. Thrangrycat allows an attacker to make persistent modification to the Trust Anchor module via FPGA bitstream modification, thereby defeating the secure boot process and invalidating Cisco’s chain of trust at its root. While the flaws are based in hardware, Thrangrycat can be exploited remotely without any need for physical access. Since the flaws reside within the hardware design, it is unlikely that any software security patch will fully resolve the fundamental security vulnerability.
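To see why a flaw at this layer is so damaging, here is a deliberately simplified sketch of a hash-based chain of trust. It is not Cisco’s secure-boot implementation; the stage names and image contents are made up. Every stage is only as trustworthy as the component that stores and checks the expected hashes, so if that component (the analogue of the Trust Anchor) can be modified, every later verification can be made to pass.

    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    # The root of trust stores the expected hashes of later boot stages.
    trusted_hashes = {
        "bootloader": digest(b"known-good bootloader image"),
        "os": digest(b"known-good OS image"),
    }

    def verify_stage(name: str, image: bytes) -> bool:
        return digest(image) == trusted_hashes[name]

    # Normal boot: the genuine image verifies.
    assert verify_stage("bootloader", b"known-good bootloader image")

    # If an attacker can rewrite the root's stored hashes (the rough analogue of
    # modifying the Trust Anchor's FPGA bitstream), a malicious image also
    # "verifies", and the rest of the chain happily builds on it.
    trusted_hashes["bootloader"] = digest(b"malicious bootloader image")
    assert verify_stage("bootloader", b"malicious bootloader image")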

From a news article:

Thrangrycat is awful for two reasons. First, if a hacker exploits this weakness, they can do whatever they want to your routers. Second, the attack can happen remotely: it’s a software vulnerability. But the fix can only be applied at the hardware level. Like, physical router by physical router. In person. Yeesh.

That said, Thrangrycat only works once you have administrative access to the device. You need a two-step attack in order to get Thrangrycat working. Attack #1 gets you remote administrative access, Attack #2 is Thrangrycat. Attack #2 can’t happen without Attack #1. Cisco can protect you from Attack #1 by sending out a software update. If your I.T. people have your systems well secured and are applying updates and patches consistently and you’re not a regular target of nation-state actors, you’re relatively safe from Attack #1, and therefore, pretty safe from Thrangrycat.

Unfortunately, Attack #1 is a garden variety vulnerability. Many systems don’t even have administrative access configured correctly. There’s opportunity for Thrangrycat to be exploited.

And from Boing Boing:

Thrangrycat relies on attackers being able to run processes as the system’s administrator, and Red Balloon, the security firm that disclosed the vulnerability, also revealed a defect that allows attackers to run code as admin.

It’s tempting to dismiss the attack on the trusted computing module as a ho-hum flourish: after all, once an attacker has root on your system, all bets are off. But the promise of trusted computing is that computers will be able to detect and undo this kind of compromise, by using a separate, isolated computer to investigate and report on the state of the main system (Huang and Snowden call this an introspection engine). Once this system is compromised, it can be forced to give false reports on the state of the system: for example, it might report that its OS has been successfully updated to patch a vulnerability when really the update has just been thrown away.

As Charlie Warzel and Sarah Jeong discuss in the New York Times, this is an attack that can be executed remotely, but can only be detected by someone physically in the presence of the affected system (and only then after a very careful inspection, and there may still be no way to do anything about it apart from replacing the system or at least the compromised component).

Posted on May 23, 2019 at 11:52 AM

The NSA Is Hoarding Vulnerabilities

The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others’ computers. Those vulnerabilities aren’t being reported, and aren’t getting fixed, making your computers and networks unsafe.

On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn’t hacked; what probably happened was that a “staging server” for NSA cyberweapons—that is, a server the NSA was using to mask its surveillance activities—was hacked in 2013.

The NSA inadvertently resecured itself during what happened, coincidentally, to be the early weeks of the Snowden document release. The people behind the leak used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: “!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?”

Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee—or other high-profile data breaches—the Russians will expose NSA exploits in turn.

But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and “exploit code” that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, WatchGuard, and Juniper—systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.

All of them are examples of the NSA—despite what it and other representatives of the US government say—prioritizing its ability to conduct surveillance over our security. Here’s one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls’ security. Cisco hasn’t sold these firewalls since 2009, but they’re still in use today.
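As a rough illustration of why a memory exposure like that is so valuable, here is a small sketch of the analyst side: sweeping a captured memory dump for printable strings, the way one might hunt for passwords or pre-shared keys. The file name, minimum string length, and keywords are arbitrary choices, not details taken from the leaked tool.

    import re
    import sys

    def printable_strings(blob: bytes, min_len: int = 6):
        # Same idea as the Unix "strings" utility: runs of printable ASCII bytes.
        for match in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, blob):
            yield match.group().decode("ascii")

    if __name__ == "__main__":
        with open(sys.argv[1], "rb") as f:  # e.g., a captured memory dump
            for s in printable_strings(f.read()):
                if "key" in s.lower() or "pass" in s.lower():
                    print(s)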

Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.

Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard “zero days,” the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is “a clear national security or law enforcement” use).

Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn’t stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.

The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.

Hoarding zero-day vulnerabilities is a bad idea. It means that we’re all less secure. When Edward Snowden exposed many of the NSA’s surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It’s an inter-agency process, and it’s complicated.

There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see that it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can’t use it to attack other systems.

There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there’s the bigger question of what qualifies in the NSA’s eyes as a “vulnerability.”

Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can’t use, and doing so gets its numbers up; it’s good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.

Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems, whoever “they” are. Either everyone is more secure, or everyone is more vulnerable.

Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn’t rely on zero days—very much, anyway.

Earlier this year at a security conference, Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) organization—basically the country’s chief hacker—gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

The distinction he’s referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.

A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for “nobody but us.” Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It’s an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.

The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone—another government, cybercriminals, amateur hackers—could discover, as evidenced by the fact that many of them were discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.

So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.

If there are any vulnerabilities that—according to the standards established by the White House and the NSA—should have been disclosed and fixed, it’s these. That they have not been during the three-plus years that the NSA knew about and exploited them—despite Joyce’s insistence that they’re not very important—demonstrates that the Vulnerabilities Equities Process is badly broken.

We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is the set of recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.

And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation-state espionage. Individual investigation should be the FBI’s job, cyberwar capabilities should sit within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

I doubt we’re going to see any congressional investigations this year, but we’re going to have to figure this out eventually. In my 2014 book Data and Goliath, I write that “no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find…” Our nation’s cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.

This essay previously appeared on Vox.com.

EDITED TO ADD (8/27): The vulnerabilities were seen in the wild within 24 hours, demonstrating how important they were to disclose and patch.

James Bamford thinks this is the work of an insider. I disagree, but he’s right that the TAO catalog was not a Snowden document.

People are looking at the quality of the code. It’s not that good.

Posted on August 26, 2016 at 5:56 AM

SYNful Knock Attack Against Cisco Routers

FireEye is reporting the discovery of persistent malware that compromises Cisco routers:

While this attack could be possible on any router technology, in this case, the targeted victims were Cisco routers. The Mandiant team found 14 instances of this router implant, dubbed SYNful Knock, across four countries: Ukraine, Philippines, Mexico, and India.

[…]

The implant uses techniques that make it very difficult to detect. A clandestine modification of the router’s firmware image can be used to maintain a perpetual presence in an environment. It mainly escapes detection, however, because very few organizations, if any, are monitoring these devices for compromise.
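That last point is the actionable one: routers rarely get the integrity monitoring that servers and laptops do. Below is a minimal sketch of that missing check, assuming you can obtain the firmware image independently of the suspect device (for example, from your own software archive) and have a vendor-published hash to compare against; the file path and hash value are placeholders.

    import hashlib

    # Placeholder: substitute the vendor-published hash for your exact image.
    KNOWN_GOOD_SHA256 = "0" * 64

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def image_is_clean(path: str) -> bool:
        return sha256_of(path) == KNOWN_GOOD_SHA256

    if __name__ == "__main__":
        print(image_is_clean("exported_router_image.bin"))

One caveat: if the hashing is done on a device already running the implant, the implant can lie, so the comparison should happen off-box, against an image obtained independently of the suspect router.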

I don’t know if the attack is related to this attack against Cisco routers discovered in August.

As I wrote then, this is very much the sort of attack you’d expect from a government eavesdropping agency. We know, for example, that the NSA likes to attack routers. If I had to guess, I would guess that this is an NSA exploit. (Note the lack of Five Eyes countries in the target list.)

Posted on September 21, 2015 at 11:45 AM

Nasty Cisco Attack

This is serious:

Cisco Systems officials are warning customers of a series of attacks that completely hijack critical networking gear by swapping out the valid ROMMON firmware image with one that’s been maliciously altered.

The attackers use valid administrator credentials, an indication the attacks are being carried out either by insiders or people who have otherwise managed to get hold of the highly sensitive passwords required to update and make changes to the Cisco hardware. Short for ROM Monitor, ROMMON is the means for booting Cisco’s IOS operating system. Administrators use it to perform a variety of configuration tasks, including recovering lost passwords, downloading software, or in some cases running the router itself.

There’s no indication of who is doing these attacks, but it’s exactly the sort of thing you’d expect out of a government attacker. Regardless of which government initially discovered this, assume that they’re all exploiting it by now—and will continue to do so until it’s fixed.
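One simple, if imperfect, defensive check this suggests: periodically pull the output of "show version" from each router and flag any whose reported bootstrap (ROMMON) string differs from your baseline. The sketch below uses the third-party netmiko library and invented hostnames, credentials, and baseline string; and, as with any on-box check, a sufficiently thorough implant could falsify the output, so treat a mismatch as a reason to pull the device for offline inspection rather than as proof either way.

    from netmiko import ConnectHandler  # third-party library, assumed installed

    EXPECTED_BOOTSTRAP = "System Bootstrap, Version 15.0"  # placeholder baseline

    def rommon_looks_expected(host: str, username: str, password: str) -> bool:
        conn = ConnectHandler(device_type="cisco_ios", host=host,
                              username=username, password=password)
        output = conn.send_command("show version")
        conn.disconnect()
        return EXPECTED_BOOTSTRAP in output

    for router in ["192.0.2.1", "192.0.2.2"]:  # documentation-range addresses
        if not rommon_looks_expected(router, "auditor", "example-password"):
            print(f"{router}: unexpected bootstrap string -- investigate")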

Posted on August 19, 2015 at 12:34 PM

Cisco Shipping Equipment to Fake Addresses to Foil NSA Interception

Last May, we learned that the NSA intercepts equipment being shipped around the world and installs eavesdropping implants. There were photos of NSA employees opening up a Cisco box. Cisco’s CEO John Chambers personally complained to President Obama about this practice, which is not exactly a selling point for Cisco equipment abroad. Der Spiegel published the more complete document, along with a broader story, in January of this year:

In one recent case, after several months a beacon implanted through supply-chain interdiction called back to the NSA covert infrastructure. The call back provided us access to further exploit the device and survey the network. Upon initiating the survey, SIGINT analysis from TAO/Requirements & Targeting determined that the implanted device was providing even greater access than we had hoped: We knew the devices were bound for the Syrian Telecommunications Establishment (STE) to be used as part of their internet backbone, but what we did not know was that STE’s GSM (cellular) network was also using this backbone. Since the STE GSM network had never before been exploited, this new access represented a real coup.

Now Cisco is taking matters into its own hands, offering to ship equipment to fake addresses in an effort to avoid NSA interception.

I don’t think we have even begun to understand the long-term damage the NSA has done to the US tech industry.

Slashdot thread.

Posted on March 20, 2015 at 6:56 AM

JETPLOW: NSA Exploit of the Day

Today’s implant from the NSA’s Tailored Access Operations (TAO) group implant catalog:

JETPLOW

(TS//SI//REL) JETPLOW is a firmware persistence implant for Cisco PIX Series and ASA (Adaptive Security Appliance) firewalls. It persists DNT’s BANANAGLEE software implant. JETPLOW also has a persistent back-door capability.

(TS//SI//REL) JETPLOW is a firmware persistence implant for Cisco PIX Series and ASA (Adaptive Security Appliance) firewalls. It persists DNT’s BANANAGLEE software implant and modifies the Cisco firewall’s operating system (OS) at boot time. If BANANAGLEE support is not available for the booting operating system, it can install a Persistent Backdoor (PDB) designed to work with BANANAGLEE’S communications structure, so that full access can be reacquired at a later time. JETPLOW works on Cisco’s 500-series PIX firewalls, as well as most ASA firewalls (5505, 5510, 5520, 5540, 5550).

(TS//SI//REL) A typical JETPLOW deployment on a target firewall with an exfiltration path to the Remote Operations Center (ROC) is shown above. JETPLOW is remotely upgradable and is also remotely installable provided BANANAGLEE is already on the firewall of interest.

Status: (C//REL) Released. Has been widely deployed. Current availability restricted based on OS version (inquire for details).

Unit Cost: $0

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on January 9, 2014 at 1:02 PM
