Entries Tagged "DHS"


The NSA Is Hoarding Vulnerabilities

The National Security Agency is lying to us. We know that because data stolen from an NSA server was dumped on the Internet. The agency is hoarding information about security vulnerabilities in the products you use, because it wants to use it to hack others’ computers. Those vulnerabilities aren’t being reported, and aren’t getting fixed, making your computers and networks unsafe.

On August 13, a group calling itself the Shadow Brokers released 300 megabytes of NSA cyberweapon code on the Internet. Near as we experts can tell, the NSA network itself wasn’t hacked; what probably happened was that a “staging server” for NSA cyberweapons—that is, a server the NSA was using to mask its surveillance activities—was hacked in 2013.

The NSA inadvertently resecured itself in what was coincidentally the early weeks of the Snowden document release. The people behind the leak used casual hacker lingo, and made a weird, implausible proposal involving holding a bitcoin auction for the rest of the data: “!!! Attention government sponsors of cyber warfare and those who profit from it !!!! How much you pay for enemies cyber weapons?”

Still, most people believe the hack was the work of the Russian government and the data release some sort of political message. Perhaps it was a warning that if the US government exposes the Russians as being behind the hack of the Democratic National Committee—or other high-profile data breaches—the Russians will expose NSA exploits in turn.

But what I want to talk about is the data. The sophisticated cyberweapons in the data dump include vulnerabilities and “exploit code” that can be deployed against common Internet security systems. Products targeted include those made by Cisco, Fortinet, TOPSEC, Watchguard, and Juniper—systems that are used by both private and government organizations around the world. Some of these vulnerabilities have been independently discovered and fixed since 2013, and some had remained unknown until now.

All of them are examples of the NSA—despite what it and other representatives of the US government say—prioritizing its ability to conduct surveillance over our security. Here’s one example. Security researcher Mustafa al-Bassam found an attack tool codenamed BENIGNCERTAIN that tricks certain Cisco firewalls into exposing some of their memory, including their authentication passwords. Those passwords can then be used to decrypt virtual private network, or VPN, traffic, completely bypassing the firewalls’ security. Cisco hasn’t sold these firewalls since 2009, but they’re still in use today.

Vulnerabilities like that one could have, and should have, been fixed years ago. And they would have been, if the NSA had made good on its word to alert American companies and organizations when it had identified security holes.

Over the past few years, different parts of the US government have repeatedly assured us that the NSA does not hoard “zero days,” the term used by security experts for vulnerabilities unknown to software vendors. After we learned from the Snowden documents that the NSA purchases zero-day vulnerabilities from cyberweapons arms manufacturers, the Obama administration announced, in early 2014, that the NSA must disclose flaws in common software so they can be patched (unless there is “a clear national security or law enforcement” use).

Later that year, National Security Council cybersecurity coordinator and special adviser to the president on cybersecurity issues Michael Daniel insisted that the US doesn’t stockpile zero-days (except for the same narrow exemption). An official statement from the White House in 2014 said the same thing.

The Shadow Brokers data shows this is not true. The NSA hoards vulnerabilities.

Hoarding zero-day vulnerabilities is a bad idea. It means that we’re all less secure. When Edward Snowden exposed many of the NSA’s surveillance programs, there was considerable discussion about what the agency does with vulnerabilities in common software products that it finds. Inside the US government, the system of figuring out what to do with individual vulnerabilities is called the Vulnerabilities Equities Process (VEP). It’s an inter-agency process, and it’s complicated.

There is a fundamental tension between attack and defense. The NSA can keep the vulnerability secret and use it to attack other networks. In such a case, we are all at risk of someone else finding and using the same vulnerability. Alternatively, the NSA can disclose the vulnerability to the product vendor and see it gets fixed. In this case, we are all secure against whoever might be using the vulnerability, but the NSA can’t use it to attack other systems.

There are probably some overly pedantic word games going on. Last year, the NSA said that it discloses 91 percent of the vulnerabilities it finds. Leaving aside the question of whether that remaining 9 percent represents 1, 10, or 1,000 vulnerabilities, there’s the bigger question of what qualifies in the NSA’s eyes as a “vulnerability.”

Not all vulnerabilities can be turned into exploit code. The NSA loses no attack capabilities by disclosing the vulnerabilities it can’t use, and doing so gets its numbers up; it’s good PR. The vulnerabilities we care about are the ones in the Shadow Brokers data dump. We care about them because those are the ones whose existence leaves us all vulnerable.

Because everyone uses the same software, hardware, and networking protocols, there is no way to simultaneously secure our systems while attacking their systems, whoever “they” are. Either everyone is more secure, or everyone is more vulnerable.

Pretty much uniformly, security experts believe we ought to disclose and fix vulnerabilities. And the NSA continues to say things that appear to reflect that view, too. Recently, the NSA told everyone that it doesn’t rely on zero days—very much, anyway.

Earlier this year at a security conference, Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) organization—basically the country’s chief hacker—gave a rare public talk, in which he said that credential stealing is a more fruitful method of attack than are zero days: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

The distinction he’s referring to is the one between exploiting a technical hole in software and waiting for a human being to, say, get sloppy with a password.

A phrase you often hear in any discussion of the Vulnerabilities Equities Process is NOBUS, which stands for “nobody but us.” Basically, when the NSA finds a vulnerability, it tries to figure out if it is unique in its ability to find it, or whether someone else could find it, too. If it believes no one else will find the problem, it may decline to make it public. It’s an evaluation prone to both hubris and optimism, and many security experts have cast doubt on the very notion that there is some unique American ability to conduct vulnerability research.

The vulnerabilities in the Shadow Brokers data dump are definitely not NOBUS-level. They are run-of-the-mill vulnerabilities that anyone—another government, cybercriminals, amateur hackers—could discover, as evidenced by the fact that many of them were discovered between 2013, when the data was stolen, and this summer, when it was published. They are vulnerabilities in common systems used by people and companies all over the world.

So what are all these vulnerabilities doing in a secret stash of NSA code that was stolen in 2013? Assuming the Russians were the ones who did the stealing, how many US companies did they hack with these vulnerabilities? This is what the Vulnerabilities Equities Process is designed to prevent, and it has clearly failed.

If there are any vulnerabilities that—according to the standards established by the White House and the NSA—should have been disclosed and fixed, it’s these. That they have not been during the three-plus years that the NSA knew about and exploited them—despite Joyce’s insistence that they’re not very important—demonstrates that the Vulnerabilities Equities Process is badly broken.

We need to fix this. This is exactly the sort of thing a congressional investigation is for. This whole process needs a lot more transparency, oversight, and accountability. It needs guiding principles that prioritize security over surveillance. A good place to start is with the recommendations by Ari Schwartz and Rob Knake in their report: these include a clearly defined and more public process, more oversight by Congress and other independent bodies, and a strong bias toward fixing vulnerabilities instead of exploiting them.

And as long as I’m dreaming, we really need to separate our nation’s intelligence-gathering mission from our computer security mission: we should break up the NSA. The agency’s mission should be limited to nation state espionage. Investigation of individuals should be part of the FBI’s mission, cyberwar capabilities should be within US Cyber Command, and critical infrastructure defense should be part of DHS’s mission.

I doubt we’re going to see any congressional investigations this year, but we’re going to have to figure this out eventually. In my 2015 book Data and Goliath, I write that “no matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security by fixing almost all the vulnerabilities we find…” Our nation’s cybersecurity is just too important to let the NSA sacrifice it in order to gain a fleeting advantage over a foreign adversary.

This essay previously appeared on Vox.com.

EDITED TO ADD (8/27): The vulnerabilities were seen in the wild within 24 hours, demonstrating how important they were to disclose and patch.

James Bamford thinks this is the work of an insider. I disagree, but he’s right that the TAO catalog was not a Snowden document.

People are looking at the quality of the code. It’s not that good.

Posted on August 26, 2016 at 5:56 AM

Tracking the Owner of Kickass Torrents

Here’s the story of how it was done. First, a fake ad on torrent listings linked the site to a Latvian bank account, an e-mail address, and a Facebook page.

Using basic website-tracking services, Der-Yeghiayan was able to uncover (via a reverse DNS search) the hosts of seven apparent KAT website domains: kickasstorrents.com, kat.cr, kickass.to, kat.ph, kastatic.com, thekat.tv and kickass.cr. This dug up two Chicago IP addresses, which were used as KAT name servers for more than four years. Agents were then able to legally gain a copy of the server’s access logs (explaining why it was federal authorities in Chicago that eventually charged Vaulin with his alleged crimes).
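Neither of those steps requires exotic tooling. Here is a rough sketch of what the forward and reverse lookups involve, using Python’s standard library; the domains are the historical ones named above and no longer resolve this way:

    import socket

    # The seven KAT domains named above (historical; they no longer resolve this way)
    domains = ["kickasstorrents.com", "kat.cr", "kickass.to",
               "kat.ph", "kastatic.com", "thekat.tv", "kickass.cr"]

    for domain in domains:
        try:
            ip = socket.gethostbyname(domain)          # forward lookup: domain -> IP
            hostname, _, _ = socket.gethostbyaddr(ip)  # reverse lookup: IP -> PTR record
            print(f"{domain:22} {ip:16} {hostname}")
        except OSError as err:
            print(f"{domain:22} lookup failed: {err}")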

Using similar tools, Homeland Security investigators also performed something called a WHOIS lookup on a domain that redirected people to the main KAT site. A WHOIS search can provide the name, address, email and phone number of a website registrant. In the case of kickasstorrents.biz, that was Artem Vaulin from Kharkiv, Ukraine.
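A WHOIS lookup is equally mundane. A minimal sketch, assuming a standard Unix whois client is installed; note that registrant contact fields for most domains are redacted today, which was less common at the time:

    import subprocess

    # Query the public WHOIS record for the redirect domain named in the article.
    result = subprocess.run(["whois", "kickasstorrents.biz"],
                            capture_output=True, text=True)

    # Print registrant contact lines, if the registry still publishes them.
    for line in result.stdout.splitlines():
        if line.lstrip().startswith(("Registrant", "Admin")):
            print(line.strip())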

Der-Yeghiayan was able to link the email address found in the WHOIS lookup to an Apple email address that Vaulin purportedly used to operate KAT. It’s this Apple account that appears to tie all of the pieces of Vaulin’s alleged involvement together.

On July 31, 2015, records provided by Apple show that the me.com account was used to purchase something on iTunes. The logs show that the same IP address was used on the same day to access the KAT Facebook page. After KAT began accepting Bitcoin donations in 2012, $72,767 was moved into a Coinbase account in Vaulin’s name. That Bitcoin wallet was registered with the same me.com email address.
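At bottom, that kind of linking is a join across unrelated log files: find records in two datasets that share a source IP address on the same day. A toy sketch, with invented values standing in for the Apple and Facebook records (203.0.113.0/24 is a documentation-only address range):

    from datetime import date

    # Invented records for illustration only.
    itunes_purchases = [{"ip": "203.0.113.7", "day": date(2015, 7, 31), "account": "me.com"}]
    facebook_accesses = [{"ip": "203.0.113.7", "day": date(2015, 7, 31), "page": "KAT"}]

    # Correlate: the same source IP active in both logs on the same day.
    matches = [(a, b) for a in itunes_purchases for b in facebook_accesses
               if a["ip"] == b["ip"] and a["day"] == b["day"]]
    print(matches)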

Another article.

Posted on July 26, 2016 at 6:42 AM

Anonymous Browsing at the Library

A rural New Hampshire library decided to install Tor on its computers and allow anonymous Internet browsing. After the project was covered in the press, the Department of Homeland Security pressured the library to stop:

A special agent in a Boston DHS office forwarded the article to the New Hampshire police, who forwarded it to a sergeant at the Lebanon Police Department.

DHS spokesman Shawn Neudauer said the agent was simply providing “visibility/situational awareness,” and did not have any direct contact with the Lebanon police or library. “The use of a Tor browser is not, in [or] of itself, illegal and there are legitimate purposes for its use,” Neudauer said, “However, the protections that Tor offers can be attractive to criminal enterprises or actors and HSI [Homeland Security Investigations] will continue to pursue those individuals who seek to use the anonymizing technology to further their illicit activity.”

When the DHS inquiry was brought to his attention, Lt. Matthew Isham of the Lebanon Police Department was concerned. “For all the good that a Tor may allow as far as speech, there is also the criminal side that would take advantage of that as well,” Isham said. “We felt we needed to make the city aware of it.”

The good news is that the library is resisting the pressure and keeping Tor running.

This is an important issue for reasons that go beyond the New Hampshire library. The goal of the Library Freedom Project is to set up Tor exit nodes at libraries. Exit nodes help every Tor user in the world; the more of them there are, the harder it is to subvert the system. The Kilton Public Library isn’t just allowing its patrons to browse the Internet anonymously; it is helping dissidents around the world stay alive.
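For readers wondering what running a Tor relay actually involves: it is a handful of lines in the torrc configuration file. This is an illustrative sketch, not the Kilton library’s actual settings:

    # Illustrative exit-relay configuration (not Kilton's real settings)
    Nickname LibraryExitNode
    ContactInfo sysadmin@example.org
    ORPort 9001
    ExitRelay 1
    # Allow common web traffic out; refuse everything else.
    ExitPolicy accept *:80
    ExitPolicy accept *:443
    ExitPolicy reject *:*
    SocksPort 0    # act as a relay only, not a local client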

Librarians have been protecting our privacy for decades, and I’m happy to see that tradition continue.

EDITED TO ADD (10/13): As a result of the story, more libraries are planning to run Tor nodes.

Posted on September 16, 2015 at 1:40 PM

Metal Detectors at Sports Stadiums

Fans attending Major League Baseball games are being greeted in a new way this year: with metal detectors at the ballparks. Touted as a counterterrorism measure, they’re nothing of the sort. They’re pure security theater: They look good without doing anything to make us safer. We’re stuck with them because of a combination of buck passing, CYA thinking, and fear.

As a security measure, the new devices are laughable. The ballpark metal detectors are much more lax than the ones at an airport checkpoint. They aren’t very sensitive—people with phones and keys in their pockets are sailing through—and there are no X-ray machines. Bags get the same cursory search they’ve gotten for years. And fans wanting to avoid the detectors can opt for a “light pat-down search” instead.

There’s no evidence that this new measure makes anyone safer. A halfway competent ticketholder would have no trouble sneaking a gun into the stadium. For that matter, a bomb exploded at a crowded checkpoint would be no less deadly than one exploded in the stands. These measures will, at best, be effective at stopping the random baseball fan who’s carrying a gun or knife into the stadium. That may be a good idea, but unless there’s been a recent spate of fan shootings and stabbings at baseball games—and there hasn’t—this is a whole lot of time and money being spent to combat an imaginary threat.

But imaginary threats are the only ones baseball executives have to worry about this season; there’s been no specific terrorist threat or actual intelligence to be concerned about. MLB executives forced this change on ballparks based on unspecified discussions with the Department of Homeland Security after the Boston Marathon bombing in 2013. Because, you know, that was also a sporting event.

This system of vague consultations and equally vague threats ensures that no one organization can be seen as responsible for the change. MLB can claim that the league and teams “work closely” with DHS. DHS can claim that it was MLB’s initiative. And both can safely relax because if something happens, at least they did something.

It’s an attitude I’ve seen before: “Something must be done. This is something. Therefore, we must do it.” Never mind if the something makes any sense or not.

In reality, this is CYA security, and it’s pervasive in post-9/11 America. It no longer matters if a security measure makes sense, if it’s cost-effective or if it mitigates any actual threats. All that matters is that you took the threat seriously, so if something happens you won’t be blamed for inaction. It’s security, all right—security for the careers of those in charge.

I’m not saying that these officials care only about their jobs and not at all about preventing terrorism, only that their priorities are skewed. They imagine vague threats, and come up with correspondingly vague security measures intended to address them. They experience none of the costs. They’re not the ones who have to deal with the long lines and confusion at the gates. They’re not the ones who have to arrive early to avoid the messes the new policies have caused around the league. And if fans spend more money at the concession stands because they’ve arrived an hour early and have had the food and drinks they tried to bring along confiscated, so much the better, from the team owners’ point of view.

I can hear the objections to this as I write. You don’t know these measures won’t be effective! What if something happens? Don’t we have to do everything possible to protect ourselves against terrorism?

That’s worst-case thinking, and it’s dangerous. It leads to bad decisions, bad design and bad security. A better approach is to realistically assess the threats, judge security measures on their effectiveness and take their costs into account. And the result of that calm, rational look will be the realization that there will always be places where we pack ourselves densely together, and that we should spend less time trying to secure those places and more time finding terrorist plots before they can be carried out.

So far, fans have been exasperated but mostly accepting of these new security measures. And this is precisely the problem—most of us don’t care all that much. Our options are to put up with these measures, or stay home. Going to a baseball game is not a political act, and metal detectors aren’t worth a boycott. But there’s an undercurrent of fear as well. If it’s in the name of security, we’ll accept it. As long as our leaders are scared of the terrorists, they’re going to continue the security theater. And we’re similarly going to accept whatever measures are forced upon us in the name of security. We’re going to accept the National Security Agency’s surveillance of every American, airport security procedures that make no sense and metal detectors at baseball and football stadiums. We’re going to continue to waste money overreacting to irrational fears.

We no longer need the terrorists. We’re now so good at terrorizing ourselves.

This essay previously appeared in the Washington Post.

Posted on April 15, 2015 at 6:58 AM

Tom Ridge Can Find Terrorists Anywhere

One of the problems with our current discourse about terrorism and terrorist policies is that the people entrusted with counterterrorism—those whose job it is to surveil, study, or defend against terrorism—become so consumed with their role that they literally start seeing terrorists everywhere. So it comes as no surprise that if you ask Tom Ridge, the former head of the Department of Homeland Security, about potential terrorism risks at a new LA football stadium, of course he finds them everywhere.

From a report he prepared—paid, I’m sure—about the location of a new football stadium:

Specifically, locating an NFL stadium at the Inglewood-Hollywood Park site needlessly increases risks for existing interests: LAX and tenant airlines, the NFL, the City of Los Angeles, law enforcement and first responders as well as the citizens and commercial enterprises in surrounding areas and across global transportation networks and supply chains. That risk would be expanded with the additional stadium and “soft target” infrastructure that would encircle the facility locally.

To be clear, total risk cannot be eliminated at any site. But basic risk management principles suggest that the proximity of these two sites creates a separate and additional set of risks that are wholly unnecessary.

In the post 9/11 world, the threat of terrorism is a permanent condition. As both a former governor and secretary of homeland security, it is my opinion that the peril of placing a National Football League stadium in the direct flight path of LAX—layering risk—outweigh any benefits over the decades-long lifespan of the facility.

If a decision is made to move forward at the Inglewood/Hollywood Park site, the NFL, state and local leaders, and those they represent, must be willing to accept the significant risk and the possible consequences that accompany a stadium at the location. This should give both public and private leaders in the area some pause. At the very least, an open, public debate should be enabled so that all interests may understand the comprehensive and interconnected security, safety and economic risks well before a shovel touches the ground.

I’m sure he can’t help himself.

I am reminded of Glenn Greenwald’s essay on the “terrorist expert” industry. I am also reminded of this story about a father taking pictures of his daughters.

On the plus side, now we all have a convincing argument against development. “You can’t possibly build that shopping mall near my home, because OMG! terrorism.”

Posted on March 4, 2015 at 6:40 AM

Evading Airport Security

The news is reporting about Evan Booth, who builds weaponry out of items you can buy after airport security. It’s clever stuff.

It’s not new, though. People have been explaining how to evade airport security for years.

Back in 2006, I—and others—explained how to print your own boarding pass and evade the photo-ID check, a trick that still seems to work. In 2008, I demonstrated carrying two large bottles of liquid through airport security. Here’s a paper about stabbing people with stuff you can take through airport security. And here’s a German video of someone building a bomb out of components he snuck through a full-body scanner. There’s lots more if you start poking around the Internet.

So, what’s the moral here? It’s not like the terrorists don’t know about these tricks. They’re no surprise to the TSA, either. If airport security is so porous, why aren’t there more terrorist attacks? Why aren’t the terrorists using these, and other, techniques to attack planes every month?

I think the answer is simple: airplane terrorism isn’t a big risk. There are very few actual terrorists, and plots are much more difficult to execute than the tactics of the attack itself. It’s the same reason why I don’t care very much about the various TSA mistakes that are regularly reported.

Posted on December 4, 2013 at 6:28 AM

Latest Movie-Plot Threat: Explosive-Dipped Clothing

It’s being reported, although there’s no indication of where this rumor is coming from or what it’s based on.

…the new tactic allows terrorists to dip ordinary clothing into the liquid to make the clothes themselves into explosives once dry.

“It’s ingenious,” one of the officials said.

Another senior official said that the tactic would not be detected by current security measures.

I can see the trailer now. “In a world where your very clothes might explode at any moment, Bruce Willis is, Bruce Willis in a Michael Bay film: BLOW UP! Co-starring Lindsay Lohan…”

I guess there’s nothing to be done but to force everyone to fly naked.

Posted on August 9, 2013 at 6:04 AM

TSA Considering Implementing Randomized Security

For a change, here’s a good idea by the TSA:

TSA has just issued a Request for Information (RFI) to prospective vendors who could develop and supply such randomizers, which TSA expects to deploy at CAT X through CAT IV airports throughout the United States.

“The Randomizers would be used to route passengers randomly to different checkpoint lines,” says the agency’s RFI.

The article lists a bunch of requirements by the TSA for the device.

I’ve seen something like this at customs in, I think, India. Every passenger walks up to a kiosk and presses a button. If the green light turns on, he walks through. If the red light turns on, his bags get searched. Presumably the customs officials can set the search percentage.
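The mechanism is trivial to build, which is part of its appeal. A minimal sketch in Python; SEARCH_RATE is my stand-in for whatever percentage the officials choose:

    import random

    SEARCH_RATE = 0.10  # fraction of passengers selected for search; set by officials

    def kiosk_press():
        """One button press: 'green' means walk through, 'red' means bag search."""
        return "red" if random.random() < SEARCH_RATE else "green"

    # Each press is an independent draw, so there is no pattern to observe or game.
    print([kiosk_press() for _ in range(10)])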

Automatic randomized screening is a good idea. It’s free from bias or profiling, and it can’t be gamed; both of those properties make it more secure. Note that this is just an RFI from the TSA. An actual program might be years away, and it might not be implemented well. But it’s certainly a start.

EDITED TO ADD (7/19): This is an opposing view. Basically, it rests on the argument that profiling makes sense, and randomized screening means that you can’t profile. It’s a position I’ve argued against before.

EDITED TO ADD (8/10): Another argument that profiling does not work.

Posted on July 19, 2013 at 2:45 PM
