Entries Tagged "false positives"

Did Kaspersky Fake Malware?

Two former Kaspersky employees have accused the company of faking malware to harm rival antivirus products. According to the accusation, Kaspersky would falsely classify legitimate files as malicious, tricking other antivirus companies that blindly copied Kaspersky’s data into deleting those files from their customers’ computers.

In one technique, Kaspersky’s engineers would take an important piece of software commonly found in PCs and inject bad code into it so that the file looked like it was infected, the ex-employees said. They would send the doctored file anonymously to VirusTotal.

Then, when competitors ran this doctored file through their virus detection engines, the file would be flagged as potentially malicious. If the doctored file looked close enough to the original, Kaspersky could fool rival companies into thinking the clean file was problematic as well.

[…]

The former Kaspersky employees said Microsoft was one of the rivals that were targeted because many smaller security companies followed the Redmond, Washington-based company’s lead in detecting malicious files. They declined to give a detailed account of any specific attack.

Microsoft’s antimalware research director, Dennis Batchelder, told Reuters in April that he recalled a time in March 2013 when many customers called to complain that a printer code had been deemed dangerous by its antivirus program and placed in “quarantine.”

Batchelder said it took him roughly six hours to figure out that the printer code looked a lot like another piece of code that Microsoft had previously ruled malicious. Someone had taken a legitimate file and jammed a wad of bad code into it, he said. Because the normal printer code looked so much like the altered code, the antivirus program quarantined that as well.

Over the next few months, Batchelder’s team found hundreds, and eventually thousands, of good files that had been altered to look bad.

Kaspersky denies it.
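
To see why a doctored near-copy can drag the clean original down with it, here is a minimal sketch of a similarity-based detector, assuming a simple byte n-gram Jaccard comparison; the article does not say what techniques rival engines actually used, and the files, threshold, and payload below are invented.

    # Minimal sketch of how similarity-based detection can misfire once a
    # doctored near-copy of a clean file has been blacklisted. The n-gram /
    # Jaccard approach and all files here are illustrative only.

    def ngrams(data: bytes, n: int = 4) -> set:
        """Set of byte n-grams appearing in a file."""
        return {data[i:i + n] for i in range(len(data) - n + 1)}

    def similarity(a: bytes, b: bytes) -> float:
        """Jaccard similarity of the two files' n-gram sets."""
        na, nb = ngrams(a), ngrams(b)
        return len(na & nb) / len(na | nb) if (na | nb) else 0.0

    # A large, varied "legitimate" file and a doctored copy with injected code.
    clean_file = b"".join(b"legitimate driver instruction %d;" % i for i in range(500))
    doctored = clean_file + b"\x90\x90 injected malicious payload"

    blacklist = [doctored]   # the planted sample gets classified as malware
    THRESHOLD = 0.9          # "close enough" similarity cutoff

    def is_flagged(sample: bytes) -> bool:
        return any(similarity(sample, bad) >= THRESHOLD for bad in blacklist)

    print(is_flagged(doctored))    # True: the doctored sample is detected
    print(is_flagged(clean_file))  # True: the untouched original is flagged too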

EDITED TO ADD (8/19): Here’s an October 2013 presentation by Microsoft on the attacks.

EDITED TO ADD (9/11): A dissenting opinion.

Posted on August 18, 2015 at 2:35 PM

Malcolm Gladwell on Competing Security Models

In this essay/review of a book on UK intelligence officer and Soviet spy Kim Philby, Malcolm Gladwell makes this interesting observation:

Here we have two very different security models. The Philby-era model erred on the side of trust. I was asked about him, and I said I knew his people. The “cost” of the high-trust model was Burgess, Maclean, and Philby. To put it another way, the Philbyian secret service was prone to false-negative errors. Its mistake was to label as loyal people who were actually traitors.

The Wright model erred on the side of suspicion. The manufacture of raincoats is a well-known cover for Soviet intelligence operations. But that model also has a cost. If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal.

Posted on July 21, 2015 at 6:51 AM

Finding People's Locations Based on Their Activities in Cyberspace

Glenn Greenwald is back reporting about the NSA, now with Pierre Omidyar’s news organization FirstLook and its introductory publication, The Intercept. Writing with national security reporter Jeremy Scahill, his first article covers how the NSA helps target individuals for assassination by drone.

Leaving aside the extensive political implications of the story, the article and the NSA source documents reveal additional information about how the agency’s programs work. From this and other articles, we can now piece together how the NSA tracks individuals in the real world through their actions in cyberspace.

Its techniques to locate someone based on their electronic activities are straightforward, although they require an enormous capability to monitor data networks. One set of techniques involves the cell phone network, and the other the Internet.

Tracking Locations With Cell Towers

Every cell-phone network knows the approximate location of all phones capable of receiving calls. This is necessary to make the system work; if the system doesn’t know what cell you’re in, it isn’t able to route calls to your phone. We already know that the NSA conducts physical surveillance on a massive scale using this technique.

By triangulating location information from different cell phone towers, cell phone providers can geolocate phones more accurately. This is often done to direct emergency services to a particular person, such as someone who has made a 911 call. The NSA can get this data either by network eavesdropping with the cooperation of the carrier, or by intercepting communications between the cell phones and the towers. A previously released Top Secret NSA document says this: "GSM Cell Towers can be used as a physical-geolocation point in relation to a GSM handset of interest."

This technique becomes even more powerful if you can employ a drone. Greenwald and Scahill write:

The agency also equips drones and other aircraft with devices known as "virtual base-tower transceivers"—creating, in effect, a fake cell phone tower that can force a targeted person’s device to lock onto the NSA’s receiver without their knowledge.

The drone can do this multiple times as it flies around the area, measuring the signal strength—and inferring distance—each time. Again from the Intercept article:

The NSA geolocation system used by JSOC is known by the code name GILGAMESH. Under the program, a specially constructed device is attached to the drone. As the drone circles, the device locates the SIM card or handset that the military believes is used by the target.

The Top Secret source document associated with the Intercept story says:

As part of the GILGAMESH (PREDATOR-based active geolocation) effort, this team used some advanced mathematics to develop a new geolocation algorithm intended for operational use on unmanned aerial vehicle (UAV) flights.

This is at least part of that advanced mathematics.
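
As a rough illustration of the kind of mathematics involved, here is a minimal sketch of signal-strength geolocation under common textbook assumptions: a log-distance path-loss model to turn each reading into a range, then least-squares trilateration. All positions, parameters, and readings are synthetic; this is not the actual GILGAMESH algorithm, which is not public.

    # An aircraft records the received signal strength (RSSI) of a handset at
    # several known positions, converts each reading to an estimated range,
    # and solves for the emitter's position by least squares.
    import numpy as np

    def rssi_to_distance(rssi_dbm, ref_power_dbm=-40.0, path_loss_exp=2.0):
        """Invert rssi = ref_power - 10 * n * log10(d); d in metres."""
        return 10 ** ((ref_power_dbm - rssi_dbm) / (10 * path_loss_exp))

    def trilaterate(positions, distances):
        """Least-squares (x, y) estimate from known positions and ranges."""
        (x0, y0), d0 = positions[0], distances[0]
        A, b = [], []
        for (xi, yi), di in zip(positions[1:], distances[1:]):
            # Subtracting the first range equation linearizes the problem.
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
        est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return est

    # Synthetic readings taken at four points of a flight path (metres, dBm).
    readings = [((0, 0), -87.0), ((500, 0), -90.0), ((500, 500), -94.0), ((0, 500), -93.0)]
    positions = [p for p, _ in readings]
    distances = [rssi_to_distance(r) for _, r in readings]
    print(trilaterate(positions, distances))  # roughly [200. 100.] for this data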

None of this works if the target turns his phone off or swaps SIM cards often with his colleagues, which Greenwald and Scahill write is routine. It won’t work in much of Yemen, which isn’t on any cell phone network. Because of this, the NSA also tracks people based on their actions on the Internet.

Finding You From Your Web Connection

A surprisingly large number of Internet applications leak location data. Applications on your smart phone can transmit location data from your GPS receiver over the Internet. We already know that the NSA collects this data to determine location. Also, many applications transmit the IP address of the network the computer is connected to. If the NSA has a database of IP addresses and locations, it can use that to locate users.

According to a previously released Top Secret NSA document, that program is code named HAPPYFOOT: "The HAPPYFOOT analytic aggregated leaked location-based service / location-aware application data to infer IP geo-locations."
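
Based only on that one-sentence description, a minimal sketch of the aggregation idea might look like the following; the report format, IP addresses, and coordinates are all invented.

    # Location reports leaked by location-aware apps are grouped by the IP
    # address they came from, giving an IP -> location table to consult later.
    from collections import defaultdict
    from statistics import mean

    # (ip_address, latitude, longitude) observations from intercepted app traffic.
    leaked_reports = [
        ("198.51.100.7", 15.3694, 44.1910),
        ("198.51.100.7", 15.3701, 44.1922),
        ("203.0.113.25", 48.8566, 2.3522),
    ]

    observations = defaultdict(list)
    for ip, lat, lon in leaked_reports:
        observations[ip].append((lat, lon))

    # Infer one location per IP by taking the centroid of its reports.
    ip_to_location = {
        ip: (mean(lat for lat, _ in pts), mean(lon for _, lon in pts))
        for ip, pts in observations.items()
    }

    def locate(ip):
        """Return the inferred (lat, lon) for an IP address, if it is known."""
        return ip_to_location.get(ip)

    print(locate("198.51.100.7"))  # ~(15.37, 44.19)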

Another way to get this data is to collect it from the geographical area you’re interested in. Greenwald and Scahill talk about exactly this:

In addition to the GILGAMESH system used by JSOC, the CIA uses a similar NSA platform known as SHENANIGANS. The operation—previously undisclosed—utilizes a pod on aircraft that vacuums up massive amounts of data from any wireless routers, computers, smart phones or other electronic devices that are within range.

And again from an NSA document associated with the FirstLook story: “Our mission (VICTORYDANCE) mapped the Wi-Fi fingerprint of nearly every major town in Yemen.” In the hacker world, this is known as war-driving, and has even been demonstrated from drones.

Another story from the Snowden documents describes a research effort to locate individuals based on the location of wifi networks they log into.

This is how the NSA can find someone, even when their cell phone is turned off and their SIM card is removed. If they’re at an Internet café, and they log into an account that identifies them, the NSA can locate them—because the NSA already knows where that wifi network is.
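
A minimal sketch of that lookup, assuming the war-driving survey produced a table of access-point identifiers (BSSIDs) and coordinates; every identifier, account name, and coordinate below is invented.

    # A later sighting of an identifying account on a previously mapped
    # wireless network resolves to a physical place.
    wifi_map = {
        # BSSID               (latitude, longitude, label)
        "00:11:22:33:44:55": (15.3547, 44.2066, "Internet cafe"),
        "66:77:88:99:aa:bb": (15.3629, 44.2021, "hotel lobby"),
    }

    def locate_login(account, seen_bssid):
        """Resolve an observed (account, access point) pair to a location."""
        entry = wifi_map.get(seen_bssid)
        if entry is None:
            return f"{account}: access point {seen_bssid} not in the survey"
        lat, lon, label = entry
        return f"{account} seen at {label} ({lat}, {lon})"

    print(locate_login("target@example.com", "00:11:22:33:44:55"))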

This also explains the drone assassination of Hassan Ghul, also reported in the Washington Post last October. In that story, Ghul was at an Internet café when he read an email from his wife. Although the article doesn’t describe how that email was intercepted by the NSA, the agency was able to use it to determine his location.

There’s almost certainly more. NSA surveillance is robust, and they almost certainly have several different ways of identifying individuals on cell phone and Internet connections. For example, they can hack individual smart phones and force them to divulge location information.

As fascinating as the technology is, the critical policy question—and the one discussed extensively in the FirstLook article—is how reliable all this information is. While much of the NSA’s capabilities to locate someone in the real world by their network activity piggy-backs on corporate surveillance capabilities, there’s a critical difference: False positives are much more expensive. If Google or Facebook get a physical location wrong, they show someone an ad for a restaurant they’re nowhere near. If the NSA gets a physical location wrong, they call a drone strike on innocent people.

As we move to a world where all of us are tracked 24/7, these are the sorts of trade-offs we need to keep in mind.

This essay previously appeared on TheAtlantic.com.

Edited to add: this essay has been translated into French.

Posted on February 13, 2014 at 6:03 AM

False Positives and Ubiquitous Surveillance

Searching on Google for a pressure cooker and backpacks got one family investigated by the police. More stories and comments.

This seems not to be the NSA eavesdropping on everyone’s Internet traffic, as was first assumed. It was one of those “see something, say something” amateur tips:

Suffolk County Criminal Intelligence Detectives received a tip from a Bay Shore based computer company regarding suspicious computer searches conducted by a recently released employee. The former employee’s computer searches took place on this employee’s workplace computer. On that computer, the employee searched the terms “pressure cooker bombs” and “backpacks.”

Scary, nonetheless.

EDITED TO ADD (8/2): Another article.

EDITED TO ADD (8/3): As more of the facts come out, this seems like less of an overreaction than I first thought. The person was an ex-employee of the company—not an employee—and was searching “pressure cooker bomb.” It’s not unreasonable for the company to call the police in that case, and for the police to investigate the searcher. Whether or not the employer should be monitoring Internet use is another matter.

Posted on August 2, 2013 at 8:03 AM

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system would be wrong; it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance of someone who tests positive actually being a sociopath is only about 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive, but so will 10% of the 960 non-sociopaths: 36 true positives against 96 false positives.) You have to postulate a test with an amazing 99% accuracy—only a 1% false positive rate—even to have an 80% chance of someone testing positive actually being a sociopath.
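
Here is the same arithmetic as a short calculation:

    # Base-rate arithmetic from the paragraph above.
    def p_positive_is_real(base_rate, sensitivity, false_positive_rate):
        """Bayes' theorem: probability that a positive test is a true positive."""
        true_pos = base_rate * sensitivity
        false_pos = (1 - base_rate) * false_positive_rate
        return true_pos / (true_pos + false_pos)

    # 90%-accurate test, 4% base rate: 36 true vs. 96 false positives per 1,000.
    print(p_positive_is_real(0.04, 0.90, 0.10))  # ~0.27
    # 99%-accurate test (1% false-positive rate): the odds only reach ~0.80.
    print(p_positive_is_real(0.04, 0.99, 0.01))  # ~0.80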

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM

Evacuation Alerts at the Airport

Last week, an employee error caused the monitors at LAX to display a building evacuation order:

At a little before 9:47 p.m., the message read: “An emergency has been declared in the terminal. Please evacuate.” An airport police source said officers responded to the scene at the Tom Bradley International Terminal, believing the system had been hacked. But an airport spokeswoman said it was an honest mistake.

I think the real news has nothing to do with how susceptible those systems are to hacking. It’s this line:

Castles said there were no reports of passengers evacuating the terminal and the problem was fixed within about 10 minutes.

So now we know: building evacuation announcements on computer screens are ineffective.

She said airport officials are looking into ways to ensure a similar problem does not occur again.

That probably means they’re going to make sure an erroneous evacuation message doesn’t appear on the computer screens again, not that they’re going to get people to stop ignoring evacuation messages when there is an actual emergency.

Posted on May 8, 2013 at 6:32 AM

Classifying a Shape

This is a great essay:

Spheres are special shapes for nuclear weapons designers. Most nuclear weapons have, somewhere in them, that spheres-within-spheres arrangement of the implosion nuclear weapon design. You don’t have to use spheres—cylinders can be made to work, and there are lots of rumblings and rumors about non-spherical implosion designs around these here Internets—but spheres are pretty common.

[…]

Imagine the scenario: you’re a security officer working at Los Alamos. You know that spheres are weapon parts. You walk into a technical area, and you see spheres all around! Is that an ashtray, or is it a model of a plutonium pit? Anxiety mounts—does the ashtray go into a safe at the end of the day, or does it stay out on the desk? (Has someone been tapping their cigarettes out into the pit model?)

All of this anxiety can be gone—gone!—by simply banning all non-nuclear spheres! That way you can effectively treat all spheres as sensitive shapes.

What I love about this little policy proposal is that it illuminates something deep about how secrecy works. Once you decide that something is so dangerous that the entire world hinges on keeping it under control, this sense of fear and dread starts to creep outwards. The worry about what must be controlled becomes insatiable, and pretty soon the mundane is included with the existential.

The essay continues with a story of a scientist who received a security violation for leaving an orange on his desk.

Two points here. One, this is a classic problem with any detection system. When it’s hard to build a system that detects the thing you’re looking for, you change the problem to detect something easier—and hope the overlap is enough to make the system work. Think about airport security. It’s too hard to detect actual terrorists with terrorist weapons, so instead they detect pointy objects. Internet filtering systems work the same way, too. (Remember when URL filters blocked the word “sex,” and the Middlesex Public Library found that it couldn’t get to its municipal webpages?)
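
The Middlesex failure is easy to reproduce: a filter that detects the easier proxy (the substring "sex") flags legitimate pages too. The URLs below are made up.

    # Detecting the easier proxy instead of the real target.
    BLOCKED_TERMS = ["sex"]

    def is_blocked(url):
        return any(term in url.lower() for term in BLOCKED_TERMS)

    print(is_blocked("http://adult-sex-site.example"))          # True: intended catch
    print(is_blocked("http://www.middlesex.example/library"))   # True: false positive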

Two, the Los Alamos system only works because false negatives are much, much worse than false positives. It really is worth classifying an abstract shape and annoying an officeful of scientists and others to protect the nuclear secrets. Airport security fails because the false-positive/false-negative cost ratio is different.

Posted on January 3, 2013 at 6:03 AM

The Problem of False Alarms

The context is tornado warnings:

The basic problem, Smith says, is that sirens are sounded too often in most places. Sometimes they sound in an entire county for a warning that covers just a sliver of it; sometimes for other thunderstorm phenomena like large hail and/or strong straight-line winds; and sometimes for false alarm warnings: warnings for tornadoes that were incorrectly detected.

The residents of Joplin, Smith contends, were numbed by the too frequent blaring of sirens. As a result of too many past false alarms, he writes: “The citizens of Joplin were unwittingly being trained to NOT act when the sirens sounded.”

Posted on May 30, 2012 at 6:44 AM

Criminal Intent Prescreening and the Base Rate Fallacy

I’ve often written about the base rate fallacy and how it makes tests for rare events—like airplane terrorists—useless because the false positives vastly outnumber the real positives. This essay uses that argument to demonstrate why the TSA’s FAST program is useless:

First, predictive software of this kind is undermined by a simple statistical problem known as the false-positive paradox. Any system designed to spot terrorists before they commit an act of terrorism is, necessarily, looking for a needle in a haystack. As the adage would suggest, it turns out that this is an incredibly difficult thing to do. Here is why: let’s assume for a moment that 1 in 1,000,000 people is a terrorist about to commit a crime. Terrorists are actually probably much, much more rare, or we would have a whole lot more acts of terrorism, given the daily throughput of the global transportation system. Now let’s imagine the FAST algorithm correctly classifies 99.99 percent of observations—an incredibly high rate of accuracy for any big data-based predictive model. Even with this unbelievable level of accuracy, the system would still falsely accuse 99 people of being terrorists for every one terrorist it finds. Given that none of these people would have actually committed a terrorist act yet, distinguishing the innocent false positives from the guilty might be a non-trivial and invasive task.

Of course FAST has nowhere near a 99.99 percent accuracy rate. I imagine much of the work being done here is classified, but a writeup in Nature reported that the first round of field tests had a 70 percent accuracy rate. From the available material it is difficult to determine exactly what this number means. There are a couple of ways to interpret this, since both the write-up and the DHS documentation (all pdfs) are unclear. This might mean that the current iteration of FAST correctly classifies 70 percent of people it observes—which would produce false positives at an abysmal rate, given the rarity of terrorists in the population. The other way of interpreting this reported result is that FAST will call a terrorist a terrorist 70 percent of the time. This second option tells us nothing about the rate of false positives, but it would likely be quite high. In either case, it is likely that the false-positive paradox would be in full force for FAST, ensuring that any real terrorists identified are lost in a sea of falsely accused innocents.
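
A quick calculation makes the quoted numbers concrete. The 30 percent false-positive rate in the second case is just the first interpretation of the 70 percent figure (70 percent of all observations classified correctly), not a published FAST statistic.

    # Check of the quoted arithmetic: even a 99.99%-accurate screen drowns the
    # one real terrorist per million travellers in false positives.
    def positives_per_million(base_rate, sensitivity, false_positive_rate, n=1_000_000):
        true_pos = n * base_rate * sensitivity
        false_pos = n * (1 - base_rate) * false_positive_rate
        return true_pos, false_pos

    tp, fp = positives_per_million(1e-6, 0.9999, 0.0001)
    print(round(tp, 2), round(fp))   # ~1 real terrorist, ~100 innocents flagged
    print(round(fp / tp))            # ~100 false positives per true positive

    tp, fp = positives_per_million(1e-6, 0.70, 0.30)
    print(round(fp))                 # ~300,000 innocents flagged per million screened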

It’s that final sentence in the first quoted paragraph that really points to how bad this idea is. If FAST determines you are guilty of a crime you have not yet committed, how do you exonerate yourself?

Posted on May 3, 2012 at 6:22 AM

Possibly the Most Incompetent TSA Story Yet

The storyline:

  1. TSA screener finds two pipes in passenger’s bags.
  2. Screener determines that they’re not a threat.
  3. Screener confiscates them anyway, because of their “material and appearance.”
  4. Because they’re not actually a threat, screener leaves them at the checkpoint.
  5. Everyone forgets about them.
  6. Six hours later, the next shift of TSA screeners notices the pipes and—not being able to explain how they got there and, presumably, because of their “material and appearance”—calls the police bomb squad to remove the pipes.
  7. TSA does not evacuate the airport, or even close the checkpoint, because—well, we don’t know why.

I don’t even know where to begin.

Posted on January 31, 2012 at 5:03 PM