Entries Tagged "searches"

NSA Abandons "About" Searches

Earlier this month, the NSA said that it would no longer conduct “about” searches of bulk communications data. This was the practice of collecting the communications of Americans based on keywords and phrases in the contents of the messages, not based on who they were from or to.

The NSA’s own words:

After considerable evaluation of the program and available technology, NSA has decided that its Section 702 foreign intelligence surveillance activities will no longer include any upstream internet communications that are solely “about” a foreign intelligence target. Instead, this surveillance will now be limited to only those communications that are directly “to” or “from” a foreign intelligence target. These changes are designed to retain the upstream collection that provides the greatest value to national security while reducing the likelihood that NSA will acquire communications of U.S. persons or others who are not in direct contact with one of the Agency’s foreign intelligence targets.

In addition, as part of this curtailment, NSA will delete the vast majority of previously acquired upstream internet communications as soon as practicable.


A quick review: under Section 702 of the FISA Amendments Act, the NSA seizes a copy of all communications moving through a telco — think e-mail and such — and searches it for particular senders, receivers, and — until recently — key words. This pretty clearly violates the Fourth Amendment, and groups like the EFF have been fighting the NSA in court over it for years. The NSA has also had problems with these searches in the FISA court, and cites “inadvertent compliance incidents” related to them.

We might learn more about this change. Again, from the NSA’s statement:

After reviewing amended Section 702 certifications and NSA procedures that implement these changes, the FISC recently issued an opinion and order, approving the renewal certifications and use of procedures, which authorize this narrowed form of Section 702 upstream internet collection. A declassification review of the FISC’s opinion and order, and the related targeting and minimization procedures, is underway.

And the EFF is still fighting for more NSA surveillance reforms.

Posted on May 19, 2017 at 2:05 PM

Why Is the TSA Scanning Paper?

I’ve been reading a bunch of anecdotal reports that the TSA is starting to scan paper separately:

A passenger going through security at Kansas City International Airport (MCI) recently was asked by security officers to remove all paper products from his bag. Everything from books to Post-It Notes, documents and more. Once the paper products were removed, the passenger had to put them in a separate bin to be scanned separately.

When the passenger inquired why he was being forced to remove the paper products from his carry-on bag, the agent told him that it was a pilot program that’s being tested at MCI and will begin rolling out nationwide. KSHB Kansas City is reporting that other passengers traveling through MCI have also reported the paper-removal procedure at the airport. One person said that security dug through the suitcase for two “blocks” of Post-It Notes at the bottom.

Does anyone have any guesses as to why the TSA is doing this?

EDITED TO ADD (5/11): This article says that the TSA has stopped doing this. They blamed it on their contractor, Akal Security.

Posted on May 5, 2017 at 7:35 AM

Smartphone Forensics to Detect Distraction

The company Cellebrite is developing a portable forensics device that would determine if a smartphone user was using the phone at a particular time. The idea is to test phones of drivers after accidents:

Under the first-of-its-kind legislation proposed in New York, drivers involved in accidents would have to submit their phone to roadside testing from a textalyzer to determine whether the driver was using a mobile phone ahead of a crash. In a bid to get around the Fourth Amendment right to privacy, the textalyzer allegedly would keep conversations, contacts, numbers, photos, and application data private. It will solely say whether the phone was in use prior to a motor-vehicle mishap. Further analysis, which might require a warrant, could be necessary to determine whether such usage was via hands-free dashboard technology and to confirm the original finding.
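Mechanically, the yes/no answer the textalyzer promises reduces to a window check over event timestamps. Here is a minimal sketch of that idea in Python — the event-log format and the five-minute window are illustrative assumptions, not details from Cellebrite:

```python
from datetime import datetime, timedelta

def phone_in_use(event_times, crash_time, window_minutes=5):
    """Answer only yes or no: did any user-initiated event (tap,
    keystroke, message send) occur within the window before the
    crash? Event content is never examined, mirroring the claimed
    privacy guard of reporting use without reporting what was used."""
    window_start = crash_time - timedelta(minutes=window_minutes)
    return any(window_start <= t <= crash_time for t in event_times)
```

The hard part in practice would not be this check but attribution: distinguishing hands-free or automated activity from a driver's deliberate use, which is exactly the follow-up analysis the proposal says might require a warrant.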

This is interesting technology. To me, it feels no more intrusive than a breathalyzer, assuming that the textalyzer has all the privacy guards described above.

Slashdot thread. Reddit thread.

EDITED TO ADD (4/19): Good analysis and commentary.

Posted on April 13, 2016 at 6:51 AM

Should We Allow Bulk Searching of Cloud Archives?

Jonathan Zittrain proposes a very interesting hypothetical:

Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.

The discovery would surely help in the prosecution of the laptop’s owner, tying him to the crime. But a junior prosecutor has a further idea. The private document was likely shared among other conspirators, some of whom are still on the run or unknown entirely. Surely Google has the ability to run a search of all Gmail inboxes, outboxes, and message drafts folders, plus Google Drive cloud storage, to see if any of its 900 million users are currently in possession of that exact document. If Google could be persuaded or ordered to run the search, it could generate a list of only those Google accounts possessing the precise file — and all other Google users would remain undisturbed, except for the briefest of computerized “touches” on their accounts to see if the file reposed there.
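The “briefest of computerized touches” the passage imagines would most plausibly be an exact-hash comparison, which matches only byte-for-byte copies of the file. A minimal sketch, with all names and the in-memory account store purely illustrative — nothing here reflects how Google actually stores or indexes user data:

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """SHA-256 digest: matches only byte-for-byte identical files."""
    return hashlib.sha256(data).hexdigest()

def accounts_holding(target: bytes, accounts: dict) -> list:
    """Return the account IDs whose stored files include an exact
    copy of the target document; every other account is untouched
    beyond the hash comparison itself."""
    want = file_fingerprint(target)
    return [acct for acct, files in accounts.items()
            if any(file_fingerprint(f) == want for f in files)]
```

The fragility cuts both ways: changing a single byte of the document defeats the match, which is also why such a search would disturb only exact possessors.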

He then goes through the reasons why Google should run the search, and then reasons why Google shouldn’t — and finally says what he would do.

I think it’s important to think through hypotheticals like this before they happen. We’re better able to reason about them now, when they are just hypothetical.

Posted on January 16, 2016 at 5:26 AM

More about the NSA's XKEYSCORE

I’ve been reading through the 48 classified documents about the NSA’s XKEYSCORE system released by the Intercept last week. From the article:

The NSA’s XKEYSCORE program, first revealed by The Guardian, sweeps up countless people’s Internet searches, emails, documents, usernames and passwords, and other private communications. XKEYSCORE is fed a constant flow of Internet traffic from fiber optic cables that make up the backbone of the world’s communication network, among other sources, for processing. As of 2008, the surveillance system boasted approximately 150 field sites in the United States, Mexico, Brazil, United Kingdom, Spain, Russia, Nigeria, Somalia, Pakistan, Japan, Australia, as well as many other countries, consisting of over 700 servers.

These servers store “full-take data” at the collection sites — meaning that they captured all of the traffic collected — and, as of 2009, stored content for 3 to 5 days and metadata for 30 to 45 days. NSA documents indicate that tens of billions of records are stored in its database. “It is a fully distributed processing and query system that runs on machines around the world,” an NSA briefing on XKEYSCORE says. “At field sites, XKEYSCORE can run on multiple computers that gives it the ability to scale in both processing power and storage.”

There seem to be no access controls at all restricting how analysts can use XKEYSCORE. Standing queries — called “workflows” — and new fingerprints go through an approval process, presumably for load reasons, but individual queries are not approved beforehand; they can only be audited after the fact. These queries are supposed to be low-latency, and you can’t have an approval process for low-latency analyst queries. Since a query can reach the recorded raw data, a single query is effectively a retrospective wiretap.

All this means that the Intercept is correct when it writes:

These facts bolster one of Snowden’s most controversial statements, made in his first video interview published by The Guardian on June 9, 2013. “I, sitting at my desk,” said Snowden, could “wiretap anyone, from you or your accountant, to a federal judge to even the president, if I had a personal email.”

You’ll only get the data if it’s in the NSA’s databases, but if it is there you’ll get it.

Honestly, there’s not much in these documents that’s a surprise to anyone who studied the 2013 XKEYSCORE leaks and knows what can be done with a highly customizable Intrusion Detection System. But it’s always interesting to read the details.

One document — “Intro to Context Sensitive Scanning with X-KEYSCORE Fingerprints” (2010) — talks about some of the queries an analyst can run. A sample scenario: “I want to look for people using Mojahedeen Secrets encryption from an iPhone” (page 6).

Mujahedeen Secrets is an encryption program written by al Qaeda supporters. It has been around since 2007. Last year, Stuart Baker cited its increased use as evidence that Snowden harmed America. I thought the opposite, that the NSA benefits from al Qaeda using this program. I wrote: “There’s nothing that screams ‘hack me’ more than using specially designed al Qaeda encryption software.”

And now we see how it’s done. In the document, we read about the specific XKEYSCORE queries an analyst can use to search for traffic encrypted by Mujahedeen Secrets. Here are some of the program’s fingerprints (page 10):


So if you want to search for all iPhone users of Mujahedeen Secrets (page 33):


fingerprint('encryption/mojahdeen2') and fingerprint('browser/cellphone/iphone')

Or you can search for the program’s use in the encrypted text itself, because (page 37): “…many of the CT Targets are now smart enough not to leave the Mojahedeen Secrets header in the E-mails they send. How can we detect that the E-mail (which looks like junk) is in fact Mojahedeen Secrets encrypted text.” Summary of the answer: there are lots of ways to detect the use of this program that users can’t detect. And you can combine the use of Mujahedeen Secrets with other identifiers to find targets. For example, you can specifically search for the program’s use in extremist forums (page 9). (Note that the NSA wrote that comment about Mujahedeen Secrets users increasing their opsec in 2010, two years before Snowden supposedly told them that the NSA was listening in on their communications. Honestly, I would not be surprised if the program turned out to have been a US operation to get Islamic radicals to make their traffic stand out more easily.)
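The documents don’t spell out how the NSA detects headerless ciphertext, but one generic approach is statistical: a message body that decodes cleanly as base64 into near-random bytes looks like “junk” to a human reader while standing out sharply from ordinary mail. A hedged sketch of that idea — the 64-byte minimum and entropy threshold are arbitrary illustrative choices, not anything from the leaked material:

```python
import base64
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; 8.0 means indistinguishable from random."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_headerless_ciphertext(text: str, threshold: float = 5.0) -> bool:
    """Flag a message body that decodes cleanly as base64 into
    near-random bytes: unreadable to a human, but statistically
    very unlike ordinary mail."""
    compact = "".join(text.split())
    try:
        raw = base64.b64decode(compact, validate=True)
    except ValueError:  # binascii.Error is a ValueError subclass
        return False
    return len(raw) >= 64 and shannon_entropy(raw) > threshold
```

This is the general lesson of the slide: stripping the header hides the program’s name, but the statistical shape of the ciphertext still gives it away.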

It’s not just Mujahedeen Secrets. Nicholas Weaver explains how you can use XKEYSCORE to identify co-conspirators who are all using PGP.
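The idea behind Weaver’s point is simple to sketch: PGP’s ASCII armor is a fixed, trivially searchable string, so even without decrypting a single message, an interceptor can link everyone exchanging armored mail. A toy illustration — the message-triple format is an assumption for the sketch:

```python
import re
from collections import defaultdict

# The OpenPGP ASCII-armor header is a fixed, trivially searchable string.
PGP_ARMOR = re.compile(r"-----BEGIN PGP MESSAGE-----")

def pgp_correspondents(messages):
    """Given (sender, recipient, body) triples, map each party to the
    set of parties they exchanged PGP-armored mail with. Nothing is
    decrypted; the armor header alone links the correspondents."""
    graph = defaultdict(set)
    for sender, recipient, body in messages:
        if PGP_ARMOR.search(body):
            graph[sender].add(recipient)
            graph[recipient].add(sender)
    return dict(graph)
```

Encryption protects the content, but the rarity of the wrapper turns every PGP user into a node in an easily drawn graph.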

And these searches are just one example. Other examples from the documents include:

  • “Targets using mail.ru from a behind a large Iranian proxy” (here, page 7).
  • Usernames and passwords of people visiting gov.ir (here, page 26 and following).
  • People in Pakistan visiting certain German-language message boards (here, page 1).
  • HTTP POST traffic from Russia in the middle of the night — useful for finding people trying to steal our data (here, page 16).
  • People doing web searches on jihadist topics from Kabul (here).

E-mails, chats, web-browsing traffic, pictures, documents, voice calls, webcam photos, web searches, advertising analytics traffic, social media traffic, botnet traffic, logged keystrokes, file uploads to online services, Skype sessions and more: if you can figure out how to form the query, you can ask XKEYSCORE for it. For an example of how complex the searches can be, look at this XKEYSCORE query published in March, showing how New Zealand used the system to spy on the World Trade Organization: automatically track any email body with any particular WTO-related content for the upcoming election. (Good new documents to read include this, this, and this.)

I always read these NSA documents with an assumption that other countries are doing the same thing. The NSA is not made of magic, and XKEYSCORE is not some super-advanced NSA-only technology. It is the same sort of thing that every other country would use with its surveillance data. For example, Russia explicitly requires ISPs to install similar monitors as part of its SORM Internet surveillance system. As a home user, you can build your own XKEYSCORE using the public-domain Bro Security Monitor and the related Network Time Machine attached to a back-end data-storage system. (Lawrence Berkeley National Laboratory uses this system to store three months’ worth of Internet traffic for retrospective surveillance — it used the data to study Heartbleed.) The primary advantage the NSA has is that it sees more of the Internet than anyone else, and spends more money to store the data it intercepts for longer than anyone else. And if these documents explain XKEYSCORE in 2009 and 2010, expect that it’s much more powerful now.
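The essential property of the Bro-plus-Time-Machine combination is easy to model: record everything now, decide what to ask later. A toy in-memory sketch of that retrospective-query pattern — nothing here reflects Bro’s or Time Machine’s actual APIs:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    timestamp: float   # epoch seconds
    src: str           # source address
    dst: str           # destination address
    payload: bytes     # captured content ("full take")

class TrafficArchive:
    """Record everything now; decide what to ask later. Any predicate
    run over stored records is, in effect, a retrospective wiretap."""
    def __init__(self):
        self.records = []

    def record(self, rec: FlowRecord) -> None:
        self.records.append(rec)

    def query(self, predicate):
        """Run an arbitrary after-the-fact query over all stored traffic."""
        return [r for r in self.records if predicate(r)]
```

A real deployment bounds retention — the documents cite three to five days of content — but the privacy-relevant property is identical: the decision about what to search is made after collection, not before.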

Back to encryption and Mujahedeen Secrets. If you want to stay secure, whether you’re trying to evade surveillance by Russia, China, the NSA, criminals intercepting large amounts of traffic, or anyone else, try not to stand out. Don’t use some homemade specialized cryptography that can be easily identified by a system like this. Use reasonably strong encryption software on a reasonably secure device. If you trust Apple’s claims (pages 35-6), use iMessage and FaceTime on your iPhone. I really like Moxie Marlinspike’s Signal for both text and voice, but worry that it’s too obvious because it’s still rare. Ubiquitous encryption is the bane of listeners worldwide, and it’s the best thing we can deploy to make the world safer.

Posted on July 7, 2015 at 6:38 AM

Metal Detectors at Sports Stadiums

Fans attending Major League Baseball games are being greeted in a new way this year: with metal detectors at the ballparks. Touted as a counterterrorism measure, they’re nothing of the sort. They’re pure security theater: They look good without doing anything to make us safer. We’re stuck with them because of a combination of buck passing, CYA thinking, and fear.

As a security measure, the new devices are laughable. The ballpark metal detectors are much more lax than the ones at an airport checkpoint. They aren’t very sensitive — people with phones and keys in their pockets are sailing through — and there are no X-ray machines. Bags get the same cursory search they’ve gotten for years. And fans wanting to avoid the detectors can opt for a “light pat-down search” instead.

There’s no evidence that this new measure makes anyone safer. A halfway competent ticketholder would have no trouble sneaking a gun into the stadium. For that matter, a bomb exploded at a crowded checkpoint would be no less deadly than one exploded in the stands. These measures will, at best, be effective at stopping the random baseball fan who’s carrying a gun or knife into the stadium. That may be a good idea, but unless there’s been a recent spate of fan shootings and stabbings at baseball games — and there hasn’t — this is a whole lot of time and money being spent to combat an imaginary threat.

But imaginary threats are the only ones baseball executives have to stop this season; there’s been no specific terrorist threat or actual intelligence to be concerned about. MLB executives forced this change on ballparks based on unspecified discussions with the Department of Homeland Security after the Boston Marathon bombing in 2013. Because, you know, that was also a sporting event.

This system of vague consultations and equally vague threats ensures that no one organization can be seen as responsible for the change. MLB can claim that the league and teams “work closely” with DHS. DHS can claim that it was MLB’s initiative. And both can safely relax because if something happens, at least they did something.

It’s an attitude I’ve seen before: “Something must be done. This is something. Therefore, we must do it.” Never mind if the something makes any sense or not.

In reality, this is CYA security, and it’s pervasive in post-9/11 America. It no longer matters if a security measure makes sense, if it’s cost-effective or if it mitigates any actual threats. All that matters is that you took the threat seriously, so if something happens you won’t be blamed for inaction. It’s security, all right — security for the careers of those in charge.

I’m not saying that these officials care only about their jobs and not at all about preventing terrorism, only that their priorities are skewed. They imagine vague threats, and come up with correspondingly vague security measures intended to address them. They experience none of the costs. They’re not the ones who have to deal with the long lines and confusion at the gates. They’re not the ones who have to arrive early to avoid the messes the new policies have caused around the league. And if fans spend more money at the concession stands because they’ve arrived an hour early and have had the food and drinks they tried to bring along confiscated, so much the better, from the team owners’ point of view.

I can hear the objections to this as I write. You don’t know these measures won’t be effective! What if something happens? Don’t we have to do everything possible to protect ourselves against terrorism?

That’s worst-case thinking, and it’s dangerous. It leads to bad decisions, bad design and bad security. A better approach is to realistically assess the threats, judge security measures on their effectiveness and take their costs into account. And the result of that calm, rational look will be the realization that there will always be places where we pack ourselves densely together, and that we should spend less time trying to secure those places and more time finding terrorist plots before they can be carried out.

So far, fans have been exasperated but mostly accepting of these new security measures. And this is precisely the problem — most of us don’t care all that much. Our options are to put up with these measures, or stay home. Going to a baseball game is not a political act, and metal detectors aren’t worth a boycott. But there’s an undercurrent of fear as well. If it’s in the name of security, we’ll accept it. As long as our leaders are scared of the terrorists, they’re going to continue the security theater. And we’re similarly going to accept whatever measures are forced upon us in the name of security. We’re going to accept the National Security Agency’s surveillance of every American, airport security procedures that make no sense and metal detectors at baseball and football stadiums. We’re going to continue to waste money overreacting to irrational fears.

We no longer need the terrorists. We’re now so good at terrorizing ourselves.

This essay previously appeared in the Washington Post.

Posted on April 15, 2015 at 6:58 AM

The TSA's FAST Personality Screening Program Violates the Fourth Amendment

New law journal article: “A Slow March Towards Thought Crime: How the Department of Homeland Security’s FAST Program Violates the Fourth Amendment,” by Christopher A. Rogers. From the abstract:

FAST is currently designed for deployment at airports, where heightened security threats justify warrantless searches under the administrative search exception to the Fourth Amendment. FAST scans, however, exceed the scope of the administrative search exception. Under this exception, the courts would employ a balancing test, weighing the governmental need for the search versus the invasion of personal privacy of the search, to determine whether FAST scans violate the Fourth Amendment. Although the government has an acute interest in protecting the nation’s air transportation system against terrorism, FAST is not narrowly tailored to that interest because it cannot detect the presence or absence of weapons but instead detects merely a person’s frame of mind. Further, the system is capable of detecting an enormous amount of the scannee’s highly sensitive personal medical information, ranging from detection of arrhythmias and cardiovascular disease, to asthma and respiratory failures, physiological abnormalities, psychiatric conditions, or even a woman’s stage in her ovulation cycle. This personal information warrants heightened protection under the Fourth Amendment. Rather than target all persons who fly on commercial airplanes, the Department of Homeland Security should limit the use of FAST to where it has credible intelligence that a terrorist act may occur and should place those people scanned on prior notice that they will be scanned using FAST.

Posted on March 6, 2015 at 6:28 AM

The Limits of Police Subterfuge

“The next time you call for assistance because the Internet service in your home is not working, the ‘technician’ who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and — when he shows up at your door, impersonating a technician — let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside. He will have no reason to suspect you have broken the law, much less probable cause to obtain a search warrant. But that makes no difference, because by letting him in, you will have ‘consented’ to an intrusive search of your home.”

This chilling scenario is the first paragraph of a motion to suppress evidence gathered by the police in exactly this manner, from a hotel room. Unbelievably, this isn’t a story from some totalitarian government on the other side of an ocean. This happened in the United States, and it was the FBI that did it. Eventually — I’m sure there will be appeals — higher U.S. courts will decide whether this sort of practice is legal. If it is, the country will slide even further into a society where the police have even more unchecked power than they already possess.

The facts are these. In June, two wealthy Macau residents stayed at Caesar’s Palace in Las Vegas. The hotel suspected that they were running an illegal gambling operation out of their room. They enlisted the police and the FBI, but could not provide enough evidence for them to get a warrant. So instead they repeatedly cut the guests’ Internet connection. When the guests complained to the hotel, FBI agents wearing hidden cameras and recorders pretended to be Internet repair technicians and convinced the guests to let them in. They filmed and recorded everything under the pretense of fixing the Internet, and then used the information collected from that to get an actual search warrant. To make matters even worse, they lied to the judge about how they got their evidence.

The FBI claims that their actions are no different from any conventional sting operation. For example, an undercover policeman can legitimately look around and report on what he sees when he is invited into a suspect’s home under the pretext of trying to buy drugs. But there are two very important differences: one of consent, and the other of trust. The former is easier to see in this specific instance, but the latter is much more important for society.

You can’t give consent to something you don’t know and understand. The FBI agents did not enter the hotel room under the pretext of making an illegal bet. They entered under a false pretext, and relied on that pretext for consent to their true mission. That makes things different. The occupants of the hotel room didn’t realize who they were giving access to, or what those agents’ intentions were. The FBI knew this would be a problem. According to the New York Times, “a federal prosecutor had initially warned the agents not to use trickery because of the ‘consent issue.’ In fact, a previous ruse by agents had failed when a person in one of the rooms refused to let them in.” Claiming that a person granting an Internet technician access is consenting to a police search makes no sense, and is no different from one of those “click through” Internet license agreements that you didn’t read, saying one thing while meaning another. It’s not consent in any meaningful sense of the term.

Far more important is the matter of trust. Trust is central to how a society functions. No one, not even the most hardened survivalists who live in backwoods log cabins, can do everything by themselves. Humans need help from each other, and most of us need a lot of help from each other. And that requires trust. Many Americans’ homes, for example, are filled with systems that require outside technical expertise when they break: phone, cable, Internet, power, heat, water. Citizens need to trust each other enough to give them access to their hotel rooms, their homes, their cars, their person. Americans simply can’t live any other way.

It cannot be that every time someone allows one of those technicians into their home, they are consenting to a police search. Again from the motion to suppress: “Our lives cannot be private — and our personal relationships intimate — if each physical connection that links our homes to the outside world doubles as a ready-made excuse for the government to conduct a secret, suspicionless, warrantless search.” The resultant breakdown in trust would be catastrophic. People would not be able to get the assistance they need. Legitimate servicemen would find it much harder to do their job. Everyone would suffer.

It all comes back to the warrant. Through warrants, Americans legitimately grant the police an incredible level of access into our personal lives. This is a reasonable choice because the police need this access in order to solve crimes. But to protect ordinary citizens, the law requires the police to go before a neutral third party and convince them that they have a legitimate reason to demand that access. That neutral third party, a judge, then issues the warrant when he or she is convinced. This check on the police’s power is for Americans’ security, and is an important part of the Constitution.

In recent years, the FBI has been pushing the boundaries of its warrantless investigative powers in disturbing and dangerous ways. It collects phone-call records of millions of innocent people. It uses hacking tools against unknown individuals without warrants. It impersonates legitimate news sites. If the lower court sanctions this particular FBI subterfuge, the matter needs to be taken up — and reversed — by the Supreme Court.

This essay previously appeared in The Atlantic.

EDITED TO ADD (4/24/2015): A federal court has ruled that the FBI cannot do this.

Posted on December 18, 2014 at 6:57 AM

FBI Agents Pose as Repairmen to Bypass Warrant Process

This is a creepy story. The FBI wanted access to a hotel guest’s room without a warrant. So agents broke his Internet connection, and then posed as Internet technicians to gain access to his hotel room without a warrant.

From the motion to suppress:

The next time you call for assistance because the internet service in your home is not working, the “technician” who comes to your door may actually be an undercover government agent. He will have secretly disconnected the service, knowing that you will naturally call for help and — when he shows up at your door, impersonating a technician — let him in. He will walk through each room of your house, claiming to diagnose the problem. Actually, he will be videotaping everything (and everyone) inside. He will have no reason to suspect you have broken the law, much less probable cause to obtain a search warrant. But that makes no difference, because by letting him in, you will have “consented” to an intrusive search of your home.

Basically, the agents snooped around the hotel room, and gathered evidence that they submitted to a magistrate to get a warrant. Of course, they never told the judge that they had engineered the whole outage and planted the fake technicians.

More coverage of the case here.

This feels like an important case to me. We constantly allow repair technicians into our homes to fix this or that technological thingy. If we can’t be sure they are not government agents in disguise, then we’ve lost quite a lot of our freedom and liberty.

Posted on November 26, 2014 at 6:50 AM

Testing for Explosives in the Chicago Subway

Chicago police are conducting random explosives screenings at L stops around the city. Compliance is voluntary:

Police made no arrests but one rider refused to submit to the screening and left the station without incident, Maloney said.


Passengers can decline the screening, but will not be allowed to board a train at that station. Riders can leave that station and board a train at a different station.

I have to wonder what would happen if someone who looks Arab refused to be screened. I also wonder what possible value this procedure has. Anyone carrying a bomb in a bag would see the screening point well before approaching it, and could simply walk to the next station without arousing suspicion.

Posted on November 7, 2014 at 9:59 AM
