Entries Tagged "phones"

Page 10 of 19

TOTECHASER: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

TOTECHASER

(TS//SI//REL) TOTECHASER is a Windows CE implant targeting the Thuraya 2520 handset. The Thuraya is a dual mode phone that can operate either in SAT or GSM modes. The phone also supports a GPRS data connection for Web browsing, e-mail, and MMS messages. The initial software implant capabilities include providing GPS and GSM geo-location information. Call log, contact list, and other user information can also be retrieved from the phone. Additional capabilities are being investigated.

(TS//SI//REL) TOTECHASER will use SMS messaging for the command, control, and data exfiltration path. The initial capability will use covert SMS messages to communicate with the handset. These covert messages can be transmitted in either Thuraya Satellite mode or GSM mode and will not alert the user of this activity. An alternate command and control channel using the GPRS data connection based on the TOTEGHOSTLY implant is intended for a future version.

(TS//SI//REL) Prior to deployment, the TOTECHASER handsets must be modified. Details of how the phone is modified are being developed. A remotely deployable TOTECHASER implant is being investigated. The TOTECHASER system consists of the modified target handsets and a collection system.

(TS//SI//REL) TOTECHASER will accept configuration parameters to determine how the implant operates. Configuration parameters will determine what information is recorded, when to collect that information, and when the information is exfiltrated. The configuration parameters can be set upon initial deployment and updated remotely.

Unit Cost: $

Status:

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on February 18, 2014 at 2:17 PM

MONKEYCALENDAR: NSA Exploit of the Day

Today’s item from the NSA’s Tailored Access Operations (TAO) group implant catalog:

MONKEYCALENDAR

(TS//SI//REL) MONKEYCALENDAR is a software implant for GSM (Global System for Mobile communication) subscriber identity module (SIM) cards. This implant pulls geolocation information from a target handset and exfiltrates it to a user-defined phone number via short message service (SMS).

(TS//SI//REL) Modern SIM cards (Phase 2+) have an application program interface known as the SIM Toolkit (STK). The STK has a suite of proactive commands that allow the SIM card to issue commands and make requests to the handset. MONKEYCALENDAR uses STK commands to retrieve location information and to exfiltrate data via SMS. After the MONKEYCALENDAR file is compiled, the program is loaded onto the SIM card using either a Universal Serial Bus (USB) smartcard reader or via over-the-air provisioning. In both cases, keys to the card may be required to install the application depending on the service provider’s security configuration.

Unit Cost: $0

Status: Released, not deployed.
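The mechanism the catalog describes is simple enough to sketch. Here is a hypothetical Python model of the MONKEYCALENDAR-style flow: the SIM applet uses an STK proactive command to ask the handset for its serving-cell identity, then silently exfiltrates it via SMS. The `Handset` class and every identifier below are invented stand-ins for illustration; a real implant would be a Java Card applet on the SIM, not Python.

```python
class Handset:
    """Toy model of a phone answering SIM Toolkit proactive commands."""

    def __init__(self, cell_id, lac, mcc_mnc):
        self.cell_id, self.lac, self.mcc_mnc = cell_id, lac, mcc_mnc
        self.sent_sms = []

    def provide_local_information(self):
        # Models the STK "PROVIDE LOCAL INFORMATION" proactive command,
        # which returns the serving cell identity to the SIM.
        return {"mcc_mnc": self.mcc_mnc, "lac": self.lac, "cell_id": self.cell_id}

    def send_short_message(self, number, payload):
        # Models the STK "SEND SHORT MESSAGE" proactive command -- the
        # handset sends the SMS without alerting the user.
        self.sent_sms.append((number, payload))


def exfiltrate_location(handset, collector_number):
    """Pull the cell identity and ship it to a collection number via SMS."""
    loc = handset.provide_local_information()
    payload = f"{loc['mcc_mnc']}:{loc['lac']}:{loc['cell_id']}"
    handset.send_short_message(collector_number, payload)
    return payload
```

Even this toy makes the detection angle obvious: the tell-tale artifact is an outgoing SMS to an unfamiliar number that never appears in the phone's own logs.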

Page, with graphics, is here. General information about TAO and the catalog is here.

In the comments, feel free to discuss how the exploit works, how we might detect it, how it has probably been improved since the catalog entry in 2008, and so on.

Posted on February 14, 2014 at 3:19 PM

Finding People's Locations Based on Their Activities in Cyberspace

Glenn Greenwald is back reporting about the NSA, now with Pierre Omidyar’s news organization FirstLook and its introductory publication, The Intercept. Writing with national security reporter Jeremy Scahill, his first article covers how the NSA helps target individuals for assassination by drone.

Leaving aside the extensive political implications of the story, the article and the NSA source documents reveal additional information about how the agency’s programs work. From this and other articles, we can now piece together how the NSA tracks individuals in the real world through their actions in cyberspace.

Its techniques to locate someone based on their electronic activities are straightforward, although they require an enormous capability to monitor data networks. One set of techniques involves the cell phone network, and the other the Internet.

Tracking Locations With Cell Towers

Every cell-phone network knows the approximate location of all phones capable of receiving calls. This is necessary to make the system work; if the system doesn’t know what cell you’re in, it isn’t able to route calls to your phone. We already know that the NSA conducts physical surveillance on a massive scale using this technique.

By triangulating location information from different cell phone towers, cell phone providers can geolocate phones more accurately. This is often done to direct emergency services to a particular person, such as someone who has made a 911 call. The NSA can get this data either by network eavesdropping with the cooperation of the carrier, or by intercepting communications between the cell phones and the towers. A previously released Top Secret NSA document says this: "GSM Cell Towers can be used as a physical-geolocation point in relation to a GSM handset of interest."

This technique becomes even more powerful if you can employ a drone. Greenwald and Scahill write:

The agency also equips drones and other aircraft with devices known as "virtual base-tower transceivers"—creating, in effect, a fake cell phone tower that can force a targeted person’s device to lock onto the NSA’s receiver without their knowledge.

The drone can do this multiple times as it flies around the area, measuring the signal strength—and inferring distance—each time. Again from the Intercept article:

The NSA geolocation system used by JSOC is known by the code name GILGAMESH. Under the program, a specially constructed device is attached to the drone. As the drone circles, the device locates the SIM card or handset that the military believes is used by the target.

The Top Secret source document associated with the Intercept story says:

As part of the GILGAMESH (PREDATOR-based active geolocation) effort, this team used some advanced mathematics to develop a new geolocation algorithm intended for operational use on unmanned aerial vehicle (UAV) flights.

This is at least part of that advanced mathematics.
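"Advanced mathematics" aside, the core computation is elementary. Here is a minimal Python sketch, with invented tower coordinates and distance estimates, of locating a handset from three range measurements: subtracting the first circle equation from the other two turns the problem into a 2x2 linear system.

```python
def trilaterate(towers, dists):
    """Locate a point from three known (x, y) anchors and a distance to each.

    For circles (x - xi)^2 + (y - yi)^2 = di^2, subtracting the first
    equation from the other two cancels the quadratic terms, leaving two
    linear equations solved here by Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    # 2(x2-x1)x + 2(y2-y1)y = (x2^2 - x1^2) + (y2^2 - y1^2) + d1^2 - d2^2
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = x2**2 - x1**2 + y2**2 - y1**2 + d1**2 - d2**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = x3**2 - x1**2 + y3**2 - y1**2 + d1**2 - d3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

A drone doing this doesn't need three towers: it takes the three measurements itself from different points along its flight path, which is exactly why circling the area works. Real systems must also cope with noisy distance estimates, typically with least squares over many more than three measurements.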

None of this works if the target turns his phone off or regularly swaps SIM cards with his colleagues, which Greenwald and Scahill write is routine. It won’t work in much of Yemen, which isn’t on any cell phone network. Because of this, the NSA also tracks people based on their actions on the Internet.

Finding You From Your Web Connection

A surprisingly large number of Internet applications leak location data. Applications on your smart phone can transmit location data from your GPS receiver over the Internet. We already know that the NSA collects this data to determine location. Also, many applications transmit the IP address of the network the computer is connected to. If the NSA has a database of IP addresses and locations, it can use that to locate users.

According to a previously released Top Secret NSA document, that program is code named HAPPYFOOT: "The HAPPYFOOT analytic aggregated leaked location-based service / location-aware application data to infer IP geo-locations."
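The lookup the HAPPYFOOT description implies is conceptually just longest-prefix matching against a geolocation table. Here's a toy Python version; the prefixes and city names are invented, and a real system would sit on top of a commercial or self-built IP-geolocation database.

```python
import ipaddress

# Invented prefix-to-location table for illustration (documentation ranges).
PREFIX_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), "Sana'a"),
    (ipaddress.ip_network("198.51.100.0/24"), "Aden"),
]


def geolocate_ip(addr):
    """Return the location of the longest matching prefix, or None."""
    ip = ipaddress.ip_address(addr)
    best = None
    for net, city in PREFIX_TABLE:
        if ip in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, city)
    return best[1] if best else None
```

The accuracy of such a system is only as good as its prefix table, which is one reason IP geolocation alone places you in a city, not at an address.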

Another way to get this data is to collect it from the geographical area you’re interested in. Greenwald and Scahill talk about exactly this:

In addition to the GILGAMESH system used by JSOC, the CIA uses a similar NSA platform known as SHENANIGANS. The operation—previously undisclosed—utilizes a pod on aircraft that vacuums up massive amounts of data from any wireless routers, computers, smart phones or other electronic devices that are within range.

And again from an NSA document associated with the FirstLook story: “Our mission (VICTORYDANCE) mapped the Wi-Fi fingerprint of nearly every major town in Yemen.” In the hacker world, this is known as war-driving, and has even been demonstrated from drones.

Another story from the Snowden documents describes a research effort to locate individuals based on the location of wifi networks they log into.

This is how the NSA can find someone, even when their cell phone is turned off and their SIM card is removed. If they’re at an Internet café, and they log into an account that identifies them, the NSA can locate them—because the NSA already knows where that wifi network is.
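A war-driving database like the one VICTORYDANCE describes reduces to a map from access-point BSSIDs (their MAC addresses) to surveyed coordinates. This Python sketch, with invented identifiers and coordinates, shows the lookup: a login event is placed at the known position of the network it came through, or at the centroid of several known networks in a scan.

```python
# Invented survey data: BSSID -> (lat, lon) from an earlier mapping pass.
WIFI_MAP = {
    "aa:bb:cc:00:00:01": (15.3694, 44.1910),
    "aa:bb:cc:00:00:02": (12.7855, 45.0187),
}


def locate_login(bssid):
    """Place a login at the surveyed position of the AP it arrived through."""
    return WIFI_MAP.get(bssid)


def locate_from_scan(observed_bssids):
    """Estimate position as the centroid of known APs visible in a scan."""
    hits = [WIFI_MAP[b] for b in observed_bssids if b in WIFI_MAP]
    if not hits:
        return None
    lat = sum(p[0] for p in hits) / len(hits)
    lon = sum(p[1] for p in hits) / len(hits)
    return (lat, lon)
```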

This also explains the drone assassination of Hassan Guhl, also reported in the Washington Post last October. In the story, Guhl was at an Internet café when he read an email from his wife. Although the article doesn’t describe how that email was intercepted, the NSA was able to use it to determine his location.

There’s almost certainly more. NSA surveillance is robust, and they almost certainly have several different ways of identifying individuals on cell phone and Internet connections. For example, they can hack individual smart phones and force them to divulge location information.

As fascinating as the technology is, the critical policy question—and the one discussed extensively in the FirstLook article—is how reliable all this information is. While much of the NSA’s capabilities to locate someone in the real world by their network activity piggy-backs on corporate surveillance capabilities, there’s a critical difference: False positives are much more expensive. If Google or Facebook get a physical location wrong, they show someone an ad for a restaurant they’re nowhere near. If the NSA gets a physical location wrong, they call a drone strike on innocent people.

As we move to a world where all of us are tracked 24/7, these are the sorts of trade-offs we need to keep in mind.

This essay previously appeared on TheAtlantic.com.

Edited to add: this essay has been translated into French.

Posted on February 13, 2014 at 6:03 AM

"Military Style" Raid on California Power Station

I don’t know what to think about this:

Around 1:00 AM on April 16, at least one individual (possibly two) entered two different manholes at the PG&E Metcalf power substation, southeast of San Jose, and cut fiber cables in the area around the substation. That knocked out some local 911 services, landline service to the substation, and cell phone service in the area, a senior U.S. intelligence official told Foreign Policy. The intruder(s) then fired more than 100 rounds from what two officials described as a high-powered rifle at several transformers in the facility. Ten transformers were damaged in one area of the facility, and three transformer banks—or groups of transformers—were hit in another, according to a PG&E spokesman.

The article worries that this might be a dry run for some cyberwar-style attack, but that doesn’t make sense. At the same time, it’s too complicated and weird to be a prank.

Anyone have any ideas?

Posted on January 2, 2014 at 6:40 AM

Operation Vula

“Talking to Vula” is the story of a 1980s secret communications channel between black South African leaders and others living in exile in the UK. The system used encrypted text encoded into DTMF “touch tones” and transmitted from pay phones.

Our next project was one that led to the breakthrough we had been waiting for. We had received a request, as members of the Technical Committee, to find a way for activists to contact each other safely in an urban environment. Ronnie had seen a paging device that could be used between users of walkie-talkies. A numeric keypad was attached to the front of each radio set and when a particular number was pressed a light would flash on the remote set that corresponded to the number. The recipient of the paging signal could then respond to the caller using a pre-determined frequency so that the other users would not know about it.

Since the numbers on the keypad actually generated the same tones as those of a touch-tone telephone it occurred to us that instead of merely having a flashing light at the recipient’s end you could have a number appear corresponding to the number pressed on the keypad. If you could have one number appear you could have all numbers appear and in this way send a coded message. If the enemy was monitoring the airwaves all they would hear was a series of tones that would mean nothing.

Taking this a step further we realised that if you could send the tones by radio then they could also be sent by telephone, especially as the tones were intended for use on telephone systems. Ronnie put together a little microphone device that – when held on the earpiece of the receiving telephone – could display whatever number was pressed at the sending end. Using touch-tone telephones or separate tone pads as used for telephone banking services two people could send each other coded messages over the telephone. This could be done from public telephones, thus ensuring the safety of the users.

To avoid having to key in the numbers while in a telephone booth the tones could be recorded on a tape recorder at home and then played into the telephone. Similarly, at the receiving end, the tones could be recorded on a tape recorder and then decoded later. Messages could even be sent to an answering machine and picked up from an answering machine if left as the outgoing message.

We gave a few of these devices, disguised as electronic calculators, to activists to take back to South Africa. They were not immensely successful as the coding still had to be done by hand and that remained the chief factor discouraging people from communicating.

The next step was an attempt to marry the tone communication system with computer encryption. Ronnie got one of the boffins at the polytechnic to construct a device that produced the telephone tones at very high speed. This was attached to a computer that did the encryption. The computer, through the device, output the encrypted message as a series of tones and these could be saved on a cassette tape recorder that could be taken to a public telephone. This seemed to solve the problem of underground communications as everything could be done from public telephones and the encryption was done by computer.
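The channel the Vula team built rests on a fixed mapping from keypad digits to tone pairs, which is easy to show concretely. This Python sketch encodes a digit string as the standard DTMF frequency pairs (in Hz); generating and decoding the actual audio, which their hardware did, is omitted.

```python
# Standard DTMF layout: each key is identified by one row frequency and
# one column frequency, sent simultaneously.
DTMF_ROWS = [697, 770, 852, 941]
DTMF_COLS = [1209, 1336, 1477, 1633]
KEYPAD = ["123A", "456B", "789C", "*0#D"]

DTMF = {
    key: (DTMF_ROWS[r], DTMF_COLS[c])
    for r, row in enumerate(KEYPAD)
    for c, key in enumerate(row)
}


def encode_digits(digits):
    """Turn a digit string into the sequence of tone pairs to play down the line."""
    return [DTMF[d] for d in digits]
```

This is why the scheme is so robust over noisy pay-phone lines and cassette recorders: the receiver only has to distinguish eight frequencies, exactly what the telephone network itself was engineered to do.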

Lots more operational details in the article.

Posted on December 26, 2013 at 6:44 AM

Close-In Surveillance Using Your Phone's Wi-Fi

This article talks about applications in retail, but the possibilities are endless.

Every smartphone these days comes equipped with a WiFi card. When the card is on and looking for networks to join, it’s detectable by local routers. In your home, the router connects to your device, and then, voila, you have the Internet on your phone. But in a retail environment, other in-store equipment can pick up your WiFi card, learn your device’s unique ID number and use it to keep tabs on that device over time as you move through the store.

This gives offline companies the power to get incredibly specific data about how their customers behave. You could say it’s the physical version of what Web-based vendors have spent millions of dollars trying to perfect: the science of behavioral tracking.

Basically, the system is using the MAC address to identify individual devices. Another article on the system is here.
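The analytics side is nothing exotic: log (MAC, timestamp, zone) sightings from the devices' Wi-Fi traffic and group them per device. A toy Python version, with invented MACs and store zones:

```python
from collections import defaultdict


def track_paths(sightings):
    """Reconstruct each device's path through the store.

    sightings: iterable of (mac, timestamp, zone) tuples, in any order.
    Returns {mac: [zone, zone, ...]} ordered by timestamp.
    """
    by_mac = defaultdict(list)
    for mac, ts, zone in sightings:
        by_mac[mac].append((ts, zone))
    return {mac: [z for _, z in sorted(v)] for mac, v in by_mac.items()}
```

The obvious countermeasure, MAC address randomization while scanning, did not exist in mainstream phones when this system was deployed; every probe request carried the device's stable hardware address.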

Posted on November 1, 2013 at 6:32 AM

Defending Against Crypto Backdoors

We already know the NSA wants to eavesdrop on the Internet. It has secret agreements with telcos to get direct access to bulk Internet traffic. It has massive systems like TUMULT, TURMOIL, and TURBULENCE to sift through it all. And it can identify ciphertext—encrypted information—and figure out which programs could have created it.

But what the NSA wants is to be able to read that encrypted information in as close to real-time as possible. It wants backdoors, just like the cybercriminals and less benevolent governments do.

And we have to figure out how to make it harder for them, or anyone else, to insert those backdoors.

How the NSA Gets Its Backdoors

The FBI tried to get backdoor access embedded in an AT&T secure telephone system in the mid-1990s. The Clipper Chip included something called a LEAF: a Law Enforcement Access Field. It was the key used to encrypt the phone conversation, itself encrypted in a special key known to the FBI, and it was transmitted along with the phone conversation. An FBI eavesdropper could intercept the LEAF and decrypt it, then use the data to eavesdrop on the phone call.
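The LEAF architecture is worth seeing in miniature: the session key travels alongside the ciphertext, itself encrypted under a key the eavesdropper holds. In this Python toy, XOR with a SHA-256-derived keystream stands in for Skipjack, the actual Clipper cipher; this is a sketch of the architecture, not of the real protocol's field formats or checksums.

```python
import hashlib


def xor_cipher(key, data):
    """Toy symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))


def clipper_send(session_key, escrow_key, plaintext):
    leaf = xor_cipher(escrow_key, session_key)   # session key, escrowed in the LEAF
    ct = xor_cipher(session_key, plaintext)      # the actual conversation
    return leaf, ct                              # both go over the wire


def fbi_eavesdrop(escrow_key, leaf, ct):
    session_key = xor_cipher(escrow_key, leaf)   # recover the session key
    return xor_cipher(session_key, ct)           # then read the traffic
```

Note what the design concedes: anyone who ever obtains the escrow key can read every conversation, past and future. That single point of failure was central to the backlash.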

But the Clipper Chip faced severe backlash, and became defunct a few years after being announced.

Having lost that public battle, the NSA decided to get its backdoors through subterfuge: by asking nicely, pressuring, threatening, bribing, or mandating through secret order. The general name for this program is BULLRUN.

Defending against these attacks is difficult. We know from subliminal channel and kleptography research that it’s pretty much impossible to guarantee that a complex piece of software isn’t leaking secret information. We know from Ken Thompson’s famous Turing Award lecture on “trusting trust” that you can never be totally sure if there’s a security flaw in your software.

Since BULLRUN became public last month, the security community has been examining security flaws discovered over the past several years, looking for signs of deliberate tampering. The Debian random number flaw was probably not deliberate, but the 2003 Linux security vulnerability probably was. The DUAL_EC_DRBG random number generator may or may not have been a backdoor. The SSL 2.0 flaw was probably an honest mistake. The GSM A5/1 encryption algorithm was almost certainly deliberately weakened. All the common RSA moduli out there in the wild: we don’t know. Microsoft’s _NSAKEY looks like a smoking gun, but honestly, we don’t know.

How the NSA Designs Backdoors

While a separate program that sends our data to some IP address somewhere is certainly how any hacker—from the lowliest script kiddie up to the NSA—spies on our computers, it’s too labor-intensive to work in the general case.

For government eavesdroppers like the NSA, subtlety is critical. In particular, three characteristics are important:

  • Low discoverability. The less the backdoor affects the normal operations of the program, the better. Ideally, it shouldn’t affect functionality at all. The smaller the backdoor is, the better. Ideally, it should just look like normal functional code. As a blatant example, an email encryption backdoor that appends a plaintext copy to the encrypted copy is much less desirable than a backdoor that reuses most of the key bits in a public IV (initialization vector).
  • High deniability. If discovered, the backdoor should look like a mistake. It could be a single opcode change. Or maybe a “mistyped” constant. Or “accidentally” reusing a single-use key multiple times. This is the main reason I am skeptical about _NSAKEY as a deliberate backdoor, and why so many people don’t believe the DUAL_EC_DRBG backdoor is real: they’re both too obvious.
  • Minimal conspiracy. The more people who know about the backdoor, the more likely the secret is to get out. So any good backdoor should be known to very few people. That’s why the recently described potential vulnerability in Intel’s random number generator worries me so much; one person could make this change during mask generation, and no one else would know.

These characteristics imply several things:

  • A closed-source system is safer to subvert, because an open-source system comes with a greater risk of that subversion being discovered. On the other hand, a big open-source system with a lot of developers and sloppy version control is easier to subvert.
  • If a software system only has to interoperate with itself, then it is easier to subvert. For example, a closed VPN encryption system only has to interoperate with other instances of that same proprietary system. This is easier to subvert than an industry-wide VPN standard that has to interoperate with equipment from other vendors.
  • A commercial software system is easier to subvert, because the profit motive provides a strong incentive for the company to go along with the NSA’s requests.
  • Protocols developed by large open standards bodies are harder to influence, because a lot of eyes are paying attention. Systems designed by closed standards bodies are easier to influence, especially if the people involved in the standards don’t really understand security.
  • Systems that send seemingly random information in the clear are easier to subvert. One of the most effective ways of subverting a system is by leaking key information—recall the LEAF—and modifying random nonces or header information is the easiest way to do that.
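To make that last point concrete, here's an invented-for-illustration Python sketch of the nonce-leak subversion: a backdoored implementation emits "random" nonces whose second half is really key material, so an eavesdropper who knows the scheme recovers the key a few messages at a time. No honest verifier can distinguish the two outputs statistically, which is the whole point.

```python
import os


def honest_nonce():
    """What the implementation is supposed to do: 16 fresh random bytes."""
    return os.urandom(16)


def backdoored_nonce(key, i):
    """First 8 bytes look random; last 8 bytes leak the i-th slice of the key."""
    return os.urandom(8) + key[8 * i : 8 * (i + 1)]


def recover_key(nonces):
    """The eavesdropper reassembles the key from observed traffic."""
    return b"".join(n[8:] for n in nonces)
```

A 16-byte key leaks completely in two messages, and both nonce streams pass every black-box randomness test you could run on them.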

Design Strategies for Defending against Backdoors

With these principles in mind, we can list design strategies. None of them is foolproof, but they are all useful. I’m sure there are more; this list isn’t meant to be exhaustive, nor the final word on the topic. It’s simply a starting place for discussion. But it won’t work unless customers start demanding software with this sort of transparency.

  • Vendors should make their encryption code public, including the protocol specifications. This will allow others to examine the code for vulnerabilities. It’s true we won’t know for sure if the code we’re seeing is the code that’s actually used in the application, but surreptitious substitution is hard to do, forces the company to outright lie, and increases the number of people required for the conspiracy to work.
  • The community should create independent compatible versions of encryption systems, to verify they are operating properly. I envision companies paying for these independent versions, and universities accepting this sort of work as good practice for their students. And yes, I know this can be very hard in practice.
  • There should be no master secrets. These are just too vulnerable.
  • All random number generators should conform to published and accepted standards. Breaking the random number generator is the easiest difficult-to-detect method of subverting an encryption system. A corollary: we need better published and accepted RNG standards.
  • Encryption protocols should be designed so as not to leak any random information. Nonces should either be part of the key or, if possible, public predictable counters. Again, the goal is to make it harder to subtly leak key bits in this information.
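The counter-nonce strategy from the last bullet can be sketched in a few lines of Python: if the nonce is a plain message counter, there are no free bits to smuggle anything in, and any deviation is mechanically auditable. This construction is illustrative, not from any particular standard.

```python
def counter_nonce(counter):
    """Deterministic 16-byte nonce: just the big-endian message counter."""
    return counter.to_bytes(16, "big")


def audit_nonces(nonces):
    """Verify an observed nonce sequence is exactly 0, 1, 2, ...

    An implementation trying to leak key bits through its nonces fails
    this check, because every bit of the nonce is already accounted for.
    """
    return all(n == counter_nonce(i) for i, n in enumerate(nonces))
```

The trade-off is that counter nonces must never repeat across reboots or devices, so they push the state-management burden elsewhere; but that burden is at least a visible engineering problem rather than an invisible covert channel.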

This is a hard problem. We don’t have any technical controls that protect users from the authors of their software.

And the current state of software makes the problem even harder: Modern apps chatter endlessly on the Internet, providing noise and cover for covert communications. Feature bloat provides a greater “attack surface” for anyone wanting to install a backdoor.

In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area—and it’s hurting us badly right now.

Yes, we need legal prohibitions against the NSA trying to subvert authors and deliberately weaken cryptography. But this isn’t just about the NSA, and legal controls won’t protect against those who don’t follow the law and ignore international agreements. We need to make their job harder by increasing their risk of discovery. Against a risk-averse adversary, it might be good enough.

This essay previously appeared on Wired.com.

EDITED TO ADD: I am looking for other examples of known or plausible instances of intentional vulnerabilities for a paper I am writing on this topic. If you can think of an example, please post a description and reference in the comments below. Please explain why you think the vulnerability could be intentional. Thank you.

Posted on October 22, 2013 at 6:15 AM

