Entries Tagged "backdoors"


The NSA's Ragtime Surveillance Program and the Need for Leaks

A new book reveals details about the NSA’s Ragtime surveillance program:

A book published earlier this month, “Deep State: Inside the Government Secrecy Industry,” contains revelations about the NSA’s snooping efforts, based on information gleaned from NSA sources. According to a detailed summary by Shane Harris at the Washingtonian yesterday, the book discloses that a codename for a controversial NSA surveillance program is “Ragtime”—and that as many as 50 companies have apparently participated, by providing data as part of a domestic collection initiative.

Deep State, which was authored by Marc Ambinder and D.B. Grady, also offers insight into how the NSA deems individuals a potential threat. The agency uses an automated data-mining process based on “a computerized analysis that assigns probability scores to each potential target,” as Harris puts it in his summary. The domestic version of the program, dubbed “Ragtime-P,” can process as many as 50 different data sets at one time, focusing on international communications from or to the United States. Intercepted metadata, such as email headers showing “to” and “from” fields, is stored in a database called “Marina,” where it generally stays for five years.
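To get a concrete sense of what header-only collection captures, here is a minimal Python sketch that keeps just the "to," "from," and date fields of a message and discards the subject and body. The sample message and the choice of fields are illustrative only; nothing here reflects documented details of the program.

```python
from email import message_from_string

RAW = """\
From: alice@example.com
To: bob@example.org
Date: Wed, 06 Mar 2013 13:24:00 -0500
Subject: lunch

Body text is content, not metadata, and is not captured below.
"""

def header_metadata(raw_message):
    """Keep only the envelope-style header fields ("From", "To",
    "Date"), discarding subject and body: the kind of record a
    metadata store would retain."""
    msg = message_from_string(raw_message)
    return {field: msg.get(field) for field in ("From", "To", "Date")}

record = header_metadata(RAW)
print(record["From"])  # alice@example.com
print(record["To"])    # bob@example.org
```

Even this stripped-down record reveals who talked to whom and when, which is why metadata retention matters regardless of whether content is stored.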

About three dozen NSA officials have access to Ragtime’s intercepted data on domestic counter-terrorism, the book claims, though outside the agency some 1000 people “are privy to the full details of the program.” Internally, the NSA apparently only employs four or five individuals as “compliance staff” to make sure the snooping is falling in line with laws and regulations. Another section of the Ragtime program, “Ragtime-A,” is said to involve U.S.-based interception of foreign counterterrorism data, while “Ragtime-B” collects data from foreign governments that transits through the U.S., and “Ragtime-C” monitors counterproliferation activity.

The whole article is interesting, as is the detailed summary, but I thought this comment was particularly important:

The fact that NSA keeps applying separate codenames to programs that inevitably are closely intertwined is an important clue to what’s really going on. The government wants to pretend they are discrete surveillance programs in order to conceal, especially from Congressional oversight, how monstrous they are in sum. So they’ll give a separate briefing on Trailblazer or what have you, and for an hour everybody in the room acts as if the whole thing is carefully circumscribed and under control. And then if somebody ever finds out about another program (say ‘Moonraker’ or what have you), then they go ahead and offer a similarly reassuring briefing on that. And nobody in Congress has to acknowledge that the Total Information Awareness Program that was exposed and met with howls of protest…actually wasn’t shut down at all, just went back under the radar after being renamed (and renamed and renamed).

He’s right. The real threat isn’t any one particular secret program, it’s all of them put together. And by dividing up the programs into different code names, the big picture remains secret and we only ever get glimpses of it.

We need whistleblowers. Much of the information we have about the NSA’s and the Justice Department’s plans and capabilities—think Echelon, Total Information Awareness, and the post-9/11 telephone eavesdropping program—is over a decade old.

Frank Rieger of the Chaos Computer Club got it right in 2006:

We also need to know how the intelligence agencies work today. It is of highest priority to learn how the “we rather use backdoors than waste time cracking your keys”-methods work in practice on a large scale and what backdoors have been intentionally built into or left inside our systems….

Of course, the risk of publishing this kind of knowledge is high, especially for those on the dark side. So we need to build structures that can lessen the risk. We need anonymous submission systems for documents, methods to clean out eventual document fingerprinting (both on paper and electronic). And, of course, we need to develop means to identify the inevitable disinformation that will also be fed through these channels to confuse us.

Unfortunately, the Obama Administration’s mistreatment of Bradley Manning and its aggressive prosecution of other whistleblowers have probably succeeded in scaring off would-be copycats. Yochai Benkler writes:

The prosecution will likely not accept Manning’s guilty plea to lesser offenses as the final word. When the case goes to trial in June, they will try to prove that Manning is guilty of a raft of more serious offenses. Most aggressive and novel among these harsher offenses is the charge that by giving classified materials to WikiLeaks Manning was guilty of “aiding the enemy.” That’s when the judge will have to decide whether handing over classified materials to ProPublica or the New York Times, knowing that Al Qaeda can read these news outlets online, is indeed enough to constitute the capital offense of “aiding the enemy.”

Aiding the enemy is a broad and vague offense. In the past, it was used in hard-core cases where somebody handed over information about troop movements directly to someone the collaborator believed to be “the enemy,” to American POWs collaborating with North Korean captors, or to a German American citizen who was part of a German sabotage team during WWII. But the language of the statute is broad. It prohibits not only actually aiding the enemy, giving intelligence, or protecting the enemy, but also the broader crime of communicating—directly or indirectly—with the enemy without authorization. That’s the prosecution’s theory here: Manning knew that the materials would be made public, and he knew that Al Qaeda or its affiliates could read the publications in which the materials would be published. Therefore, the prosecution argues, by giving the materials to WikiLeaks, Manning was “indirectly” communicating with the enemy. Under this theory, there is no need to show that the defendant wanted or intended to aid the enemy. The prosecution must show only that he communicated the potentially harmful information, knowing that the enemy could read the publications to which he leaked the materials. This would be true whether Al Qaeda searched the WikiLeaks database or the New York Times‘….

This theory is unprecedented in modern American history.

[…]

If Bradley Manning is convicted of aiding the enemy, the introduction of a capital offense into the mix would dramatically elevate the threat to whistleblowers. The consequences for the ability of the press to perform its critical watchdog function in the national security arena will be dire. And then there is the principle of the thing. However technically defensible on the language of the statute, and however well-intentioned the individual prosecutors in this case may be, we have to look at ourselves in the mirror of this case and ask: Are we the America of Japanese Internment and Joseph McCarthy, or are we the America of Ida Tarbell and the Pentagon Papers? What kind of country makes communicating with the press for publication to the American public a death-eligible offense?

A country that’s much less free and much less secure.

Posted on March 6, 2013 at 1:24 PM

Hackers Use Backdoor to Break System

Industrial control system comes with a backdoor:

Although the system was password protected in general, the backdoor through the IP address apparently required no password and allowed direct access to the control system. “[Th]e published backdoor URL provided the same level of access to the company’s control system as the password-protected administrator login,” said the memo.

This backdoor’s only security is secrecy. Of course, that never lasts:

Hackers broke into the industrial control system of a New Jersey air conditioning company earlier this year, using a backdoor vulnerability in the system, according to an FBI memo made public this week.
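The flaw generalizes: any interface that grants privileged access without credentials nullifies the authentication on every other interface. A toy Python audit over a hypothetical route table (all paths and names here are invented) makes the point:

```python
# Hypothetical route table for an embedded control system.
# Each entry: path -> (privilege level, password required?)
ROUTES = {
    "/login":       ("admin", True),   # the intended, password-protected way in
    "/status":      ("read",  False),  # harmless unauthenticated page
    "/cgi/factory": ("admin", False),  # the backdoor: admin access, no password
}

def unauthenticated_admin_paths(routes):
    """Return every path that hands out admin privilege without
    requiring a password -- in other words, every backdoor."""
    return sorted(path for path, (priv, needs_pw) in routes.items()
                  if priv == "admin" and not needs_pw)

print(unauthenticated_admin_paths(ROUTES))  # ['/cgi/factory']
```

The password on `/login` is irrelevant once `/cgi/factory` exists; an attacker simply takes the unauthenticated path.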

Posted on December 26, 2012 at 6:05 AM

Backdoor Found (Maybe) in Chinese-Made Military Silicon Chips

We all knew this was possible, but researchers have found the exploit in the wild:

Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.

Here’s the draft paper:

Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key to activate the backdoor. This way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact it can be easily compromised or it will have to be physically replaced after a redesign of the silicon itself.

The chip in question was designed in the U.S. by a U.S. company, but manufactured in China. News stories. Comment threads.

One researcher maintains that this is not malicious:

Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. The cause of these backdoors isn’t malicious, but a byproduct of software complexity. Systems need to be debugged before being shipped to customers. Therefore, the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).
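To see how a forgotten debug hook becomes a backdoor, consider this hypothetical Python sketch. The flag, account name, and password are invented stand-ins for the pattern, not code from any real product:

```python
# Sketch of a debug bypass that was supposed to be disabled before
# shipping. Everything here is hypothetical.

DEBUG_BUILD = True  # should be set to False for release builds, and often isn't

def check_login(username, password, stored_password="s3cret"):
    """Normal credential check, plus the debug bypass that the
    developers meant to remove from shipped firmware."""
    if DEBUG_BUILD and username == "debug":
        return True  # factory/debug bypass: no password needed
    return password == stored_password

# In the shipped unit, the "debug" account still works:
print(check_login("debug", ""))       # True: the backdoor
print(check_login("alice", "wrong"))  # False
```

The bypass is one forgotten flag away from being a remotely exploitable hole, which is exactly the failure mode described above for embedded operating systems.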

[…]

It could just be part of the original JTAG building-block. Actel didn’t design their own, but instead purchased the JTAG design and placed it on their chips. They are not aware of precisely all the functionality in that JTAG block, or how it might interact with the rest of the system.

But I’m betting that Microsemi/Actel know about the functionality, but thought of it as a debug feature, rather than a backdoor.

It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity. On the other hand, it’s easy for a manufacturer to flip bits. Consider that the functionality is part of the design, but that Actel intended to disable it by flipping a bit to turn it off. A manufacturer could easily flip that bit and turn it back on again. In other words, it’s extraordinarily difficult to add complex new functionality, but they may get lucky and be able to make small tweaks to accomplish their goals.

EDITED TO ADD (5/29): Two more articles.

EDITED TO ADD (6/8): Three more articles.

EDITED TO ADD (6/10): A response from the chip manufacturer.

The researchers’ assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with its highest level of security settings. This security setting will disable the use of any type of passcode to gain access to all device configuration, including the internal test facility.

A response from the researchers.

In order to gain access to the backdoor and other features a special key is required. This key has very robust DPA protection, in fact, one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique which is public and covered in our patent so everyone can independently verify our claims. That means that given physical access to the device an attacker can extract all the embedded IP within hours.

There is an option for the highest level of security settings – Permanent Lock. However, if the AES reprogramming option is left it still exposes the device to IP stealing. If not, the Permanent Lock itself is vulnerable to fault attacks and can be disabled opening up the path to the backdoor access as before, but without the need for any passcode.

Posted on May 29, 2012 at 2:07 PM

RuggedCom Inserts Backdoor into Its Products

All RuggedCom equipment comes with a built-in backdoor:

The backdoor, which cannot be disabled, is found in all versions of the Rugged Operating System made by RuggedCom, according to independent researcher Justin W. Clarke, who works in the energy sector. The login credentials for the backdoor include a static username, “factory,” that was assigned by the vendor and can’t be changed by customers, and a dynamically generated password that is based on the individual MAC address, or media access control address, for any specific device.

This seems like a really bad idea.
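To see why: a device’s MAC address is printed on its label and broadcast in every Ethernet frame it sends, so a password computed from the MAC by any fixed, secret-free function is available to anyone who learns the derivation. The sketch below uses an invented derivation, not RuggedCom’s actual algorithm:

```python
import hashlib

def derive_password(mac):
    """Made-up derivation (NOT RuggedCom's actual algorithm): hash the
    MAC and keep the first 8 hex digits. Any fixed function of the MAC
    with no per-device secret has the same weakness."""
    digest = hashlib.sha256(mac.lower().encode()).hexdigest()
    return digest[:8]

device_mac = "00:0a:dc:12:34:56"  # visible on the label and on the wire

# The vendor's tool and the attacker's tool compute the same thing:
vendor_pw = derive_password(device_mac)
attacker_pw = derive_password(device_mac)
print(vendor_pw == attacker_pw)  # True: knowing the MAC is knowing the password
```

A “dynamically generated” password of this kind is really a shared algorithm, not a secret; once the algorithm leaks, every deployed unit is open.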

No word from the company about whether they’re going to replace customer units.

EDITED TO ADD (5/11): RuggedCom’s response.

Posted on May 9, 2012 at 6:24 AM

FBI-Sponsored Backdoors

From a review of Susan Landau’s Surveillance or Security?:

To catch up with the new technologies of malfeasance, FBI director Robert Mueller traveled to Silicon Valley last November to persuade technology companies to build “backdoors” into their products. If Mueller’s wish were granted, the FBI would gain undetected real-time access to suspects’ Skype calls, Facebook chats, and other online communications, and in “clear text,” the industry lingo for unencrypted data. Backdoors, in other words, would make the Internet—and especially its burgeoning social media sector—“wiretappable.”

This is one of the cyber threats I talked about last week: insecurities deliberately created in some mistaken belief that they will stop crime. Once you build a backdoor into a product, you need to ensure that only the good guys use that backdoor, and only when they should. We’d all be much more secure if the backdoor didn’t exist at all.

Posted on October 7, 2011 at 6:01 AM

Details of the RSA Hack

We finally have some, even though the company isn’t talking:

So just how well crafted was the e-mail that got RSA hacked? Not very, judging by what F-Secure found.

The attackers spoofed the e-mail to make it appear to come from a “web master” at Beyond.com, a job-seeking and recruiting site. Inside the e-mail, there was just one line of text: “I forward this file to you for review. Please open and view it.” This was apparently enough to get the intruders the keys to RSA’s kingdom.

F-Secure produced a brief video showing what happened if the recipient clicked on the attachment. An Excel spreadsheet opened, which was completely blank except for an “X” that appeared in the first box of the spreadsheet. The “X” was the only visible sign that there was an embedded Flash exploit in the spreadsheet. When the spreadsheet opened, Excel triggered the Flash exploit to activate, which then dropped the backdoor—in this case a backdoor known as Poison Ivy—onto the system.

Poison Ivy would then reach out to a command-and-control server that the attackers controlled at good.mincesur.com, a domain that F-Secure says has been used in other espionage attacks, giving the attackers remote access to the infected computer at EMC. From there, they were able to reach the systems and data they were ultimately after.

F-Secure notes that neither the phishing e-mail nor the backdoor it dropped onto systems was advanced, although the zero-day Flash exploit used to drop the backdoor was.

Posted on August 30, 2011 at 6:25 AM

Open-Source Software Feels Insecure

At first glance, this seems like a particularly dumb opening line of an article:

Open-source software may not sound compatible with the idea of strong cybersecurity, but….

But it’s not. Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll—in extreme cases—sneak backdoors into the code when no one is looking.

Of course, these statements rely on the erroneous assumptions that security vulnerabilities are easy to find, and that proprietary source code makes them harder to find. And that secrecy is somehow aligned with security. I’ve written about this several times in the past, and there’s no need to rewrite the arguments again.

Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn’t sound compatible with the idea of strong cybersecurity.

Posted on June 2, 2011 at 12:11 PM
