Blog: September 2015 Archives

Volkswagen and Cheating Software

Portuguese translation by Ricardo R Hashimoto

For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars’ computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren’t being tested, they belched out 40 times the pollutants. The company’s CEO has resigned, and it will face an expensive recall, enormous fines and worse.

Cheating on regulatory testing has a long history in corporate America. It happens regularly in automobile emissions control and elsewhere. What’s important in the VW case is that the cheating was preprogrammed into the algorithm that controlled cars’ emissions.

Computers allow people to cheat in ways that are new. Because the cheating is encapsulated in software, the malicious actions can happen at a far remove from the testing itself. Because the software is “smart” in ways that normal objects are not, the cheating can be subtler and harder to detect.
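
To make the mechanism concrete, here is a deliberately simplified sketch, in Python, of the general “defeat device” pattern. This is not Volkswagen’s actual code, and the test-detection heuristic (a steady test speed with the steering wheel never moving, as on a dynamometer) is an assumption chosen for illustration:

    # Illustrative sketch only -- not Volkswagen's code. The heuristic for
    # "we are probably on an emissions test rig" is an assumption.

    def looks_like_emissions_test(speed_kmh: float, steering_angle_deg: float) -> bool:
        """Guess that the car is on a dynamometer rather than a road:
        wheels turning at test speeds while the steering wheel never moves."""
        return 20 <= speed_kmh <= 120 and abs(steering_angle_deg) < 1.0

    def emissions_control_level(speed_kmh: float, steering_angle_deg: float) -> float:
        """Return how aggressively to run the emissions controls (0.0 to 1.0)."""
        if looks_like_emissions_test(speed_kmh, steering_angle_deg):
            return 1.0   # under test: run full emissions controls, look clean
        return 0.2       # normal driving: favor performance, pollute far more

The point of the sketch is how small the cheat can be: a few conditional lines buried in a much larger engine controller, activated only under conditions a regulator happens to create.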

We’ve already had examples of smartphone manufacturers cheating on processor benchmark testing: detecting when they’re being tested and artificially increasing their performance. We’re going to see this in other industries.

The Internet of Things is coming. Many industries are moving to add computers to their devices, and that will bring with it new opportunities for manufacturers to cheat. Light bulbs could fool regulators into appearing more energy efficient than they are. Temperature sensors could fool buyers into believing that food has been stored at safer temperatures than it has been. Voting machines could appear to work perfectly—except during the first Tuesday of November, when they undetectably switch a few percent of votes from one party’s candidates to another’s.

My worry is that some corporate executives won’t interpret the VW story as a cautionary tale involving just punishments for a bad mistake but will see it instead as a demonstration that you can get away with something like that for six years.

And they’ll cheat smarter. For all of VW’s brazenness, its cheating was obvious once people knew to look for it. Far cleverer would be to make the cheating look like an accident. Overall software quality is so bad that products ship with thousands of programming mistakes.

Most of them don’t affect normal operations, which is why your software generally works just fine. Some of them do, which is why your software occasionally fails, and needs constant updates. By making cheating software appear to be a programming mistake, the cheating looks like an accident. And, unfortunately, this type of deniable cheating is easier than people think.

Computer-security experts believe that intelligence agencies have been doing this sort of thing for years, both with the consent of the software developers and surreptitiously.

This problem won’t be solved through computer security as we normally think of it. Conventional computer security is designed to prevent outside hackers from breaking into your computers and networks. The car analogue would be security software that prevented an owner from tweaking his own engine to run faster but in the process emit more pollutants. What we need to contend with is a very different threat: malfeasance programmed in at the design stage.

We already know how to protect ourselves against corporate misbehavior. Ronald Reagan once said “trust, but verify” when speaking about the Soviet Union cheating on nuclear treaties. We need to be able to verify the software that controls our lives.

Software verification has two parts: transparency and oversight. Transparency means making the source code available for analysis. The need for this is obvious; it’s much easier to hide cheating software if a manufacturer can hide the code.

But transparency doesn’t magically reduce cheating or improve software quality, as anyone who uses open-source software knows. It’s only the first step. The code must be analyzed. And because software is so complicated, that analysis can’t be limited to a once-every-few-years government test. We need private analysis as well.

It was researchers at private labs in the United States and Germany that eventually outed Volkswagen. So transparency can’t just mean making the code available to government regulators and their representatives; it needs to mean making the code available to everyone.

Both transparency and oversight are being threatened in the software world. Companies routinely fight making their code public and attempt to muzzle security researchers who find problems, citing the proprietary nature of the software. It’s a fair complaint, but the public interests of accuracy and safety need to trump business interests.

Proprietary software is increasingly being used in critical applications: voting machines, medical devices, breathalyzers, electric power distribution, systems that decide whether or not someone can board an airplane. We’re ceding more control of our lives to software and algorithms. Transparency is the only way to verify that they’re not cheating us.

There’s no shortage of corporate executives willing to lie and cheat their way to profits. We saw another example of this last week: Stewart Parnell, the former CEO of the now-defunct Peanut Corporation of America, was sentenced to 28 years in prison for knowingly shipping out salmonella-tainted products. That may seem excessive, but nine people died and many more fell ill as a result of his cheating.

Software will only make malfeasance like this easier to commit and harder to prove. Fewer people need to know about the conspiracy. It can be done in advance, nowhere near the testing time or site. And, if the software remains undetected for long enough, it could easily be the case that no one in the company remembers that it’s there.

We need better verification of the software that controls our lives, and that means more—and more public—transparency.

This essay previously appeared on CNN.com.

EDITED TO ADD: Three more essays.

EDITED TO ADD (10/8): A history of emissions-control cheating devices.

Posted on September 30, 2015 at 9:13 AM

How GCHQ Tracks Internet Users

The Intercept has a new story from the Snowden documents about the UK’s surveillance of the Internet by the GCHQ:

The mass surveillance operation, code-named KARMA POLICE, was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ.

[…]

One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cell phone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps.

[…]

As of March 2009, the largest slice of data Black Hole held—41 percent—was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.

Lots more in the article. The Intercept also published 28 new top secret NSA and GCHQ documents.

Posted on September 29, 2015 at 6:16 AM

Good Article on the Sony Attack

Fortune has a three-part article on the Sony attack by North Korea. There’s not a lot of tech here; it’s mostly about Sony’s internal politics regarding the movie and IT security before the attack, and some about their reaction afterwards.

Despite what I wrote at the time, I now believe that North Korea was responsible for the attack. This is the article that convinced me. It’s about the US government’s reaction to the attack.

Posted on September 28, 2015 at 6:22 AM

People Who Need to Pee Are Better at Lying

No, really.

Abstract: The Inhibitory-Spillover-Effect (ISE) on a deception task was investigated. The ISE occurs when performance in one self-control task facilitates performance in another (simultaneously conducted) self-control task. Deceiving requires increased access to inhibitory control. We hypothesized that inducing liars to control urination urgency (physical inhibition) would facilitate control during deceptive interviews (cognitive inhibition). Participants drank small (low-control) or large (high-control) amounts of water. Next, they lied or told the truth to an interviewer. Third-party observers assessed the presence of behavioral cues and made true/lie judgments. In the high-control, but not the low-control condition, liars displayed significantly fewer behavioral cues to deception, more behavioral cues signaling truth, and provided longer and more complex accounts than truth-tellers. Accuracy detecting liars in the high-control condition was significantly impaired; observers revealed bias toward perceiving liars as truth-tellers. The ISE can operate in complex behaviors. Acts of deception can be facilitated by covert manipulations of self-control.

News article.

Posted on September 25, 2015 at 5:54 AM

Living in a Code Yellow World

In the 1980s, handgun expert Jeff Cooper invented something called the Color Code to describe what he called the “combat mind-set.” Here is his summary:

In White you are unprepared and unready to take lethal action. If you are attacked in White you will probably die unless your adversary is totally inept.

In Yellow you bring yourself to the understanding that your life may be in danger and that you may have to do something about it.

In Orange you have determined upon a specific adversary and are prepared to take action which may result in his death, but you are not in a lethal mode.

In Red you are in a lethal mode and will shoot if circumstances warrant.

Cooper talked about remaining in Code Yellow over time, but he didn’t write about its psychological toll. It’s significant. Our brains can’t be on that alert level constantly. We need downtime. We need to relax. This is why we have friends around whom we can let our guard down and homes where we can close our doors to outsiders. We only want to visit Yellowland occasionally.

Since 9/11, the US has increasingly become Yellowland, a place where we assume danger is imminent. It’s damaging to us individually and as a society.

I don’t mean to minimize actual danger. Some people really do live in a Code Yellow world, due to the failures of government in their home countries. Even there, we know how hard it is for them to maintain a constant level of alertness in the face of constant danger. Psychologist Abraham Maslow wrote about this, making safety a basic level in his hierarchy of needs. A lack of safety makes people anxious and tense, and the long-term effects are debilitating.

The same effects occur when we believe we’re living in an unsafe situation even if we’re not. The psychological term for this is hypervigilance. Hypervigilance in the face of imagined danger causes stress and anxiety. This, in turn, alters how your hippocampus functions, and causes an excess of cortisol in your body. Now cortisol is great in small and infrequent doses, and helps you run away from tigers. But it destroys your brain and body if you marinate in it for extended periods of time.

Not only does trying to live in Yellowland harm you physically, it changes how you interact with your environment and it impairs your judgment. You forget what’s normal and start seeing the enemy everywhere. Terrorism actually relies on this kind of reaction to succeed.

Here’s an example from The Washington Post last year: “I was taking pictures of my daughters. A stranger thought I was exploiting them.” A father wrote about his run-in with an off-duty DHS agent, who interpreted an innocent family photoshoot as something nefarious and proceeded to harass and lecture the family. That the parents were white and the daughters Asian added a racist element to the encounter.

At the time, people wrote about this as an example of worst-case thinking, saying that as a DHS agent, “he’s paid to suspect the worst at all times and butt in.” While, yes, it was a “disturbing reminder of how the mantra of ‘see something, say something’ has muddied the waters of what constitutes suspicious activity,” I think there’s a deeper story here. The agent is trying to live his life in Yellowland, and it caused him to see predators where there weren’t any.

I call these “movie-plot threats,” scenarios that would make great action movies but that are implausible in real life. Yellowland is filled with them.

Last December, former DHS Secretary Tom Ridge wrote about the security risks of building an NFL stadium near the Los Angeles Airport. His report is full of movie-plot threats, including terrorists shooting down a plane and crashing it into a stadium. His conclusion, that it is simply too dangerous to build a sports stadium within a few miles of the airport, is absurd. He’s been living too long in Yellowland.

That our brains aren’t built to live in Yellowland makes sense, because actual attacks are rare. The person walking towards you on the street isn’t an attacker. The person doing something unexpected over there isn’t a terrorist. Crashing an airplane into a sports stadium is more suitable to a Die Hard movie than real life. And the white man taking pictures of two Asian teenagers on a ferry isn’t a sex slaver. (I mean, really?)

Most of us, that DHS agent included, are complete amateurs at knowing the difference between something benign and something that’s actually dangerous. Combine this with the rarity of attacks, and you end up with an overwhelming number of false alarms. This is the ultimate problem with programs like “see something, say something.” They waste an enormous amount of time and money.

Those of us fortunate enough to live in a Code White society are much better served acting like we do. This is something we need to learn at all levels, from our personal interactions to our national policy. Since the terrorist attacks of 9/11, many of our counterterrorism policies have helped convince people they’re not safe, and that they need to be in a constant state of readiness. We need our leaders to lead us out of Yellowland, not to perpetuate it.

This essay previously appeared on Fusion.net.

EDITED TO ADD (9/25): UK student reading book on terrorism is accused of being a terrorist. He was reading the book for a class he was taking. I’ll let you guess his ethnicity.

Posted on September 24, 2015 at 11:39 AM

Bringing Frozen Liquids through Airport Security

Gizmodo reports that UK airport security confiscates frozen liquids:

“He told me that it wasn’t allowed so I asked under what grounds, given it is not a liquid. When he said I couldn’t take it I asked if he knew that for sure or just assumed. He grabbed his supervisor and the supervisor told me that ‘the government does not classify that as a solid’. I decided to leave it at that point. I expect they’re probably wrong to take it from me. They’d probably not seen it before, didn’t know the rules, and being a bit of an eccentric request, decided to act on the side of caution. They didn’t spend the time to look it up.”

As it happens, I have a comparable recent experience. Last week, I tried to bring a small cooler containing, among other things, a bag of ice through security. I expected to have to dump the ice at the security checkpoint and refill it inside the airport, but the TSA official looked at it and let it through. Turns out that frozen liquids are fine. I confirmed this with TSA officials at two other airports this week.

One of the TSA officials even told me that what he was officially told is that liquid explosives don’t freeze.

So there you go. The US policy is more sensible. And anyone landing in the UK from the US will have to go through security before any onward flight, so there’s no chance of flouting the UK rules that way.

And while we’re on the general subject, I am continually amazed by how lax the liquid rules are here in the US. Yesterday I went through airport security at SFO with an opened 5-ounce bottle of hot sauce in my carry-on. The screener flagged it; it was obvious on the x-ray. Another screener searched my bag, found it and looked at it, and then let me keep it.

And, in general, I never bother taking my liquids out of my suitcase anymore. I don’t have to when I am in the PreCheck lane, but no one seems to care in the regular lane either. It is different in the UK.

EDITED TO ADD (10/13): According to a 2009 TSA blog post, frozen ice (not semi-melted) is allowed.

Hannibal Buress routine about the TSA liquids rules.

Posted on September 22, 2015 at 1:22 PM

SYNful Knock Attack Against Cisco Routers

FireEye is reporting the discovery of persistent malware that compromises Cisco routers:

While this attack could be possible on any router technology, in this case, the targeted victims were Cisco routers. The Mandiant team found 14 instances of this router implant, dubbed SYNful Knock, across four countries: Ukraine, Philippines, Mexico, and India.

[…]

The implant uses techniques that make it very difficult to detect. A clandestine modification of the router’s firmware image can be utilized to maintain perpetual presence to an environment. However, it mainly surpasses detection because very few, if any, are monitoring these devices for compromise.

I don’t know if the attack is related to this attack against Cisco routers discovered in August.

As I wrote then, this is very much the sort of attack you’d expect from a government eavesdropping agency. We know, for example, that the NSA likes to attack routers. If I had to guess, I would guess that this is an NSA exploit. (Note the lack of Five Eyes countries in the target list.)

Posted on September 21, 2015 at 11:45 AM

History of Hacktivism

Nice article by Dorothy Denning.

Hacktivism emerged in the late 1980s at a time when hacking for fun and profit were becoming noticeable threats. Initially it took the form of computer viruses and worms that spread messages of protest. A good example of early hacktivism is “Worms Against Nuclear Killers (WANK),” a computer worm that anti-nuclear activists in Australia unleashed into the networks of the National Aeronautics and Space Administration and the US Department of Energy in 1989 to protest the launch of a shuttle which carried radioactive plutonium.

By the mid-1990s, denial of service (DoS) attacks had been added to the hacktivist’s toolbox, usually taking the form of message or traffic floods. In 1994, journalist Joshua Quittner lost access to his e-mail after thousands of messages slamming “capitalistic pig” corporations swamped his inbox, and a group calling itself “The Zippies” flooded e-mail accounts in the United Kingdom with traffic to protest a bill that would have outlawed outdoor dance festivals. Then in 1995, an international group called Strano Network organized a one-hour “Net’strike” against French government websites to protest nuclear and social policies. At the designated time, participants visited the target websites and hit the “reload” button over and over in an attempt to tie up traffic to the sites.

Her conclusion comes as no surprise:

Hacktivism, including state-sponsored or conducted hacktivism, is likely to become an increasingly common method for voicing dissent and taking direct action against adversaries. It offers an easy and inexpensive means to make a statement and inflict harm without seriously risking prosecution under criminal law or a response under international law. Hacking gives non-state actors an attractive alternative to street protests and state actors an appealing substitute for armed attacks. It has become not only a popular means of activism, but also an instrument of national power that is challenging international relations and international law.

Posted on September 21, 2015 at 6:34 AM

Friday Squid Blogging: Giant Squid Sculpture at Burning Man

It looks impressive, maybe 20-30 feet long:

“I think this might be the coolest thing I have ever built,” said Barry Crawford about his giant, metal squid that was installed at Burning Man.

The sculpture is entirely made of found objects including half of a dropped airplane tank and a metal vegetable strainer. The eyeball opens and closes and the tentacles can be moved by participating viewers.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on September 18, 2015 at 5:47 PM

Smart Watch that Monitors Typing

Here’s a watch that monitors the movements of your hand and can guess what you’re typing.

Using the watch’s built-in motion sensors, more specifically data from the accelerometer and gyroscope, researchers were able to create a 3D map of the user’s hand movements while typing on a keyboard.

The researchers then created two algorithms, one for detecting what keys were being pressed, and one for guessing what word was typed.

The first algorithm recorded the places where the smartwatch’s sensors would detect a dip in movement, considering this spot as a keystroke, and then created a heatmap of common spots where the user would press down.

Based on known keyboard layouts, these spots were attributed to letters on the left side of the keyboard.

The second algorithm took this data, and analyzing the pauses between smartwatch (left hand) keystrokes, it was able to detect how many letters were pressed with the right hand, based on the user’s regular keystroke frequency.

Based on a simple dictionary lookup, the algorithm then managed to reliably reproduce what words were typed on the keyboard.
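
As a rough illustration of the approach described above, here is a minimal Python sketch. It is not the researchers’ code, and the sampling rate, thresholds, and assumed typing rate are invented for the example (the real system also builds a 3D motion map and uses keyboard-layout heuristics omitted here):

    # Minimal sketch, not the researchers' code. All constants are assumptions.
    from typing import List

    SAMPLE_RATE_HZ = 50        # assumed accelerometer sampling rate
    DIP_THRESHOLD = 0.15       # assumed "dip in movement" threshold
    MIN_GAP_SAMPLES = 10       # ignore dips closer together than ~0.2 s

    def keystroke_events(motion_magnitude: List[float]) -> List[int]:
        """Treat each dip in hand movement as a candidate keystroke of the
        hand wearing the watch; return the sample indices of those dips."""
        events, last = [], -MIN_GAP_SAMPLES
        for i, m in enumerate(motion_magnitude):
            if m < DIP_THRESHOLD and i - last >= MIN_GAP_SAMPLES:
                events.append(i)
                last = i
        return events

    def right_hand_key_counts(events: List[int], typing_rate_hz: float = 4.0) -> List[int]:
        """For each pause between watch-hand keystrokes, estimate how many
        keys the other hand pressed, assuming a roughly regular typing rate."""
        counts = []
        for start, end in zip(events, events[1:]):
            pause_seconds = (end - start) / SAMPLE_RATE_HZ
            counts.append(max(0, round(pause_seconds * typing_rate_hz) - 1))
        return counts

A real attack would then match the inferred watch-hand key positions and the other hand’s key counts against a dictionary, as the article describes.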

Posted on September 18, 2015 at 5:20 AM

Self-Destructing Computer Chip

The chip is built on glass:

Shattering the glass is straightforward. When the proper circuit is toggled, a small resistor within the substrate heats up until the glass shatters. According to Corning, it will continue shattering even after the initial break, rendering the entire chip unusable. The demo chip resistor was triggered by a photo diode that switched the circuit when a laser shone upon it. The glass plate quickly shattered into fragments once the laser touches it.

Posted on September 17, 2015 at 7:17 AM

Anonymous Browsing at the Library

A rural New Hampshire library decided to install Tor on its computers and allow anonymous Internet browsing. The Department of Homeland Security pressured it to stop:

A special agent in a Boston DHS office forwarded the article to the New Hampshire police, who forwarded it to a sergeant at the Lebanon Police Department.

DHS spokesman Shawn Neudauer said the agent was simply providing “visibility/situational awareness,” and did not have any direct contact with the Lebanon police or library. “The use of a Tor browser is not, in [or] of itself, illegal and there are legitimate purposes for its use,” Neudauer said, “However, the protections that Tor offers can be attractive to criminal enterprises or actors and HSI [Homeland Security Investigations] will continue to pursue those individuals who seek to use the anonymizing technology to further their illicit activity.”

When the DHS inquiry was brought to his attention, Lt. Matthew Isham of the Lebanon Police Department was concerned. “For all the good that a Tor may allow as far as speech, there is also the criminal side that would take advantage of that as well,” Isham said. “We felt we needed to make the city aware of it.”

The good news is that the library is resisting the pressure and keeping Tor running.

This is an important issue for reasons that go beyond the New Hampshire library. The goal of the Library Freedom Project is to set up Tor exit nodes at libraries. Exit nodes help every Tor user in the world; the more of them there are, the harder it is to subvert the system. The Kilton Public Library isn’t just allowing its patrons to browse the Internet anonymously; it is helping dissidents around the world stay alive.

Librarians have been protecting our privacy for decades, and I’m happy to see that tradition continue.

EDITED TO ADD (10/13): As a result of the story, more libraries are planning to run Tor nodes.

Posted on September 16, 2015 at 1:40 PM

Child Arrested Because Adults Are Stupid

A Texas 9th-grader makes an electronic clock and brings it to school. Teachers immediately become stupid and call the police:

The bell rang at least twice, he said, while the officers searched his belongings and questioned his intentions. The principal threatened to expel him if he didn’t make a written statement, he said.

“They were like, ‘So you tried to make a bomb?'” Ahmed said.

“I told them no, I was trying to make a clock.”

“He said, ‘It looks like a movie bomb to me.’”

The student’s name is Ahmed Mohamed, which certainly didn’t help.

I am reminded of the 2007 story of an MIT student getting arrested for bringing a piece of wearable electronic art to the airport. And I wrote about the “war on the unexpected” back in 2007, too.

We simply have to stop terrorizing ourselves. We just look stupid when we do it.

EDITED TO ADD: New York Times article. Glenn Greenwald commentary.

EDITED TO ADD (9/21): There’s more to the story. He’s been invited to the White House, Google, MIT, and Facebook, and offered internships by Reddit and Twitter. On the other hand, Sarah Palin doesn’t believe it was just a clock. And he’s changing schools.

EDITED TO ADD (10/13): Two more essays.

Posted on September 16, 2015 at 10:09 AM

Hacking Team, Computer Vulnerabilities, and the NSA

When the National Security Agency (NSA)—or any government agency—discovers a vulnerability in a popular computer system, should it disclose it or not? The debate exists because vulnerabilities have both offensive and defensive uses. Offensively, vulnerabilities can be exploited to penetrate others’ computers and networks, either for espionage or destructive purposes. Defensively, publicly revealing security flaws can be used to make our own systems less vulnerable to those same attacks. The two options are mutually exclusive: either we can help to secure both our own networks and the systems we might want to attack, or we can keep both networks vulnerable. Many, myself included, have long argued that defense is more important than offense, and that we should patch almost every vulnerability we find. Even the President’s Review Group on Intelligence and Communications Technologies recommended in 2013 that “U.S. policy should generally move to ensure that Zero Days are quickly blocked, so that the underlying vulnerabilities are patched on U.S. Government and other networks.”

Both the NSA and the White House have talked about a secret “vulnerability equities process” they go through when they find a security flaw. Both groups maintain that the process is heavily weighted in favor of disclosing vulnerabilities to the vendors and having them patched.

An undated document—declassified last week with heavy redactions after a year-long Freedom of Information Act lawsuit—shines some light on the process but still leaves many questions unanswered. An important question is: which vulnerabilities go through the equities process, and which don’t?

A real-world example of the ambiguity surrounding the equities process emerged from the recent hacking of the cyber weapons arms manufacturer Hacking Team. The corporation sells Internet attack and espionage software to countries around the world, including many reprehensible governments, to allow them to eavesdrop on their citizens, sometimes as a prelude to arrest and torture. The computer tools were used against U.S. journalists.

In July, unidentified hackers penetrated Hacking Team’s corporate network and stole almost everything of value, including corporate documents, e-mails, and source code. The hackers proceeded to post it all online.

The NSA was most likely able to penetrate Hacking Team’s network and steal the same data. The agency probably did it years ago. They would have learned the same things about Hacking Team’s network software that we did in July: how it worked, what vulnerabilities they were using, and which countries were using their cyber weapons. Armed with that knowledge, the NSA could have quietly neutralized many of the company’s products. The United States could have alerted software vendors about the zero-day exploits and had them patched. It could have told the antivirus companies how to detect and remove Hacking Team’s malware. It could have done a lot. Assuming that the NSA did infiltrate Hacking Team’s network, the fact that the United States chose not to reveal the vulnerabilities it uncovered is both revealing and interesting, and the decision provides a window into the vulnerability equities process.

The first question to ask is why? There are three possible reasons. One, the software was also being used by the United States, and the government did not want to lose its benefits. Two, NSA was able to eavesdrop on other entities using Hacking Team’s software, and they wanted to continue benefitting from the intelligence. And three, the agency did not want to expose their own hacking capabilities by demonstrating that they had compromised Hacking Team’s network. In reality, the decision may have been due to a combination of the three possibilities.

How was this decision made? More explicitly, did any vulnerabilities that Hacking Team exploited, and the NSA was aware of, go through the vulnerability equities process? It is unclear. The NSA plays fast and loose when deciding which security flaws go through the procedure. The process document states that it applies to vulnerabilities that are “newly discovered and not publicly known.” Does that refer only to vulnerabilities discovered by the NSA, or does the process also apply to zero-day vulnerabilities that the NSA discovers others are using? If vulnerabilities used in others’ cyber weapons are excluded, it is very difficult to talk about the process as it is currently formulated.

The U.S. government should close the vulnerabilities that foreign governments are using to attack people and networks. If taking action is as easy as plugging security vulnerabilities in products and making everyone in the world more secure, that should be standard procedure. The fact that the NSA—we assume—chose not to suggests that the United States has its priorities wrong.

Undoubtedly, there would be blowback from closing vulnerabilities utilized in others’ cyber weapons. Several companies sell information about vulnerabilities to different countries, and if they found that those security gaps were regularly closed soon after they started trying to sell them, they would quickly suspect espionage and take more defensive precautions. The new wariness of sellers and decrease in available security flaws would also raise the price of vulnerabilities worldwide. The United States is one of the biggest buyers, meaning that we benefit from greater availability and lower prices.

If we assume the NSA has penetrated these companies’ networks, we should also assume that the intelligence agencies of countries like Russia and China have done the same. Are those countries using Hacking Team’s vulnerabilities in their cyber weapons? We are all embroiled in a cyber arms race—finding, buying, stockpiling, using, and exposing vulnerabilities—and our actions will affect the actions of all the other players.

It seems foolish that we would not take every opportunity to neutralize the cyberweapons of those countries that would attack the United States or use them against their own people for totalitarian gain. Is it truly possible that when the NSA intercepts and reverse-engineers a cyberweapon used by one of our enemies—whether a Hacking Team customer or a country like China—we don’t close the vulnerabilities that that weapon uses? Does the NSA use knowledge of the weapon to defend the U.S. government networks whose security it maintains, at the expense of everyone else in the country and the world? That seems incredibly dangerous.

In my book Data and Goliath, I suggested breaking apart the NSA’s offensive and defensive components, in part to resolve the agency’s internal conflict between attack and defense. One part would be focused on foreign espionage, and another on cyberdefense. This Hacking Team discussion demonstrates that even separating the agency would not be enough. The espionage-focused organization that penetrates and analyzes the products of cyberweapons arms manufacturers would regularly learn about vulnerabilities used to attack systems and networks worldwide. Thus, that section of the agency would still have to transfer that knowledge to the defense-focused organization. That is not going to happen as long as the United States prioritizes surveillance over security and attack over defense. The norms governing actions in cyberspace need to be changed, a task far more difficult than any reform of the NSA.

This essay previously appeared in the Georgetown Journal of International Affairs.

EDITED TO ADD: Hacker News thread.

Posted on September 15, 2015 at 6:38 AM

Wanted: Cryptography Products for Worldwide Survey

In 1999, Lance Hoffman, David Balenson, and others published a survey of non-US cryptographic products. The point of the survey was to illustrate that there was a robust international market in these products, and that US-only export restrictions on strong encryption did nothing to prevent its adoption and everything to disadvantage US corporations. This was an important contribution during the First Crypto War, and Hoffman testified before a Senate committee on his findings.

I want to redo that survey for 2015.

Here, at the beginning of the Second Crypto War, we again need to understand which encryption products are outside the reach of US regulation (or UK regulation). Are there so many foreign crypto products that any regulation by only one country will be easily circumvented? Or has the industry consolidated around only a few products made by only a few countries, so that effective regulation of strong encryption is possible? What are the possibilities for encrypted communication and data storage? I honestly don’t know the answer—and I think it’s important to find out.

To that end, I am asking for help. Please respond in the comments with the names—and URLs—of non-US encryption software and hardware products. I am only interested in those useful for protecting communications and data storage. I don’t care about encrypting financial transactions, or anything of that sort.

Thank you for your help. And please forward this blog post to anyone else who might help.

EDITED TO ADD: Thinking about it more, I want to compile a list of domestic (U.S.) encryption products as well. Since right now the FBI seems intent on just pressuring the big companies like Apple and Microsoft, and not regulating cryptography in general, knowing what else is out there in the U.S. will be useful.

Posted on September 11, 2015 at 2:08 PM

Drone Self-Defense and the Law

Last month, a Kentucky man shot down a drone that was hovering near his backyard.

WDRB News reported that the camera drone’s owners soon showed up at the home of the shooter, William H. Merideth: “Four guys came over to confront me about it, and I happened to be armed, so that changed their minds,” Merideth said. “They asked me, ‘Are you the S-O-B that shot my drone?’ and I said, ‘Yes I am,’” he said. “I had my 40 mm Glock on me and they started toward me and I told them, ‘If you cross my sidewalk, there’s gonna be another shooting.’” Police charged Merideth with criminal mischief and wanton endangerment.

This is a trend. People have shot down drones in southern New Jersey and rural California as well. It’s illegal, and they get arrested for it.

Technology changes everything. Specifically, it upends long-standing societal balances around issues like security and privacy. When a capability becomes possible, or cheaper, or more common, the changes can be far-reaching. Rebalancing security and privacy after technology changes capabilities can be very difficult, and take years. And we’re not very good at it.

The security threats from drones are real, and the government is taking them seriously. In January, a man lost control of his drone, which crashed on the White House lawn. In May, another man was arrested for trying to fly his drone over the White House fence, and another last week for flying a drone into the stadium where the U.S. Open was taking place.

Drones have attempted to deliver drugs to prisons in Maryland, Ohio and South Carolina, so far.

There have been many near-misses between drones and airplanes. Many people have written about the possible terrorist uses of drones.

Defenses are being developed. Both Lockheed Martin and Boeing sell anti-drone laser weapons. One company sells shotgun shells specifically designed to shoot down drones.

Other companies are working on technologies to detect and disable them safely. Some of those technologies were used to provide security at this year’s Boston Marathon.

Law enforcement can deploy these technologies, but under current law it’s illegal to shoot down a drone, even if it’s hovering above your own property. In our society, you’re generally not allowed to take the law into your own hands. You’re expected to call the police and let them deal with it.

There’s an alternate theory, though, from law professor Michael Froomkin. He argues that self-defense should be permissible against drones simply because you don’t know their capabilities. We know, for example, that people have mounted guns on drones, which means they could pose a threat to life. Note that this legal theory has not been tested in court.

Increasingly, government is regulating drones and drone flights both at the state level and by the FAA. There are proposals to require that drones have an identifiable transponder, or no-fly zones programmed into the drone software.

Still, a large number of security issues remain unresolved. How do we feel about drones with long-range listening devices, for example? Or drones hovering outside our property and photographing us through our windows?

What’s going on is that drones have changed how we think about security and privacy within our homes, by removing the protections we used to get from fences and walls. Of course, being spied on and shot at from above is nothing new, but access to those technologies was expensive and largely the purview of governments and some corporations. Drones put these capabilities into the hands of hobbyists, and we don’t know what to do about it.

The issues around drones will get worse as we move from remotely piloted aircraft to true drones: aircraft that operate autonomously from a computer program. For the first time, autonomous robots—with ever-increasing intelligence and capabilities at an ever-decreasing cost—will have access to public spaces. This will create serious problems for society, because our legal system is largely based on deterring human miscreants rather than their proxies.

Our desire to shoot down a drone hovering nearby is understandable, given its potential threat. Society’s need for people not to take the law into their own hands—and especially not to fire guns into the air—is also understandable. These two positions are increasingly coming into conflict, and will require increasing government regulation to sort out. But more importantly, we need to rethink our assumptions of security and privacy in a world of autonomous drones, long-range cameras, face recognition, and the myriad other technologies that are increasingly in the hands of everyone.

This essay previously appeared on CNN.com.

Posted on September 11, 2015 at 6:45 AM

Cheating News from the Chess World

Chess player caught cheating at a tournament:

“I kept on looking at him. He was always sitting down, he never got up. It was very strange; we are talking about hours and hours of playing. But most suspicious of all, he always had his arms folded with his thumb under his armpit. He never took it out.”

Mr Coqueraut said he was also “batting his eyelids in the most unnatural way.”

“Then I understood it,” he said. “He was deciphering signals in Morse code.”

The referee attempted to expose Mr Ricciardi by asking him to empty his pockets, but nothing was found. When the Italian was asked to open his shirt, he refused.

Tournament organisers then asked the 37-year old to pass through a metal detector and a sophisticated pendant was found hanging around his neck underneath a shirt. The pendant contained a tiny video camera as well as a mass of wires attached to his body and a 4cm box under his armpit. Mr Ricciardi claimed they were good luck charms.

Older posts. A grandmaster was caught cheating in April.

Posted on September 10, 2015 at 12:30 PM

FBI and Apple's Encryption

The New York Times is reporting that Apple encryption is hampering an FBI investigation:

In an investigation involving guns and drugs, the Justice Department obtained a court order this summer demanding that Apple turn over, in real time, text messages between suspects using iPhones.

Apple’s response: Its iMessage system was encrypted and the company could not comply.

Government officials had warned for months that this type of standoff was inevitable as technology companies like Apple and Google embraced tougher encryption. The case, coming after several others in which similar requests were rebuffed, prompted some senior Justice Department and F.B.I. officials to advocate taking Apple to court, several current and former law enforcement officials said.

While that prospect has been shelved for now, the Justice Department is engaged in a court dispute with another tech company, Microsoft.

Several people have asked me in e-mail if this is the case I was referring to here:

There’s a persistent rumor going around that Apple is in the secret FISA Court, fighting a government order to make its platform more surveillance-friendly—and they’re losing. This might explain Apple CEO Tim Cook’s somewhat sudden vehemence about privacy. I have not found any confirmation of the rumor.

It’s not. The rumor I am hearing is not about access to a particular user and his communications. It is about general access to iOS data and communications. And it’s in the FISA court, which means that it’s not a domestic criminal matter.

To reiterate: this is a rumor. I have no confirmation. But I know three reporters that are poking around, looking for the story.

EDITED TO ADD (9/11): Nicholas Weaver, Matthew Green, and

This coming Thursday, I’ll be talking with Larry Ponemon about cyber-resilience and the results of a new survey he’s releasing. Join us here. The event is sponsored by my company, Resilient Systems, Inc.

Posted on September 4, 2015 at 2:19 PM

China's "Great Cannon"

Interesting research: “An Analysis of China’s ‘Great Cannon.’”

Abstract: On March 16th, 2015, the Chinese censorship apparatus employed a new tool, the “Great Cannon”, to engineer a denial-of-service attack on GreatFire.org, an organization dedicated to resisting China’s censorship. We present a technical analysis of the attack and what it reveals about the Great Cannon’s working, underscoring that in essence it constitutes a selective nation-state Man-in-the-Middle attack tool. Although sharing some code similarities and network locations with the Great Firewall, the Great Cannon is a distinct tool, designed to compromise foreign visitors to Chinese sites. We identify the Great Cannon’s operational behavior, localize it in the network topology, verify its distinctive side-channel, and attribute the system as likely operated by the Chinese government. We also discuss the substantial policy implications raised by its use, including the potential imposition on any user whose browser might visit (even inadvertently) a Chinese web site.

Posted on September 4, 2015 at 8:16 AM

"The Declining Half-Life of Secrets"

Several times I’ve mentioned Peter Swire’s concept of “the declining half-life of secrets.” He’s finally written it up:

The nature of secrets is changing. Secrets that would once have survived the 25 or 50 year test of time are more and more prone to leaks. The declining half-life of secrets has implications for the intelligence community and other secretive agencies, as they must now wrestle with new challenges posed by the transformative power of information technology innovation as well as the changing methods and targets of intelligence collection.

Posted on September 3, 2015 at 8:43 AM
