Blog: January 2016 Archives

Integrity and Availability Threats

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated counts from electronic voting machines, remote murder through hacked medical devices, and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. It’s actually more complicated than that.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing, and the pervasiveness that comes from these systems being on all the time. Together this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped to know who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or allow a burglar to open the door.

In separate testimonies before different House and Senate committees last year, both the Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat and believe that we are vulnerable to attack.

And once the attacks start doing real damage—once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day—there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on CNN.com.

Posted on January 29, 2016 at 7:29 AM

Psychological Model of Selfishness

This is interesting:

Game theory decision-making is based entirely on reason, but humans don’t always behave rationally. David Rand, assistant professor of psychology, economics, cognitive science, and management at Yale University, and psychology doctoral student Adam Bear incorporated theories on intuition into their model, allowing agents to make a decision either based on instinct or rational deliberation.

In the model, there are multiple games of prisoner’s dilemma. But while some have the standard set-up, others introduce punishment for those who refuse to cooperate with a willing partner. Rand and Bear found that agents who went through many games with repercussions for selfishness became instinctively cooperative, though they could override their instinct to behave selfishly in cases where it made sense to do so.

However, those who became instinctively selfish were far less flexible. Even in situations where refusing to cooperate was punished, they would not then deliberate and rationally choose to cooperate instead.

The paper:

Abstract: Humans often cooperate with strangers, despite the costs involved. A long tradition of theoretical modeling has sought ultimate evolutionary explanations for this seemingly altruistic behavior. More recently, an entirely separate body of experimental work has begun to investigate cooperation’s proximate cognitive underpinnings using a dual-process framework: Is deliberative self-control necessary to rein in selfish impulses, or does self-interested deliberation restrain an intuitive desire to cooperate? Integrating these ultimate and proximate approaches, we introduce dual-process cognition into a formal game-theoretic model of the evolution of cooperation. Agents play prisoner’s dilemma games, some of which are one-shot and others of which involve reciprocity. They can either respond by using a generalized intuition, which is not sensitive to whether the game is one-shot or reciprocal, or pay a (stochastically varying) cost to deliberate and tailor their strategy to the type of game they are facing. We find that, depending on the level of reciprocity and assortment, selection favors one of two strategies: intuitive defectors who never deliberate, or dual-process agents who intuitively cooperate but sometimes use deliberation to defect in one-shot games. Critically, selection never favors agents who use deliberation to override selfish impulses: Deliberation only serves to undermine cooperation with strangers. Thus, by introducing a formal theoretical framework for exploring cooperation through a dual-process lens, we provide a clear answer regarding the role of deliberation in cooperation based on evolutionary modeling, help to organize a growing body of sometimes-conflicting empirical results, and shed light on the nature of human cognition and social decision making.
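
To make the result concrete, here’s a toy Monte Carlo sketch of the core idea in Python, with payoffs and parameters I made up for illustration; it is not Bear and Rand’s actual model. Two agent types face a mix of one-shot and reciprocal prisoner’s dilemmas: “intuitive defectors” always defect and never pay to deliberate, while “dual-process” agents intuitively cooperate but will sometimes pay a random cost to deliberate and defect in one-shot games.

```python
# Toy illustration of the dual-process idea; made-up payoffs, not the paper's model.
import random

B, C = 4.0, 1.0          # benefit of receiving cooperation, cost of giving it
Q = 0.5                  # chance an anonymous one-shot partner cooperates anyway
THRESHOLD = 0.5          # deliberate only when the (uniform 0-1) cost is below this
TRIALS = 100_000

def payoff(cooperate, repeated):
    if repeated:
        # Crude stand-in for reciprocity: sustained cooperation pays B - C per
        # interaction; defectors get cut off and earn nothing.
        return B - C if cooperate else 0.0
    # One-shot: the partner cooperates with probability Q regardless of your move.
    return (B if random.random() < Q else 0.0) - (C if cooperate else 0.0)

def run(p_repeated):
    defector_total = dual_total = 0.0
    for _ in range(TRIALS):
        repeated = random.random() < p_repeated

        # Intuitive defector: always defects, never pays to deliberate.
        defector_total += payoff(False, repeated)

        # Dual-process agent: cooperates by default, sometimes deliberates.
        cost = random.random()
        if cost < THRESHOLD:
            # Deliberation reveals the game type: defect only in one-shots.
            dual_total += payoff(repeated, repeated) - cost
        else:
            dual_total += payoff(True, repeated)
    return defector_total / TRIALS, dual_total / TRIALS

for p in (0.8, 0.1):
    d, dp = run(p)
    print(f"repeated-game share {p:.0%}: intuitive defector {d:.2f}, dual-process {dp:.2f}")
```

With these numbers, the dual-process cooperators come out ahead when most interactions are reciprocal, and the intuitive defectors win when most are one-shot, which is qualitatively the paper’s result.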

Very much in line with what I wrote in Liars and Outliers.

Posted on January 28, 2016 at 6:18 AM

Horrible Story of Digital Harassment

This is just awful.

Their troll—or trolls, as the case may be—have harassed Paul and Amy in nearly every way imaginable. Bomb threats have been made under their names. Police cars and fire trucks have arrived at their house in the middle of the night to respond to fake hostage calls. Their email and social media accounts have been hacked, and used to bring ruin to their social lives. They’ve lost jobs, friends, and relationships. They’ve developed chronic anxiety and other psychological problems. More than once, they described their lives as having been “ruined” by their mystery tormenter.

We need to figure out how to identify perpetrators like this without destroying Internet privacy in the process.

EDITED TO ADD: One of the important points is the international nature of many of these cases. Even once the attackers are identified, the existing legal system isn’t adequate for shutting them down.

Posted on January 27, 2016 at 6:20 AM

UK Government Promoting Backdoor-Enabled Voice Encryption Protocol

The UK government is pushing something called the MIKEY-SAKKE protocol to secure voice. Basically, it’s an identity-based system that necessarily requires a trusted key-distribution center. So key escrow is inherently built in, and there’s no perfect forward secrecy. The only reasonable explanation for designing a protocol with these properties is third-party eavesdropping.
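
To see why escrow falls out of the design, here’s a toy sketch in Python. It is not the actual SAKKE math, which uses pairing-based public-key cryptography; it just stands in for any identity-based scheme in which a central Key Management Server (KMS) derives every user’s long-term key from a single master secret.

```python
# Toy illustration of built-in key escrow; not real MIKEY-SAKKE.
import hmac, hashlib, os

class KMS:
    def __init__(self):
        self.master_secret = os.urandom(32)   # one secret for the whole system

    def identity_key(self, identity):
        # Every user's long-term key is a pure function of the master secret
        # and the user's identity, so the KMS can re-derive it at any time.
        return hmac.new(self.master_secret, identity.encode(), hashlib.sha256).digest()

def xor_stream(key, data):
    # Throwaway cipher for the sketch only; never use this for real encryption.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

kms = KMS()

# Alice "encrypts to Bob's identity." In the real protocol she only needs public
# system parameters; here the stand-in key is the one the KMS issues to Bob.
bob_key = kms.identity_key("bob@example.gov")
ciphertext = xor_stream(bob_key, b"meet at the usual place")

# Bob decrypts with the key the KMS provisioned to him.
print(xor_stream(bob_key, ciphertext))

# The escrow problem: anyone holding the master secret (the KMS operator, a
# government with access to it, or an attacker who steals it) can re-derive
# Bob's key and read recorded traffic, including traffic recorded in the past.
# That is also why there is no forward secrecy.
recovered = hmac.new(kms.master_secret, b"bob@example.gov", hashlib.sha256).digest()
print(xor_stream(recovered, ciphertext))
```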

Steven Murdoch has explained the details. The upshot:

The design of MIKEY-SAKKE is motivated by the desire to allow undetectable and unauditable mass surveillance, which may be a requirement in exceptional scenarios such as within government departments processing classified information. However, in the vast majority of cases the properties that MIKEY-SAKKE offers are actively harmful for security. It creates a vulnerable single point of failure, which would require huge effort, skill and cost to secure, requiring resource beyond the capability of most companies. Better options for voice encryption exist today, though they are not perfect either. In particular, more work is needed on providing scalable and usable protection against man-in-the-middle attacks, and protection of metadata for contact discovery and calls. More broadly, designers of protocols and systems need to appreciate the ethical consequences of their actions in terms of the political and power structures which naturally follow from their use. MIKEY-SAKKE is the latest example to raise questions over the policy of many governments, including the UK, to put intelligence agencies in charge of protecting companies and individuals from spying, given the conflict of interest it creates.

And GCHQ previously rejected a more secure standard, MIKEY-IBAKE, because it didn’t allow undetectable spying.

Both the NSA and GCHQ repeatedly choose surveillance over security. We need to reject that decision.

Posted on January 22, 2016 at 2:23 PM

Security Trade-offs in the Longbow vs. Crossbow Decision

Interesting research: Douglas W. Allen and Peter T. Leeson, “Institutionally Constrained Technology Adoption: Resolving the Longbow Puzzle,” Journal of Law and Economics, v. 58, Aug 2015.

Abstract: For over a century the longbow reigned as undisputed king of medieval European missile weapons. Yet only England used the longbow as a mainstay in its military arsenal; France and Scotland clung to the technologically inferior crossbow. This longbow puzzle has perplexed historians for decades. We resolve it by developing a theory of institutionally constrained technology adoption. Unlike the crossbow, the longbow was cheap and easy to make and required rulers who adopted the weapon to train large numbers of citizens in its use. These features enabled usurping nobles whose rulers adopted the longbow to potentially organize effective rebellions against them. Rulers choosing between missile technologies thus confronted a trade-off with respect to internal and external security. England alone in late medieval Europe was sufficiently politically stable to allow its rulers the first-best technology option. In France and Scotland political instability prevailed, constraining rulers in these nations to the crossbow.

It’s nice to see my security interests intersect with my D&D interests.

Posted on January 22, 2016 at 6:44 AM

El Chapo's Opsec

I’ve already written about Sean Penn’s opsec while communicating with El Chapo. Here’s the technique of mirroring, explained:

El Chapo then switched to a complex system of using BBM (BlackBerry’s instant messaging) and proxies. The way it worked was if you needed to contact The Boss, you would send a BBM text to an intermediary (who would spend his days at a public place with Wi-Fi); this intermediary (or “mirror”) would then transcribe the text to an iPad and then send that over a Wi-Fi network (not cellular networks, which were monitored constantly by law enforcement). This Wi-Fi text was then sent to another cut-out who would finally transcribe the message into a BlackBerry BBM text and transmit it to Guzman. Although Guzman continued to use his BlackBerry, it was almost impossible to analyze the traffic because it now only communicated with one other device. This “mirror” system is difficult to crack because the intermediaries, or proxies, can constantly change their location by moving to new Wi-Fi spots.

This article claims he was caught because of a large food order:

After construction was complete, the safehouse was quiet. Then, on 7 January 2016, a car arrived carrying unknown passengers. Security forces suspected that this was Guzman. There was one final indicator that someone important enough to require an entourage was inside: a white van went off, at midnight, to fetch enough tacos to feed a large group of people. The police raided the house 4 hours later.

Here’s more detail about El Chapo’s opsec at the time of his previous capture.

EDITED TO ADD (2/11): More on his opsec.

Posted on January 21, 2016 at 6:19 AM

France Rejects Backdoors in Encryption Products

For the right reasons, too:

Axelle Lemaire, the Euro nation’s digital affairs minister, shot down the amendment during the committee stage of the forthcoming omnibus digital bill, saying it would be counterproductive and would leave personal data unprotected.

“Recent events show how deliberately introducing faults at the request of—and sometimes even without the knowledge of—the intelligence agencies has an effect that is harming the whole community,” she said, according to Numerama.

“Even if the intention [to empower the police] is laudable, it also opens the door to the players who have less laudable intentions, not to mention the potential for economic damage to the credibility of companies planning these flaws. You are right to fuel the debate, but this is not the right solution according to the Government’s opinion.”

France joins the Netherlands on this issue.

And Apple’s Tim Cook is going after the Obama administration on the issue.

EDITED TO ADD (1/20): In related news, Congress will introduce a bill to establish a commission to study the issue. This is what kicking the can down the road looks like.

Posted on January 20, 2016 at 5:02 AM

Match Fixing in Tennis

The BBC and BuzzFeed are jointly reporting on match fixing in tennis. Their story is based partly on leaked documents and partly on data analysis.

BuzzFeed News began its investigation after devising an algorithm to analyse gambling on professional tennis matches over the past seven years. It identified 15 players who regularly lost matches in which heavily lopsided betting appeared to substantially shift the odds – a red flag for possible match-fixing.

Four players showed particularly unusual patterns, losing almost all of these red-flag matches. Given the bookmakers’ initial odds, the chances that the players would perform that badly were less than 1 in 1,000.
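
As a rough illustration of where a figure like “less than 1 in 1,000” comes from (these numbers are made up, not BuzzFeed’s data): convert each flagged match’s opening odds into an implied win probability, then compute how likely it is that the player loses every one of those matches by chance.

```python
# Hypothetical numbers, for illustration only.
from math import prod

# Implied win probabilities from the opening odds of five flagged matches,
# all of which the player went on to lose.
implied_win_prob = [0.78, 0.71, 0.80, 0.74, 0.76]

# Probability of losing all of them if the odds were honest:
p_lose_all = prod(1 - p for p in implied_win_prob)
print(f"Chance of losing all {len(implied_win_prob)}: about 1 in {round(1 / p_lose_all):,}")
```

The real analysis was more involved than this, but that’s the basic intuition.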

More details of the analysis here.

EDITED TO ADD (2/11): This is also a problem in sumo wrestling.

Posted on January 18, 2016 at 10:50 AM

Should We Allow Bulk Searching of Cloud Archives?

Jonathan Zittrain proposes a very interesting hypothetical:

Suppose a laptop were found at the apartment of one of the perpetrators of last year’s Paris attacks. It’s searched by the authorities pursuant to a warrant, and they find a file on the laptop that’s a set of instructions for carrying out the attacks.

The discovery would surely help in the prosecution of the laptop’s owner, tying him to the crime. But a junior prosecutor has a further idea. The private document was likely shared among other conspirators, some of whom are still on the run or unknown entirely. Surely Google has the ability to run a search of all Gmail inboxes, outboxes, and message drafts folders, plus Google Drive cloud storage, to see if any of its 900 million users are currently in possession of that exact document. If Google could be persuaded or ordered to run the search, it could generate a list of only those Google accounts possessing the precise file, and all other Google users would remain undisturbed, except for the briefest of computerized “touches” on their accounts to see if the file reposed there.
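
Technically, the “briefest of computerized touches” could be as simple as exact-file matching by cryptographic hash, which reveals nothing about accounts that don’t hold the file. Here’s a minimal sketch; the account names and stored data are hypothetical, and a real provider would run something like this against its own storage indexes.

```python
# Minimal sketch of exact-file matching across accounts; all data hypothetical.
import hashlib

def fingerprint(data):
    return hashlib.sha256(data).hexdigest()

target = fingerprint(b"<contents of the seized instructions file>")

# Stand-in for a provider-side index of files stored per account.
accounts = {
    "user-001": [b"vacation photos", b"tax return 2015"],
    "user-002": [b"<contents of the seized instructions file>"],
    "user-003": [b"grocery list"],
}

matches = [account for account, files in accounts.items()
           if any(fingerprint(f) == target for f in files)]
print(matches)   # only accounts holding the exact file are reported
```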

He then goes through the reasons why Google should run the search, and then reasons why Google shouldn’t—and finally says what he would do.

I think it’s important to think through hypotheticals like this before they happen. We’re better able to reason about them now, when they are just hypothetical.

Posted on January 16, 2016 at 5:26 AM

Fighting DRM in the W3C

Cory Doctorow has a good post on the EFF website about how they’re trying to fight digital rights management software in the World Wide Web Consortium.

So we came back with a new proposal: the W3C could have its cake and eat it too. It could adopt a rule that requires members who help make DRM standards to promise not to sue people who report bugs in tools that conform to those standards, nor could they sue people just for making a standards-based tool that connected to theirs. They could make DRM, but only if they made sure that they took steps to stop that DRM from being used to attack the open Web.

The W3C added DRM to the web’s standards in 2013. This doesn’t reverse that terrible decision, but it’s a step in the right direction.

Posted on January 14, 2016 at 2:13 PM

Sean Penn's Opsec

This article talks about the opsec used by Sean Penn surrounding his meeting with El Chapo.

Security experts say there aren’t enough public details to fully analyze Penn’s operational security (opsec). But they described the paragraph above as “incomprehensible” and “gibberish.” Let’s try to break it down:

  • Penn describes using “TracPhones,” by which he likely means TracFones, which are cheap phones that take calling cards so they’re not linked to a credit card or account. They’re often called burners, but you don’t actually throw them in the trash after a call; instead, you might swap out the SIM card or use different calling cards for different people. Hollywood loves these! Katie Holmes reportedly used one to plan her divorce from Tom Cruise. They’re a reasonable security measure, but they still create phone records that live with, and can be requested from, cell phone carriers.
  • Penn says he “mirror[ed] through Blackphones,” which are relatively expensive phones sold by Silent Circle that offer a more secure operating system than a typical off-the-shelf phone. It runs Internet through a VPN (to shield the user’s IP address and encrypt their Web traffic) and end-to-end encrypts calls and messages sent to other Blackphones. Unlike with the TracFone, Penn would have a credit card tied to the account on this phone. It’s unclear what Penn means when he says he “mirrored” through the phone; the phrase “mirrored” typically means to duplicate something. As he wrote it, it sounds like he duplicated messages on the secure Blackphone that were being sent some other, potentially less secure, way, which would be dumb, if true. “I’m not sure what he means,” said Silent Circle CEO Mike Janke via email. “It’s a strange term and most likely he doesn’t know what he is saying.”
  • Penn says he used “anonymous” email addresses and that he and his companions accessed messages left as drafts in a shared email account. That likely means the emails were stored unencrypted, a bad security practice. If he were sharing the account with a person using an IP address that was the target of an investigation, i.e. any IP address associated with El Chapo’s crew, then all messages shared this way would be monitored. For the record, that did not work out very well for former CIA director David Petraeus, who used draft messages to communicate with his mistress and got busted when her IP address was targeted in an online harassment investigation.
  • Elsewhere in the article, Penn says Guzman corresponded with Mexican actress Kate del Castillo via BBMs (Blackberry messages). Those only have unique end-to-end encryption if a user has opted for BBM Protected. Law enforcement has been able to intercept BBMs in the past. And Mexican officials have told the media that they were monitoring del Castillo for months, following a meeting she had last summer with El Chapo’s lawyers, before she had reached out to Penn. Law enforcement even reportedly got photos of Penn’s arrival at the airport in Mexico.
  • In the most impressive operational, if not personal, security on display, Sean Penn says that when he traveled to Mexico, he left all of his electronics in Los Angeles, knowing that El Chapo’s crew would force him to leave them behind.

There has been lots of speculation about whether this was enough, or whether Mexican officials tracked El Chapo down because of his meeting with Penn.

Posted on January 14, 2016 at 6:32 AM

The Internet of Things that Talk About You Behind Your Back

French translation

SilverPush is an Indian startup that’s trying to figure out all the different computing devices you own. It embeds inaudible sounds into the webpages you read and the television commercials you watch. Software secretly embedded in your computers, tablets, and smartphones picks up the signals, and then uses cookies to transmit that information back to SilverPush. The result is that the company can track you across your different devices. It can correlate the television commercials you watch with the web searches you make. It can link the things you do on your tablet with the things you do on your work computer.
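
For a sense of how the audio side of something like this could work, here’s a rough sketch of encoding a short identifier as near-ultrasonic tone bursts. This is my own illustration, not SilverPush’s actual scheme; the frequencies, bit rate, and framing are invented.

```python
# Rough illustration of an "inaudible" audio beacon; not SilverPush's scheme.
import numpy as np

SAMPLE_RATE = 48_000               # Hz; high enough to carry ~19 kHz tones
BIT_DURATION = 0.05                # seconds per bit
FREQ_0, FREQ_1 = 18_500, 19_500    # carrier frequencies most adults can't hear

def encode_beacon(campaign_id, bits=16):
    t = np.linspace(0, BIT_DURATION, int(SAMPLE_RATE * BIT_DURATION), endpoint=False)
    bursts = []
    for i in reversed(range(bits)):
        freq = FREQ_1 if (campaign_id >> i) & 1 else FREQ_0
        bursts.append(0.1 * np.sin(2 * np.pi * freq * t))   # quiet tone burst
    return np.concatenate(bursts)

signal = encode_beacon(0xBEEF)
print(signal.shape)   # this waveform would be mixed into an ad's audio track
```

A listening app does the reverse: it samples the microphone, looks for energy at those frequencies, decodes the identifier, and reports it home.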

Your computerized things are talking about you behind your back, and for the most part you can’t stop them­—or even learn what they’re saying.

This isn’t new, but it’s getting worse.

Surveillance is the business model of the Internet, and the more these companies know about the intimate details of your life, the more they can profit from it. Already there are dozens of companies that secretly spy on you as you browse the Internet, connecting your behavior on different sites and using that information to target advertisements. You know it when you search for something like a Hawaiian vacation, and ads for similar vacations follow you around the Internet for weeks. Companies like Google and Facebook make an enormous profit connecting the things you write about and are interested in with companies trying to sell you things.

Cross-device tracking is the latest obsession for Internet marketers. You probably use multiple Internet devices: your computer, your smartphone, your tablet, maybe your Internet-enabled television—­and, increasingly, “Internet of Things” devices like smart thermostats and appliances. All of these devices are spying on you, but the different spies are largely unaware of each other. Start-up companies like SilverPush, 4Info, Drawbridge, Flurry, and Cross Screen Consultants, as well as the big players like Google, Facebook, and Yahoo, are all experimenting with different technologies to “fix” this problem.

Retailers want this information very much. They want to know whether their television advertising causes people to search for their products on the Internet. They want to correlate people’s web searching on their smartphones with their buying behavior on their computers. They want to track people’s locations using the surveillance capabilities of their smartphones, and use that information to send geographically targeted ads to their computers. They want the surveillance data from smart appliances correlated with everything else.

This is where the Internet of Things makes the problem worse. As computers get embedded into more of the objects we live with and use, and permeate more aspects of our lives, more companies want to use them to spy on us without our knowledge or consent.

Technically, of course, we did consent. The license agreement we didn’t read but legally agreed to when we unthinkingly clicked “I agree” on a screen, or opened a package we purchased, gives all of those companies the legal right to conduct all of this surveillance. And the way US privacy law is currently written, they own all of that data and don’t need to allow us to see it.

We accept all of this Internet surveillance because we don’t really think about it. If there were a dozen people from Internet marketing companies with pens and clipboards peering over our shoulders as we sent our Gmails and browsed the Internet, most of us would object immediately. If the companies that made our smartphone apps actually followed us around all day, or if the companies that collected our license plate data could be seen as we drove, we would demand they stop. And if our televisions, computers, and mobile devices talked about us and coordinated their behavior in a way we could hear, we would be creeped out.

The Federal Trade Commission is looking at cross-device tracking technologies, with an eye to regulating them. But if recent history is a guide, any regulations will be minor and largely ineffective at addressing the larger problem.

We need to do better. We need to have a conversation about the privacy implications of cross-device tracking, but—more importantly­—we need to think about the ethics of our surveillance economy. Do we want companies knowing the intimate details of our lives, and being able to store that data forever? Do we truly believe that we have no rights to see the data that’s collected about us, to correct data that’s wrong, or to have data deleted that’s personal or embarrassing? At a minimum, we need limits on the behavioral data that can legally be collected about us and how long it can be stored, a right to download data collected about us, and a ban on third-party ad tracking. The last one is vital: it’s the companies that spy on us from website to website, or from device to device, that are doing the most damage to our privacy.

The Internet surveillance economy is less than 20 years old, and emerged because there was no regulation limiting any of this behavior. It’s now a powerful industry, and it’s expanding past computers and smartphones into every aspect of our lives. It’s long past time we set limits on what these computers, and the companies that control them, can say about us and do to us behind our backs.

This essay previously appeared on Vice Motherboard.

Posted on January 13, 2016 at 5:35 AM

Michael Hayden and the Dutch Government Are against Crypto Backdoors

Last week, former NSA Director Michael Hayden made a very strong argument against deliberately weakening security products by adding backdoors:

Americans’ safety is best served by the highest level of technology possible, and the country’s intelligence agencies have figured out ways to get around encryption.

“Before any civil libertarians want to come up to me afterwards and get my autograph,” he explained at a Tuesday panel on national security hosted by the Council on Foreign Relations, “let me tell you how we got around it: Bulk data and metadata [collection].”

Encryption is “a law enforcement issue more than an intelligence issue,” Hayden argued, “because, frankly, intelligence gets to break all sorts of rules, to cheat, to use other paths.”

[…]

“I don’t think it’s a winning hand to attempt to legislate against technological progress,” Hayden said.

[…]

“It’s a combination of technology and business,” Hayden said. “Creating a door for the government to enter, at the technological level, creates a very bad business decision on the parts of these companies because that is by definition weaker encryption than would otherwise be available … Both of those realities are true.”

This isn’t new, and is yet another example of the split between the law-enforcement and intelligence communities on this issue. What is new is Hayden saying, effectively: Hey FBI, you guys are idiots for trying to get backdoors.

On the other side of the Atlantic Ocean, the Dutch government has come out against backdoors in security products, and in favor of strong encryption.

Meanwhile, I have been hearing rumors that “serious” US legislation mandating backdoors is about to be introduced. These rumors are pervasive, but without details.

Posted on January 12, 2016 at 1:22 PM

Mac OS X, iOS, and Flash Had the Most Discovered Vulnerabilities in 2015

Interesting analysis:

Which software had the most publicly disclosed vulnerabilities this year? The winner is none other than Apple’s Mac OS X, with 384 vulnerabilities. The runner-up? Apple’s iOS, with 375 vulnerabilities.

Rounding out the top five are Adobe’s Flash Player, with 314 vulnerabilities; Adobe’s AIR SDK, with 246 vulnerabilities; and Adobe AIR itself, also with 246 vulnerabilities. For comparison, last year the top five (in order) were: Microsoft’s Internet Explorer, Apple’s Mac OS X, the Linux Kernel, Google’s Chrome, and Apple’s iOS.

The article goes on to explain why Windows vulnerabilities might be counted higher, and gives the top 50 software packages for vulnerabilities.

The interesting discussion topic is how this relates to how secure the software is. Is software with more discovered vulnerabilities better because they’re all fixed? Is software with more discovered vulnerabilities less secure because there are so many? Or are they all equally bad, and people just look at some software more than others? No one knows.

Posted on January 11, 2016 at 2:33 PM

IT Security and the Normalization of Deviance

Professional pilot Ron Rapp has written a fascinating article on a Gulfstream jet that crashed on takeoff in 2014. The accident was 100% human error and entirely preventable—the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.

I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.

As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.

None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:

  • The rules are stupid and inefficient!
  • Knowledge is imperfect and uneven.
  • The work itself, along with new technology, can disrupt work behaviors and rule compliance.
  • I’m breaking the rule for the good of my patient!
  • The rules don’t apply to me/you can trust me.
  • Workers are afraid to speak up.
  • Leadership withholding or diluting findings on system problems.

Dan Luu has written about this, too.

I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.

I don’t have any magic solutions here. Banja’s suggestions are good, but general:

  • Pay attention to weak signals.
  • Resist the urge to be unreasonably optimistic.
  • Teach employees how to conduct emotionally uncomfortable conversations.
  • System operators need to feel safe in speaking up.
  • Realize that oversight and monitoring are never-ending.

The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.

This essay previously appeared on the Resilient Systems blog.

Posted on January 11, 2016 at 6:45 AM

"How Stories Deceive"

Fascinating New Yorker article about Samantha Azzopardi, serial con artist and deceiver.

The article is really about how our brains allow stories to deceive us:

Stories bring us together. We can talk about them and bond over them. They are shared knowledge, shared legend, and shared history; often, they shape our shared future. Stories are so natural that we don’t notice how much they permeate our lives. And stories are on our side: they are meant to delight us, not deceive us—an ever-present form of entertainment.

That’s precisely why they can be such a powerful tool of deception. When we’re immersed in a story, we let down our guard. We focus in a way we wouldn’t if someone were just trying to catch us with a random phrase or picture or interaction. (“He has a secret” makes for a far more intriguing proposition than “He has a bicycle.”) In those moments of fully immersed attention, we may absorb things, under the radar, that would normally pass us by or put us on high alert. Later, we may find ourselves thinking that some idea or concept is coming from our own brilliant, fertile minds, when, in reality, it was planted there by the story we just heard or read.

Posted on January 8, 2016 at 12:54 PM

Replacing Judgment with Algorithms

China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control—but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.

Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data—and more algorithms—we need to ensure that we reap the benefits while avoiding the harms.

Right now, the Chinese government is watching how companies use “social credit” scores in state-approved pilot projects. The most prominent one is Sesame Credit, and it’s much more than a financial scoring system.

Citizens are judged not only by conventional financial criteria, but by their actions and associations. Rumors abound about how this system works. Various news sites are speculating that your score will go up if you share a link from a state-sponsored news agency and go down if you post pictures of Tiananmen Square. Similarly, your score will go up if you purchase local agricultural products and down if you purchase Japanese anime. Right now the worst fears seem overblown, but could certainly come to pass in the future.

This story has spread because it’s just the sort of behavior you’d expect from the authoritarian government in China. But there’s little about the scoring systems used by Sesame Credit that’s unique to China. All of us are being categorized and judged by similar algorithms, both by companies and by governments. While the aim of these systems might not be social control, it’s often the byproduct. And if we’re not careful, the creepy results we imagine for the Chinese will be our lot as well.

Sesame Credit is largely based on a US system called FICO. That’s the system that determines your credit score. You actually have a few dozen different ones, and they determine whether you can get a mortgage, car loan or credit card, and what sorts of interest rates you’re offered. The exact algorithm is secret, but we know in general what goes into a FICO score: how much debt you have, how good you’ve been at repaying your debt, how long your credit history is and so on.

There’s nothing about your social network, but that might change. In August, Facebook was awarded a patent on using a borrower’s social network to help determine if he or she is a good credit risk. Basically, your creditworthiness becomes dependent on the creditworthiness of your friends. Associate with deadbeats, and you’re more likely to be judged as one.

Your associations can be used to judge you in other ways as well. It’s now common for employers to use social media sites to screen job applicants. This manual process is increasingly being outsourced and automated; companies like Social Intelligence, Evolv and First Advantage automatically process your social networking activity and provide hiring recommendations for employers. The dangers of this type of system—from discriminatory biases resulting from the data to an obsession with scores over more social measures—are too many.

The company Klout tried to make a business of measuring your online influence, hoping its proprietary system would become an industry standard used for things like hiring and giving out free product samples.

The US government is judging you as well. Your social media postings could get you on the terrorist watch list, affecting your ability to fly on an airplane and even get a job. In 2012, a British tourist’s tweet caused the US to deny him entry into the country. We know that the National Security Agency uses complex computer algorithms to sift through the Internet data it collects on both Americans and foreigners.

All of these systems, from Sesame Credit to the NSA’s secret algorithms, are made possible by computers and data. A couple of generations ago, you would apply for a home mortgage at a bank that knew you, and a bank manager would make a determination of your creditworthiness. Yes, the system was prone to all sorts of abuses, ranging from discrimination to an old-boy network of friends helping friends. But the system also couldn’t scale. It made no sense for a bank across the state to give you a loan, because they didn’t know you. Loans stayed local.

FICO scores changed that. Now, a computer crunches your credit history and produces a number. And you can take that number to any mortgage lender in the country. They don’t need to know you; your score is all they need to decide whether you’re trustworthy.

This score enabled the home mortgage, car loan, credit card and other lending industries to explode, but it brought with it other problems. People who don’t conform to the financial norm—having and using credit cards, for example—can have trouble getting loans when they need them. The automatic nature of the system enforces conformity.

The secrecy of the algorithms further pushes people toward conformity. If you are worried that the US government will classify you as a potential terrorist, you’re less likely to friend Muslims on Facebook. If you know that your Sesame Credit score is partly based on your not buying “subversive” products or being friends with dissidents, you’re more likely to overcompensate by not buying anything but the most innocuous books or corresponding with the most boring people.

Uber is an example of how this works. Passengers rate drivers and drivers rate passengers; both risk getting booted out of the system if their rankings get too low. This weeds out bad drivers and passengers, but also results in marginal people being blocked from the system, and everyone else trying to not make any special requests, avoid controversial conversation topics, and generally behave like good corporate citizens.

Many have documented a chilling effect among American Muslims, with them avoiding certain discussion topics lest they be taken the wrong way. Even if nothing would happen because of it, their free speech has been curtailed because of the secrecy surrounding government surveillance. How many of you are reluctant to Google “pressure cooker bomb”? How many are a bit worried that I used it in this essay?

This is what social control looks like in the Internet age. The Cold-War-era methods of undercover agents, informants living in your neighborhood, and agents provocateurs are too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we’re paving the way for the same sort of thing China plans to put into place.

It doesn’t have to be this way. We can get the benefits of automatic algorithmic systems while avoiding the dangers. It’s not even hard.

The first step is to make these algorithms public. Companies and governments both balk at this, fearing that people will deliberately try to game them, but the alternative is much worse.

The second step is for these systems to be subject to oversight and accountability. It’s already illegal for these algorithms to have discriminatory outcomes, even if they’re not deliberately designed in. This concept needs to be expanded. We as a society need to understand what we expect out of the algorithms that automatically judge us and ensure that those expectations are met.

We also need to provide manual systems for people to challenge their classifications. Automatic algorithms are going to make mistakes, whether it’s by giving us bad credit scores or flagging us as terrorists. We need the ability to clear our names if this happens, through a process that restores human judgment.

Sesame Credit sounds like a dystopia because we can easily imagine how the Chinese government can use a system like this to enforce conformity and stifle dissent. Our own systems seem safer, because we don’t believe the corporations and governments that run them are malevolent. But the dangers are inherent in the technologies. As we move into a world where we are increasingly judged by algorithms, we need to ensure that they do so fairly and properly.

This essay previously appeared on CNN.com.

Posted on January 8, 2016 at 5:21 AM

Straight Talk about Terrorism

Nice essay that lists ten “truths” about terrorism:

  1. We can’t keep the bad guys out.
  2. Besides, the threat is already inside.
  3. More surveillance won’t get rid of terrorism, either.
  4. Defeating the Islamic State won’t make terrorism go away.
  5. Terrorism still remains a relatively minor threat, statistically speaking.
  6. But don’t relax too much, because things will probably get worse before they get better.
  7. Meanwhile, poorly planned Western actions can make things still worse.
  8. Terrorism is a problem to be managed.
  9. To do this, however, we need to move beyond the political posturing that characterizes most public debates about counterterrorism and instead speak honestly about the costs and benefits of different approaches.
  10. We need to stop rewarding terrorism.

Nothing here will be news to regular readers of this blog.

Posted on January 7, 2016 at 7:00 AM

How the US Is Playing Both Ends on Data Privacy

There’s an excellent article in Foreign Affairs on how the European insistence on data privacy—most recently illustrated by the invalidation of the “safe harbor” agreement—is really about the US talking out of both sides of its mouth on the issue: championing privacy in public, but spying on everyone in private. As long as the US keeps this up, the authors argue, this issue will get worse.

From the conclusion:

The United States faces a profound choice. It can continue to work in a world of blurred lines and unilateral demands, making no concessions on surveillance and denouncing privacy rights as protectionism in disguise. Yet if it does so, it is U.S. companies that will suffer.

Alternatively, it can recognize that globalization comes in different flavors and that Europeans have real and legitimate problems with ubiquitous U.S. surveillance and unilateralism. An ambitious strategy would seek to reform EU and U.S. privacy rules so as to put in place a comprehensive institutional infrastructure that could protect the privacy rights of European and U.S. citizens alike, creating rules and institutions to restrict general surveillance to uses that are genuinely in the security interests of all the countries.

More broadly, the United States needs to disentangle the power of a U.S.-led order from the temptations of manipulating that order to its national security advantage. If it wants globalization to continue working as it has in the past, the United States is going to have to stop thinking of flows of goods and information as weapons and start seeing them as public goods that need to be maintained and nurtured. Ultimately, it is U.S. firms and the American economy that stand to benefit most.

EDITED TO ADD (1/13): Stewart Baker on the same topic.

Posted on January 6, 2016 at 6:14 AM

NSA Spies on Israeli Prime Minister

The Wall Street Journal has a story that the NSA spied on Israeli Prime Minister Benjamin Netanyahu and other Israeli government officials, and incidentally collected conversations between US citizens—including lawmakers—and those officials.

US lawmakers who are usually completely fine with NSA surveillance are aghast at this behavior, as both Glenn Greenwald and Trevor Timm explain. Greenwald:

So now, with yesterday’s WSJ report, we witness the tawdry spectacle of large numbers of people who for years were fine with, responsible for, and even giddy about NSA mass surveillance suddenly objecting. Now they’ve learned that they themselves, or the officials of the foreign country they most love, have been caught up in this surveillance dragnet, and they can hardly contain their indignation. Overnight, privacy is of the highest value because now it’s their privacy, rather than just yours, that is invaded.

This reminds me of the 2013 story that the NSA eavesdropped on the cell phone of the German Chancellor Angela Merkel. Back then, I wrote:

Spying on foreign governments is what the NSA is supposed to do. Much more problematic, and dangerous, is that the NSA is spying on entire populations.

Greenwald said the same thing:

I’ve always argued that on the spectrum of spying stories, revelations about targeting foreign leaders is the least important, since that is the most justifiable type of espionage. Whether the U.S. should be surveilling the private conversations of officials of allied democracies is certainly worth debating, but, as I argued in my 2014 book, those “revelations … are less significant than the agency’s warrantless mass surveillance of whole populations” since “countries have spied on heads of state for centuries, including allies.”

And that’s the key point. I am less concerned about Angela Merkel than the other 82 million Germans that are being spied on, and I am less concerned about Benjamin Netanyahu than I am about the other 8 million people living in that country.

Over on Lawfare, Ben Wittes agrees:

There is absolutely nothing surprising about NSA’s activities here—or about the administration’s activities. There is no reason to expect illegality or impropriety. In fact, the remarkable aspect of this story is how constrained both the administration’s and the agency’s behavior appears to have been by rules and norms in exactly the fashion one would hope to see.

[…]

So let’s boil this down to brass tacks: NSA spied on a foreign leader at a time when his country had a major public foreign policy showdown with the President of the United States over sharp differences between the two countries over Iran’s nuclearization—indeed, at a time when the US believed that leader was contemplating military action without advance notice to the United States. In the course of this surveillance, NSA incidentally collected communications involving members of Congress, who were being heavily lobbied by the Israeli government and Netanyahu personally. There is no indication that the members of Congress were targeted for collection. Moreover, there’s no indication that the rules that govern incidental collection involving members of Congress were not followed. The White House, for its part, appears to have taken a hands-off approach, directing NSA to follow its own policies about what to report, even on a sensitive matter involving delicate negotiations in a tense period with an ally.

The words that really matter are “incidental collection.” I have no doubt that the NSA followed its own rules in that regard. The discussion we need to have is about whether those rules are the correct ones. Section 702 incidental collection is a huge loophole that allows the NSA to collect information on millions of innocent Americans.

Greenwald again:

This claim of “incidental collection” has always been deceitful, designed to mask the fact that the NSA does indeed frequently spy on the conversations of American citizens without warrants of any kind. Indeed, as I detailed here, the 2008 FISA law enacted by Congress had as one of its principal, explicit purposes allowing the NSA to eavesdrop on Americans’ conversations without warrants of any kind. “The principal purpose of the 2008 law was to make it possible for the government to collect Americans’ international communications—and to collect those communications without reference to whether any party to those communications was doing anything illegal,” the ACLU’s Jameel Jaffer said. “And a lot of the government’s advocacy is meant to obscure this fact, but it’s a crucial one: The government doesn’t need to ‘target’ Americans in order to collect huge volumes of their communications.”

If you’re a member of Congress, there are special rules that the NSA has to follow if you’re incidentally spied on:

Special safeguards for lawmakers, dubbed the “Gates Rule,” were put in place starting in the 1990s. Robert Gates, who headed the Central Intelligence Agency from 1991 to 1993, and later went on to be President Barack Obama’s Defense Secretary, required intelligence agencies to notify the leaders of the congressional intelligence committees whenever a lawmaker’s identity was revealed to an executive branch official.

If you’re a regular American citizen, don’t expect any such notification. Your information can be collected, searched, and then saved for later searching, without a warrant. And if you’re a common German, Israeli, or any other country’s citizen, you have even fewer rights.

In 2014, I argued that we need to separate the NSA’s espionage mission against agents of a foreign power from any broad surveillance of Americans. I still believe that. But more urgently, we need to reform Section 702 when it comes up for reauthorization in 2017.

EDITED TO ADD: A good article on the topic. And Marcy Wheeler’s interesting take.

Posted on January 5, 2016 at 6:36 AM

De-Anonymizing Users from their Coding Styles

Interesting blog post:

We are able to de-anonymize executable binaries of 20 programmers with 96% correct classification accuracy. In the de-anonymization process, the machine learning classifier trains on 8 executable binaries for each programmer to generate numeric representations of their coding styles. Such a high accuracy with this small amount of training data has not been reached in previous attempts. After scaling up the approach by increasing the dataset size, we de-anonymize 600 programmers with 52% accuracy. There has been no previous attempt to de-anonymize such a large binary dataset. The abovementioned executable binaries are compiled without any compiler optimizations, which are options to make binaries smaller and faster while transforming the source code more than plain compilation. As a result, compiler optimizations further normalize authorial style. For the first time in programmer de-anonymization, we show that we can still identify programmers of optimized executable binaries. While we can de-anonymize 100 programmers from unoptimized executable binaries with 78% accuracy, we can de-anonymize them from optimized executable binaries with 64% accuracy. We also show that stripping and removing symbol information from the executable binaries reduces the accuracy to 66%, which is a surprisingly small drop. This suggests that coding style survives complicated transformations.
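
For readers who want a feel for the machinery, here’s a generic code-stylometry pipeline in Python. It is not the authors’ exact feature set; the instruction-mnemonic samples and author labels below are hypothetical.

```python
# Generic stylometry sketch; hypothetical data, not the paper's feature set.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy corpus: one string of instruction mnemonics per binary, labeled by author.
samples = [
    "push mov mov call add pop ret",     # author A
    "push mov call call add pop ret",    # author A
    "mov xor test jz lea call ret",      # author B
    "mov xor cmp jne lea call ret",      # author B
]
authors = ["A", "A", "B", "B"]

pipeline = make_pipeline(
    CountVectorizer(ngram_range=(1, 3)),            # instruction n-grams as style features
    RandomForestClassifier(n_estimators=100, random_state=0),
)

# With real data there would be many binaries per author (the paper uses 8 each).
scores = cross_val_score(pipeline, samples, authors, cv=2)
print(scores.mean())
```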

Here’s the paper.

And here’s their previous paper, de-anonymizing programmers from their source code.

Posted on January 4, 2016 at 7:41 AM

Friday Squid Blogging: Video of Live Giant Squid

Giant squid filmed swimming through a harbor in Japan:

Reports in Japanese say that the creature was filmed on December 24, seen by an underwater camera swimming near boat moorings. It was reportedly about 13 feet long and 3 feet around. Some on Twitter have suggested that the species may be Architeuthis, a deep-ocean dwelling creature that can grow up to 43 feet.

Some more news stories.

A few days later, a diver helped the squid get back out to sea. More amazing video at that link.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

And Happy New Year, everyone.

Posted on January 1, 2016 at 12:29 PM
