Blog: December 2015 Archives

Cory Doctorow on Software Security and the Internet of Things

Cory Doctorow has a good essay on software integrity and control problems and the Internet of Things. He’s writing about self-driving cars, but the issue is much more general. Basically, we’re going to want systems that prevent their owners from making certain changes to them. We know how to do this: digital rights management. We also know that this solution doesn’t work, and that trying introduces all sorts of security vulnerabilities. So we have a problem.

This is an old problem. (Adam Shostack and I wrote a paper about it in 1999, about smart cards.) The Internet of Things is going to make it much worse. And it’s one we’re not anywhere near prepared to solve.

Posted on December 31, 2015 at 6:12 AM • 39 Comments

Another Scandal Resulting from E-mails Gone Public

A lot of Pennsylvania government officials are being hurt as a result of e-mails being made public. This is all the result of political pressure to release the e-mails, not an organizational doxing attack, but the effects are the same.

Our psychology of e-mail doesn’t match the reality. We treat it as ephemeral, even though it’s not. And the archival nature of e-mail—or text messages, or Twitter chats, or Facebook conversations—isn’t salient.

Posted on December 30, 2015 at 6:29 AM • 58 Comments

DMCA and the Internet of Things

In theory, the Internet of Things—the connected network of tiny computers inside home appliances, household objects, even clothing—promises to make your life easier and your work more efficient. These computers will communicate with each other and the Internet in homes and public spaces, collecting data about their environment and making changes based on the information they receive. In theory, connected sensors will anticipate your needs, saving you time, money, and energy.

Except when the companies that make these connected objects act in a way that runs counter to the consumer’s best interests—as the technology company Philips did recently with its smart ambient-lighting system, Hue, which consists of a central controller that can remotely communicate with light bulbs. In mid-December, the company pushed out a software update that made the system incompatible with some other manufacturers’ light bulbs, including bulbs that had previously been supported.

The complaints began rolling in almost immediately. The Hue system was supposed to be compatible with an industry standard called ZigBee, but the bulbs that Philips cut off were ZigBee-compliant. Philips backed down and restored compatibility a few days later.

But the story of the Hue debacle—the story of a company using copy protection technology to lock out competitors—isn’t a new one. Plenty of companies set up proprietary standards to ensure that their customers don’t use someone else’s products with theirs. Keurig, for example, puts codes on its single-cup coffee pods, and engineers its coffeemakers to work only with those codes. HP has done the same thing with its printers and ink cartridges.

To stop competitors from simply reverse-engineering the proprietary standard and making compatible peripherals (for example, another coffee manufacturer putting Keurig’s codes on its own pods), these companies rely on a 1998 law called the Digital Millennium Copyright Act (DMCA). The law was originally passed to prevent people from pirating music and movies; while it hasn’t done a lot of good in that regard (as anyone who uses BitTorrent can attest), it has done a lot to inhibit security and compatibility research.

Specifically, the DMCA includes an anti-circumvention provision, which prohibits companies from circumventing “technological protection measures” that “effectively control access” to copyrighted works. That means it’s illegal for someone to create a Hue-compatible light bulb without Philips’ permission, a K-cup-compatible coffee pod without Keurig’s, or an HP-printer-compatible cartridge without HP’s.

By now, we’re used to this in the computer world. In the 1990s, Microsoft used a strategy it called “embrace, extend, extinguish,” in which it gradually added proprietary capabilities to products that already adhered to widely used standards. Some more recent examples: Amazon’s e-book format doesn’t work on other companies’ readers, music purchased from Apple’s iTunes store doesn’t work with other music players, and every game console has its own proprietary game cartridge format.

Because companies can enforce anti-competitive behavior this way, there’s a litany of things that just don’t exist, even though they would make life easier for consumers in significant ways. You can’t have custom software for your cochlear implant, or your programmable thermostat, or your computer-enabled Barbie doll. An auto repair shop can’t design a better diagnostic system that interfaces with a car’s computers. And John Deere has claimed that it owns the software on all of its tractors, meaning the farmers that purchase them are prohibited from repairing or modifying their property.

As the Internet of Things becomes more prevalent, so too will this kind of anti-competitive behavior—which undercuts the purpose of having smart objects in the first place. We’ll want our light bulbs to communicate with a central controller, regardless of manufacturer. We’ll want our clothes to communicate with our washing machines and our cars to communicate with traffic signs.

We can’t have this when companies can cut off compatible products, or use the law to prevent competitors from reverse-engineering their products to ensure compatibility across brands. For the Internet of Things to provide any value, what we need is a world that looks like the automotive industry, where you can go to a store and buy replacement parts made by a wide variety of different manufacturers. Instead, the Internet of Things is on track to become a battleground of competing standards, as companies try to build monopolies by locking each other out.

This essay previously appeared on TheAtlantic.com.

Slashdot thread.

EDITED TO ADD (1/5): Interesting commentary.

Posted on December 29, 2015 at 5:58 AM • 39 Comments

NSA/GCHQ Exploits against Juniper Networking Equipment

As part of this article, The Intercept just published a 2011 GCHQ document outlining the agency’s exploit capabilities against Juniper networking equipment, including routers and NetScreen firewalls.

GCHQ currently has capabilities against:

  • Juniper NetScreen Firewalls models Ns5gt, N25, NS50, NS500, NS204, NS208, NS5200, NS5000, SSG5, SSG20, SSG140, ISG 1000, ISG 2000. Some reverse engineering maybe required depending on firmware revisions.
  • Juniper Routers: M320 is currently being worked on and we would expect to have full support by the end of 2010.
  • No other models are currently supported.
  • Juniper technology sharing with NSA improved dramatically during CY2010 to exploit several target networks where GCHQ had access primacy.

Yes, the document said “end of 2010” even though the document is dated February 3, 2011.

This doesn’t have much to do with the Juniper backdoor currently in the news, but the document does provide even more evidence that (despite what the government says) the NSA hoards vulnerabilities in commonly used software for attack purposes instead of improving security for everyone by disclosing them.

Note: In case anyone is researching this issue, here is my complete list of useful links on various aspects of the ongoing debate.

EDITED TO ADD: In thinking about the equities process, it’s worth differentiating among three different things: bugs, vulnerabilities, and exploits. Bugs are plentiful in code, but not all bugs can be turned into vulnerabilities. And not all vulnerabilities can be turned into exploits. Exploits are what matter; they’re what everyone uses to compromise our security. Fixing bugs and vulnerabilities is important because they could potentially be turned into exploits.

I think the US government deliberately clouds the issue when they say that they disclose almost all bugs they discover, ignoring the much more important question of how often they disclose exploits they discover. What this document shows is that—despite their insistence that they prioritize security over surveillance—they like to hoard exploits against commonly used network equipment.

Posted on December 28, 2015 at 6:54 AM • 26 Comments

Using Law against Technology

On Thursday, a Brazilian judge ordered the text messaging service WhatsApp shut down for 48 hours. It was a monumental action.

WhatsApp is the most popular app in Brazil, used by about 100 million people. The Brazilian telecoms hate the service because it entices people away from more expensive text messaging services, and they have been lobbying for months to convince the government that it’s unregulated and illegal. A judge finally agreed.

In Brazil’s case, WhatsApp was blocked for allegedly failing to respond to a court order. Another judge reversed the ban 12 hours later, but there is a pattern forming here. In Egypt, Vodafone has complained about the legality of WhatsApp’s free voice-calls, while India’s telecoms firms have been lobbying hard to curb messaging apps such as WhatsApp and Viber. Earlier this year, the United Arab Emirates blocked WhatsApp’s free voice call feature.

All this is part of a massive power struggle going on right now between traditional companies and new Internet companies, and we’re all in the blast radius.

It’s one aspect of a tech policy problem that has been plaguing us for at least 25 years: technologists and policymakers don’t understand each other, and they inflict damage on society because of that. But it’s worse today. The speed of technological progress makes it worse. And the types of technology—especially the current Internet of mobile devices everywhere, cloud computing, always-on connections, and the Internet of Things—make it worse.

The Internet has been disrupting and destroying long-standing business models since its popularization in the mid-1990s. And traditional industries have long fought back with every tool at their disposal. The movie and music industries have tried for decades to hamstring computers in an effort to prevent illegal copying of their products. Publishers have battled with Google over whether their books could be indexed for online searching.

More recently, municipal taxi companies and large hotel chains are fighting with ride-sharing companies such as Uber and apartment-sharing companies such as Airbnb. Both the old companies and the new upstarts have tried to bend laws to their will in an effort to outmaneuver each other.

Sometimes the actions of these companies harm the users of these systems and services. And the results can seem crazy. Why would the Brazilian telecoms want to provoke the ire of almost everyone in the country? They’re trying to protect their monopoly. If they win in not just shutting down WhatsApp, but Telegram and all the other text-message services, their customers will have no choice. This is how high-stakes these battles can be.

This isn’t just companies competing in the marketplace. These are battles between competing visions of how technology should apply to business, and between traditional businesses and “disruptive” new businesses. The fundamental problem is that technology and law are in conflict, and what’s worked in the past is increasingly failing today.

First, the speeds of technology and law have reversed. Traditionally, new technologies were adopted slowly over decades. There was time for people to figure them out, and for their social repercussions to percolate through society. Legislatures and courts had time to figure out rules for these technologies and how they should integrate into the existing legal structures.

They don’t always get it right—the sad history of copyright law in the United States is an example of how they can get it badly wrong again and again—but at least they had a chance before the technologies became widely adopted.

That’s just not true anymore. A new technology can go from zero to a hundred million users in a year or less. That’s just too fast for the political or legal process. By the time they’re asked to make rules, these technologies are well-entrenched in society.

Second, the technologies have become more complicated and specialized. This means that the normal system of legislators passing laws, regulators making rules based on those laws and courts providing a second check on those rules fails. None of these people has the expertise necessary to understand these technologies, let alone the subtle and potentially pernicious ramifications of any rules they make.

We see the same thing between governments and their law-enforcement agencies and militaries. In the United States, we’re expecting policymakers to understand the debate between the FBI’s desire to read the encrypted e-mails and computers of crime suspects and the security researchers who maintain that giving them that capability will render everyone insecure. We’re expecting legislators to provide meaningful oversight over the National Security Agency, when they can only read highly technical documents about the agency’s activities in special rooms and without any aides who might be conversant in the issues.

The result is that we end up in situations such as the one Brazil finds itself in. WhatsApp went from zero to 100 million users in five years. The telecoms are advancing all sorts of weird legal arguments to get the service banned, and judges are ill-equipped to separate fact from fiction.

This isn’t a simple matter of needing government to get out of the way and let companies battle in the marketplace. These companies are for-profit entities, and their business models are so complicated that they regularly don’t do what’s best for their users. (For example, remember that you’re not really Facebook’s customer. You’re their product.)

The fact that people’s resumes are effectively the first 10 hits on a Google search of their name is a problem—something that the European “right to be forgotten” tried ham-fistedly to address. There’s a lot of smart writing that says that Uber’s disruption of traditional taxis will be worse for the people who regularly use the services. And many people worry about Amazon’s increasing dominance of the publishing industry.

We need a better way of regulating new technologies.

That’s going to require bridging the gap between technologists and policymakers. Each needs to understand the other—not enough to be experts in each other’s fields, but enough to engage in meaningful conversations and debates. That’s also going to require laws that are agile and written to be as technologically invariant as possible.

It’s a tall order, I know, and one that has been on the wish list of every tech policymaker for decades. But today, the stakes are higher and the issues come faster. Not doing so will become increasingly harmful for all of us.

This essay originally appeared on CNN.com.

EDITED TO ADD (12/23): Slashdot thread.

Posted on December 23, 2015 at 6:48 AM • 59 Comments

"The Medieval Origins of Mass Surveillance"

This interesting article by medieval historian Amanda Power traces our culture’s relationship with the concept of mass surveillance back to the medieval characterization of the Christian god and the church’s policing of piety:

What is all this but a fundamental trust in the experience of being watched? One must wonder about the subtle, unspoken fear of the consequences of refusing to participate in systems of surveillance, or even to critique them seriously. This would be to risk isolation. Those who have exposed the extent of surveillance are fugitives and exiles from our paradise. They have played the role of the cursed serpent of Eden: the purveyor of illicit knowledge who broke the harmony between watcher and watched. The rest of us contemplate the prospect of dissent with careful unease, feeling that our individual and collective security depends on compliance.

[…]

Eight centuries ago, in November 1215, Pope Innocent III presided over a Great Council of the Church in Rome known as the Fourth Lateran Council. It was attended by high-ranking members of the ecclesiastical hierarchy and the monastic world, together with representatives of emperors, kings, and other secular leaders from throughout Christendom. Their decisions were promulgated through seventy-one constitutions. They began with a statement of what all Christians were required to believe, including specifics on the nature of God—by this time “eternal and immeasurable, almighty, unchangeable, incomprehensible and ineffable”—and the view that salvation could be found only through the Roman Catholic Church. Anyone who disagreed, according to the third constitution, was to be handed over to secular lords for punishment, stripped of their property, and cast out of society until they proved their orthodoxy, or else be executed if they did not. Anyone in authority would be punished if they did not seek out and expel such people from their lands; their subjects would be released from obedience and their territories handed over to true Catholics. There was nothing empty about this threat: the council occurred in the middle of the bitter Albigensian Crusade, during which heresy—likened to a cancer in the body of Christendom—was purportedly being cut out of Languedoc by the swords of the pious.

The Fourth Lateran Council was talking about crimes of thought, of dissent over matters of belief, matters not susceptible of proof. But whether individuals were heretics could not, in theory, be established without investigating the contents of their minds. To this end, the council decreed that bishops’ representatives should inquire in every parish at least once a year to discover “if anyone knows of heretics there or of any persons who hold secret conventicles or who differ in their life and habits from the normal way of living of the faithful.” These representatives were to follow these external indications of nonconformity into the recesses of the mind and establish their meaning in each case. Over the decades the role of the inquisitor was developed into an art and a science, and elaborate handbooks were produced. But in 1215 it was stated merely that individuals should be punished if “unable to clear themselves of the charge.”

[…]

What is all this but a fundamental trust in the experience of being watched? Our trust is so strong that it seems to have found its own protective rationality, deeply rooted in Western consciousness. It’s an addict’s rationality, by which we’re unable to refrain from making public a stream of intimate details of our lives and those of children too young to consent. One must wonder about the subtle, unspoken fear of the consequences of refusing to participate in systems of surveillance, or even to critique them seriously. This would be to risk isolation. It would be a trifle paranoid to reveal less—a little eccentric, not quite rational.

Posted on December 21, 2015 at 1:09 PM • 80 Comments

Back Door in Juniper Firewalls

Juniper has warned about a malicious back door in its firewalls that automatically decrypts VPN traffic. It’s been there for years.

Hopefully details are forthcoming, but the folks at Hacker News have pointed to this page about Juniper’s use of the Dual_EC_DRBG random number generator. For those who don’t immediately recognize that name, it’s the pseudo-random-number generator that was backdoored by the NSA. Basically, the PRNG uses two secret parameters to create a public parameter, and anyone who knows those secret parameters can predict the output. In the standard, the NSA chose those parameters. Juniper doesn’t use those tainted parameters. Instead:

ScreenOS does make use of the Dual_EC_DRBG standard, but is designed to not use Dual_EC_DRBG as its primary random number generator. ScreenOS uses it in a way that should not be vulnerable to the possible issue that has been brought to light. Instead of using the NIST recommended curve points it uses self-generated basis points and then takes the output as an input to FIPS/ANSI X.9.31 PRNG, which is the random number generator used in ScreenOS cryptographic operations.

This means that all anyone has to do to break the PRNG is to hack into the firewall and copy or modify those “self-generated basis points.”
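To make the structure of this kind of trapdoor concrete, here is a toy sketch in Python. It is not the elliptic-curve construction the standard actually uses, and none of it is Juniper’s code: it swaps the curve points P and Q for exponentiation modulo a small prime, and every constant is a made-up value. What it preserves is the essential property—Q is secretly a known power of P, so whoever holds that secret can recover the generator’s internal state from a single output and predict everything that follows.

```python
# Toy analogue of the Dual_EC_DRBG trapdoor, for illustration only.
# The real generator works over an elliptic curve with two points P and Q;
# this sketch uses exponentiation modulo a small prime instead, keeping
# the backdoor's structure: Q is secretly a known power of P.

p = 2**31 - 1        # toy prime modulus (far too small for real crypto)
P = 7                # public "generator" (stand-in for the point P)
d = 1234567          # SECRET trapdoor exponent; invertible mod p-1
Q = pow(P, d, p)     # public parameter (stand-in for the point Q)

def prng_step(state):
    """One step of the generator: next state from P, output from Q."""
    next_state = pow(P, state, p)   # analogue of s' = x(s*P)
    output = pow(Q, state, p)       # analogue of t = x(s*Q), shown to the world
    return next_state, output

# A victim seeds the generator and produces some outputs.
state = 987654321
outputs = []
for _ in range(3):
    state, out = prng_step(state)
    outputs.append(out)

# The attacker sees only `outputs`, but knows d. Because
# output = Q^s = (P^s)^d = next_state^d (mod p), inverting d turns an
# output back into the internal state that follows it.
e = pow(d, -1, p - 1)              # d^-1 mod p-1 (needs Python 3.8+)
recovered = pow(outputs[0], e, p)  # internal state right after step 1

for expected in outputs[1:]:
    recovered, predicted = prng_step(recovered)
    assert predicted == expected   # every later output is now predictable
print("trapdoor holder predicted all subsequent outputs")
```

In the real generator the attacker also has to guess the bits truncated from each output, but that is a small brute-force search; the principle is the same.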

Here’s a good summary of what we know. The conclusion:

Again, assuming this hypothesis is correct then, if it wasn’t the NSA who did this, we have a case where a US government backdoor effort (Dual-EC) laid the groundwork for someone else to attack US interests. Certainly this attack would be a lot easier given the presence of a backdoor-friendly RNG already in place. And I’ve not even discussed the SSH backdoor which, as Wired notes, could have been the work of a different group entirely. That backdoor certainly isn’t NOBUS—Fox-IT claim to have found the backdoor password in six hours.

More details to come, I’m sure.

EDITED TO ADD (12/21): A technical overview of the SSH backdoor.

EDITED TO ADD (12/22): Matthew Green wrote a really good technical post about this.

They then piggybacked on top of it to build a backdoor of their own, something they were able to do because all of the hard work had already been done for them. The end result was a period in which someone—maybe a foreign government—was able to decrypt Juniper traffic in the U.S. and around the world. And all because Juniper had already paved the road.

Another good article.

Posted on December 21, 2015 at 6:52 AM • 41 Comments

Friday Squid Blogging: Penguins Fight over Squid

Watch this video of gentoo penguins fighting over a large squid.

This underwater brawl was captured on a video camera taped to the back of the second penguin, revealing this unexpected foraging behaviour for the first time. “This is completely new behaviour, not just for gentoo penguins but for penguins in general,” says Jonathan Handley, a doctoral student at Nelson Mandela Metropolitan University in Port Elizabeth, South Africa.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 18, 2015 at 4:11 PM • 130 Comments

DOS Attack Against Los Angeles Schools

Yesterday, the city of Los Angeles closed all of its schools—over 1,000 schools—because of a bomb threat. It was a hoax.

LA officials defended the move, with that city’s police chief dismissing the criticism as “irresponsible.”

“It is very easy in hindsight to criticize a decision based on results the decider could never have known,” Chief Charlie Beck said at a news conference.

I wrote about this back in 2007, when I called it CYA security: given the choice between overreacting to a threat and wasting everyone’s time, and underreacting and potentially losing your job, it’s easy to overreact.

What’s interesting is that New York received the same threat, and treated it as the hoax it was. Why the difference?

EDITED TO ADD (12/17): Best part of the story: the e-mailer’s address was madbomber@cock.li.

EDITED TO ADD (1/13): There have been copycats.

Posted on December 16, 2015 at 6:28 AM • 69 Comments

Friday Squid Blogging: Rare Octopus Squid Video from Hawaii

Neat:

While the Dana octopus squid may lack a squid’s trademark trailing tentacles, it makes up for them in spectacular lighting equipment, with two of its muscular arms ending in lidded light organs called “photophores.” About the size of lemons, these photophores are the largest known light-producing organs in the animal kingdom, said Mike Vecchione, a zoologist at the NOAA National Systematics Laboratory at the Smithsonian Institution and a curator of cephalopods at the National Museum of Natural History, both in Washington, D.C.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 11, 2015 at 4:02 PM • 213 Comments

How People Learn about Computer Security

Interesting research: “Identifying patterns in informal sources of security information,” by Emilee Rader and Rick Wash, Journal of Cybersecurity, 1 Dec 2015.

Abstract: Computer users have access to computer security information from many different sources, but few people receive explicit computer security training. Despite this lack of formal education, users regularly make many important security decisions, such as “Should I click on this potentially shady link?” or “Should I enter my password into this form?” For these decisions, much knowledge comes from incidental and informal learning. To better understand differences in the security-related information available to users for such learning, we compared three informal sources of computer security information: news articles, web pages containing computer security advice, and stories about the experiences of friends and family. Using a Latent Dirichlet Allocation topic model, we found that security information from peers usually focuses on who conducts attacks, information containing expertise focuses instead on how attacks are conducted, and information from the news focuses on the consequences of attacks. These differences may prevent users from understanding the persistence and frequency of seemingly mundane threats (viruses, phishing), or from associating protective measures with the generalized threats the users are concerned about (hackers). Our findings highlight the potential for sources of informal security education to create patterns in user knowledge that affect their ability to make good security decisions.
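For readers curious about the method, here is a minimal sketch, in Python with scikit-learn, of the kind of topic-model analysis the abstract describes: fit an LDA model to documents from different sources and compare which topics each source emphasizes. The three one-line “documents” and all of the parameters are invented placeholders, not the authors’ data or code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented stand-ins for the three source types the study compares:
# a peer story, an expert advice page, and a news article.
documents = [
    "my cousin clicked a shady link and a hacker stole her password",
    "attackers use phishing email and malware to break into computers",
    "data breach at a retailer exposes millions of customers to fraud",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# Fit a small topic model; the paper's corpus and settings were different.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts)    # per-document topic mixtures

# Inspect the top words per topic -- the basis for claims like "peer
# stories focus on who attacks, news focuses on consequences."
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"topic {i}: {top}")
print(doc_topics.round(2))
```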

Posted on December 10, 2015 at 6:54 AM • 22 Comments

Terrifying Technologies

I’ve written about the difference between risk perception and risk reality. I thought about that when reading this list of Americans’ top technology fears:

  1. Cyberterrorism
  2. Corporate tracking of personal information
  3. Government tracking of personal information
  4. Robots replacing workforce
  5. Trusting artificial intelligence to do work
  6. Robots
  7. Artificial intelligence
  8. Technology I don’t understand

More at the link.

Posted on December 9, 2015 at 1:48 PM • 46 Comments

How Israel Regulates Encryption

Interesting essay about how Israel regulates encryption:

…the Israeli encryption control mechanisms operate without directly legislating any form of encryption-key depositories, built-in back or front door access points, or other similar requirements. Instead, Israel’s system emphasizes smooth initial licensing processes and cultivates government-private sector collaboration. These processes help ensure that Israeli authorities are apprised of the latest encryption and cyber developments and position the government to engage effectively with the private sector when national security risks are identified.

Basically, it looks like secret agreements made in smoke-filled rooms, very discreet, with no oversight or accountability. The fact that pretty much everyone in Israeli IT security has served in an offensive cybersecurity capacity in the army helps. As does the fact that the country is so small, making informal deal-making manageable. It doesn’t scale.

Why is this important?

…companies in Israel, a country comprising less than 0.11% of the world’s population, are estimated to have sold 10% ($6 billion out of $60 billion) of global encryption and cyber technologies for 2014.

Posted on December 8, 2015 at 7:25 AM • 131 Comments

Forced Authorization Attacks Against Chip-and-Pin Credit Card Terminals

Clever:

The way forced authorisation fraud works is that the retailer sets up the terminal for a transaction by inserting the customer’s card and entering the amount, then hands the terminal over to the customer so they can type in the PIN. But the criminal has used a stolen or counterfeit card, and due to the high value of the transaction the terminal performs a “referral”—asking the retailer to call the bank to perform additional checks such as the customer answering a security question. If the security checks pass, the bank will give the retailer an authorisation code to enter into the terminal.

The problem is that when the terminal asks for these security checks, it’s still in the hands of the criminal, and it’s the criminal that follows the steps that the retailer should have. Since there’s no phone conversation with the bank, the criminal doesn’t know the correct authorisation code. But what surprises retailers is that the criminal can type in anything at this stage and the transaction will go through. The criminal might also be able to bypass other security features, for example they could override the checking of the PIN by following the steps the retailer would if the customer has forgotten the PIN.

By the time the terminal is passed back to the retailer, it looks like the transaction was completed successfully. The receipt will differ only very subtly from that of a normal transaction, if at all. The criminal walks off with the goods and it’s only at the end of the day that the authorisation code is checked by the bank. By that time, the criminal is long gone. Because some of the security checks the bank asked for weren’t completed, the retailer doesn’t get the money.
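The flaw is easier to see as a sketch. Below is a toy Python model of the referral flow described above: the terminal cannot verify the authorisation code in real time, so it records whatever is typed in, and the mismatch only surfaces at end-of-day settlement. The threshold, the code format, and the function names are all made up for illustration; no real terminal works exactly like this.

```python
# Toy model of forced-authorisation fraud. The key flaw: during a
# referral, the terminal accepts any "authorisation code" offline.

REFERRAL_THRESHOLD = 200  # made-up floor limit that triggers a referral

def bank_issues_code(amount):
    """What the retailer *should* obtain by phoning the bank."""
    return f"AUTH-{amount}"          # stand-in for a real authorisation code

def terminal_transaction(amount, code_typed_in):
    """The terminal's view of the transaction."""
    if amount < REFERRAL_THRESHOLD:
        return {"status": "approved online", "code": None}
    # Referral: the terminal can't check the code in real time -- it just
    # records whatever was entered and prints a normal-looking receipt.
    return {"status": "approved via referral", "code": code_typed_in}

def settlement(record, amount):
    """End of day: the bank checks referral codes and refuses bogus ones."""
    if record["code"] is None:
        return "paid"
    return "paid" if record["code"] == bank_issues_code(amount) else "charged back"

# The criminal keeps hold of the terminal during the referral and types garbage.
receipt = terminal_transaction(950, "1234")
print(receipt["status"])         # looks fine to the retailer at the till
print(settlement(receipt, 950))  # "charged back" -- the retailer eats the loss
```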

Posted on December 7, 2015 at 5:35 AM • 29 Comments

Friday Squid Blogging: North Korean Squid Fisherman Found Dead in Boats

I don’t know if you’ve been following the story of the boats full of corpses that have been found in Japanese waters:

Over the past two months, at least 12 wooden boats have been found adrift or on the coast, carrying chilling cargo—the decaying bodies of 22 people, police and Japan’s coast guard said.

All the bodies were “partially skeletonized”—two were found without heads—and one boat contained six skulls, the coast guard said. The first boat was found in October, then a series of boats were found in November.

Writing on the boats suggests that they are from North Korea, and there’s other evidence that they strayed into Japanese waters hunting squid:

Squid fishing equipment found in the boats suggest that the bodies could be of fisherman from food-short North Korea who have been increasingly entering Japanese waters to hunt squid…

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 4, 2015 at 4:22 PM • 150 Comments

Worldwide Cryptographic Products Survey: Edits and Additions Wanted

Back in September, I announced my intention to survey the world market of cryptographic products. The goal is to compile a list of both free and commercial encryption products that can be used to protect arbitrary data and messages. That is, I’m not interested in products that are specifically designed for a narrow application, like financial transactions, or products that provide authentication or data integrity. I am interested in products that people like FBI director James Comey can plausibly claim help criminals communicate securely.

Together with a student here at Harvard University, I’ve compiled a spreadsheet of over 400 products from many different countries.

At this point, we would like your help. Please look at the list. Please correct anything that is wrong, and add anything that is missing. Use this form to submit changes and additions. If it’s more complicated than that, please e-mail me.

As the rhetoric surrounding weakening or banning strong encryption continues, it’s important for policymakers to understand how international the cryptographic market is, and how much of it is not under their control. My hope is that this survey will contribute to the debate by making that point.

Posted on December 3, 2015 at 7:55 AM • 50 Comments

Security vs. Business Flexibility

This article demonstrates that security is less important than functionality.

When asked about their preference if they needed to choose between IT security and business flexibility, 71 percent of respondents said that security should be equally or more important than business flexibility.

But show them the money and things change: when the same people were asked if they would take the risk of a potential security threat in order to achieve the biggest deal of their life, 69 percent of respondents say they would take the risk.

The reactions I’ve read call this a sad commentary on security, but I think it’s a perfectly reasonable result. Security is important, but when there’s an immediate conflicting requirement, security takes a back seat. I don’t think this is a problem of security literacy, or of awareness, or of training. It’s a consequence of our natural proclivity to take risks when the rewards are great.

Given the option, I would take the risk, too.

In the IT world, we need to recognize this reality. We need to build security that’s flexible and adaptable, that can respond to and mitigate security breaches, and can maintain security even in the face of business executives who would deliberately bypass security protection measures to achieve the biggest deal of their lives.

This essay previously appeared on Resilient Systems’s blog.

Posted on December 2, 2015 at 6:14 AM • 40 Comments

Tracking Someone Using LifeLock

Someone opened a LifeLock account in his ex-wife’s name, and used the service to track her bank accounts, credit cards, and other financial activities.

The article is mostly about how appalling LifeLock was about this, but I’m more interested in the surveillance possibilities. Certainly the FBI can use LifeLock to surveil people with a warrant. The FBI/NSA can also collect the financial data of every LifeLock customer with a National Security Letter. But it’s interesting how easy it was for an individual to open an account for another individual.

Posted on December 1, 2015 at 5:41 AM • 22 Comments
