Entries Tagged "security engineering"


Possible Government Demand for WhatsApp Backdoor

The New York Times is reporting that WhatsApp, and its parent company Facebook, may be headed to court over encrypted chat data that the FBI can’t decrypt.

This case is fundamentally different from the Apple iPhone case. In that case, the FBI is demanding that Apple create a hacking tool to exploit an already existing vulnerability in the iPhone 5c, because they want to get at stored data on a phone that they have in their possession. In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages. This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability—one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications. This is a much further reach for the FBI, but potentially a reasonable additional step if they win the Apple case.
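To make the engineering point concrete: in an end-to-end design, the provider’s servers only ever relay ciphertext. The following is a minimal sketch of that principle using the PyNaCl library; it illustrates the general idea only, since WhatsApp’s actual implementation is the Signal protocol, which adds key ratcheting and forward secrecy on top of this.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
# Illustrative only: WhatsApp actually runs the Signal protocol, which adds
# key ratcheting and forward secrecy on top of this basic idea.
from nacl.public import PrivateKey, Box

# Each user generates a keypair on their own device; private keys never
# leave the device, so the service provider never holds them.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts directly to Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# The server relays `ciphertext` as opaque bytes. There is no company-held
# key that could decrypt it, which is why a court order can't change the math.

# Only Bob, holding his private key, can decrypt.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```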

And once the US demands this, other countries will demand it as well. Note that the government of Brazil has arrested a Facebook employee because WhatsApp is secure.

We live in scary times when our governments want us to reduce our own security.

EDITED TO ADD (3/15): More commentary.

Posted on March 15, 2016 at 6:17 AM

The Importance of Strong Encryption to Security

Encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers. If you encrypt your laptop—and I hope you do—it protects your data if your computer is stolen. It protects our money and our privacy.

Encryption protects the identity of dissidents all over the world. It’s a vital tool to allow journalists to communicate securely with their sources, NGOs to protect their work in repressive countries, and lawyers to communicate privately with their clients. It protects our vital infrastructure: our communications network, the power grid and everything else. And as we move to the Internet of Things with its cars and thermostats and medical devices, all of which can destroy life and property if hacked and misused, encryption will become even more critical to our security.

Security is more than encryption, of course. But encryption is a critical component of security. You use strong encryption every day, and our Internet-laced world would be a far riskier place if you didn’t.

Strong encryption means unbreakable encryption. Any weakness in encryption will be exploited—by hackers, by criminals and by foreign governments. Many of the hacks that make the news can be attributed to weak or—even worse—nonexistent encryption.
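As a toy illustration of what a weakness means in practice, here is a deliberately crippled cipher whose key is limited to 20 bits, in the spirit of the old export-grade rules. The construction is invented for demonstration (it is not a real cipher design); the point is that the brute-force loop at the end works for anyone who cares to run it, and becomes hopeless only when the keyspace is large.

```python
# Toy demonstration of weak vs. strong keys. NOT a real cipher design.
import hashlib

def toy_encrypt(key: int, message: bytes) -> bytes:
    # Derive a keystream from the key and XOR it with the message.
    keystream = hashlib.sha256(key.to_bytes(16, "big")).digest()
    return bytes(m ^ k for m, k in zip(message, keystream))

KEY_BITS = 20                        # an artificially weakened keyspace
secret_key = 0x5A5A5                 # fits in 20 bits
ciphertext = toy_encrypt(secret_key, b"ATTACK AT DAWN")

# Any adversary -- hacker, criminal, foreign government -- runs the same
# loop. 2**20 keys fall in seconds; 2**128 keys are beyond any search.
for guess in range(2**KEY_BITS):
    # XOR encryption is its own inverse, so re-encrypting decrypts; this
    # uses a known-plaintext check for simplicity.
    if toy_encrypt(guess, ciphertext) == b"ATTACK AT DAWN":
        print(f"Key recovered by brute force: {guess:#x}")
        break
```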

The FBI wants the ability to bypass encryption in the course of criminal investigations. This is known as a “backdoor,” because it’s a way to get at the encrypted information that bypasses the normal encryption mechanisms. I am sympathetic to such claims, but as a technologist I can tell you that there is no way to give the FBI that capability without weakening the encryption against all adversaries. This is crucial to understand. I can’t build an access technology that only works with proper legal authorization, or only for people with a particular citizenship or the proper morality. The technology just doesn’t work that way.

If a backdoor exists, then anyone can exploit it. All it takes is knowledge of the backdoor and the capability to exploit it. And while it might temporarily be a secret, it’s a fragile secret. Backdoors are how everyone attacks computer systems.
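To see why the secret is fragile, consider the most commonly proposed backdoor design: key escrow, in which every message is additionally encrypted to a master key held for “authorized access.” The sketch below is a hypothetical construction (again using PyNaCl), not any specific government proposal, but it shows exactly where the single point of failure sits.

```python
# Hypothetical key-escrow "backdoor," sketched with PyNaCl. Illustration
# only; this is not a description of any deployed or proposed system.
from nacl.public import PrivateKey, SealedBox

escrow_key = PrivateKey.generate()     # master key held by vendor/government
recipient_key = PrivateKey.generate()  # the intended recipient's key

def encrypt_with_escrow(plaintext: bytes):
    # Encrypt once to the intended recipient...
    to_recipient = SealedBox(recipient_key.public_key).encrypt(plaintext)
    # ...and once to the escrow key, nominally for warrants only.
    to_escrow = SealedBox(escrow_key.public_key).encrypt(plaintext)
    return to_recipient, to_escrow

_, escrow_copy = encrypt_with_escrow(b"wire the funds on friday")

# The math cannot check a warrant. Whoever obtains the escrow private key --
# an insider, a thief, a foreign intelligence service -- reads every message
# in the system.
print(SealedBox(escrow_key).decrypt(escrow_copy))
```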

This means that if the FBI can eavesdrop on your conversations or get into your computers without your consent, so can cybercriminals. So can the Chinese. So can terrorists. You might not care if the Chinese government is inside your computer, but lots of dissidents do. As do the many Americans who use computers to administer our critical infrastructure. Backdoors weaken us against all sorts of threats.

Either we build encryption systems to keep everyone secure, or we build them to leave everybody vulnerable.

Even a highly sophisticated backdoor that could only be exploited by nations like the United States and China today will leave us vulnerable to cybercriminals tomorrow. That’s just the way technology works: things become easier, cheaper, more widely accessible. Give the FBI the ability to hack into a cell phone today, and tomorrow you’ll hear reports that a criminal group used that same ability to hack into our power grid.

The FBI paints this as a trade-off between security and privacy. It’s not. It’s a trade-off between more security and less security. Our national security needs strong encryption. I wish I could give the good guys the access they want without also giving the bad guys access, but I can’t. If the FBI gets its way and forces companies to weaken encryption, all of us—our data, our networks, our infrastructure, our society—will be at risk.

This essay previously appeared in the New York Times “Room for Debate” blog. It’s something I seem to need to say again and again.

Posted on February 25, 2016 at 6:40 AM

Worldwide Encryption Products Survey

Today I released my worldwide survey of encryption products.

The findings of this survey identified 619 entities that sell encryption products. Of those, 412, or two-thirds, are outside the US, calling into question the efficacy of any US mandates forcing backdoors for law-enforcement access. It also showed that anyone who wants to avoid US surveillance has over 567 competing products to choose from. These foreign products offer a wide variety of secure applications—voice encryption, text message encryption, file encryption, network-traffic encryption, anonymous currency—providing the same levels of security as US products do today.

Details:

  • There are at least 865 hardware or software products incorporating encryption from 55 different countries. This includes 546 encryption products from outside the US, representing two-thirds of the total.
  • The most common non-US country for encryption products is Germany, with 112 products. This is followed by the United Kingdom, Canada, France, and Sweden, in that order.
  • The five most common countries for encryption products—including the US—account for two-thirds of the total. But smaller countries like Algeria, Argentina, Belize, the British Virgin Islands, Chile, Cyprus, Estonia, Iraq, Malaysia, St. Kitts and Nevis, Tanzania, and Thailand each produce at least one encryption product.
  • Of the 546 foreign encryption products we found, 56% are available for sale and 44% are free. 66% are proprietary, and 34% are open source. Some for-sale products also have a free version.
  • At least 587 entities—primarily companies—either sell or give away encryption products. Of those, 374, or about two-thirds, are outside the US.
  • Of the 546 foreign encryption products, 47 are file encryption products, 68 e-mail encryption products, 104 message encryption products, 35 voice encryption products, and 61 virtual private networking products.

The report is here, here, and here. The data, in Excel form, is here.
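For anyone who wants to verify or extend these tallies, the arithmetic is a few lines of pandas against the released spreadsheet. In the sketch below, the filename and the column names (“Country,” “Cost,” “Source”) are assumptions for illustration; adjust them to the actual headers in the file.

```python
# Sketch of tallying the survey data with pandas (pip install pandas openpyxl).
# The filename and column names are assumptions; adjust to the real headers.
import pandas as pd

df = pd.read_excel("encryption-products-survey.xlsx")

# Products per country, mirroring the bullets above.
print(df["Country"].value_counts().head(6))

# Share of products developed outside the US.
foreign = df[df["Country"] != "US"]
print(f"{len(foreign)} of {len(df)} products are non-US "
      f"({len(foreign) / len(df):.0%})")

# Free vs. for-sale and proprietary vs. open source among foreign products.
print(foreign["Cost"].value_counts(normalize=True))
print(foreign["Source"].value_counts(normalize=True))
```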

Press articles are starting to come in. (Here are the previous blog posts on the effort.)

I know the database is incomplete, and I know there are errors. I welcome both additions and corrections, and will be releasing a 1.1 version of this survey in a few weeks.

EDITED TO ADD (2/13): More news.

Posted on February 11, 2016 at 11:05 AM

IT Security and the Normalization of Deviance

Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable—the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.

I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.

As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.

None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:

  • The rules are stupid and inefficient!
  • Knowledge is imperfect and uneven.
  • The work itself, along with new technology, can disrupt work behaviors and rule compliance.
  • I’m breaking the rule for the good of my patient!
  • The rules don’t apply to me/you can trust me.
  • Workers are afraid to speak up.
  • Leadership withholding or diluting findings on system problems.

Dan Luu has written about this, too.

I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.

I don’t have any magic solutions here. Banja’s suggestions are good, but general:

  • Pay attention to weak signals.
  • Resist the urge to be unreasonably optimistic.
  • Teach employees how to conduct emotionally uncomfortable conversations.
  • System operators need to feel safe in speaking up.
  • Realize that oversight and monitoring are never-ending.

The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better, deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.

This essay previously appeared on the Resilient Systems blog.

Posted on January 11, 2016 at 6:45 AM

Cory Doctorow on Software Security and the Internet of Things

Cory Doctorow has a good essay on software integrity and control problems and the Internet of Things. He’s writing about self-driving cars, but the issue is much more general. Basically, we’re going to want systems that prevent their owners from making certain changes to them. We know how to do this: digital rights management. We also know that this solution doesn’t work, and that trying introduces all sorts of security vulnerabilities. So we have a problem.

This is an old problem. (Adam Shostack and I wrote a paper about it in 1999, about smart cards.) The Internet of Things is going to make it much worse. And it’s one we’re not anywhere near prepared to solve.

Posted on December 31, 2015 at 6:12 AM

Security vs. Business Flexibility

This article demonstrates that security is less important than functionality.

When asked about their preference if they needed to choose between IT security and business flexibility, 71 percent of respondents said that security should be equally or more important than business flexibility.

But show them the money and things change: when the same people were asked if they would take the risk of a potential security threat in order to achieve the biggest deal of their life, 69 percent of respondents said they would take the risk.

The reactions I’ve read call this a sad commentary on security, but I think it’s a perfectly reasonable result. Security is important, but when there’s an immediate conflicting requirement, security takes a back seat. I don’t think this is a problem of security literacy, or of awareness, or of training. It’s a consequence of our natural proclivity to take risks when the rewards are great.

Given the option, I would choose the security threat, too.

In the IT world, we need to recognize this reality. We need to build security that’s flexible and adaptable, that can respond to and mitigate security breaches, and can maintain security even in the face of business executives who would deliberately bypass security protection measures to achieve the biggest deal of their lives.

This essay previously appeared on the Resilient Systems blog.

Posted on December 2, 2015 at 6:14 AM

Self-Destructing Computer Chip

The chip is built on glass:

Shattering the glass is straightforward. When the proper circuit is toggled, a small resistor within the substrate heats up until the glass shatters. According to Corning, it will continue shattering even after the initial break, rendering the entire chip unusable. The demo chip resistor was triggered by a photo diode that switched the circuit when a laser shone upon it. The glass plate quickly shattered into fragments once the laser touched it.

Posted on September 17, 2015 at 7:17 AM

