Entries Tagged "security engineering"

Page 10 of 15

CONIKS

CONIKS is a new, easy-to-use, transparent key-management system:

CONIKS is a key management system for end users capable of integration in end-to-end secure communication services. The main idea is that users should not have to worry about managing encryption keys when they want to communicate securely, but they also should not have to trust their secure communication service providers to act in their interest.

Here’s the academic paper. And here’s a good discussion of the protocol and how it works. This is the problem they’re trying to solve:

One of the main challenges to building usable end-to-end encrypted communication tools is key management. Services such as Apple’s iMessage have made encrypted communication available to the masses with an excellent user experience because Apple manages a directory of public keys in a centralized server on behalf of their users. But this also means users have to trust that Apple’s key server won’t be compromised or compelled by hackers or nation-state actors to insert spurious keys to intercept and manipulate users’ encrypted messages. The alternative, and more secure, approach is to have the service provider delegate key management to the users so they aren’t vulnerable to a compromised centralized key server. This is how Google’s End-To-End works right now. But decentralized key management means users must “manually” verify each other’s keys to be sure that the keys they see for one another are valid, a process that several studies have shown to be cumbersome and error-prone for the vast majority of users. So users must make the choice between strong security and great usability.
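To make the usability problem concrete: "manual" verification typically means two users comparing a short fingerprint of each other's public keys over some out-of-band channel. Here is a minimal sketch of what that entails; the key bytes and formatting are illustrative, not any real app's scheme:

```python
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    """Hash a public key and format the digest for human comparison."""
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    # Group the first 40 hex characters into 4-character chunks,
    # similar to the fingerprints many messaging apps display
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

# Both parties run this over the key they *see* for the other person and
# compare the result out of band (reading it aloud, scanning a QR code).
# Any mismatch means someone substituted a key in transit.
alice_key = b"...alice's public key bytes (illustrative)..."
print(fingerprint(alice_key))
```

The studies cited above found exactly this step to be the one most users skip or get wrong, which is the gap CONIKS is designed to close.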

And this is CONIKS:

In CONIKS, communication service providers (e.g. Google, Apple) run centralized key servers so that users don’t have to worry about encryption keys, but the main difference is CONIKS key servers store the public keys in a tamper-evident directory that is publicly auditable yet privacy-preserving. On a regular basis, CONIKS key servers publish directory summaries, which allow users in the system to verify they are seeing consistent information. To achieve this transparent key management, CONIKS uses various cryptographic mechanisms that leave undeniable evidence if any malicious outsider or insider were to tamper with any key in the directory and present different parties different views of the directory. These consistency checks can be automated and built into the communication apps to minimize user involvement.
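The core mechanism of a tamper-evident directory can be illustrated with a toy Merkle tree. This is my own simplified sketch, not CONIKS itself: the real construction uses a Merkle prefix tree plus signed tree roots and privacy protections, but the detection property is the same, since changing any entry changes the published root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class KeyDirectory:
    """Toy tamper-evident directory: a Merkle tree over (user, key) pairs.
    The server periodically publishes the root; any change to any entry
    changes the root, so divergent views are detectable."""

    def __init__(self, entries):
        items = sorted(entries.items())  # deterministic leaf order
        self.users = [u for u, _ in items]
        self.leaves = [h(f"{u}:".encode() + k) for u, k in items]

    def _next_level(self, level):
        if len(level) % 2:
            level = level + [level[-1]]  # duplicate last node on odd levels
        return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

    def root(self) -> bytes:
        level = self.leaves[:]
        while len(level) > 1:
            level = self._next_level(level)
        return level[0]

    def proof(self, user):
        """Sibling hashes from the user's leaf up to the root."""
        idx, level, path = self.users.index(user), self.leaves[:], []
        while len(level) > 1:
            if len(level) % 2:
                level = level + [level[-1]]
            path.append((level[idx ^ 1], idx % 2))  # (sibling, am I the right child?)
            level = self._next_level(level)
            idx //= 2
        return path

def verify(user, key, path, published_root) -> bool:
    """Client-side consistency check against the published root."""
    node = h(f"{user}:".encode() + key)
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == published_root
```

A client that caches the published root can verify the key it is shown without trusting the server: a server that hands different users different keys must also hand them different roots, which the roots' public audit trail exposes.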

Posted on April 6, 2016 at 10:27 AM

Cryptography Is Harder Than It Looks

Writing a magazine column is always an exercise in time travel. I’m writing these words in early December. You’re reading them in February. This means anything that’s news as I write this will be old hat in two months, and anything that’s news to you hasn’t happened yet as I’m writing.

This past November, a group of researchers found some serious vulnerabilities in an encryption protocol that I, and probably most of you, use regularly. The group alerted the vendor, who is currently working to update the protocol and patch the vulnerabilities. The news will probably go public in the middle of February, unless the vendor successfully pleads for more time to finish their security patch. Until then, I’ve agreed not to talk about the specifics.

I’m writing about this now because these vulnerabilities illustrate two very important truisms about encryption and the current debate about adding back doors to security products:

  1. Cryptography is harder than it looks.
  2. Complexity is the worst enemy of security.

These aren’t new truisms. I wrote about the first in 1997 and the second in 1999. I’ve talked about them both in Secrets and Lies (2000) and Practical Cryptography (2003). They’ve been proven true again and again, as security vulnerabilities are discovered in cryptographic system after cryptographic system. They’re both still true today.

Cryptography is harder than it looks, primarily because it looks like math. Both algorithms and protocols can be precisely defined and analyzed. This isn’t easy, and there’s a lot of insecure crypto out there, but we cryptographers have gotten pretty good at getting this part right. However, math has no agency; it can’t actually secure anything. For cryptography to work, it needs to be written in software, embedded in a larger software system, managed by an operating system, run on hardware, connected to a network, and configured and operated by users. Each of these steps brings with it difficulties and vulnerabilities.

Although cryptography gives an inherent mathematical advantage to the defender, computer and network security are much more balanced. Again and again, we find vulnerabilities not in the underlying mathematics, but in all this other stuff. It’s far easier for an attacker to bypass cryptography by exploiting a vulnerability in the system than it is to break the mathematics. This has been true for decades, and it’s one of the lessons that Edward Snowden reiterated.

The second truism is that complexity is still the worst enemy of security. The more complex a system is, the more lines of code, interactions with other systems, configuration options, and vulnerabilities there are. Implementing cryptography involves getting everything right, and the more complexity there is, the more there is to get wrong.

Vulnerabilities come from options within a system, interactions between systems, interfaces between users and systems—everywhere. If good security comes from careful analysis of specifications, source code, and systems, then a complex system is more difficult and more expensive to analyze. We simply don’t know how to securely engineer anything but the simplest of systems.

I often refer to this quote, sometimes attributed to Albert Einstein and sometimes to Yogi Berra: “In theory, theory and practice are the same. In practice, they are not.”

These truisms are directly relevant to the current debate about adding back doors to encryption products. Many governments—from China to the US and the UK—want the ability to decrypt data and communications without users’ knowledge or consent. Almost all computer security experts have two arguments against this idea: first, adding this back door makes the system vulnerable to all attackers and doesn’t just provide surreptitious access for the “good guys,” and second, creating this sort of access greatly increases the underlying system’s complexity, exponentially increasing the possibility of getting the security wrong and introducing new vulnerabilities.

Going back to the new vulnerability that you’ll learn about in mid-February, the lead researcher wrote to me: “If anyone tells you that [the vendor] can just ‘tweak’ the system a little bit to add key escrow or to man-in-the-middle specific users, they need to spend a few days watching the authentication dance between [the client device/software] and the umpteen servers it talks to just to log into the network. I’m frankly amazed that any of it works at all, and you couldn’t pay me enough to tamper with any of it.” This is an important piece of wisdom.

The designers of this system aren’t novices. They’re an experienced team with some of the best security engineers in the field. If these guys can’t get the security right, just imagine how much worse it is for smaller companies without this team’s level of expertise and resources. Now imagine how much worse it would be if you added a government-mandated back door. There are more opportunities to get security wrong, and more engineering teams without the time and expertise necessary to get it right. It’s not a recipe for security.

Unlike what much of today’s political rhetoric says, strong cryptography is essential for our information security. It’s how we protect our information and our networks from hackers, criminals, foreign governments, and terrorists. Security vulnerabilities, whether deliberate backdoor access mechanisms or accidental flaws, make us all less secure. Getting security right is harder than it looks, and our best chance is to make the cryptography as simple and public as possible.

This essay previously appeared in IEEE Security & Privacy, and is an update of something I wrote in 1997.

That vulnerability I alluded to in the essay is the recent iMessage flaw.

Posted on March 24, 2016 at 6:37 AM

Possible Government Demand for WhatsApp Backdoor

The New York Times is reporting that WhatsApp, and its parent company Facebook, may be headed to court over encrypted chat data that the FBI can’t decrypt.

This case is fundamentally different from the Apple iPhone case. In that case, the FBI is demanding that Apple create a hacking tool to exploit an already existing vulnerability in the iPhone 5c, because they want to get at stored data on a phone that they have in their possession. In the WhatsApp case, chat data is end-to-end encrypted, and there is nothing the company can do to assist the FBI in reading already encrypted messages. This case would be about forcing WhatsApp to make an engineering change in the security of its software to create a new vulnerability—one that they would be forced to push onto the user’s device to allow the FBI to eavesdrop on future communications. This is a much further reach for the FBI, but potentially a reasonable additional step if they win the Apple case.

And once the US demands this, other countries will demand it as well. Note that the government of Brazil has arrested a Facebook employee because WhatsApp is secure.

We live in scary times when our governments want us to reduce our own security.

EDITED TO ADD (3/15): More commentary.

Posted on March 15, 2016 at 6:17 AM

The Importance of Strong Encryption to Security

Encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers. If you encrypt your laptop—and I hope you do—it protects your data if your computer is stolen. It protects our money and our privacy.

Encryption protects the identity of dissidents all over the world. It’s a vital tool to allow journalists to communicate securely with their sources, NGOs to protect their work in repressive countries, and lawyers to communicate privately with their clients. It protects our vital infrastructure: our communications network, the power grid and everything else. And as we move to the Internet of Things with its cars and thermostats and medical devices, all of which can destroy life and property if hacked and misused, encryption will become even more critical to our security.

Security is more than encryption, of course. But encryption is a critical component of security. You use strong encryption every day, and our Internet-laced world would be a far riskier place if you didn’t.

Strong encryption means unbreakable encryption. Any weakness in encryption will be exploited—by hackers, by criminals and by foreign governments. Many of the hacks that make the news can be attributed to weak or—even worse—nonexistent encryption.

The FBI wants the ability to bypass encryption in the course of criminal investigations. This is known as a "backdoor," because it's a way to get at the encrypted information that bypasses the normal encryption mechanisms. I am sympathetic to such claims, but as a technologist I can tell you that there is no way to give the FBI that capability without weakening the encryption against all adversaries. This is crucial to understand. I can't build an access technology that only works with proper legal authorization, or only for people with a particular citizenship or the proper morality. The technology just doesn't work that way.

If a backdoor exists, then anyone can exploit it. All it takes is knowledge of the backdoor and the capability to exploit it. And while it might temporarily be a secret, it’s a fragile secret. Backdoors are how everyone attacks computer systems.

This means that if the FBI can eavesdrop on your conversations or get into your computers without your consent, so can cybercriminals. So can the Chinese. So can terrorists. You might not care if the Chinese government is inside your computer, but lots of dissidents do. As do the many Americans who use computers to administer our critical infrastructure. Backdoors weaken us against all sorts of threats.

Either we build encryption systems to keep everyone secure, or we build them to leave everybody vulnerable.

Even a highly sophisticated backdoor that could only be exploited by nations like the United States and China today will leave us vulnerable to cybercriminals tomorrow. That’s just the way technology works: things become easier, cheaper, more widely accessible. Give the FBI the ability to hack into a cell phone today, and tomorrow you’ll hear reports that a criminal group used that same ability to hack into our power grid.

The FBI paints this as a trade-off between security and privacy. It’s not. It’s a trade-off between more security and less security. Our national security needs strong encryption. I wish I could give the good guys the access they want without also giving the bad guys access, but I can’t. If the FBI gets its way and forces companies to weaken encryption, all of us—our data, our networks, our infrastructure, our society—will be at risk.

This essay previously appeared in the New York Times “Room for Debate” blog. It’s something I seem to need to say again and again.

Posted on February 25, 2016 at 6:40 AM

Worldwide Encryption Products Survey

Today I released my worldwide survey of encryption products.

The findings of this survey identified 619 entities that sell encryption products. Of those, 412, or two-thirds, are outside the US, calling into question the efficacy of any US mandates forcing backdoors for law-enforcement access. It also showed that anyone who wants to avoid US surveillance has over 567 competing products to choose from. These foreign products offer a wide variety of secure applications: voice encryption, text message encryption, file encryption, network-traffic encryption, and anonymous currency, providing the same levels of security as US products do today.

Details:

  • There are at least 865 hardware or software products incorporating encryption from 55 different countries. This includes 546 encryption products from outside the US, representing two-thirds of the total.
  • The most common non-US country for encryption products is Germany, with 112 products. This is followed by the United Kingdom, Canada, France, and Sweden, in that order.
  • The five most common countries for encryption products, including the US, account for two-thirds of the total. But smaller countries like Algeria, Argentina, Belize, the British Virgin Islands, Chile, Cyprus, Estonia, Iraq, Malaysia, St. Kitts and Nevis, Tanzania, and Thailand each produce at least one encryption product.
  • Of the 546 foreign encryption products we found, 56% are available for sale and 44% are free. 66% are proprietary, and 34% are open source. Some for-sale products also have a free version.
  • At least 587 entities, primarily companies, either sell or give away encryption products. Of those, 374, or about two-thirds, are outside the US.
  • Of the 546 foreign encryption products, 47 are file encryption products, 68 are e-mail encryption products, 104 are message encryption products, 35 are voice encryption products, and 61 are virtual private networking products.
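As a quick sanity check, the headline shares follow directly from the counts in these bullets. This is a back-of-the-envelope recomputation of numbers already quoted above, not new data:

```python
# Recompute the headline shares from the counts quoted in the bullets above.
total_products = 865
foreign_products = 546
# ~63%, which the report rounds to "two-thirds"
print(f"foreign share: {foreign_products / total_products:.1%}")

# Per-category counts for the 546 foreign products
categories = {
    "file encryption": 47,
    "e-mail encryption": 68,
    "message encryption": 104,
    "voice encryption": 35,
    "virtual private networking": 61,
}
categorized = sum(categories.values())
print(f"categorized: {categorized}, other categories: {foreign_products - categorized}")
```

The five listed categories cover 315 of the 546 foreign products; the remainder fall into other categories such as network-traffic encryption and anonymous currency.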

The report is here, here, and here. The data, in Excel form, is here.

Press articles are starting to come in. (Here are the previous blog posts on the effort.)

I know the database is incomplete, and I know there are errors. I welcome both additions and corrections, and will be releasing a 1.1 version of this survey in a few weeks.

EDITED TO ADD (2/13): More news.

Posted on February 11, 2016 at 11:05 AM

IT Security and the Normalization of Deviance

Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable—the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the "normalization of deviance," a term coined by sociologist Diane Vaughan:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.

I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.

As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.

None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:

  • The rules are stupid and inefficient!
  • Knowledge is imperfect and uneven.
  • The work itself, along with new technology, can disrupt work behaviors and rule compliance.
  • I’m breaking the rule for the good of my patient!
  • The rules don’t apply to me/you can trust me.
  • Workers are afraid to speak up.
  • Leadership withholding or diluting findings on system problems.

Dan Luu has written about this, too.

I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.

I don’t have any magic solutions here. Banja’s suggestions are good, but general:

  • Pay attention to weak signals.
  • Resist the urge to be unreasonably optimistic.
  • Teach employees how to conduct emotionally uncomfortable conversations.
  • System operators need to feel safe in speaking up.
  • Realize that oversight and monitoring are never-ending.

The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.

This essay previously appeared on the Resilient Systems blog.

Posted on January 11, 2016 at 6:45 AM

Cory Doctorow on Software Security and the Internet of Things

Cory Doctorow has a good essay on software integrity and control problems and the Internet of Things. He’s writing about self-driving cars, but the issue is much more general. Basically, we’re going to want systems that prevent their owners from making certain changes to them. We know how to do this: digital rights management. We also know that this solution doesn’t work, and that trying to make it work introduces all sorts of security vulnerabilities. So we have a problem.

This is an old problem. (Adam Shostack and I wrote a paper about it in 1999, about smart cards.) The Internet of Things is going to make it much worse. And it’s one we’re not anywhere near prepared to solve.

Posted on December 31, 2015 at 6:12 AM

Security vs. Business Flexibility

This article demonstrates that security is less important than functionality.

When asked about their preference if they needed to choose between IT security and business flexibility, 71 percent of respondents said that security should be equally or more important than business flexibility.

But show them the money and things change: when the same people were asked if they would take the risk of a potential security threat in order to achieve the biggest deal of their life, 69 percent of respondents said they would take the risk.

The reactions I’ve read call this a sad commentary on security, but I think it’s a perfectly reasonable result. Security is important, but when there’s an immediate conflicting requirement, security takes a back seat. I don’t think this is a problem of security literacy, or of awareness, or of training. It’s a consequence of our natural proclivity to take risks when the rewards are great.

Given the option, I would choose the security threat, too.

In the IT world, we need to recognize this reality. We need to build security that’s flexible and adaptable, that can respond to and mitigate security breaches, and can maintain security even in the face of business executives who would deliberately bypass security protection measures to achieve the biggest deal of their lives.

This essay previously appeared on Resilient Systems’s blog.

Posted on December 2, 2015 at 6:14 AM

