Entries Tagged "security education"


I’m Writing a Book on Security

I’m writing a book on security in the highly connected Internet-of-Things world. Tentative title:

Click Here to Kill Everybody
Peril and Promise in a Hyper-Connected World

There are two underlying metaphors in the book. The first is what I have called the World-Sized Web, which is that combination of mobile, cloud, persistence, personalization, agents, cyber-physical systems, and the Internet of Things. The second is what I’m calling the “war of all against all,” which is the recognition that security policy is a series of “wars” between various interests, and that any policy decision in any one of the wars affects all the others. I am not wedded to either metaphor at this point.

This is the current table of contents, with three of the chapters broken out into sub-chapters:

  • Introduction
  • The World-Sized Web
  • The Coming Threats
    • Privacy Threats
    • Availability and Integrity Threats
    • Threats from Software-Controlled Systems
    • Threats from Interconnected Systems
    • Threats from Automatic Algorithms
    • Threats from Autonomous Systems
    • Other Threats of New Technologies
    • Catastrophic Risk
    • Cyberwar
  • The Current Wars
    • The Copyright Wars
    • The US/EU Data Privacy Wars
    • The War for Control of the Internet
    • The War of Secrecy
  • The Coming Wars
    • The War for Your Data
    • The War Against Your Computers
    • The War for Your Embedded Computers
    • The Militarization of the Internet
    • The Powerful vs. the Powerless
    • The Rights of the Individual vs. the Rights of Society
  • The State of Security
  • Near-Term Solutions
  • Security for an Empowered World
  • Conclusion

That will change, of course. If the past is any guide, everything will change.

Questions: Am I missing any threats? Am I missing any wars?

Current schedule is for me to finish writing this book by the end of September, and have it published at the end of April 2017. I hope to have pre-publication copies available for sale at the RSA Conference next year. As with my previous book, Norton is the publisher.

So if you notice me blogging less this summer, this is why.

Posted on April 29, 2016 at 1:02 PM

IT Security and the Normalization of Deviance

Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable—the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.

I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about how we use technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.

As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.

None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:

  • The rules are stupid and inefficient!
  • Knowledge is imperfect and uneven.
  • The work itself, along with new technology, can disrupt work behaviors and rule compliance.
  • I’m breaking the rule for the good of my patient!
  • The rules don’t apply to me/you can trust me.
  • Workers are afraid to speak up.
  • Leadership withholding or diluting findings on system problems.

Dan Luu has written about this, too.

I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.

I don’t have any magic solutions here. Banja’s suggestions are good, but general:

  • Pay attention to weak signals.
  • Resist the urge to be unreasonably optimistic.
  • Teach employees how to conduct emotionally uncomfortable conversations.
  • System operators need to feel safe in speaking up.
  • Realize that oversight and monitoring are never-ending.

The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.

This essay previously appeared on the Resilient Systems blog.

Posted on January 11, 2016 at 6:45 AM

Testing the Usability of PGP Encryption Tools

“Why Johnny Still, Still Can’t Encrypt: Evaluating the Usability of a Modern PGP Client,” by Scott Ruoti, Jeff Andersen, Daniel Zappala, and Kent Seamons.

Abstract: This paper presents the results of a laboratory study involving Mailvelope, a modern PGP client that integrates tightly with existing webmail providers. In our study, we brought in pairs of participants and had them attempt to use Mailvelope to communicate with each other. Our results show that more than a decade and a half after Why Johnny Can’t Encrypt, modern PGP tools are still unusable for the masses. We finish with a discussion of pain points encountered using Mailvelope, and discuss what might be done to address them in future PGP systems.

I have recently come to the conclusion that e-mail is fundamentally unsecurable. The things we want out of e-mail, and an e-mail system, are not readily compatible with encryption. I advise people who want communications security to not use e-mail, but instead use an encrypted messaging client like OTR or Signal.

Posted on November 12, 2015 at 2:28 PM

Comparing the Security Practices of Experts and Non-Experts

New paper: “‘…no one can hack my mind’: Comparing Expert and Non-Expert Security Practices,” by Iulia Ion, Rob Reeder, and Sunny Consolvo.

Abstract: The state of advice given to people today on how to stay safe online has plenty of room for improvement. Too many things are asked of them, which may be unrealistic, time consuming, or not really worth the effort. To improve the security advice, our community must find out what practices people use and what recommendations, if messaged well, are likely to bring the highest benefit while being realistic to ask of people. In this paper, we present the results of a study which aims to identify which practices people do that they consider most important at protecting their security online. We compare self-reported security practices of non-experts to those of security experts (i.e., participants who reported having five or more years of experience working in computer security). We report on the results of two online surveys—one with 231 security experts and one with 294 MTurk participants—on what the practices and attitudes of each group are. Our findings show a discrepancy between the security practices that experts and non-experts report taking. For instance, while experts most frequently report installing software updates, using two-factor authentication and using a password manager to stay safe online, non-experts report using antivirus software, visiting only known websites, and changing passwords frequently.

Posted on July 30, 2015 at 2:21 PM
