Entries Tagged "security mindset"

Buzzword Watch: Prosilience

Summer Fowler at CMU has invented a new word: prosilience:

I propose that we build operationally PROSILIENT organizations. If operational resilience, as we like to say, is risk management “all grown up,” then prosilience is resilience with consciousness of environment, self-awareness, and the capacity to evolve. It is not about being able to operate through disruption, it is about anticipating disruption and adapting before it even occurs—a proactive version of resilience. Nascent prosilient capabilities include exercises (tabletop or technical) that simulate how organizations would respond to a scenario. The goal, however, is to automate, expand, and perform continuous exercises based on real-world indicators rather than on scenarios.

I have long been a big fan of resilience as a security concept, and think it's the property we should be aiming for. I'm not sure prosilience buys me anything new, but this is my first encounter with the buzzword. It would certainly make for a best-selling business-book title.

Posted on March 2, 2017 at 6:08 AM

Cryptography Is Harder Than It Looks

Writing a magazine column is always an exercise in time travel. I’m writing these words in early December. You’re reading them in February. This means anything that’s news as I write this will be old hat in two months, and anything that’s news to you hasn’t happened yet as I’m writing.

This past November, a group of researchers found some serious vulnerabilities in an encryption protocol that I, and probably most of you, use regularly. The group alerted the vendor, who is currently working to update the protocol and patch the vulnerabilities. The news will probably go public in the middle of February, unless the vendor successfully pleads for more time to finish their security patch. Until then, I’ve agreed not to talk about the specifics.

I’m writing about this now because these vulnerabilities illustrate two very important truisms about encryption and the current debate about adding back doors to security products:

  1. Cryptography is harder than it looks.
  2. Complexity is the worst enemy of security.

These aren’t new truisms. I wrote about the first in 1997 and the second in 1999. I’ve talked about them both in Secrets and Lies (2000) and Practical Cryptography (2003). They’ve been proven true again and again, as security vulnerabilities are discovered in cryptographic system after cryptographic system. They’re both still true today.

Cryptography is harder than it looks, primarily because it looks like math. Both algorithms and protocols can be precisely defined and analyzed. This isn’t easy, and there’s a lot of insecure crypto out there, but we cryptographers have gotten pretty good at getting this part right. However, math has no agency; it can’t actually secure anything. For cryptography to work, it needs to be written in software, embedded in a larger software system, managed by an operating system, run on hardware, connected to a network, and configured and operated by users. Each of these steps brings with it difficulties and vulnerabilities.

Although cryptography gives an inherent mathematical advantage to the defender, computer and network security are much more balanced. Again and again, we find vulnerabilities not in the underlying mathematics, but in all this other stuff. It’s far easier for an attacker to bypass cryptography by exploiting a vulnerability in the system than it is to break the mathematics. This has been true for decades, and it’s one of the lessons that Edward Snowden reiterated.
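As a purely illustrative sketch (it is not from the essay, and the key and function names below are made up), consider verifying an HMAC tag in Python. The mathematics is identical in both versions; the difference is a single implementation detail, how the two tags are compared, and that detail alone can open a timing side channel.

    import hashlib
    import hmac

    KEY = b"server-secret-key"  # hypothetical key, for illustration only

    def sign(message: bytes) -> bytes:
        """Compute an HMAC-SHA256 tag; the math here is not the weak point."""
        return hmac.new(KEY, message, hashlib.sha256).digest()

    def verify_naive(message: bytes, tag: bytes) -> bool:
        # Risky: a generic equality check can return as soon as it finds a
        # mismatch, so response timing may hint at how much of a forged tag
        # is correct.
        return sign(message) == tag

    def verify_safer(message: bytes, tag: bytes) -> bool:
        # hmac.compare_digest compares in constant time, closing that channel.
        return hmac.compare_digest(sign(message), tag)

    if __name__ == "__main__":
        msg = b"example message"
        tag = sign(msg)
        print(verify_naive(msg, tag), verify_safer(msg, tag))  # True True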

The second truism is that complexity is still the worst enemy of security. The more complex a system is, the more lines of code, interactions with other systems, configuration options, and vulnerabilities there are. Implementing cryptography involves getting everything right, and the more complexity there is, the more there is to get wrong.

Vulnerabilities come from options within a system, interactions between systems, interfaces between users and systems—everywhere. If good security comes from careful analysis of specifications, source code, and systems, then a complex system is more difficult and more expensive to analyze. We simply don’t know how to securely engineer anything but the simplest of systems.

I often refer to this quote, sometimes attributed to Albert Einstein and sometimes to Yogi Berra: “In theory, theory and practice are the same. In practice, they are not.”

These truisms are directly relevant to the current debate about adding back doors to encryption products. Many governments—from China to the US and the UK—want the ability to decrypt data and communications without users’ knowledge or consent. Almost all computer security experts have two arguments against this idea: first, adding this back door makes the system vulnerable to all attackers and doesn’t just provide surreptitious access for the “good guys,” and second, creating this sort of access greatly increases the underlying system’s complexity, exponentially increasing the possibility of getting the security wrong and introducing new vulnerabilities.

Going back to the new vulnerability that you’ll learn about in mid-February, the lead researcher wrote to me: “If anyone tells you that [the vendor] can just ‘tweak’ the system a little bit to add key escrow or to man-in-the-middle specific users, they need to spend a few days watching the authentication dance between [the client device/software] and the umpteen servers it talks to just to log into the network. I’m frankly amazed that any of it works at all, and you couldn’t pay me enough to tamper with any of it.” This is an important piece of wisdom.

The designers of this system aren’t novices. They’re an experienced team with some of the best security engineers in the field. If these guys can’t get the security right, just imagine how much worse it is for smaller companies without this team’s level of expertise and resources. Now imagine how much worse it would be if you added a government-mandated back door. There are more opportunities to get security wrong, and more engineering teams without the time and expertise necessary to get it right. It’s not a recipe for security.

Despite what much of today's political rhetoric claims, strong cryptography is essential to our information security. It's how we protect our information and our networks from hackers, criminals, foreign governments, and terrorists. Security vulnerabilities, whether deliberate backdoor access mechanisms or accidental flaws, make us all less secure. Getting security right is harder than it looks, and our best chance is to make the cryptography as simple and public as possible.

This essay previously appeared in IEEE Security & Privacy, and is an update of something I wrote in 1997.

That vulnerability I alluded to in the essay is the recent iMessage flaw.

Posted on March 24, 2016 at 6:37 AM

Nice Security Mindset Example

A real-world one-way function:

Alice and Bob procure the same edition of the white pages book for a particular town, say Cambridge. For each letter Alice wants to encrypt, she finds a person in the book whose last name starts with this letter and uses his/her phone number as the encryption of that letter.

To decrypt the message Bob has to read through the whole book to find all the numbers.

And a way to break it:

I still use this example, with an assumption that there is no reverse look-up. I recently taught it to my AMSA students. And one of my 8th graders said, “If I were Bob, I would just call all the phone numbers and ask their last names.”

In the fifteen years I've been using this example, this idea never occurred to me. I am very shy, so it would never enter my mind to call a stranger and ask for their last name. My student made me realize that my own personality affected my mathematical inventiveness.

I’ve written about the security mindset in the past, and this is a great example of it.
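For readers who want to play with the idea, here is a minimal sketch of the toy phone-book scheme described above; the tiny directory and numbers are made up, standing in for the Cambridge white pages.

    import random

    # Made-up stand-in for the white pages: last name -> phone number.
    PHONE_BOOK = {
        "Adams": "617-555-0101", "Avery": "617-555-0199",
        "Baker": "617-555-0102", "Brooks": "617-555-0188",
        "Clark": "617-555-0103", "Cohen": "617-555-0177",
    }

    def encrypt(plaintext: str) -> list:
        """Alice: for each letter, pick someone whose last name starts with
        it and emit that person's phone number."""
        ciphertext = []
        for letter in plaintext.upper():
            candidates = [number for name, number in PHONE_BOOK.items()
                          if name.startswith(letter)]
            ciphertext.append(random.choice(candidates))
        return ciphertext

    def decrypt(ciphertext: list) -> str:
        """Bob: without a reverse directory, he must read through the whole
        book to match each number back to a name (the 'hard' direction)."""
        plaintext = []
        for number in ciphertext:
            for name, listed in PHONE_BOOK.items():
                if listed == number:
                    plaintext.append(name[0])
                    break
        return "".join(plaintext)

    # The student's break amounts to the reverse lookup the scheme assumed
    # away: call each number and simply ask for the last name.
    print(decrypt(encrypt("CAB")))  # -> "CAB"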

Posted on April 9, 2013 at 1:49 PM

So You Want to Be a Security Expert

I regularly receive e-mail from people who want advice on how to learn more about computer security, either as a course of study in college or as an IT person considering it as a career choice.

First, know that there are many subspecialties in computer security. You can be an expert in keeping systems from being hacked, or in creating unhackable software. You can be an expert in finding security problems in software, or in networks. You can be an expert in viruses, or policies, or cryptography. There are many, many opportunities for many different skill sets. You don’t have to be a coder to be a security expert.

In general, though, I have three pieces of advice to anyone who wants to learn computer security.

  • Study. Studying can take many forms. It can be classwork, either at universities or at training conferences like SANS and Offensive Security. (These are good self-starter resources.) It can be reading; there are a lot of excellent books—and blogs—that teach different aspects of computer security. Don’t limit yourself to computer science, either. You can learn a lot by studying other areas of security, and soft sciences like economics, psychology, and sociology.
  • Do. Computer security is fundamentally a practitioner’s art, and that requires practice. This means using what you’ve learned to configure security systems, design new security systems, and—yes—break existing security systems. This is why many courses have strong hands-on components; you won’t learn much without it.
  • Show. It doesn’t matter what you know or what you can do if you can’t demonstrate it to someone who might want to hire you. This doesn’t just mean sounding good in an interview. It means sounding good on mailing lists and in blog comments. You can show your expertise by making podcasts and writing your own blog. You can teach seminars at your local user group meetings. You can write papers for conferences, or books.

I am a fan of security certifications, which can often demonstrate all of these things to a potential employer quickly and easily.

I’ve really said nothing here that isn’t also true for a gazillion other areas of study, but security also requires a particular mindset—one I consider essential for success in this field. I’m not sure it can be taught, but it certainly can be encouraged. “This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.” This is especially true if you want to design security systems and not just implement them. Remember Schneier’s Law: “Any person can invent a security system so clever that she or he can’t think of how to break it.” The only way your designs are going to be trusted is if you’ve made a name for yourself breaking other people’s designs.

One final word about cryptography. Modern cryptography is particularly hard to learn. In addition to everything above, it requires graduate-level knowledge in mathematics. And, as in computer security in general, your prowess is demonstrated by what you can break. The field has progressed a lot since I wrote this guide and self-study cryptanalysis course a dozen years ago, but they’re not bad places to start.

This essay originally appeared on “Krebs on Security,” the second in a series of answers to that question. This is the first. There will be more.

Posted on July 5, 2012 at 6:17 AM

Teaching the Security Mindset

In 2008, I wrote about the security mindset and how difficult it is to teach. Two professors teaching a cyberwarfare class gave an exam where they expected their students to cheat:

Our variation of the Kobayashi Maru utilized a deliberately unfair exam—write the first 100 digits of pi (3.14159…) from memory—and took place in the pilot offering of a governmental cyber warfare course. The topic of the test itself was somewhat arbitrary; we only sought a scenario that would be too challenging to meet through traditional studying. By design, students were given little advance warning for the exam. Insurrection immediately followed. Why were we giving them such an unfair exam? What conceivable purpose would it serve? Now that we had their attention, we informed the class that we had no expectation that they would actually memorize the digits of pi; we expected them to cheat. How they chose to cheat was entirely up to the student. Collaborative cheating was also encouraged, but importantly, students would fail the exam if caught.

Excerpt:

Students took diverse approaches to cheating, and of the 20 students in the course, none were caught. One student used his Mandarin Chinese skills to hide the answers. Another built a small PowerPoint presentation consisting of three slides (all black slide, digits of pi slide, all black slide). The idea being that the student could flip to the answer when the proctor wasn’t looking and easily flip forwards or backward to a blank screen to hide the answer. Several students chose to hide answers on a slip of paper under the keyboards on their desks. One student hand wrote the answers on a blank sheet of paper (in advance) and simply turned it in, exploiting the fact that we didn’t pass out a formal exam sheet. Another just memorized the first ten digits of pi and randomly filled in the rest, assuming the instructors would be too lazy to check every digit. His assumption was correct.

Read the whole paper. This is the conclusion:

Teach yourself and your students to cheat. We’ve always been taught to color inside the lines, stick to the rules, and never, ever, cheat. In seeking cyber security, we must drop that mindset. It is difficult to defeat a creative and determined adversary who must find only a single flaw among myriad defensive measures to be successful. We must not tie our hands, and our intellects, at the same time. If we truly wish to create the best possible information security professionals, being able to think like an adversary is an essential skill. Cheating exercises provide long term remembrance, teach students how to effectively evaluate a system, and motivate them to think imaginatively. Cheating will challenge students’ assumptions about security and the trust models they envision. Some will find the process uncomfortable. That is OK and by design. For it is only by learning the thought processes of our adversaries that we can hope to unleash the creative thinking needed to build the best secure systems, become effective at red teaming and penetration testing, defend against attacks, and conduct ethical hacking activities.

Here’s a Boing Boing post, including a video of a presentation about the exercise.

Posted on June 13, 2012 at 12:08 PM

James Randi on Magicians and the Security Mindset

Okay, so he doesn’t use that term. But he explains how a magician’s inherent ability to detect deception can be useful to science.

We can’t make magicians out of scientists—we wouldn’t want to—but we can help scientists “think in the groove”—think like a magician. And we should.

We are not scientists—with a few rare but important exceptions, like Ray Hyman and Richard Wiseman. But our highly specific expertise comes from knowledge of the ways in which our audiences can be led to quite false conclusions by calculated means: psychological, physical and especially sensory, visual being rather paramount since it has such a range of variety.

The fact that ours is a concealed art as well as one designed to confound persons of average and advanced thinking skills—our typical audience—makes it rather immune to ordinary analysis or solutions.

I’ve observed that scientists tend to think and perceive logically by using their training and observational skills—of course—and are thus often psychologically insulated from the possibility that there might be chicanery at work. This is where magicians can come in. No matter how well educated, or how basically intelligent, trained, or observant a scientist may be, s/he may be a poor judge of a methodology employed in deliberate deception.

Here’s my essay on the security mindset.

Posted on April 6, 2012 at 5:35 AM

Secret Government Communications Cables Buried Around Washington, DC

Interesting:

This part happens all the time: A construction crew putting up an office building in the heart of Tysons Corner a few years ago hit a fiber optic cable no one knew was there.

This part doesn’t: Within moments, three black sport-utility vehicles drove up, a half-dozen men in suits jumped out and one said, “You just hit our line.”

Whose line, you may ask? The guys in suits didn’t say, recalled Aaron Georgelas, whose company, the Georgelas Group, was developing the Greensboro Corporate Center on Spring Hill Road. But Georgelas assumed that he was dealing with the federal government and that the cable in question was “black” wire—a secure communications line used for some of the nation’s most secretive intelligence-gathering operations.

Black wire is one of the looming perils of the massive construction that has come to Tysons, where miles and miles of secure lines are thought to serve such nearby agencies as the Office of the Director of National Intelligence, the National Counterterrorism Center and, a few miles away in McLean, the Central Intelligence Agency. After decades spent cutting through red tape to begin work on a Metrorail extension and the widening of the Capital Beltway, crews are now stirring up tons of dirt where the black lines are located.

“Yeah, we heard about the black SUVs,” said Paul Goguen, the engineer in charge of relocating electric, gas, water, sewer, cable, telephone and other communications lines to make way for Metro through Tysons. “We were warned that if they were hit, the company responsible would show up before you even had a chance to make a phone call.”

EDITED TO ADD (6/4): In comments, Angel one gives a great demonstration of the security mindset:

So if I want to stop a construction project in the DC area, all I need to do is drive up in a black SUV, wear a suit and sunglasses, and refuse to identify myself.

Posted on June 4, 2009 at 1:07 PM

DHS Recruitment Drive

Anyone interested?

General Dynamics Information Technology put out an ad last month on behalf of the Homeland Security Department seeking someone who could “think like the bad guy.” Applicants, it said, must understand hackers’ tools and tactics and be able to analyze Internet traffic and identify vulnerabilities in the federal systems.

In the Pentagon’s budget request submitted last week, Defense Secretary Robert Gates said the Pentagon will increase the number of cyberexperts it can train each year from 80 to 250 by 2011.

Posted on April 21, 2009 at 6:25 AM
