Entries Tagged "security policies"

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disney World, at international borders. The government of Canada is evaluating iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that, to defend against this proliferation of biometric authentication, individuals should publish their own biometrics. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do, embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: phishing is a mismatch problem between what’s in the user’s head and where the URL actually goes. SSL doesn’t work, but how should websites authenticate themselves to users?) Her solution is protected links: a set of secure bookmarks in protected browsers. She went on to describe a prototype and tests run with user subjects.
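
For illustration, here is a minimal sketch of the secure-bookmark idea (not Smetters’s actual prototype; the bookmark store, the hostname, and the certificate-fingerprint pinning are all assumptions). The point it captures: the user reaches a sensitive site only through a pre-established bookmark, and the client refuses to connect if the site no longer matches what was pinned when the bookmark was created.

```python
import hashlib
import ssl

# Hypothetical bookmark store: each entry pins the host and the SHA-256
# fingerprint of the server certificate seen when the bookmark was created.
# Host and fingerprint below are placeholders, not real values.
SECURE_BOOKMARKS = {
    "my-bank": {
        "host": "www.example-bank.com",
        "cert_sha256": "0" * 64,
    },
}

def current_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the server's certificate and return its SHA-256 fingerprint."""
    pem_cert = ssl.get_server_certificate((host, port))
    der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der_cert).hexdigest()

def open_bookmark(name: str) -> str:
    """Return the URL to open, but only if the pinned certificate still matches.

    The user never types or clicks a URL for the sensitive site, so a phisher's
    look-alike URL never enters the picture.
    """
    entry = SECURE_BOOKMARKS[name]
    if current_fingerprint(entry["host"]) != entry["cert_sha256"]:
        raise RuntimeError(f"pinned certificate for {name!r} changed; refusing to connect")
    return f"https://{entry['host']}/"
```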

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
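
As a rough sketch of the trustee idea (not Reeder’s actual protocol; the 3-of-4 threshold, code format, and helper names are assumptions): each trustee who vouches for the user receives a one-time code to relay to the account holder out of band, and recovery succeeds only with valid codes from enough distinct trustees.

```python
import secrets

TRUSTEES = {"alice", "bob", "carol", "dave"}  # designated in advance by the account holder
THRESHOLD = 3                                 # assumed number of codes required to recover

issued_codes: dict[str, str] = {}  # one-time codes handed to trustees who vouch for the user

def issue_code(trustee: str) -> str:
    """Give a vouching trustee a one-time code to relay to the account holder out of band."""
    if trustee not in TRUSTEES:
        raise ValueError("not a designated trustee")
    code = secrets.token_hex(8)
    issued_codes[trustee] = code
    return code

def recover(presented: dict[str, str]) -> bool:
    """Recovery succeeds only with valid codes from THRESHOLD distinct trustees."""
    valid = {
        trustee for trustee, code in presented.items()
        if trustee in issued_codes
        and secrets.compare_digest(issued_codes[trustee].encode(), code.encode())
    }
    return len(valid) >= THRESHOLD
```

Requiring several distinct trustees is what keeps a single compromised, colluding, or self-designated trustee from being sufficient on its own.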

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore the warnings. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
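
The block/allow/warn-only-in-the-grey-area policy is simple to write down; this is a hedged illustration with made-up thresholds, not the logic of Cranor’s prototype:

```python
def website_action(danger_score: float, block_above: float = 0.9, allow_below: float = 0.1) -> str:
    """Map an automated danger estimate (0.0 to 1.0) to an action.

    Thresholds are made up for illustration. High-confidence danger is blocked
    outright, low-confidence danger is allowed without interrupting the user,
    and only the grey area produces a warning that asks the user to decide,
    which keeps warnings rare enough to be taken seriously.
    """
    if danger_score >= block_above:
        return "block"
    if danger_score <= allow_below:
        return "allow"
    return "warn"  # show context and help the user make the decision
```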

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

Secure Version of Windows Created for the U.S. Air Force

I have long argued that the government should use its massive purchasing power to pressure software vendors to improve security. Seems like the U.S. Air Force has done just that:

The Air Force, on the verge of renegotiating its desktop-software contract with Microsoft, met with Ballmer and asked the company to deliver a secure configuration of Windows XP out of the box. That way, Air Force administrators wouldn’t have to spend time re-configuring, and the department would have uniform software across the board, making it easier to control and maintain patches.

Surprisingly, Microsoft quickly agreed to the plan, and Ballmer got personally involved in the project.

“He has half-a-dozen clients that he personally gets involved with, and he saw that this just made a lot of sense,” Gilligan said. “They had already done preliminary work themselves trying to identify what would be a more secure configuration. So we fine-tuned and added to that.”

The NSA got together with the National Institute of Standards and Technology, the Defense Information Systems Agency and the Center for Internet Security to decide what to lock down in the Air Force special edition.

Many of the changes were complex and technical, but Gilligan says one of the most important and simplest was an obvious fix to how Windows XP handled passwords. The Air Force insisted the system be configured so administrative passwords were unique, and different from general user passwords, preventing an average user from obtaining administrative privileges. Specifications were added to increase the length and complexity of passwords and expire them every 60 days.
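
As a rough sketch of what such a policy amounts to in code (the minimum length and the complexity rule here are assumptions; the article specifies only the 60-day expiration and unique administrative passwords):

```python
import re
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=60)  # the 60-day expiration described in the article
MIN_LENGTH = 12                        # assumed value; the article only says length was increased

def password_ok(password: str, last_changed: datetime, admin_passwords: set[str],
                is_admin_password: bool = False) -> bool:
    """Check a password against a policy like the one described above (illustrative only)."""
    if datetime.now() - last_changed > MAX_PASSWORD_AGE:
        return False  # expired: must be changed every 60 days
    if len(password) < MIN_LENGTH:
        return False  # too short
    # "Complexity" interpreted (an assumption) as at least three character classes.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    if sum(bool(re.search(c, password)) for c in classes) < 3:
        return False
    # A general user's password may never match an administrative password.
    if not is_admin_password and password in admin_passwords:
        return False
    return True
```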

It then took two years for the Air Force to catalog and test all the software applications on its networks against the new configuration to uncover conflicts. In some cases, where internally designed software interacted with Windows XP in an insecure way, they had to change the in-house software.

Now I want Microsoft to offer this configuration to everyone.

EDITED TO ADD (5/6): Microsoft responds:

Thanks for covering this topic, but unfortunately the reporter for the original article got a lot of the major facts, which you relied upon, wrong. For instance, there isn’t a special version of Windows for the Air Force. They use the same SKUs as everyone else. We didn’t deliver special settings that only the Air Force can access. The Air Force asked us to help them create hardened GPOs and images, which the AF could use as the standard image. We agreed to assist, as we do with any company that hires us to assist in setting their own security policy as implemented in Windows.

The work from the AF ended up morphing into the Federal Desktop Core Configuration (FDCC) recommendations maintained by NIST. There are differences, but they are essentially the same thing. NIST initially used even more secure settings in the hardening process; many of those have since been relaxed because of operational issues, so the FDCC is now even closer to what the AF created.

Anyone can download the FDCC settings, documentation, and even complete images. I worked on the FDCC project for a little over a year, and Aaron Margosis has been involved for many years and continues to be involved. He offers all sorts of public knowledge and useful tools. Here, Aaron has written a couple of tools that anyone can use to apply FDCC settings to local group policy. They include the source code, if anyone wants to customize them.

In the initial article, a lot of the other improvements, such as patching, came from the use of better tools (SCCM, etc.), and were not necessarily due solely to the changes in the base image (although that certainly didn’t hurt). So it seems the author mixed up some of the different technology pushes and wrapped them up into a single story. He also seems to imply that this is something special and secret, but the truth is there is more openness with the FDCC program and the surrounding security outcomes than anything we’ve ever done before. Even better, there are huge agencies that have already gone first in trying to use these hardened settings, and have essentially been beta testers for the rest of the world. The FDCC settings may not be the best fit for every company, but they are a good model to compare against.

Let me know if you have any questions.

Roger A. Grimes, Security Architect, ACE Team, Microsoft

EDITED TO ADD (5/12): FDCC policy specs.

Posted on May 6, 2009 at 6:43 AM

Melissa Hathaway Interview

President Obama has tasked Melissa Hathaway with conducting a 60-day review of the nation’s cybersecurity policies.

Who is she?

Hathaway has been working as a cybercoordination executive for the Office of the Director of National Intelligence. She chaired a multiagency group called the National Cyber Study Group that was instrumental in developing the Comprehensive National Cyber Security Initiative, which was approved by former President George W. Bush early last year. Since then, she has been in charge of coordinating and monitoring the CNCI’s implementation.

Although, honestly, the best thing to read to get an idea of how she thinks is this interview from IEEE Security & Privacy:

In the technology field, concern to be first to market often does trump the need for security to be built in up front. Most of the nation’s infrastructure is owned, operated, and developed by the commercial sector. We depend on this sector to address the nation’s broader needs, so we’ll need a new information-sharing environment. Private-sector risk models aren’t congruent with the needs for national security. We need to think about a way to do business that meets both sets of needs. The proposed revisions to Federal Information Security Management Act [FISMA] legislation will raise awareness of vulnerabilities within broader-based commercial systems.

Increasingly, we see industry jointly addressing these vulnerabilities, such as with the Industry Consortium for Advancement of Security on the Internet to share common vulnerabilities and response mechanisms. In addition, there’s the Software Assurance Forum for Excellence in Code, an alliance of vendors who seek to improve software security. Industry is beginning to understand that [it has a] shared risk and shared responsibilities and sees the advantage of coordinating and collaborating up front during the development stage, so that we can start to address vulnerabilities from day one. We also need to look for niche partnerships to enhance product development and build trust into components. We need to understand when and how we introduce risk into the system and ask ourselves whether that risk is something we can live with.

The government is using its purchasing power to influence the market toward better security. We’re already seeing results with the Federal Desktop Core Configuration [FDCC] initiative, a mandated security configuration for federal computers set by the OMB. The Department of Commerce is working with several IT vendors on standardizing security settings for a wide variety of IT products and environments. Because a broad population of the government is using Windows XP and Vista, the FDCC initiative worked with Microsoft and others to determine security needs up front.

Posted on February 24, 2009 at 12:36 PM

Insiders

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization’s network. The bomb would have “detonated” on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.

Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they’re known by the system. They know how the system and its security work, and where its weak points are. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana’s attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.

Insiders are especially pernicious attackers because they’re trusted. They have access because they’re supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They’re already inside the security system, making them much harder to defend against.

It’s not possible to design a system without trusted people. They’re everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company’s name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn’t operate without trusted people.

Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.

Of course, this problem is much, much older than computers. And the solutions haven’t changed much throughout history, either. There are five basic techniques to deal with trusted people:

1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.

2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from getting certain jobs, limiting other jobs to citizens, the TSA’s no-fly list, and so on, as well as behind bonding employees, which means there are deep pockets standing behind them if they turn out not to be trustworthy.

3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as “need to know” and other levels of security clearance.

4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It’s why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It’s the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It’s why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It’s why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud. (A small code sketch of this kind of dual-control check follows this list.)

5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrent effect and increase the overall level of security in society. This is why audit is so vital.
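
Of these five techniques, number 4 is the easiest to show directly in code. A hypothetical dual-control check, with a made-up threshold and function name, might look like this sketch:

```python
def execute_transfer(amount: float, initiator: str, approver: str | None = None,
                     dual_control_threshold: float = 10_000.0) -> None:
    """Illustrative dual-control check: above a threshold, a second person must approve.

    The threshold and function name are assumptions for illustration only.
    """
    if amount >= dual_control_threshold:
        if approver is None or approver == initiator:
            raise PermissionError("high-value transfer requires approval by a second, distinct person")
    print(f"transfer of {amount:.2f} executed (initiated by {initiator})")
```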

These security techniques don’t only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren’t perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.

Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn’t have an audit process by which a change one person makes on the servers is checked by someone else; I’m sure that would be prohibitively expensive. Certainly the company’s IT department should have terminated Makwana’s network access as soon as he was fired, and not at the end of the day.

In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

This essay originally appeared on the Wall Street Journal website.

Posted on February 16, 2009 at 12:20 PM

Horrible Identity Theft Story

This is a story of how smart people can be neutralized through stupid procedures.

Here’s the part of the story where some poor guy’s account gets completely f-ed. This thief had been bounced to the out-sourced security department so often that he must have made a checklist of any possible questions they would ask him. Through whatever means, he managed to get the answers to these questions. Now when he called, he could give us the information we were asking for, but by this point we knew his voice so well that we still tried to get him to security. It worked like this: We put him on hold and dial the extension for security. We get a security rep and start to explain the situation; we tell them he was able to give the right information, but that we know it’s the same guy that’s been calling for weeks and we are certain he is not the account holder. They begrudgingly take the call. Minutes later another one of us gets a call from a security rep saying they are giving us a customer who has been cleared by them. And here the thief was back in our department. For those of us who had come to know him, the fight raged on night after night.

Posted on October 30, 2008 at 12:10 PM

Security Idiocy Story

From the Dilbert blog:

They then said that I could not fill it out—my manager had to. I told them that my manager doesn’t work in the building, nor does anyone in my management chain. This posed a problem for the crack security team. At last, they formulated a brilliant solution to the problem. They told me that if I had a grocery bag in my office I could put the laptop in it and everything would be okay. Of course, I don’t have grocery bags in my office. Who would? I did have a windbreaker, however. So I went up to my office, wrapped up the laptop in my windbreaker, and went back down.

People put in charge of implementing a security policy are more concerned with following the letter of the policy than they are about improving security. So even if what they do makes no sense—and they know it makes no sense—they have to do it in order to follow “policy.”

Posted on August 6, 2008 at 1:52 PM

People and Security Rules

In this article analyzing a security failure resulting in live nuclear warheads being flown over the U.S., there’s an interesting commentary on people and security rules:

Indeed, the gaff [sic] that allowed six nukes out over three major American cities (Omaha, Neb., Kansas City, Mo., and Little Rock, Ark.) could have been avoided if the Air Force personnel had followed procedure.

“Let’s not forget that the existing rules were pretty tight,” says Hans Kristensen, director of the Nuclear Information Project for the Federation of American Scientists. “Much of what went wrong occurred because people didn’t follow these tight rules. You can have all sorts of rules and regulations, but they still won’t do any good if the people don’t follow them.”

Procedures are a tough balancing act. If they’re too lax, there will be security problems. If they’re too tight, people will get around them and there will be security problems.

Posted on April 14, 2008 at 6:47 AM

British Nuclear Security Kind of Slipshod

No two-person control or complicated safety features: until 1998, you could arm British nukes with a bicycle lock key.

To arm the weapons you just open a panel held by two captive screws—like a battery cover on a radio—using a thumbnail or a coin.

Inside are the arming switch and a series of dials which you can turn with an Allen key to select high yield or low yield, air burst or groundburst and other parameters.

The Bomb is actually armed by inserting a bicycle lock key into the arming switch and turning it through 90 degrees. There is no code which needs to be entered or dual key system to prevent a rogue individual from arming the Bomb.

Certainly most of the security was procedural. But still….

Posted on November 21, 2007 at 12:50 PM

U.S. Government Contractor Injects Malicious Software into Critical Military Computers

This is just a frightening story. Basically, a contractor with a top secret security clearance was able to inject malicious code and sabotage computers used to track Navy submarines.

Yeah, it was annoying to find and fix the problem, but hang on. How is it possible for a single disgruntled idiot to damage a multi-billion-dollar weapons system? Why aren’t there any security systems in place to prevent this? I’ll bet anything that there was absolutely no control or review over who put what code in where. I’ll bet that if this guy had been just a little bit cleverer, he could have done a whole lot more damage without ever getting caught.

One of the ways to deal with the problem of trusted individuals is by making sure they’re trustworthy. The clearance process is supposed to handle that. But given the enormous damage that a single person can do here, it makes a lot of sense to add a second security mechanism: limiting the degree to which each individual must be trusted. A decent system of code reviews, or change auditing, would go a long way to reduce the risk of this sort of thing.
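
As one hedged illustration of what lightweight change auditing can look like (the record format and hash chaining here are assumptions, not a description of any Navy system): every change is logged with its author and chained to the previous record, so a lone insider cannot alter code and then quietly rewrite the history.

```python
import hashlib
import json
from datetime import datetime, timezone

audit_log: list[dict] = []  # in practice this would live on a separate, append-only system

def record_change(author: str, file_path: str, diff: str) -> dict:
    """Append a tamper-evident record of a code change to the audit log."""
    previous_hash = audit_log[-1]["hash"] if audit_log else "0" * 64
    entry = {
        "author": author,
        "file": file_path,
        "diff_sha256": hashlib.sha256(diff.encode()).hexdigest(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev": previous_hash,
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)
    return entry

def verify_log() -> bool:
    """Recompute the hash chain; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in audit_log:
        body = {key: value for key, value in entry.items() if key != "hash"}
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```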

I’ll also bet you anything that Microsoft has more security around its critical code than the U.S. military does.

Posted on April 13, 2007 at 12:33 PM
