Entries Tagged "security policies"

Research on the Timing of Security Warnings

fMRI experiments show that we are more likely to ignore security warnings when they interrupt other tasks.

A new study from BYU, in collaboration with Google Chrome engineers, finds that the status quo of warning messages appearing haphazardly—while people are typing, watching a video, uploading files, etc.—results in up to 90 percent of users disregarding them.

Researchers found that warnings shown at these times are less effective because of “dual-task interference,” a neural limitation whereby even simple tasks can’t be performed simultaneously without significant performance loss. Or, in human terms, multitasking.

“We found that the brain can’t handle multitasking very well,” said study coauthor and BYU information systems professor Anthony Vance. “Software developers categorically present these messages without any regard to what the user is doing. They interrupt us constantly and our research shows there’s a high penalty that comes by presenting these messages at random times.”

[…]

For part of the study, researchers had participants complete computer tasks while an fMRI scanner measured their brain activity. The experiment showed that neural activity was substantially reduced when security messages interrupted a task, compared to when a user responded to the security message on its own.

The BYU researchers used the functional MRI data as they collaborated with a team of Google Chrome security engineers to identify better times to display security messages during the browsing experience.
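
The study’s specific timing recommendations aren’t reproduced here, but the underlying idea is simple: queue a warning and present it at a natural break rather than mid-task. The following is a minimal, purely illustrative Python sketch of that principle; the class, method names, and idle threshold are hypothetical and are not taken from the paper or from Chrome.

```python
import time

class DeferredWarningQueue:
    """Hold security warnings until the user appears to be idle,
    so a warning doesn't compete with another task for attention."""

    def __init__(self, idle_threshold_s=5.0):
        self.idle_threshold_s = idle_threshold_s
        self.last_activity = time.monotonic()
        self.pending = []

    def record_activity(self):
        """Call on keystrokes, video playback, file uploads, and so on."""
        self.last_activity = time.monotonic()

    def add_warning(self, message):
        """Queue a warning instead of showing it immediately."""
        self.pending.append(message)

    def flush_if_idle(self):
        """Return queued warnings only after a quiet period; otherwise hold them."""
        if time.monotonic() - self.last_activity >= self.idle_threshold_s:
            delivered, self.pending = self.pending, []
            return delivered
        return []

queue = DeferredWarningQueue(idle_threshold_s=5.0)
queue.add_warning("This site's security certificate could not be verified.")
queue.record_activity()       # the user is typing, so the warning is held
print(queue.flush_if_idle())  # [] -- still busy
time.sleep(5.1)               # simulated quiet period
print(queue.flush_if_idle())  # warning delivered at a natural break
```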

Research paper. News article.

Posted on August 22, 2016 at 7:03 AM

Frequent Password Changes Are a Bad Security Idea

I’ve been saying for years that it’s bad security advice, that it encourages poor passwords. Lorrie Cranor, now the FTC’s chief technologist, agrees:

By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like “tarheels#1”, for instance (excluding the quotation marks) frequently became “tArheels#1” after the first change, “taRheels#1” on the second change and so on. Or it might be changed to “tarheels#11” on the first change and “tarheels#111” on the second. Another common technique was to substitute a digit to make it “tarheels#2”, “tarheels#3”, and so on.

“The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation,” Cranor explained. “They take their old passwords, they change it in some small way, and they come up with a new password.”

The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.

That data refers to this study.
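
The researchers’ actual prediction algorithms aren’t reproduced here, but the transformations quoted above map naturally onto a candidate-guess generator. The sketch below is a hypothetical Python illustration of that idea (case toggling, digit substitution, and character repetition), not the UNC team’s code.

```python
def transformation_candidates(old_password):
    """Generate guesses derived from an old password using the common
    transformations described above (illustrative rules, not the UNC algorithm)."""
    candidates = set()
    if not old_password:
        return candidates

    # Toggle the case of single letters: tarheels#1 -> Tarheels#1, tArheels#1, ...
    for i, c in enumerate(old_password):
        if c.isalpha():
            candidates.add(old_password[:i] + c.swapcase() + old_password[i + 1:])

    # Substitute the trailing digit: tarheels#1 -> tarheels#2, tarheels#3, ...
    if old_password[-1].isdigit():
        stem, last = old_password[:-1], old_password[-1]
        candidates.update(stem + str(d) for d in range(10) if str(d) != last)

    # Repeat the final character: tarheels#1 -> tarheels#11, tarheels#111
    candidates.add(old_password + old_password[-1])
    candidates.add(old_password + old_password[-1] * 2)

    return candidates

print(sorted(transformation_candidates("tarheels#1")))
```

A candidate list this small is what makes the online-attack numbers above plausible: when the old password is already known, a handful of targeted guesses goes a long way.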

My advice for choosing a secure password is here.

Posted on August 5, 2016 at 7:53 AM

Security Effectiveness of the Israeli West Bank Barrier

Interesting analysis:

Abstract: Objectives—Informed by situational crime prevention (SCP), this study evaluates the effectiveness of the “West Bank Barrier” that the Israeli government began to construct in 2002 in order to prevent suicide bombing attacks.

Methods—Drawing on crime wave models of past SCP research, the study uses a time series of terrorist attacks and fatalities and their location in respect to the Barrier, which was constructed in different sections over different periods of time, between 1999 and 2011.

Results—The Barrier, together with associated security activities, was effective in preventing suicide bombings and other attacks and fatalities, with little if any apparent displacement. Changes in terrorist behavior likely resulted from the construction of the Barrier, not from other external factors or events.

Conclusions—In some locations, terrorists adapted to changed circumstances by committing more opportunistic attacks that require less planning. Fatalities and attacks were also reduced on the Palestinian side of the Barrier, producing an expected “diffusion of benefits,” though the amount of reduction was considerably more than in past SCP studies. The defensive roles of the Barrier and the offensive opportunities it presents are identified as possible explanations. The study highlights the importance of SCP in crime and counter-terrorism policy.

Unfortunately, the whole paper is behind a paywall.

Note: This is not a political analysis of the net positive and negative effects of the wall, just a security analysis. Of course any full analysis needs to take the geopolitics into account. The comment section is not the place for this broader discussion.

Posted on July 14, 2016 at 5:58 AM

Security Behavior of Pro-ISIS Groups on Social Media

Interesting:

Since the team had tracked these groups daily, researchers could observe the tactics that pro-ISIS groups use to evade authorities. They found that 15 percent of groups changed their names during the study period, and 7 percent flipped their visibility from public to members only. Another 4 percent underwent what the researchers called reincarnation. That means the group disappeared completely but popped up later under a new name and earned more than 60 percent of its original followers back.

The researchers compared these behaviors in the pro-ISIS groups to the behaviors of other social groups made up of protestors or social activists (the entire project began in 2013 with a focus on predicting periods of social unrest). The pro-ISIS groups employed more of these strategies, presumably because the groups were under more pressure to evolve as authorities sought to shut them down.
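
The excerpt doesn’t describe how the researchers detected these behaviors, but the “reincarnation” pattern suggests a simple, hypothetical way to flag it in group-membership data: measure how many of a vanished group’s followers turn up in a newly created group. The sketch below is an illustrative assumption, not the researchers’ method; the 0.6 threshold simply mirrors the “more than 60 percent” figure quoted above.

```python
def reincarnation_score(old_followers, new_followers):
    """Fraction of a disappeared group's followers who joined a candidate new group.
    Both the metric and the threshold used below are illustrative assumptions."""
    old_followers = set(old_followers)
    if not old_followers:
        return 0.0
    return len(old_followers & set(new_followers)) / len(old_followers)

old_group = {"user1", "user2", "user3", "user4", "user5"}   # group that disappeared
candidate = {"user1", "user2", "user3", "user9"}            # newly created group

score = reincarnation_score(old_group, candidate)
print(score)         # 0.6
print(score >= 0.6)  # True: the new group looks like a reincarnation
```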

Research paper.

Posted on June 21, 2016 at 6:01 AM

Detecting Explosives

Really interesting article on the difficulties involved with explosive detection at airport security checkpoints.

Abstract: The mid-air bombing of a Somali passenger jet in February was a wake-up call for security agencies and those working in the field of explosive detection. It was also a reminder that terrorist groups from Yemen to Syria to East Africa continue to explore innovative ways to get bombs onto passenger jets by trying to beat detection systems or recruit insiders. The layered state-of-the-art detection systems that are now in place at most airports in the developed world make it very hard for terrorists to sneak bombs onto planes, but the international aviation sector remains vulnerable because many airports in the developing world either have not deployed these technologies or have not provided rigorous training for operators. Technologies and security measures will need to improve to stay one step ahead of innovative terrorists. Given the pattern of recent Islamic State attacks, there is a strong argument for extending state-of-the-art explosive detection systems beyond the aviation sector to locations such as sports arenas and music venues.

I disagree with his conclusions—the last sentence above—but the technical information on explosives detection technology is really interesting.

Posted on May 20, 2016 at 2:06 PM

IRS Security

Monday is Tax Day. Many of us are thinking about our taxes. Are they too high or too low? What’s our money being spent on? Do we have a government worth paying for? I’m not here to answer any of those questions—I’m here to give you something else to think about. In addition to sending the IRS your money, you’re also sending them your data.

It’s a lot of highly personal financial data, so it’s sensitive and important information.

Is that data secure?

The short answer is “no.” Every year, the GAO—the Government Accountability Office—reviews IRS security and issues a report. The title of this year’s report kind of says it all: “IRS Needs to Further Improve Controls over Financial and Taxpayer Data.” The details are ugly: failures in identification and authentication of network users, failures to encrypt data, failures in audit and monitoring, and failures to patch vulnerabilities and update software.

To be fair, the GAO can sometimes be pedantic in its evaluations. And the 43 recommendations for the IRS to improve security aren’t being made public, so as not to advertise our vulnerabilities to the bad guys. But this is all pretty basic stuff, and it’s embarrassing.

More importantly, this lack of security is dangerous. We know that cybercriminals are using our financial information to commit fraud. Specifically, they’re using our personal tax information to file for tax refunds in our name to fraudulently collect the refunds.

We know that foreign governments are targeting U.S. government networks for personal information on U.S. citizens: Remember the OPM data theft that was made public last year in which a federal personnel database with records on 21.5 million people was stolen?

There have been some stories of hacks against IRS databases in the past. I think that the IRS has been hacked even more than is publicly reported, either because the government is keeping the attacks secret or because it doesn’t even realize it’s been attacked.

So what happens next?

If the past is any guide, not a lot. The GAO has been warning about problems with IRS security since it started writing these reports in 2007. In each report, the GAO has issued recommendations for the IRS to improve security. After each report, the IRS did a few of those things, but ignored most of the recommendations. In this year’s report, for example, the GAO complained that the IRS ignored 47 of its 70 recommendations from 2015. In its 2015 report, it complained that the IRS only mitigated 14 of the 69 weaknesses it identified in 2013. The 2012 report didn’t paint IRS security in any better light.

If I had to guess, I’d say the IRS’s security is this bad for the same reason that so much corporate network security is so bad: lack of budget. It’s not uncommon for companies to skimp on their security budgets. The IRS’s budget has been cut 17 percent since 2010; I am certain IT security was not exempt from those cuts.

So we’re stuck. We have no choice but to give the IRS our data. The IRS isn’t doing a good job securing our data. Congress isn’t giving the IRS enough budget to do a good job securing our data. Last Tuesday, the Senate Finance Committee urged the IRS to improve its security. We all need to urge Congress to give it the money to do so.

Nothing is absolutely hacker-proof, but there are a lot of security improvements the IRS can make. If we have to give the IRS all our information—and we do—we deserve to have it taken care of properly.

This essay previously appeared on CNN.com.

Posted on April 15, 2016 at 6:52 AM

Smart Essay on the Limitations of Anti-Terrorism Security

This is good:

Threats constantly change, yet our political discourse suggests that our vulnerabilities are simply for lack of resources, commitment or competence. Sometimes, that is true. But mostly we are vulnerable because we choose to be; because we’ve accepted, at least implicitly, that some risk is tolerable. A state that could stop every suicide bomber wouldn’t be a free or, let’s face it, fun one.

We will simply never get to maximum defensive posture. Regardless of political affiliation, Americans wouldn’t tolerate the delay or intrusion of an urban mass-transit system that required bag checks and pat-downs. After the 2013 Boston Marathon bombing, many wondered how to make the race safe the next year. A heavier police presence helps, but the only truly safe way to host a marathon is to not have one at all. The risks we tolerate, then, are not necessarily bad bargains simply because an enemy can exploit them.

No matter what promises are made on the campaign trail, terrorism will never be vanquished. There is no ideology, no surveillance, no wall that will definitely stop some 24-year-old from becoming radicalized on the Web, gaining access to guns and shooting a soft target. When we don’t admit this to ourselves, we often swing between the extremes of putting our heads in the sand or losing them entirely.

I am reminded of my own 2006 “Refuse to be Terrorized” essay.

Posted on April 3, 2016 at 7:42 PM

The Importance of Strong Encryption to Security

Encryption keeps you safe. Encryption protects your financial details and passwords when you bank online. It protects your cell phone conversations from eavesdroppers. If you encrypt your laptop—and I hope you do—it protects your data if your computer is stolen. It protects our money and our privacy.

Encryption protects the identity of dissidents all over the world. It’s a vital tool to allow journalists to communicate securely with their sources, NGOs to protect their work in repressive countries, and lawyers to communicate privately with their clients. It protects our vital infrastructure: our communications network, the power grid and everything else. And as we move to the Internet of Things with its cars and thermostats and medical devices, all of which can destroy life and property if hacked and misused, encryption will become even more critical to our security.

Security is more than encryption, of course. But encryption is a critical component of security. You use strong encryption every day, and our Internet-laced world would be a far riskier place if you didn’t.

Strong encryption means unbreakable encryption. Any weakness in encryption will be exploited—by hackers, by criminals and by foreign governments. Many of the hacks that make the news can be attributed to weak or—even worse—nonexistent encryption.

The FBI wants the ability to bypass encryption in the course of criminal investigations. This is known as a “backdoor,” because it’s a way to get at the encrypted information that bypasses the normal encryption mechanisms. I am sympathetic to such claims, but as a technologist I can tell you that there is no way to give the FBI that capability without weakening the encryption against all adversaries. This is crucial to understand. I can’t build an access technology that only works with proper legal authorization, or only for people with a particular citizenship or the proper morality. The technology just doesn’t work that way.

If a backdoor exists, then anyone can exploit it. All it takes is knowledge of the backdoor and the capability to exploit it. And while it might temporarily be a secret, it’s a fragile secret. Backdoors are how everyone attacks computer systems.

This means that if the FBI can eavesdrop on your conversations or get into your computers without your consent, so can cybercriminals. So can the Chinese. So can terrorists. You might not care if the Chinese government is inside your computer, but lots of dissidents do. As do the many Americans who use computers to administer our critical infrastructure. Backdoors weaken us against all sorts of threats.

Either we build encryption systems to keep everyone secure, or we build them to leave everybody vulnerable.

Even a highly sophisticated backdoor that could only be exploited by nations like the United States and China today will leave us vulnerable to cybercriminals tomorrow. That’s just the way technology works: things become easier, cheaper, more widely accessible. Give the FBI the ability to hack into a cell phone today, and tomorrow you’ll hear reports that a criminal group used that same ability to hack into our power grid.

The FBI paints this as a trade-off between security and privacy. It’s not. It’s a trade-off between more security and less security. Our national security needs strong encryption. I wish I could give the good guys the access they want without also giving the bad guys access, but I can’t. If the FBI gets its way and forces companies to weaken encryption, all of us—our data, our networks, our infrastructure, our society—will be at risk.

This essay previously appeared in the New York Times “Room for Debate” blog. It’s something I seem to need to say again and again.

Posted on February 25, 2016 at 6:40 AM

IT Security and the Normalization of Deviance

Professional pilot Ron Rapp has written a fascinating article on a Gulfstream business jet that crashed on takeoff in 2014. The accident was 100 percent human error and entirely preventable—the pilots ignored procedures, checklists, and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.

I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.

As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.

None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:

  • The rules are stupid and inefficient!
  • Knowledge is imperfect and uneven.
  • The work itself, along with new technology, can disrupt work behaviors and rule compliance.
  • I’m breaking the rule for the good of my patient!
  • The rules don’t apply to me/you can trust me.
  • Workers are afraid to speak up.
  • Leadership withholding or diluting findings on system problems.

Dan Luu has written about this, too.

I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.

I don’t have any magic solutions here. Banja’s suggestions are good, but general:

  • Pay attention to weak signals.
  • Resist the urge to be unreasonably optimistic.
  • Teach employees how to conduct emotionally uncomfortable conversations.
  • System operators need to feel safe in speaking up.
  • Realize that oversight and monitoring are never-ending.

The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.

This essay previously appeared on the Resilient Systems blog.

Posted on January 11, 2016 at 6:45 AM
