Entries Tagged "cybercrime"

Page 10 of 14

David Dittrich on Criminal Malware

Good essay: “Malware to crimeware: How far have they gone, and how do we catch up?” ;login:, August 2009:

I have surveyed over a decade of advances in delivery of malware. Over this period, attackers have shifted to using complex, multi-phase attacks based on
subtle social engineering tactics, advanced cryptographic techniques to defeat takeover and analysis, and highly targeted attacks that are intended to fly below the radar of
current technical defenses. I will show how malicious technology combined with social manipulation is used against us and conclude that this understanding might even help us design our own combination of technical and social mechanisms to better protect us.

Posted on October 13, 2009 at 7:15 AM

Cybercrime Paper

“Distributed Security: A New Model of Law Enforcement,” by Susan W. Brenner and Leo L. Clarke.

Abstract:
Cybercrime, which is rapidly increasing in frequency and in severity, requires us to rethink how we should enforce our criminal laws. The current model of reactive, police-based enforcement, with its origins in real-world urbanization, does not and cannot protect society from criminals using computer technology. This article proposes a new model of distributed security that can supplement the traditional model and allow us to deal effectively with cybercrime. The new model employs criminal sanctions, primarily fines, to induce computer users and those who provide access to cyberspace to employ reasonable security measures as deterrents. We argue that criminal sanctions are preferable in this context to civil liability, and we suggest a system of administrative regulation backed by criminal sanctions that will provide the incentives necessary to create a workable deterrent to cybercrime.

It’s from 2005, but I’ve never seen it before.

Posted on July 20, 2009 at 6:43 AM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies for dealing with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but did make them more afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again they were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
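
That last skill—parsing a URL—is concrete and teachable. Here is a minimal sketch of the idea, with made-up lookalike domains for illustration (and deliberately ignoring multi-label suffixes like .co.uk):

```python
from urllib.parse import urlparse

def registered_domain(url):
    """Return the last two labels of the hostname -- the part that actually
    determines who you're talking to. (Real code would consult the public
    suffix list to handle endings like .co.uk.)"""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# Everything to the left of the registered domain is attacker-controlled decoration.
for url in (
    "https://www.paypal.com/signin",
    "https://www.paypal.com.account-verify.example.net/signin",
    "https://paypal-com-security.example.org/login",
):
    print(f"{registered_domain(url):<15} <- {url}")
```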

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture-in-picture attacks,” where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, the user experience, and mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped. I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Researchers Hijack a Botnet

A bunch of researchers at the University of California Santa Barbara took control of a botnet for ten days, and learned a lot about how botnets work:

The botnet in question is controlled by Torpig (also known as Sinowal), a malware program that aims to gather personal and financial information from Windows users. The researchers gained control of the Torpig botnet by exploiting a weakness in the way the bots try to locate their command and control servers—the bots would generate a list of domains that they planned to contact next, but not all of those domains were registered yet. The researchers then registered the domains that the bots would resolve, and set up servers where the bots could connect to find their commands. This method lasted for a full ten days before the botnet’s controllers updated the system and cut the observation short.

During that time, however, UCSB’s researchers were able to gather massive amounts of information on how the botnet functions as well as what kind of information it’s gathering. Almost 300,000 unique login credentials were gathered over the time the researchers controlled the botnet, including 56,000 passwords gathered in a single hour using “simple replacement rules” and a password cracker. They found that 28 percent of victims reused their credentials for accessing 368,501 websites, making it an easy task for scammers to gather further personal information. The researchers noted that they were able to read through hundreds of e-mail, forum, and chat messages gathered by Torpig that “often contain detailed (and private) descriptions of the lives of their authors.”
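
The sinkholing trick is simple in principle: compute the domains the bots will look up in the future, register the ones nobody owns yet, and wait. Here is a minimal sketch of the idea; the domain-generation function and registration check are hypothetical stand-ins, not Torpig’s actual algorithm:

```python
import hashlib
from datetime import date, timedelta

def generate_domains(day, count=10):
    """Hypothetical domain-generation algorithm (DGA): every bot derives
    the same candidate rendezvous domains from the current date."""
    return [
        hashlib.sha256(f"{day.isoformat()}-{i}".encode()).hexdigest()[:12] + ".com"
        for i in range(count)
    ]

def sinkhole_candidates(days_ahead, is_registered):
    """Future rendezvous domains that nobody has registered yet; a defender
    who registers them first receives the bots' check-ins."""
    upcoming = []
    for offset in range(1, days_ahead + 1):
        for domain in generate_domains(date.today() + timedelta(days=offset)):
            if not is_registered(domain):
                upcoming.append(domain)
    return upcoming

# Toy run: pretend no future domain is registered, so all are available.
print(sinkhole_candidates(2, is_registered=lambda d: False)[:5])
```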

Here’s the paper:

Abstract:

Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security threats on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program that is designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this paper, we report on our efforts to take control of the Torpig botnet for ten days. Over this period, we observed more than 180 thousand infections and recorded more than 70 GB of data that the bots collected. While botnets have been “hijacked” before, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server. This shows that botnet estimates that are based on IP addresses are likely to report inflated numbers. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of information from the infected victims. This opens the possibility to perform interesting data analysis that goes well beyond simply counting the number of stolen credit cards.

Another article.

Posted on May 11, 2009 at 6:56 AM

Virginia Data Ransom

This is bad:

On Thursday, April 30, the secure site for the Virginia Prescription Monitoring Program (PMP) was replaced with a US$10M ransom demand:

“I have your shit! In *my* possession, right now, are 8,257,378 patient records and a total of 35,548,087 prescriptions. Also, I made an encrypted backup and deleted the original. Unfortunately for Virginia, their backups seem to have gone missing, too. Uhoh :( For $10 million, I will gladly send along the password.”

More details:

Hackers last week broke into a Virginia state Web site used by pharmacists to track prescription drug abuse. They deleted records on more than 8 million patients and replaced the site’s homepage with a ransom note demanding $10 million for the return of the records, according to a posting on Wikileaks.org, an online clearinghouse for leaked documents.

[…]

Whitley Ryals said the state discovered the intrusion on April 30, after which it shut down Web site access to dozens of pages serving the Department of Health Professions. The state also has temporarily discontinued e-mail to and from the department pending the outcome of a security audit, Whitley Ryals said.

More. This doesn’t seem like a professional extortion/ransom demand, but still….

EDITED TO ADD (5/13): There are backups, and here’s a Q&A with details on exactly what they were storing.

Posted on May 7, 2009 at 7:10 AM

Security ROI

Return on investment, or ROI, is a big deal in business. Any business venture needs to demonstrate a positive return on investment, and a good one at that, in order to be viable.

It’s become a big deal in IT security, too. Many corporate customers are demanding ROI models to demonstrate that a particular security investment pays off. And in response, vendors are providing ROI models that demonstrate how their particular security solution provides the best return on investment.

It’s a good idea in theory, but it’s mostly bunk in practice.

Before I get into the details, there’s one point I have to make. “ROI” as used in a security context is inaccurate. Security is not an investment that provides a return, like a new factory or a financial instrument. It’s an expense that, hopefully, pays for itself in cost savings. Security is about loss prevention, not about earnings. The term just doesn’t make sense in this context.

But as anyone who has lived through a company’s vicious end-of-year budget-slashing exercises knows, when you’re trying to make your numbers, cutting costs is the same as increasing revenues. So while security can’t produce ROI, loss prevention most certainly affects a company’s bottom line.

And a company should implement only security countermeasures that affect its bottom line positively. It shouldn’t spend more on a security problem than the problem is worth. Conversely, it shouldn’t ignore problems that are costing it money when there are cheaper mitigation alternatives. A smart company needs to approach security as it would any other business decision: costs versus benefits.

The classic methodology is called annualized loss expectancy (ALE), and it’s straightforward. Calculate the cost of a security incident in both tangibles like time and money, and intangibles like reputation and competitive advantage. Multiply that by the chance the incident will occur in a year. That tells you how much you should spend to mitigate the risk. So, for example, if your store has a 10 percent chance of getting robbed and the cost of being robbed is $10,000, then you should spend $1,000 a year on security. Spend more than that, and you’re wasting money. Spend less than that, and you’re also wasting money.

Of course, that $1,000 has to reduce the chance of being robbed to zero in order to be cost-effective. If a security measure cuts the chance of robbery by 40 percent—to 6 percent a year—then you should spend no more than $400 on it. If another security measure reduces it by 80 percent, it’s worth $800. And if two security measures both reduce the chance of being robbed by 50 percent and one costs $300 and the other $700, the first one is worth it and the second isn’t.
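
As a quick illustration of the arithmetic, here is a minimal sketch of the ALE calculation and the countermeasure comparison, using the made-up robbery numbers from the example above:

```python
def ale(incident_cost, annual_probability):
    """Annualized loss expectancy: expected loss per year from one risk."""
    return incident_cost * annual_probability

def max_worthwhile_spend(incident_cost, annual_probability, risk_reduction):
    """The most you should spend per year on a measure that cuts the risk
    by the given fraction."""
    return ale(incident_cost, annual_probability) * risk_reduction

# A 10% chance per year of a $10,000 robbery.
print(ale(10_000, 0.10))                          # 1000.0
print(max_worthwhile_spend(10_000, 0.10, 0.40))   # 400.0  (40% risk reduction)
print(max_worthwhile_spend(10_000, 0.10, 0.80))   # 800.0  (80% risk reduction)

# Two measures that each halve the risk are worth $500 apiece,
# so a $300 measure pays off and a $700 measure doesn't.
for measure_cost in (300, 700):
    worth = max_worthwhile_spend(10_000, 0.10, 0.50)
    print(measure_cost, "worth it" if measure_cost <= worth else "not worth it")
```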

The Data Imperative

The key to making this work is good data; the term of art is “actuarial tail.” If you’re doing an ALE analysis of a security camera at a convenience store, you need to know the crime rate in the store’s neighborhood and maybe have some idea of how much cameras improve the odds of convincing criminals to rob another store instead. You need to know how much a robbery costs: in merchandise, in time and annoyance, in lost sales due to spooked patrons, in employee morale. You need to know how much not having the cameras costs in terms of employee morale; maybe you’re having trouble hiring salespeople to work the night shift. With all that data, you can figure out if the cost of the camera is cheaper than the loss of revenue if you close the store at night—assuming that the closed store won’t get robbed as well. And then you can decide whether to install one.

Cybersecurity is considerably harder, because there just isn’t enough good data. There aren’t good crime rates for cyberspace, and we have a lot less data about how individual security countermeasures—or specific configurations of countermeasures—mitigate those risks. We don’t even have data on incident costs.

One problem is that the threat moves too quickly. The characteristics of the things we’re trying to prevent change so quickly that we can’t accumulate data fast enough. By the time we get some data, there’s a new threat model for which we don’t have enough data. So we can’t create ALE models.

But there’s another problem, and it’s that the math quickly falls apart when it comes to rare and expensive events. Imagine you calculate the cost—reputational costs, loss of customers, etc.—of having your company’s name in the newspaper after an embarrassing cybersecurity event to be $20 million. Also assume that the odds are 1 in 10,000 of that happening in any one year. ALE says you should spend no more than $2,000 mitigating that risk.

So far, so good. But maybe your CFO thinks an incident would cost only $10 million. You can’t argue, since we’re just estimating. But he just cut your security budget in half. A vendor trying to sell you a product finds a Web analysis claiming that the odds of this happening are actually 1 in 1,000. Accept this new number, and suddenly a product costing 10 times as much is still a good investment.

It gets worse when you deal with even more rare and expensive events. Imagine you’re in charge of terrorism mitigation at a chlorine plant. What’s the cost to your company, in money and reputation, of a large and very deadly explosion? $100 million? $1 billion? $10 billion? And the odds: 1 in a hundred thousand, 1 in a million, 1 in 10 million? Depending on how you answer those two questions—and any answer is really just a guess—you can justify spending anywhere from $10 to $100,000 annually to mitigate that risk.
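
To see how wide those ranges really are, here is the same expected-loss arithmetic run over the guesses above; every input is an illustrative figure from this essay, not an estimate of any real risk:

```python
def ale(incident_cost, annual_probability):
    return incident_cost * annual_probability

# The embarrassing-breach example: small changes in the guesses swing the answer.
print(ale(20_000_000, 1 / 10_000))   #  2000.0 -- your estimate
print(ale(10_000_000, 1 / 10_000))   #  1000.0 -- the CFO's estimate
print(ale(20_000_000, 1 / 1_000))    # 20000.0 -- the vendor's odds

# The chlorine-plant example: defensible guesses span four orders of magnitude.
for cost in (100e6, 1e9, 10e9):
    for odds in (1e-5, 1e-6, 1e-7):
        print(f"cost ${cost:,.0f}, odds {odds:.0e}: spend up to ${ale(cost, odds):,.2f}/yr")
```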

Or take another example: airport security. Assume that all the new airport security measures increase the waiting time at airports by—and I’m making this up—30 minutes per passenger. There were 760 million passenger boardings in the United States in 2007. This means the added security has cost us a collective 43,000 years of extra waiting time. Assume a 70-year life expectancy, and the increased waiting time has “killed” 620 people per year—930 if you calculate the numbers based on 16 hours of awake time per day. So the question is: If we did away with increased airport security, would the result be more people dead from terrorism or fewer?
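
The back-of-the-envelope numbers work out as claimed; here is the arithmetic, using the essay’s assumed 30 minutes of extra waiting per passenger:

```python
boardings = 760_000_000        # U.S. passenger boardings in 2007
extra_minutes = 30             # assumed extra wait per passenger

total_years = boardings * extra_minutes / 60 / 24 / 365
print(round(total_years))                  # ~43,000 years of collective waiting

life = 70                                  # assumed life expectancy in years
print(round(total_years / life))           # ~620 lifetimes per year
awake_life = life * 16 / 24                # count only 16 waking hours per day
print(round(total_years / awake_life))     # ~930 lifetimes per year
```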

Caveat Emptor

This kind of thing is why most ROI models you get from security vendors are nonsense. Of course their model demonstrates that their product or service makes financial sense: They’ve jiggered the numbers so that they do.

This doesn’t mean that ALE is useless, but it does mean you should 1) mistrust any analyses that come from people with an agenda and 2) use any results as a general guideline only. So when you get an ROI model from your vendor, take its framework and plug in your own numbers. Don’t even show the vendor your improvements; it won’t consider any changes that make its product or service less cost-effective to be an “improvement.” And use those results as a general guide, along with risk management and compliance analyses, when you’re deciding what security products and services to buy.

This essay previously appeared in CSO Magazine.

Posted on September 2, 2008 at 6:05 AM

Dual-Use Technologies and the Equities Issue

On April 27, 2007, Estonia was attacked in cyberspace. Following a diplomatic incident with Russia about the relocation of a Soviet World War II memorial, the networks of many Estonian organizations, including the Estonian parliament, banks, ministries, newspapers and broadcasters, were attacked and—in many cases—shut down. Estonia was quick to blame Russia, which was equally quick to deny any involvement.

It was hyped as the first cyberwar: Russia attacking Estonia in cyberspace. But nearly a year later, evidence that the Russian government was involved in the denial-of-service attacks still hasn’t emerged. Though Russian hackers were indisputably the major instigators of the attack, the only individuals positively identified have been young ethnic Russians living inside Estonia, who were pissed off over the statue incident.

You know you’ve got a problem when you can’t tell a hostile attack by another nation from bored kids with an axe to grind.

Separating cyberwar, cyberterrorism and cybercrime isn’t easy; these days you need a scorecard to tell the difference. It’s not just that it’s hard to trace people in cyberspace, it’s that military and civilian attacks—and defenses—look the same.

The traditional term for technology the military shares with civilians is “dual use.” Unlike hand grenades and tanks and missile targeting systems, dual-use technologies have both military and civilian applications. Dual-use technologies used to be exceptions; even things you’d expect to be dual use, like radar systems and toilets, were designed differently for the military. But today, almost all information technology is dual use. We both use the same operating systems, the same networking protocols, the same applications, and even the same security software.

And attack technologies are the same. The recent spurt of targeted hacks against U.S. military networks, commonly attributed to China, exploits the same vulnerabilities and uses the same techniques as criminal attacks against corporate networks. Internet worms make the jump to classified military networks in less than 24 hours, even if those networks are physically separate. The Navy Cyber Defense Operations Command uses the same tools against the same threats as any large corporation.

Because attackers and defenders use the same IT technology, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows: When a military discovers a vulnerability in a dual-use technology, they can do one of two things. They can alert the manufacturer and fix the vulnerability, thereby protecting both the good guys and the bad guys. Or they can keep quiet about the vulnerability and not tell anyone, thereby leaving the good guys insecure but also leaving the bad guys insecure.

The equities issue has long been hotly debated inside the NSA. Basically, the NSA has two roles: eavesdrop on their stuff, and protect our stuff. When both sides use the same stuff, the agency has to decide whether to exploit vulnerabilities to eavesdrop on their stuff or close the same vulnerabilities to protect our stuff.

In the 1980s and before, the tendency of the NSA was to keep vulnerabilities to themselves. In the 1990s, the tide shifted, and the NSA was starting to open up and help us all improve our security defense. But after the attacks of 9/11, the NSA shifted back to the attack: vulnerabilities were to be hoarded in secret. Slowly, things in the U.S. are shifting back again.

So now we’re seeing the NSA helping to secure Windows Vista and releasing its own version of Linux. The DHS, meanwhile, is funding a project to secure popular open source software packages, and across the Atlantic the UK’s GCHQ is finding bugs in PGPDisk and reporting them back to the company. (NSA is rumored to be doing the same thing with BitLocker.)

I’m in favor of this trend, because my security improves for free. Whenever the NSA finds a security problem and gets the vendor to fix it, our security gets better. It’s a side-benefit of dual-use technologies.

But I want governments to do more. I want them to use their buying power to improve my security. I want them to offer countrywide contracts for software, both security and non-security, that have explicit security requirements. If these contracts are big enough, companies will work to modify their products to meet those requirements. And again, we all benefit from the security improvements.

The only example of this model I know about is a U.S. government-wide procurement competition for full-disk encryption, but this can certainly be done with firewalls, intrusion detection systems, databases, networking hardware, even operating systems.

When it comes to IT technologies, the equities issue should be a no-brainer. The good uses of our common hardware, software, operating systems, network protocols, and everything else vastly outweigh the bad uses. It’s time that the government used its immense knowledge and experience, as well as its buying power, to improve cybersecurity for all of us.

This essay originally appeared on Wired.com.

Posted on May 6, 2008 at 5:17 AM

Comparing Cybersecurity to Early 1800s Security on the High Seas

This article in CSO compares modern cybersecurity to open seas piracy in the early 1800s. After a bit of history, the article talks about current events:

In modern times, the nearly ubiquitous availability of powerful computing systems, along with the proliferation of high-speed networks, have converged to create a new version of the high seas—the cyber seas. The Internet has the potential to significantly impact the United States’ position as a world leader. Nevertheless, for the last decade, U.S. cybersecurity policy has been inconsistent and reactionary. The private sector has often been left to fend for itself, and sporadic policy statements have left U.S. government organizations, private enterprises and allies uncertain of which tack the nation will take to secure the cyber frontier.

This should be a surprise to no one.

What to do?

With that goal in mind, let us consider how the United States could take a Jeffersonian approach to the cyber threats faced by our economy. The first step would be for the United States to develop a consistent policy that articulates America’s commitment to assuring the free navigation of the “cyber seas.” Perhaps most critical to the success of that policy will be a future president’s support for efforts that translate rhetoric to actions—developing initiatives to thwart cyber criminals, protecting U.S. technological sovereignty, and balancing any defensive actions to avoid violating U.S. citizens’ constitutional rights. Clearly articulated policy and consistent actions will assure a stable and predictable environment where electronic commerce can thrive, continuing to drive U.S. economic growth and avoiding the possibility of the U.S. becoming a cyber-colony subject to the whims of organized criminal efforts on the Internet.

I am reminded of comments comparing modern terrorism with piracy on the high seas.

Posted on April 16, 2008 at 2:27 PM

The Cybercrime Economy

Interesting article:

While standard commercial software vendors sell software as a service, malware vendors sell malware as a service, which is advertised and distributed like standard software. Communicating via internet relay chat (IRC) and forums, hackers advertise Iframe exploits, pop-unders, click fraud, posting and spam. “If you don’t have it, you can rent it here,” boasts one post, which also offers online video tutorials. Prices for services vary by as much as 100-200 percent across sites, while prices for non-Russian sites are often higher: “If you want the discount rate, buy via Russian sites,” says Genes.

In March the price quoted on malware sites for the Gozi Trojan, which steals data and sends it to hackers in an encrypted form, was between $1,000 (£500) and $2,000 for the basic version. Buyers could purchase add-on services at varying prices starting at $20.

This kind of thing is also discussed here.

Posted on January 2, 2008 at 7:21 AM
