Entries Tagged "cybercrime"

Cybercrime Pays

This sentence jumped out at me in an otherwise pedestrian article on criminal fraud:

“Fraud is fundamentally fuelling the growth of organised crime in the UK, earning more from fraud than they do from drugs,” Chris Hill, head of fraud at the Norwich Union, told BBC News.

I’ll bet that most of that involves the Internet to some degree.

And then there’s this:

Global cybercrime turned over more money than drug trafficking last year, according to a US Treasury advisor. Valerie McNiven, an advisor to the US government on cybercrime, claimed that corporate espionage, child pornography, stock manipulation, phishing fraud and copyright offences cause more financial harm than the trade in illegal narcotics such as heroin and cocaine.

This doesn’t bode well for computer security in general.

Posted on November 30, 2005 at 6:05 AM

Hackers and Criminals

More evidence that hackers are migrating into crime:

Since then, organised crime units have continued to provide a fruitful income for a group of hackers that are effectively on their payroll. Their willingness to pay for hacking expertise has also given rise to a new subset of hackers. These are not hardcore criminals in pursuit of defrauding a bank or duping thousands of consumers. In one sense, they are the next generation of hackers that carry out their activities in pursuit of credibility from their peers and the ‘buzz’ of hacking systems considered to be unbreakable.

Where they come into contact with serious criminals is through underworld forums and chatrooms, where their findings are published and they are paid effectively for their intellectual property. This form of hacking – essentially ‘hacking for hire’ – is becoming more common with hackers trading zero-day exploit information, malcode, bandwidth, identities and toolkits underground for cash. So a hacker might package together a Trojan that defeats the latest version of an anti-virus client and sell that to a hacking community sponsored by criminals.

Posted on November 17, 2005 at 12:25 PM

Identity Theft Over-Reported

I’m glad to see that someone wrote this article. For a long time now, I’ve been saying that the rate of identity theft has been grossly overestimated: too many things are counted as identity theft that are just traditional fraud. Here’s some interesting data to back that claim up:

Multiple surveys have found that around 20 percent of Americans say they have been beset by identity theft. But what exactly is identity theft?

The Identity Theft and Assumption Deterrence Act of 1998 defines it as the illegal use of someone’s “means of identification” — including a credit card. So if you lose your card and someone else uses it to buy a candy bar, technically you have been the victim of identity theft.

Of course misuse of lost, stolen or surreptitiously copied credit cards is a serious matter. But it shouldn’t force anyone to hide in a cave.

Federal law caps our personal liability at $50, and even that amount is often waived. That’s why surveys have found that about two-thirds of people classified as identity theft victims end up paying nothing out of their own pockets.

The more pernicious versions of identity theft, in which fraudsters use someone else’s name to open lines of credit or obtain government documents, are much rarer.

Consider a February survey of 1,866 people nationwide conducted for insurer Chubb Corp. Nearly 21 percent said they had been an identity theft victim in the previous year.

But when the questioners asked about specific circumstances — and broadened the time frame beyond just the previous year — the percentages diminished. About 12 percent said a collection agency had demanded payment for purchases they hadn’t made. Some 8 percent said fraudulent checks had been drawn against their accounts.

In both cases, the survey didn’t ask whether a faulty memory or a family member — rather than a shadowy criminal — turned out to be the culprit.

It wouldn’t be uncommon. In a 2005 study by Synovate, a research firm, half of self-described victims blamed relatives, friends, neighbors or in-home employees.

When Chubb’s report asked whether people had suffered the huge headache of finding that someone else had taken out loans in their name, 2.4 percent — one in 41 people — said yes.

So what about the claim that 10 million Americans are hit every year, a number often used to pitch credit monitoring services? That statistic, which would amount to about one in 22 adults, also might not be what it seems.

The figure arose in a 2003 report by Synovate commissioned by the Federal Trade Commission. A 2005 update by Synovate put the figure closer to 9 million.

Both totals include misuse of existing credit cards.

Subtracting that, the identity theft numbers were still high but not as frightful: The FTC report determined that fraudsters had opened new accounts or committed similar misdeeds in the names of 3.2 million Americans in the previous year.

The average victim lost $1,180 and wasted 60 hours trying to resolve the problem. Clearly, it’s no picnic.

But there was one intriguing nugget deep in the report.

Some 38 percent of identity theft victims said they hadn’t bothered to notify anyone — not the police, not their credit card company, not a credit bureau. Even when fraud losses purportedly exceeded $5,000, the kept-it-to-myself rate was 19 percent.

Perhaps some people decide that raising a stink over a wrongful charge isn’t worth the trouble. Even so, the finding made the overall validity of the data seem questionable to Fred Cate, an Indiana University law professor who specializes in privacy and security issues.

“That’s not identity theft,” he said. “I’m just confident if you saw a charge that wasn’t yours, you’d contact somebody.”

Identity theft is a serious crime, and it’s a major growth industry in the criminal world. But we do everyone a disservice when we count things as identity theft that really aren’t.

Posted on November 16, 2005 at 1:21 PM

The Zotob Worm

If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.

Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly — less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.

The worm started spreading on Sunday, 14 August. Honestly, it wasn’t that big a deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think it was worth all the press coverage.

By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other — stealing “owned” computers back and forth. If your network was infected, it was a mess.

Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.

The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they’re increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.

What could you have done beforehand to protect yourself against Zotob and its kin? “Install the patch” is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install — at least Microsoft Windows system patches — large corporate networks can’t. Far too often, patches cause other things to break.

It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.

Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?

Given that it’s impossible to know what’s coming beforehand, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it’s impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.

The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.

This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.

Posted on November 11, 2005 at 7:46 AM

Fraudulent Stock Transactions

From a Business Week story:

During July 13-26, stocks and mutual funds had been sold, and the proceeds wired out of his account in six transactions of nearly $30,000 apiece. Murty, a 64-year-old nuclear engineering professor at North Carolina State University, could only think it was a mistake. He hadn’t sold any stock in months.

Murty dialed E*Trade the moment its call center opened at 7 a.m. A customer service rep urged him to change his password immediately. Too late. E*Trade says the computer in Murty’s Cary (N.C.) home lacked antivirus software and had been infected with code that enabled hackers to grab his user name and password.

The cybercriminals, pretending to be Murty, directed E*Trade to liquidate his holdings. Then they had the brokerage wire the proceeds to a phony account in his name at Wells Fargo Bank. The New York-based online broker says the wire instructions appeared to be legit because they contained the security code the company e-mailed to Murty to execute the transaction. But the cyberthieves had gained control of Murty’s e-mail, too.

E*Trade recovered some of the money from the Wells Fargo account and returned it to Murty. In October, the Indian-born professor reached what he calls a satisfactory settlement with the firm, which says it did nothing wrong.

That last clause is critical. E*Trade insists it did nothing wrong. It executed $174,000 in fraudulent transactions, but it did nothing wrong. It sold stocks without the knowledge or consent of the owner of those stocks, but it did nothing wrong.

Now quite possibly, E*Trade did nothing wrong legally. There may very well be a paragraph buried in whatever agreement this guy signed that says something like: “You agree that any trade request that comes to us with the right password, whether it came from you or not, will be processed.” But therein lies the market failure. Until we fix that, these losses are an externality to E*Trade. They’ll only fix the problem up to the point where customers aren’t leaving them in droves, not to the point where the customers’ stocks are secure.

Posted on November 10, 2005 at 2:40 PM

Preventing Identity Theft: The Living and the Dead

A company called Metacharge has rolled out an e-commerce security service in the United Kingdom. For about $2 per name, website operators can verify their customers against the UK Electoral Roll, the British Telecom directory, and a mortality database.

That’s not cheap, and the company is mainly targeting customers in high-risk industries, such as online gaming. But the economics behind this system are interesting to examine. They illustrate externalities associated with fraud and identity theft, and why leaving matters to the companies won’t fix the problem.

The mortality database is interesting. According to Metacharge, “the fastest growing form of identity theft is not phishing; it is taking the identities of dead people and using them to get credit.”

For a website, the economics are straightforward. It costs $2 to verify that a customer is alive. If the probability that a customer is actually dead (and therefore fraudulent), multiplied by the average loss from such a customer, is more than $2, the service makes sense; if it’s less, it doesn’t. For example, if dead customers are one in ten thousand and they cost $15,000 each, the service is not worth it. If they cost $25,000 each, or if they occur twice as often, then it is.

Imagine now that there is a similar service that identifies identity fraud among living people. The same economic analysis would also hold. But in this case, there’s an externality: there is an additional cost of fraud borne by the victim and not by the website. So if fraud using the identity of living customers occurs at a rate of one in ten thousand, and each one costs $15,000 to the website and another $10,000 to the victim, the website will conclude that the service is not worthwhile, even though paying for it is cheaper overall. This is why legislation is needed: to raise the cost of fraud to the websites.
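
A quick back-of-the-envelope calculation makes the trade-off concrete. The sketch below uses the illustrative figures from the two examples above (hypothetical numbers, not real fraud statistics) and compares the $2 verification fee to the expected loss per customer, first as the website sees it and then counting the victim’s cost as well:

    # Back-of-the-envelope check: is a $2-per-name verification worth paying?
    # The figures are the illustrative ones used above, not real fraud statistics.

    VERIFICATION_FEE = 2.00  # cost to the website per customer checked

    def expected_loss(fraud_rate: float, loss_per_incident: float) -> float:
        """Expected fraud loss per customer: probability of fraud times cost of one incident."""
        return fraud_rate * loss_per_incident

    scenarios = {
        "dead-customer fraud, $15,000 per incident": expected_loss(1 / 10_000, 15_000),           # $1.50
        "dead-customer fraud, $25,000 per incident": expected_loss(1 / 10_000, 25_000),           # $2.50
        "living-victim fraud, website's share only": expected_loss(1 / 10_000, 15_000),           # $1.50
        "living-victim fraud, website plus victim": expected_loss(1 / 10_000, 15_000 + 10_000),   # $2.50
    }

    for label, loss in scenarios.items():
        verdict = "worth paying" if loss > VERIFICATION_FEE else "not worth paying"
        print(f"{label}: expected loss ${loss:.2f} vs. ${VERIFICATION_FEE:.2f} fee -> {verdict}")

The last two lines of output show the externality: the website sees only $1.50 of expected loss and skips the check, even though the total expected harm of $2.50 exceeds the fee.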

There’s another economic trade-off. Websites have two basic opportunities to verify customers using services such as these. The first is when they sign up the customer, and the second is after some kind of non-payment. Most of the damages to the customer occur after the non-payment is referred to a credit bureau, so it would make sense to perform some extra identification checks at that point. It would certainly be cheaper to the website, as far fewer checks would be paid for. But because this second opportunity comes after the website has suffered its losses, it has no real incentive to take advantage of it. Again, economics drives security.

Posted on October 28, 2005 at 8:08 AM

Phishing

My third Wired column is online. It’s about phishing.

Financial companies have until now avoided taking on phishers in a serious way, because it’s cheaper and simpler to pay the costs of fraud. That’s unacceptable, however, because consumers who fall prey to these scams pay a price that goes beyond financial losses, in inconvenience, stress and, in some cases, blots on their credit reports that are hard to eradicate. As a result, lawmakers need to do more than create new punishments for wrongdoers — they need to create tough new incentives that will effectively force financial companies to change the status quo and improve the way they protect their customers’ assets.

EDITED TO ADD: There’s a discussion on Slashdot.

Posted on October 6, 2005 at 8:10 AM

Attack Trends: 2004 and 2005

Counterpane Internet Security, Inc., monitors more than 450 networks in 35 countries, in every time zone. In 2004 we saw 523 billion network events, and our analysts investigated 648,000 security “tickets.” What follows is an overview of what’s happening on the Internet right now, and what we expect to happen in the coming months.

In 2004, 41 percent of the attacks we saw were unauthorized activity of some kind, 21 percent were scanning, 26 percent were unauthorized access, 9 percent were DoS (denial of service), and 3 percent were misuse of applications.

Over the past few months, the two attack vectors that we saw in volume were against the Windows DCOM (Distributed Component Object Model) interface of the RPC (remote procedure call) service and against the Windows LSASS (Local Security Authority Subsystem Service). These seem to be the current favorites for virus and worm writers, and we expect this trend to continue.

The virus trend doesn’t look good. In the last six months of 2004, we saw a plethora of attacks based on browser vulnerabilities (such as the GDI+ JPEG image vulnerability and the IFRAME vulnerability) and an increase in sophisticated worm and virus attacks. More than 1,000 new worms and viruses were discovered in the last six months alone.

In 2005, we expect to see ever-more-complex worms and viruses in the wild, incorporating complex behavior: polymorphic worms, metamorphic worms, and worms that make use of entry-point obscuration. For example, SpyBot.KEG is a sophisticated vulnerability assessment worm that reports discovered vulnerabilities back to the author via IRC channels.

We expect to see more blended threats: exploit code that combines malicious code with vulnerabilities in order to launch an attack. We expect Microsoft’s IIS (Internet Information Services) Web server to continue to be an attractive target. As more and more companies migrate to Windows 2003 and IIS 6, however, we expect attacks against IIS to decrease.

We also expect to see peer-to-peer networking as a vector to launch viruses.

Targeted worms are another trend we’re starting to see. Recently there have been worms that use third-party information-gathering techniques, such as Google, for advanced reconnaissance. This leads to a more intelligent propagation methodology; instead of propagating scattershot, these worms are focusing on specific targets. By identifying targets through third-party information gathering, the worms reduce the noise they would normally make when randomly selecting targets, thus increasing the window of opportunity between release and first detection.

Another 2004 trend that we expect to continue in 2005 is crime. Hacking has moved from a hobbyist pursuit with a goal of notoriety to a criminal pursuit with a goal of money. Hackers can sell unknown vulnerabilities — “zero-day exploits” — on the black market to criminals who use them to break into computers. Hackers with networks of hacked machines can make money by selling them to spammers or phishers. They can use them to attack networks. We have started seeing criminal extortion over the Internet: hackers with networks of hacked machines threatening to launch DoS attacks against companies. Most of these attacks are against fringe industries — online gambling, online computer gaming, online pornography — and against offshore networks. The more these extortions are successful, the more emboldened the criminals will become.

We expect to see more attacks against financial institutions, as criminals look for new ways to commit fraud. We also expect to see more insider attacks with a criminal profit motive. Already most of the targeted attacks — as opposed to attacks of opportunity — originate from inside the attacked organization’s network.

We also expect to see more politically motivated hacking, whether against countries, companies in “political” industries (petrochemicals, pharmaceuticals, etc.), or political organizations. Although we don’t expect to see terrorism occur over the Internet, we do expect to see more nuisance attacks by hackers who have political motivations.

The Internet is still a dangerous place, but we don’t foresee people or companies abandoning it. The economic and social reasons for using the Internet are still far too compelling.

This essay originally appeared in the June 2005 issue of Queue.

Posted on June 6, 2005 at 1:02 PM

Stupid People Purchase Fake Concert Tickets

From the Boston Herald:

Instead of rocking with Bono and The Edge, hundreds of U2 fans were forced to “walk away, walk away” from the sold-out FleetCenter show Tuesday night when their scalped tickets proved bogus.

Some heartbroken fans broke down in tears as they were turned away clutching worthless pieces of paper they shelled out as much as $2,000 for.

You might think this was some fancy counterfeiting scheme, but no.

It took Whelan and his staff a while to figure out what was going on, but a pattern soon emerged. The counterfeit tickets mostly were computer printouts bought online from cyberscalpers.

Online tickets are a great convenience. Each ticket contains a unique barcode. You can print as many copies as you like, but the barcode scanners at the concert door will accept each barcode only once.
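
For illustration, here is a minimal sketch of what that gate-side check amounts to. The barcode values and in-memory sets are hypothetical; a real venue would query the ticketing vendor’s database rather than hold this state in a script:

    # Sketch of one-time barcode redemption at the venue door (hypothetical data).
    issued_barcodes = {"U2-0001-ABC123", "U2-0002-DEF456"}  # barcodes the box office actually sold
    redeemed_barcodes = set()                               # barcodes already scanned tonight

    def scan(barcode: str) -> str:
        if barcode not in issued_barcodes:
            return "reject: unknown barcode"      # outright counterfeit
        if barcode in redeemed_barcodes:
            return "reject: already used"         # a second printout of the same ticket fails here
        redeemed_barcodes.add(barcode)
        return "admit"

    print(scan("U2-0001-ABC123"))   # admit
    print(scan("U2-0001-ABC123"))   # reject: already used (the scalper's duplicate printout)
    print(scan("U2-9999-XYZ000"))   # reject: unknown barcode

Whoever presents a given barcode first gets in; every later copy of the same printout is turned away, no matter how legitimate it looks.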

Only an idiot would buy a printout from a scalper, because there’s no way to verify that he will only sell it once. This is probably obvious to anyone reading this, but it turns out that it’s not obvious to everyone.

“On an average concert night we have zero, zilch, zip problems with counterfeit tickets,” Delaney said. “Apparently, U2 has whipped this city into such a frenzy that people are willing to take a risk.”

I find this fascinating. Online verification of authorization tokens is supposed to make counterfeiting more difficult, because it assumes the physical token can be copied. But it won’t work if people believe that the physical token is unique.

Note: Another write-up of the same story is here.

Posted on June 2, 2005 at 2:10 PM

Holding Computer Files Hostage

This one has been predicted for years. Someone breaks into your network, encrypts your data files, and then demands a ransom to hand over the key.

I don’t know how the attackers did it, but below is probably the best way. A worm could be programmed to do it.

1. Break into a computer.

2. Generate a random 256-bit file-encryption key.

3. Encrypt the file-encryption key with a common RSA public key.

4. Encrypt data files with the file-encryption key.

5. Wipe the original (plaintext) data files and the file-encryption key.

6. Wipe all free space on the drive.

7. Output a file containing the RSA-encrypted file-encryption key.

8. Demand ransom.

9. Receive ransom.

10. Receive encrypted file-encryption key.

11. Decrypt it and send it back.

In any situation like this, step 9 is the hardest. It’s where you’re most likely to get caught. I don’t know much about anonymous money transfer, but I don’t think Swiss bank accounts have the anonymity they used to.

You also might have to prove that you can decrypt the data. An easy modification is to encrypt a small piece of the data with a separate file-encryption key; decrypting that sample on request demonstrates to the victim that you hold the RSA private key without giving up the key that protects everything else.
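
The cryptographic core of steps 2 through 4 and 7 is ordinary hybrid (envelope) encryption, the same construction used by legitimate backup and e-mail encryption tools. Here is a minimal sketch using Python’s cryptography package; the key pair is generated inline purely for demonstration, and in the scheme described above only the public half would appear in the attacking code:

    # Envelope encryption: a random symmetric key protects the data, and only
    # the holder of the RSA private key can recover that symmetric key.
    # Requires the third-party 'cryptography' package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Demonstration key pair; only the public key would ever be distributed.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Step 2: generate a random 256-bit file-encryption key.
    file_key = AESGCM.generate_key(bit_length=256)

    # Step 4: encrypt the data with the file-encryption key.
    data = b"stand-in for the victim's data files"
    nonce = os.urandom(12)
    ciphertext = AESGCM(file_key).encrypt(nonce, data, None)

    # Steps 3 and 7: wrap the file-encryption key with the RSA public key,
    # so only the wrapped copy remains once the plaintext key is wiped.
    wrapped_key = public_key.encrypt(file_key, oaep)

    # Step 11, on the key holder's side: unwrap the key and decrypt.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    assert AESGCM(recovered_key).decrypt(nonce, ciphertext, None) == data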

Internet attacks have changed over the last couple of years. They’re no longer about hackers. They’re about criminals. And we should expect to see more of this sort of thing in the future.

Posted on May 30, 2005 at 8:18 AM
