Entries Tagged "phishing"


Spear Phishing Attacks from China Against Gmail Accounts

Reporters have been calling me pretty much constantly about this story, but I can’t figure out why in the world this is news. Attacks from China—old news; attacks from China against Google—old news; attacks from China against Google Gmail accounts—old news. Spear phishing attacks from China against senior government officials—old news. There’s even a WikiLeaks cable about this stuff.

When I first read the story, I decided it wasn’t worth blogging about. Why is this news?

Posted on June 2, 2011 at 9:48 AM

WikiLeaks Cable about Chinese Hacking of U.S. Networks

We know it’s prevalent, but there’s some new information:

Secret U.S. State Department cables, obtained by WikiLeaks and made available to Reuters by a third party, trace systems breaches—colorfully code-named “Byzantine Hades” by U.S. investigators—to the Chinese military. An April 2009 cable even pinpoints the attacks to a specific unit of China’s People’s Liberation Army.

Privately, U.S. officials have long suspected that the Chinese government and in particular the military was behind the cyber-attacks. What was never disclosed publicly, until now, was evidence.

U.S. efforts to halt Byzantine Hades hacks are ongoing, according to four sources familiar with investigations. In the April 2009 cable, officials in the State Department’s Cyber Threat Analysis Division noted that several Chinese-registered Web sites were “involved in Byzantine Hades intrusion activity in 2006.”

The sites were registered in the city of Chengdu, the capital of Sichuan Province in central China, according to the cable. A person named Chen Xingpeng set up the sites using the “precise” postal code in Chengdu used by the People’s Liberation Army Chengdu Province First Technical Reconnaissance Bureau (TRB), an electronic espionage unit of the Chinese military. “Much of the intrusion activity traced to Chengdu is similar in tactics, techniques and procedures to (Byzantine Hades) activity attributed to other” electronic spying units of the People’s Liberation Army, the cable says.

[…]

What is known is the extent to which Chinese hackers use “spear-phishing” as their preferred tactic to get inside otherwise forbidden networks. Compromised email accounts are the easiest way to launch spear-phish because the hackers can send the messages to entire contact lists.

The tactic is so prevalent, and so successful, that “we have given up on the idea we can keep our networks pristine,” says Stewart Baker, a former senior cyber-security official at the U.S. Department of Homeland Security and National Security Agency. It’s safer, government and private experts say, to assume the worst—that any network is vulnerable.

Two former national security officials involved in cyber-investigations told Reuters that Chinese intelligence and military units, and affiliated private hacker groups, actively engage in “target development” for spear-phish attacks by combing the Internet for details about U.S. government and commercial employees’ job descriptions, networks of associates, and even the way they sign their emails—such as U.S. military personnel’s use of “V/R,” which stands for “Very Respectfully” or “Virtual Regards.”

The spear-phish are “the dominant attack vector. They work. They’re getting better. It’s just hard to stop,” says Gregory J. Rattray, a partner at cyber-security consulting firm Delta Risk and a former director for cyber-security on the National Security Council.

Spear-phish are used in most Byzantine Hades intrusions, according to a review of State Department cables by Reuters. But Byzantine Hades is itself categorized into at least three specific parts known as “Byzantine Anchor,” “Byzantine Candor,” and “Byzantine Foothold.” A source close to the matter says the sub-codenames refer to intrusions which use common tactics and malicious code to extract data.

A State Department cable made public by WikiLeaks last December highlights the severity of the spear-phish problem. “Since 2002, (U.S. government) organizations have been targeted with social-engineering online attacks” which succeeded in “gaining access to hundreds of (U.S. government) and cleared defense contractor systems,” the cable said. The emails were aimed at the U.S. Army, the Departments of Defense, State and Energy, other government entities and commercial companies.

By the way, reading this blog entry might be illegal under the U.S. Espionage Act:

Dear Americans: If you are not “authorized” personnel, but you have read, written about, commented upon, tweeted, spread links by “liking” on Facebook, shared by email, or otherwise discussed “classified” information disclosed from WikiLeaks, you could be implicated for crimes under the U.S. Espionage Act—or so warns a legal expert who said the U.S. Espionage Act could make “felons of us all.”

As the U.S. Justice Department works on a legal case against WikiLeaks’ Julian Assange for his role in helping publish 250,000 classified U.S. diplomatic cables, authorities are leaning toward charging Assange with spying under the Espionage Act of 1917. Legal experts warn that if there is an indictment under the Espionage Act, then any citizen who has discussed or accessed “classified” information can be arrested on “national security” grounds.

Maybe I should have warned you at the top of this post.

Posted on April 18, 2011 at 9:33 AM

Bulletproof Service Providers

From Brian Krebs:

Hacked and malicious sites designed to steal data from unsuspecting users via malware and phishing are a dime a dozen, often located in the United States, and are a key target for takedown by ISPs and security researchers. But when online miscreants seek stability in their Web projects, they often turn to so-called “bulletproof hosting” providers, mini-ISPs that specialize in offering services that are largely immune from takedown requests and pressure from Western law enforcement agencies.

Posted on November 11, 2010 at 12:45 PM

Cory Doctorow Gets Phished

It can happen to anyone:

Here’s how I got fooled. On Monday, I unlocked my Nexus One phone, installing a new and more powerful version of the Android operating system that allowed me to do some neat tricks, like using the phone as a wireless modem on my laptop. In the process of reinstallation, I deleted all my stored passwords from the phone. I also had a couple of editorials come out that day, and did a couple of interviews, and generally emitted a pretty fair whack of information.

The next day, Tuesday, we were ten minutes late getting out of the house. My wife and I dropped my daughter off at the daycare, then hurried to our regular coffee shop to get take-outs before parting ways to go to our respective offices. Because we were a little late arriving, the line was longer than usual. My wife went off to read the free newspapers, I stood in the line. Bored, I opened up my phone, fired up my freshly reinstalled Twitter client and saw that I had a direct message from an old friend in Seattle, someone I know through fandom. The message read “Is this you????” and was followed by one of those ubiquitous shortened URLs that consist of a domain and a short code, like this: http://owl.ly/iuefuew.

The whole story is worth reading.

Posted on May 7, 2010 at 6:56 AM

Online Credit/Debit Card Security Failure

Ross Anderson reports:

Online transactions with credit cards or debit cards are increasingly verified using the 3D Secure system, which is branded as “Verified by VISA” and “MasterCard SecureCode”. This is now the most widely-used single sign-on scheme ever, with over 200 million cardholders registered. It’s getting hard to shop online without being forced to use it.

In a paper I’m presenting today at Financial Cryptography, Steven Murdoch and I analyse 3D Secure. From the engineering point of view, it does just about everything wrong, and it’s becoming a fat target for phishing. So why did it succeed in the marketplace?

Quite simply, it has strong incentives for adoption. Merchants who use it push liability for fraud back to banks, who in turn push it on to cardholders. Properly designed single sign-on systems, like OpenID and InfoCard, can’t offer anything like this. So this is yet another case where security economics trumps security engineering, but in a predatory way that leaves cardholders less secure. We conclude with a suggestion on what bank regulators might do to fix the problem.

Posted on February 1, 2010 at 6:26 AM

Small Business Identity Theft and Fraud

The sorts of crimes we’ve been seeing perpetrated against individuals are starting to be perpetrated against small businesses:

In July, a school district near Pittsburgh sued to recover $700,000 taken from it. In May, a Texas company was robbed of $1.2 million. An electronics testing firm in Baton Rouge, La., said it was bilked of nearly $100,000.

In many cases, the advisory warned, the scammers infiltrate companies in a similar fashion: They send a targeted e-mail to the company’s controller or treasurer, a message that contains either a virus-laden attachment or a link that—when opened—surreptitiously installs malicious software designed to steal passwords. Armed with those credentials, the crooks then initiate a series of wire transfers, usually in increments of less than $10,000 to avoid banks’ anti-money-laundering reporting requirements.

The alert states that these scams typically rely on help from “money mules”—willing or unwitting individuals in the United States—often hired by the criminals via popular Internet job boards. Once enlisted, the mules are instructed to set up bank accounts, withdraw the fraudulent deposits and then wire the money to fraudsters, the majority of which are in Eastern Europe, according to the advisory.

This has the potential to grow into a very big problem. Even worse:

Businesses do not enjoy the same legal protections as consumers when banking online. Consumers typically have up to 60 days from the receipt of a monthly statement to dispute any unauthorized charges.

In contrast, companies that bank online are regulated under the Uniform Commercial Code, which holds that commercial banking customers have roughly two business days to spot and dispute unauthorized activity if they want to hold out any hope of recovering unauthorized transfers from their accounts.

And, of course, the security externality means that the banks care much less:

“The banks spend a lot of money on protecting consumer customers because they owe money if the consumer loses money,” Litan said. “But the banks don’t spend the same resources on the corporate accounts because they don’t have to refund the corporate losses.”

Posted on August 26, 2009 at 5:46 AM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating using iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that to defend ourselves against this proliferation of using biometrics for authentication, the individual should publish them. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: Phishing is a mismatch problem, between what’s in the user’s head and where the URL is actually going. SSL doesn’t work, but how should websites authenticate themselves to users? Her solution is protected links: a set of secure bookmarks in protected browsers.) She went on to describe a prototype and tests run with user subjects.
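Roughly, protected links amount to a whitelist of origins the user reached through a trusted path: a login page that isn’t behind one of those bookmarks never gets credentials. Here is a minimal sketch of that matching logic in Python; the bookmark set, the function names, and the “only gate credential entry” rule are my own illustration, not Smetters’ prototype.

    from urllib.parse import urlsplit

    # Hypothetical set of "protected links": origins the user bookmarked
    # through a trusted flow. The entries are illustrative only.
    SECURE_BOOKMARKS = {
        ("https", "www.examplebank.com"),
        ("https", "mail.example.org"),
    }

    def origin(url):
        """Reduce a URL to its (scheme, host) origin."""
        parts = urlsplit(url)
        return (parts.scheme.lower(), parts.hostname or "")

    def allowed_for_login(url):
        """Permit credential entry only on origins the user explicitly bookmarked."""
        return origin(url) in SECURE_BOOKMARKS

    # A look-alike phishing host fails the exact-origin match:
    print(allowed_for_login("https://www.examplebank.com/login"))                  # True
    print(allowed_for_login("https://www.examplebank.com.evil.example/login"))     # False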

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
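My rough reading of the scheme is a k-of-n threshold: the account holder designates several trustees ahead of time, and recovery succeeds only if enough of them vouch for the request. A toy sketch under that assumption follows; the parameters and the code-based hand-off are illustrative, not Reeder’s actual protocol.

    import secrets

    REQUIRED_TRUSTEES = 3  # k: how many trustees must vouch (illustrative)

    def issue_recovery_codes(trustees):
        """The service issues a fresh one-time code to each designated trustee."""
        return {name: secrets.token_urlsafe(16) for name in trustees}

    def recover(issued, presented, k=REQUIRED_TRUSTEES):
        """Grant account recovery only if at least k trustees' codes check out."""
        valid = sum(1 for name, code in presented.items() if issued.get(name) == code)
        return valid >= k

    codes = issue_recovery_codes(["alice", "bob", "carol", "dave"])
    # The locked-out user collects codes from three friends out of band:
    print(recover(codes, {n: codes[n] for n in ["alice", "bob", "carol"]}))   # True
    print(recover(codes, {"alice": codes["alice"], "mallory": "guess"}))      # False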

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore the warnings. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
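That three-zone policy is simple to state in code. The thresholds below are invented for illustration; they are not from her prototype.

    HIGH_RISK = 0.9   # above this, just block
    LOW_RISK = 0.1    # below this, don't interrupt the user

    def handle_site(risk_score):
        """Map an automated risk estimate for a site to an action."""
        if risk_score >= HIGH_RISK:
            return "block"              # high probability of danger: no dialog, just block
        if risk_score <= LOW_RISK:
            return "allow"              # low probability: stay out of the user's way
        return "warn-with-context"      # grey area: a warning that helps the user decide

    for score in (0.95, 0.02, 0.5):
        print(score, handle_site(score))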

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies to deal with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again people were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
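That kind of contextual understanding is concrete. “How to parse a URL,” for example, mostly means pulling out the host a link really goes to and comparing it with what the message displays. A small illustration (the domains are made up):

    from urllib.parse import urlsplit

    def describe_link(display_text, href):
        """Show the host the link actually goes to, next to what the e-mail displays."""
        host = urlsplit(href).hostname or ""
        return 'link text "%s" actually points at host "%s"' % (display_text, host)

    # A classic mismatch: the text looks like the bank, the host does not.
    print(describe_link("www.examplebank.com",
                        "http://www.examplebank.com.attacker.example/verify"))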

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture in picture attacks”: where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, user experience, mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped—I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

NSA Patent on Network Tampering Detection

The NSA has patented a technique to detect network tampering:

The NSA’s software does this by measuring the amount of time the network takes to send different types of data from one computer to another and raising a red flag if something takes too long, according to the patent filing.

Other researchers have looked into this problem in the past and proposed a technique called distance bounding, but the NSA patent takes a different tack, comparing different types of data travelling across the network. “The neat thing about this particular patent is that they look at the differences between the network layers,” said Tadayoshi Kohno, an assistant professor of computer science at the University of Washington.

The technique could be used for purposes such as detecting a fake phishing Web site that was intercepting data between users and their legitimate banking sites, he said. “This whole problem space has a lot of potential, [although] I don’t know if this is going to be the final solution that people end up using.”
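A crude version of the core idea (record a latency baseline for a path, then flag it when transfers suddenly take much longer) is easy to sketch. The sketch below only does end-to-end timing with made-up thresholds; the patent’s actual technique compares timings across different network layers, which this toy example doesn’t attempt.

    import socket
    import statistics
    import time

    def probe_rtt(host, port=443, samples=10):
        """Measure TCP connect round-trip times to a host, in seconds."""
        rtts = []
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=5):
                rtts.append(time.monotonic() - start)
        return rtts

    def looks_tampered(baseline, current, factor=3.0):
        """Flag the path if current latency is far above the recorded baseline."""
        return statistics.median(current) > factor * statistics.median(baseline)

    # Usage sketch: compare fresh measurements against a baseline recorded
    # earlier on a known-good path.
    # baseline = probe_rtt("www.example.com")
    # current = probe_rtt("www.example.com")
    # print(looks_tampered(baseline, current))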

Posted on December 30, 2008 at 12:07 PM

