Entries Tagged "hacking"

Firefox JavaScript Flaw: Real or Hoax?

Two hackers—Mischa Spiegelmock and Andrew Wbeelsoi—have announced a flaw in Firefox’s JavaScript:

An attacker could commandeer a computer running the browser simply by crafting a Web page that contains some malicious JavaScript code, Mischa Spiegelmock and Andrew Wbeelsoi said in a presentation at the ToorCon hacker conference here. The flaw affects Firefox on Windows, Apple Computer’s Mac OS X and Linux, they said.

More interesting was this piece:

The hackers claim they know of about 30 unpatched Firefox flaws. They don’t plan to disclose them, instead holding onto the bugs.

Jesse Ruderman, a Mozilla security staffer, attended the presentation and was called up on the stage with the two hackers. He attempted to persuade the presenters to responsibly disclose flaws via Mozilla’s bug bounty program instead of using them for malicious purposes such as creating networks of hijacked PCs, called botnets.

“I do hope you guys change your minds and decide to report the holes to us and take away $500 per vulnerability instead of using them for botnets,” Ruderman said.

The two hackers laughed off the comment. “It is a double-edged sword, but what we’re doing is really for the greater good of the Internet. We’re setting up communication networks for black hats,” Wbeelsoi said.

Sounds pretty bad? But maybe it’s all a hoax:

Spiegelmock, a developer at Six Apart, a blog software company in San Francisco, now says the ToorCon talk was meant “to be humorous” and insists the code presented at the conference cannot result in code execution.

Spiegelmock’s strange about-face comes as Mozilla’s security response team is racing to piece together information from the ToorCon talk to figure out how to fix the issue.

[…]

On the claim that there are 30 undisclosed Firefox vulnerabilities, Spiegelmock pinned that entirely on co-presenter Wbeelsoi. “I have no undisclosed Firefox vulnerabilities. The person who was speaking with me made this claim, and I honestly have no idea if he has them or not. I apologize to everyone involved, and I hope I have made everything as clear as possible,” Spiegelmock added.

I vote hoax, with maybe some seeds of reality.

Posted on October 4, 2006 at 7:04 AM

Organized Cybercrime

Cybercrime is getting organized:

Cyberscams are increasingly being committed by organized crime syndicates out to profit from sophisticated ruses rather than hackers keen to make an online name for themselves, according to a top U.S. official.

Christopher Painter, deputy chief of the computer crimes and intellectual property section at the Department of Justice, said there had been a distinct shift in recent years in the type of cybercriminals that online detectives now encounter.

“There has been a change in the people who attack computer networks, away from the ‘bragging hacker’ toward those driven by monetary motives,” Painter told Reuters in an interview this week.

Although media reports often focus on stories about teenage hackers tracked down in their bedroom, the greater danger lies in the more anonymous virtual interlopers.

“There are still instances of these ‘lone-gunman’ hackers but more and more we are seeing organized criminal groups, groups that are often organized online targeting victims via the internet,” said Painter, in London for a cybercrime conference.

I’ve been saying this sort of thing for years, and have long complained that cyberterrorism gets all the press while cybercrime is the real threat. I don’t think this article is fear and hype; it’s a real problem.

Posted on September 19, 2006 at 7:16 AM

More on the HP Board Spying Scandal

Two weeks ago I wrote about a spying scandal involving the HP board. There’s more:

A secret investigation of news leaks at Hewlett-Packard was more elaborate than previously reported, and almost from the start involved the illicit gathering of private phone records and direct surveillance of board members and journalists, according to people briefed on the company’s review of the operation.

Given this, I predict a real investigation into the incident:

Those briefed on the company’s review of the operation say detectives tried to plant software on at least one journalist’s computer that would enable messages to be traced, and also followed directors and possibly a journalist in an attempt to identify a leaker on the board.

I’m amazed there isn’t more outcry. Pretexting, planting Trojans…this is the sort of thing that would get a “hacker” immediately arrested. But if the chairman of the HP board does it, suddenly it’s a gray area.

EDITED TO ADD (9/20): More info.

Posted on September 18, 2006 at 2:48 PM

What is a Hacker?

A hacker is someone who thinks outside the box. It’s someone who discards conventional wisdom, and does something else instead. It’s someone who looks at the edge and wonders what’s beyond. It’s someone who sees a set of rules and wonders what happens if you don’t follow them. A hacker is someone who experiments with the limitations of systems for intellectual curiosity.

I wrote that last sentence in the year 2000, in my book Secrets and Lies. And I’m sticking to that definition.

This is what else I wrote in Secrets and Lies (pages 43-44):

Hackers are as old as curiosity, although the term itself is modern. Galileo was a hacker. Mme. Curie was one, too. Aristotle wasn’t. (Aristotle had some theoretical proof that women had fewer teeth than men. A hacker would have simply counted his wife’s teeth. A good hacker would have counted his wife’s teeth without her knowing about it, while she was asleep. A good bad hacker might remove some of them, just to prove a point.)

When I was in college, I knew a group similar to hackers: the key freaks. They wanted access, and their goal was to have a key to every lock on campus. They would study lockpicking and learn new techniques, trade maps of the steam tunnels and where they led, and exchange copies of keys with each other. A locked door was a challenge, a personal affront to their ability. These people weren’t out to do damage—stealing stuff wasn’t their objective—although they certainly could have. Their hobby was the power to go anywhere they wanted to.

Remember the phone phreaks of yesteryear, the ones who could whistle into payphones and make free phone calls. Sure, they stole phone service. But it wasn’t like they needed to make eight-hour calls to Manila or McMurdo. And their real work was secret knowledge: The phone network was a vast maze of information. They wanted to know the system better than the designers, and they wanted the ability to modify it to their will. Understanding how the phone system worked—that was the true prize. Other early hackers were ham-radio hobbyists and model-train enthusiasts.

Richard Feynman was a hacker; read any of his books.

Computer hackers follow these evolutionary lines. Or, they are the same genus operating on a new system. Computers, and networks in particular, are the new landscape to be explored. Networks provide the ultimate maze of steam tunnels, where a new hacking technique becomes a key that can open computer after computer. And inside is knowledge, understanding. Access. How things work. Why things work. It’s all out there, waiting to be discovered.

Computers are the perfect playground for hackers. Computers, and computer networks, are vast treasure troves of secret knowledge. The Internet is an immense landscape of undiscovered information. The more you know, the more you can do.

And it should be no surprise that many hackers have focused their skills on computer security. Not only is it often the obstacle between the hacker and knowledge, and therefore something to be defeated, but also the very mindset necessary to be good at security is exactly the same mindset that hackers have: thinking outside the box, breaking the rules, exploring the limitations of a system. The easiest way to break a security system is to figure out what the system’s designers hadn’t thought of: that’s security hacking.

Hackers cheat. And breaking security regularly involves cheating. It’s figuring out a smart card’s RSA key by looking at the power fluctuations, because the designers of the card never realized anyone could do that. It’s self-signing a piece of code, because the signature-verification system didn’t think someone might try that. It’s using a piece of a protocol to break a completely different protocol, because all previous security analysis only looked at protocols individually and not in pairs.

That’s security hacking: breaking a system by thinking differently.

It all sounds criminal: recovering encrypted text, fooling signature algorithms, breaking protocols. But honestly, that’s just the way we security people talk. Hacking isn’t criminal. All the examples two paragraphs above were performed by respected security professionals, and all were presented at security conferences.

I remember one conversation I had at a Crypto conference, early in my career. It was outside amongst the jumbo shrimp, chocolate-covered strawberries, and other delectables. A bunch of us were talking about some cryptographic system, including Brian Snow of the NSA. Someone described an unconventional attack, one that didn’t follow the normal rules of cryptanalysis. I don’t remember any of the details, but I remember my response after hearing the description of the attack.

“That’s cheating,” I said.

Because it was.

I also remember Brian turning to look at me. He didn’t say anything, but his look conveyed everything. “There’s no such thing as cheating in this business.”

Because there isn’t.

Hacking is cheating, and it’s how we get better at security. It’s only after someone invents a new attack that the rest of us can figure out how to defend against it.

For years I have refused to play the semantic “hacker” vs. “cracker” game. There are good hackers and bad hackers, just as there are good electricians and bad electricians. “Hacker” is a mindset and a skill set; what you do with it is a different issue.

And I believe the best computer security experts have the hacker mindset. When I look to hire people, I look for someone who can’t walk into a store without figuring out how to shoplift. I look for someone who can’t test a computer security program without trying to get around it. I look for someone who, when told that things work in a particular way, immediately asks how things stop working if you do something else.

We need these people in security, and we need them on our side. Criminals are always trying to figure out how to break security systems. Field a new system—an ATM, an online banking system, a gambling machine—and criminals will try to make an illegal profit off it. They’ll figure it out eventually, because some hackers are also criminals. But if we have hackers working for us, they’ll figure it out first—and then we can defend ourselves.

It’s our only hope for security in this fast-moving technological world of ours.

This essay appeared in the Summer 2006 issue of 2600.

Posted on September 14, 2006 at 7:13 AM

Malware Distribution Project

In case you needed a comprehensive database of malware.

Malware Distribution Project (MD:Pro) offers developers of security systems and anti-malware products a vast collection of downloadable malware from a secure and reliable source, exclusively for the purposes of analysis, testing, research and development.

Bringing together for the first time a large back-catalogue of malware, computer underground related information and IT security resources under one project, this major new system also contains a large selection of undetected malware, along with an open, collaborative platform, where malware samples can be shared among its members. The database is constantly updated with new files, and maintained to keep it running at an optimum.

There are currently 271,712 files in the system.

This isn’t free. You can subscribe at 1,250 euros for a month, or 13,500 euros a year. (There are cheaper packages with less comprehensive access.)

They claim to have a stringent vetting process, ensuring that only legitimate researchers have access to this database:

It should be noted that we are not a malware/VX distribution site, nor do we condone the public spreading and/or distribution of such information, hence we will be vetting our registrants stringently. We do appreciate that this puts a severe restriction on private (individual) malware researchers and enthusiasts with limited or no budget, but we do feel that providing free malware for public research is out of the scope of this project.

EDITED TO ADD (8/8): The hacker group Cult of the Dead Cow also has a malware repository, free and with looser access restrictions.

Posted on August 8, 2006 at 7:56 AM

Hackers Clone RFID Passports

It was demonstrated today at the Black Hat conference.

Grunwald says it took him only two weeks to figure out how to clone the passport chip. Most of that time he spent reading the standards for e-passports that are posted on a website for the International Civil Aviation Organization, a United Nations body that developed the standard. He tested the attack on a new European Union German passport, but the method would work on any country’s e-passport, since all of them will be adhering to the same ICAO standard.

In a demonstration for Wired News, Grunwald placed his passport on top of an official passport-inspection RFID reader used for border control. He obtained the reader by ordering it from the maker—Walluf, Germany-based ACG Identification Technologies—but says someone could easily make their own for about $200 just by adding an antenna to a standard RFID reader.

He then launched a program that border patrol stations use to read the passports—called Golden Reader Tool and made by secunet Security Networks—and within four seconds, the data from the passport chip appeared on screen in the Golden Reader template.

Grunwald then prepared a sample blank passport page embedded with an RFID tag by placing it on the reader—which can also act as a writer—and burning in the ICAO layout, so that the basic structure of the chip matched that of an official passport.

As the final step, he used a program that he and a partner designed two years ago, called RFDump, to program the new chip with the copied information.

The result was a blank document that looks, to electronic passport readers, like the original passport.
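
Stripped of the specific tools, the attack is a read-and-replay of the chip's logical contents. Here is a purely illustrative sketch of that flow: the reader class and its methods are invented for the example and simulate the hardware in memory, and reading a real e-passport typically requires first deriving the Basic Access Control key from the printed machine-readable zone, which anyone holding the open booklet has.

```python
# Purely illustrative: the read-then-write flow described above, with a fake
# in-memory "reader" standing in for real RFID hardware. The class and method
# names are invented for this sketch; nothing here talks to an actual chip.

# ICAO 9303 data groups of interest: DG1 holds the machine-readable-zone data,
# DG2 the photo, and EF.SOD the issuing country's signature over the
# data-group hashes.
DATA_GROUPS = ["EF.COM", "DG1", "DG2", "EF.SOD"]

class FakeReader:
    """Simulates a reader/writer and whatever tag is sitting on it."""
    def __init__(self, tag_contents):
        self.tag = dict(tag_contents)

    def read_data_group(self, dg):
        return self.tag[dg]

    def write_data_group(self, dg, contents):
        self.tag[dg] = contents

def clone(source, target):
    # Step 1: pull every data group off the genuine chip.
    original = {dg: source.read_data_group(dg) for dg in DATA_GROUPS}
    # Step 2: burn the identical layout and contents onto a writable blank tag.
    for dg, contents in original.items():
        target.write_data_group(dg, contents)

genuine = FakeReader({"EF.COM": b"...", "DG1": b"MRZ data", "DG2": b"photo",
                      "EF.SOD": b"issuer signature over data-group hashes"})
blank = FakeReader({})
clone(genuine, blank)
print(blank.tag == genuine.tag)   # True: a byte-for-byte copy, signature intact
```

The copy still carries the issuing country's valid signature over the original data, which is why a reader that only verifies that signature accepts it; what a cloner cannot do is change the data without breaking the signature.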

I’ve long been opposed (that last link is an op-ed from the International Herald Tribune) to RFID chips in passports, although last year I—mistakenly—withdrew my objections based on the security measures the State Department was taking.

That’s silly. I’m not opposed to chips on ID cards; I am opposed to RFID chips. My fear is surreptitious access: someone could read the chip and learn your identity without your knowledge or consent.

Sure, the State Department is implementing security measures to prevent that. But as we all know, these measures won’t be perfect. And a passport has a ten-year lifetime. It’s sheer folly to believe the passport security won’t be hacked in that time. This hack took only two weeks!

The best way to solve a security problem is not to have it at all. If there’s an RFID chip on your passport, or any of your identity cards, you have to worry about securing it. If there’s no RFID chip, then the security problem is solved.

Until I hear a compelling case for why there must be an RFID chip on a passport, and why a normal smart-card chip won’t do, I am opposed to the idea.

Crossposted to the ACLU blog.

Posted on August 3, 2006 at 3:45 PM

Security and Monoculture

Interesting research.

EDITED TO ADD (8/1): The paper is only viewable by subscribers. Here are some excerpts:

Fortunately, buffer-overflow attacks have a weakness: the intruder must know precisely what part of the computer’s memory to target. In 1996, Forrest realised that these attacks could be foiled by scrambling the way a program uses a computer’s memory. When you launch a program, the operating system normally allocates the same locations in a computer’s random access memory (RAM) each time. Forrest wondered whether she could rewrite the operating system to force the program to use different memory locations that are picked randomly every time, thus flummoxing buffer-overflow attacks.

To test her concept, Forrest experimented with a version of the open-source operating system Linux. She altered the system to force programs to assign data to memory locations at random. Then she subjected the computer to several well-known attacks that used the buffer-overflow technique. None could get through. Instead, they targeted the wrong area of memory. Although part of the software would often crash, Linux would quickly restart it, and get rid of the virus in the process. In rare situations it would crash the entire operating system, a short-lived annoyance, certainly, but not bad considering the intruder had failed to take control of the machine.

Linux computer-security experts quickly picked up on Forrest’s idea. In 2003 Red Hat, the maker of a popular version of Linux, began including memory-space randomisation in its products. “We had several vulnerabilities which we could downgrade in severity,” says Mark J. Cox, a Red Hat security expert.
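
This kind of randomization is easy to see for yourself. Here is a minimal sketch (assuming Linux and Python's standard ctypes module) that prints the address of a heap buffer and of a libc function; with address-space randomization enabled, the values change on every run, which is exactly what defeats an attacker's hard-coded address:

```python
# Minimal sketch: print where a heap buffer and a libc function ended up in
# memory. With address-space randomization on (the default on modern Linux),
# these addresses change on every run, so an attacker's hard-coded guess is
# wrong almost every time. Assumes Linux and the standard ctypes module.
import ctypes
import ctypes.util

buf = ctypes.create_string_buffer(4096)
print("heap buffer at:", hex(ctypes.addressof(buf)))

libc = ctypes.CDLL(ctypes.util.find_library("c"))
print("libc printf at:", hex(ctypes.cast(libc.printf, ctypes.c_void_p).value))
```

Forrest's insight was that a buffer-overflow exploit has to hard-code exactly these kinds of addresses, so shuffling them on every run turns a reliable exploit into a crash.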

[…]

Memory scrambling isn’t the only way to add diversity to operating systems. Even more sophisticated techniques are in the works. Forrest has tried altering “instruction sets”, commands that programs use to communicate with a computer’s hardware, such as its processor chip or memory.

Her trick was to replace the “translator” program that interprets these instruction sets with a specially modified one. Every time the computer boots up, Forrest’s software loads into memory and encrypts the instruction sets in the hardware using a randomised encoding key. When a program wants to send a command to the computer, Forrest’s translator decrypts the command on the fly so the computer can understand it.

This produces an elegant form of protection. If an attacker manages to insert malicious code into a running program, that code will also be decrypted by the translator when it is passed to the hardware. However, since the attacker’s code is not encrypted in the first place, the decryption process turns it into digital gibberish so the computer hardware cannot understand it. Since it exists only in the computer’s memory and has not been written to the computer’s hard disc, it will vanish upon reboot.

Forrest has tested the process on several versions of Linux while launching buffer-overflow attacks. None were able to penetrate. As with memory randomisation, the failed attacks would, at worst, temporarily crash part of Linux – a small price to pay. Her translator program was a success. “It seemed like a crazy idea at first,” says Gabriel Barrantes, who worked with Forrest on the project. “But it turned out to be sound.”
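
The mechanism is easy to illustrate in miniature. The following toy sketch is purely illustrative, not Forrest's implementation: a tiny interpreter whose one-byte opcodes are XOR-encoded with a random per-run key. The pre-encoded "legitimate" program executes normally, while an "injected" program that was never encoded decodes to gibberish and fails:

```python
# Toy illustration of instruction-set randomization (ISR), not any real system.
# A trusted loader encodes the program with a random per-run key; the
# interpreter decodes each byte just before using it, so injected, un-encoded
# code turns into garbage when "decoded" and fails instead of running.
import os

def encode(program: bytes, key: int) -> bytes:
    """What the trusted loader does to a legitimate program at load time."""
    return bytes(b ^ key for b in program)

def run(program: bytes, key: int) -> None:
    stack, pc = [], 0
    while pc < len(program):
        op = program[pc] ^ key            # decode on the fly
        pc += 1
        if op == 0x01:                    # PUSH <literal byte>
            stack.append(program[pc] ^ key)
            pc += 1
        elif op == 0x02:                  # ADD
            stack.append(stack.pop() + stack.pop())
        elif op == 0x03:                  # PRINT
            print("result:", stack.pop())
        elif op == 0xFF:                  # HALT
            return
        else:
            raise RuntimeError(f"illegal opcode {op:#04x}")

key = os.urandom(1)[0] | 0x01             # random, guaranteed non-zero key
legit = bytes([0x01, 2, 0x01, 3, 0x02, 0x03, 0xFF])   # push 2, push 3, add, print, halt

run(encode(legit, key), key)              # prints "result: 5"

try:
    run(legit, key)                       # "injected" code that was never encoded
except Exception as exc:
    print("injected code failed:", exc)   # decodes to garbage under this key
```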

[…]

In 2004, a group of researchers led by Hovav Shacham at Stanford University in California tried this trick against a copy of the popular web-server application Apache that was running on Linux, protected with memory randomisation. It took them 216 seconds per attack to break into it. They concluded that this protection is not sufficient to stop the most persistent viruses or a single, dedicated attacker.

Last year, a group of researchers at the University of Virginia, Charlottesville, performed a similar attack on a copy of Linux whose instruction set was protected by randomised encryption. They used a slightly more complex approach, making a series of guesses about different parts of the randomisation key. This time it took over 6 minutes to force a way in: the system was tougher, but hardly invulnerable.
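
The arithmetic behind the Stanford result is worth spelling out. Assuming the randomized region offers only about 16 bits of entropy (the figure generally cited for 32-bit PaX-style randomization), a brute-force attacker expects to succeed after trying half the possibilities:

\[
\text{expected probes} \approx \tfrac{1}{2} \cdot 2^{16} = 32{,}768,
\qquad
\frac{32{,}768 \ \text{probes}}{216 \ \text{seconds}} \approx 150 \ \text{probes per second}.
\]

At that rate the attack is a matter of minutes, and the same arithmetic is why a 64-bit address space, with far more bits available to randomize, pushes this kind of brute force out of practical reach.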

[…]

Knight says that randomising the encryption on the instruction set is a more powerful technique because it can use larger and more complex forms of encryption. The only limitation is that as the encryption becomes more complicated, it takes the computer longer to decrypt each instruction, and this can slow the machine down. Barrantes found that instruction-set randomisation more than doubled the length of time an instruction took to execute. Make the encryption too robust, and computer users could find themselves drumming their fingers as they wait for a web page to load.

So he thinks the best approach is to combine different types of randomisation. Where one fails, another picks up. Last year, he took a variant of Linux and randomised both its memory-space allocation and its instruction sets. In December, he put 100 copies of the software online and hired a computer-security firm to try and penetrate them. The attacks failed. In May, he repeated the experiment but this time he provided the attackers with extra information about the randomised software. Their assault still failed.

The idea was to simulate what would happen if an adversary had a phenomenal amount of money, and secret information from an inside collaborator, says Knight. The results pleased him and, he hopes, will also please DARPA when he presents them to the agency. “We aren’t claiming we can do everything, but for broad classes of attack, these techniques appear to work very well. We have no reason to believe that there would be any change if we were to try to apply this to the real world.”

EDITED TO ADD (8/2): The article is online here.

Posted on August 1, 2006 at 6:26 AM

Bot Networks

What could you do if you controlled a network of thousands of computers—or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or break cryptographic problems.

All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects. (You can help search for Optimal Golomb Rulers—even if you have no idea what they are.) You’ve got a lot of cycles to spare. There’s no reason that your computer can’t help search for extraterrestrial life as it, for example, sits idly waiting for you to read this essay.

The reason these things work is that they are consensual; none of these projects download software onto your computer without your knowledge. None of these projects control your computer without your consent. But there are lots of software programs that do just that.

The term used for a computer remotely controlled by someone else is a “bot”. A group of computers—thousands or even millions—controlled by someone else is a bot network. Estimates are that millions of computers on the internet today are part of bot networks, and the largest bot networks have over 1.5 million machines.

Initially, bot networks were used for just one thing: denial-of-service attacks. Hackers would use them against each other, fighting hacker feuds in cyberspace by attacking each other’s computers. The first widely publicized use of a distributed intruder tool—technically not a botnet, but practically the same thing—was in February 2000, when Canadian hacker Mafiaboy directed an army of compromised computers to flood CNN.com, Amazon.com, eBay, Dell Computer and other sites with debilitating volumes of traffic. Every newspaper carried that story.

These days, bot networks are more likely to be controlled by criminals than by hackers. The important difference is the motive: profit. Networks are being used to send phishing e-mails and other spam. They’re being used for click fraud. They’re being used as an extortion tool: Pay up or we’ll DDoS you!

Mostly, they’re being used to collect personal data for fraud—commonly called “identity theft.” Modern bot software doesn’t just attack other computers; it attacks its hosts as well. The malware is packed with keystroke loggers to steal passwords and account numbers. In fact, many bots automatically hunt for financial information, and some botnets have been built solely for this purpose—to gather credit card numbers, online banking passwords, PayPal accounts, and so on, from compromised hosts.

Swindlers are also using bot networks for click fraud. Google’s anti-fraud systems are sophisticated enough to detect thousands of clicks by one computer; it’s much harder to determine if a single click by each of thousands of computers is fraud, or just popularity.
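
To see why the distributed version is so much harder to catch, consider a naive fraud filter; this is an illustrative sketch only, nothing like Google's actual systems:

```python
# Illustrative sketch only: flag any source that clicks the same ad more than
# THRESHOLD times. One noisy bot is caught instantly; a botnet spreading one
# click per machine looks like ordinary popularity to per-source counting.
from collections import Counter

THRESHOLD = 20   # arbitrary cap on plausible clicks per source, per ad

def suspicious_sources(click_log):
    """click_log is an iterable of (source_ip, ad_id) pairs."""
    counts = Counter(click_log)
    return {src for (src, ad), n in counts.items() if n > THRESHOLD}

# A single bot clicking 5,000 times: trivially flagged.
single_bot = [("10.0.0.1", "ad-42")] * 5000
print(suspicious_sources(single_bot))     # {'10.0.0.1'}

# 5,000 bots clicking once each: nothing to flag.
botnet = [("10.%d.%d.7" % (i // 256, i % 256), "ad-42") for i in range(5000)]
print(suspicious_sources(botnet))         # set()
```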

And, of course, most bots constantly search for other computers that can be infected and added to the bot network. (A 1.5 million-node bot network was discovered in the Netherlands last year. The command-and-control system was dismantled, but some of the bots are still active, infecting other computers and adding them to this defunct network.)

Modern bot networks are remotely upgradeable, so the operators can add new functionality to the bots at any time, or switch from one bot program to another. Bot authors regularly upgrade their botnets during development, or to evade detection by anti-virus and malware cleanup tools.

One application of bot networks that we haven’t seen all that much of is to launch a fast-spreading worm. (Some believe the Witty worm spread this way.) Much has been written about “flash worms” that can saturate the internet in 15 minutes or less. The situation gets even worse if 10,000 bots synchronize their watches and release the worm at exactly the same time. Why haven’t we seen more of this? My guess is because there isn’t any profit in it.

There’s no real solution to the botnet problem, because there’s no single problem. There are many different bot networks, controlled in many different ways, consisting of computers infected through many different vulnerabilities. Really, a bot network is nothing more than an attacker taking advantage of 1) one or more software vulnerabilities, and 2) the economies of scale that computer networks bring. It’s the same thing as distributed.net or SETI@home, only the attacker doesn’t ask your permission first.

As long as networked computers have vulnerabilities—and that’ll be for the foreseeable future—there’ll be bot networks. It’s a natural side-effect of a computer network with bugs.

This essay originally appeared on Wired.com.

EDITED TO ADD (7/27): DDoS extortion is a bigger problem than you might think. Right now it’s primarily targeted against fringe industries—online gaming, online gambling, online porn—located offshore, but we’re seeing more and more of it against mainstream companies in the U.S. and Europe.

EDITED TO ADD (7/27): Seems that Witty was definitely not seeded from a bot network.

Posted on July 27, 2006 at 6:35 AM

Hacked MySpace Server Infects a Million Computers with Malware

According to The Washington Post:

An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows….

Clever attack.

EDITED TO ADD (7/27): It wasn’t MySpace that was hacked, but a server belonging to the third-party advertising service that MySpace uses. The ad probably appeared on other websites as well, but MySpace seems to have been the biggest one.

EDITED TO ADD (8/5): Ed Felten comments.

Posted on July 24, 2006 at 6:46 AM

Paris Bank Hack at Center of National Scandal

From Wired News:

Among the falsified evidence produced by the conspirators before the fraud unraveled were confidential bank records originating with the Clearstream bank in Luxembourg, which were expertly modified to make it appear that some French politicians had secretly established offshore bank accounts to receive bribes. The falsified records were then sent to investigators, with enough authentic account information left in to make them appear credible.

Posted on July 17, 2006 at 6:42 AM
