Entries Tagged "security conferences"


DefCon Badge Auction

I am auctioning my DefCon speaker badge on eBay.

The curious phrasing—“upon completion of this auction, Schneier will donate an amount equal to the purchase price to the Electronic Privacy Information Center”—is because eBay has complex rules for charity auctions. So, technically, I am not donating the proceeds of the auction; I am donating a completely different pile of money equal to the proceeds of the auction.

EDITED TO ADD (8/22): Sold for $335. Thank you all.

Posted on August 18, 2007 at 10:57 AM

Notes from the Hash Function Workshop

Last month, NIST hosted the Second Hash Workshop, primarily as a vehicle for discussing a replacement strategy for SHA-1. (I liveblogged NIST’s first Cryptographic Hash Workshop here, here, here, here, and here.)

As I’ve written about before, there are some impressive cryptanalytic results against SHA-1. These attacks are still not practical, and the hash function is still operationally secure, but it makes sense for NIST to start looking at replacement strategies—before these attacks get worse.
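One practical way for applications to prepare for that replacement—this is my own illustrative sketch, not anything NIST has specified—is to treat the hash algorithm as a single named configuration value rather than hard-coding SHA-1, so that switching to whatever eventually gets standardized is a one-line change:

```python
import hashlib

# Hypothetical application setting: name the algorithm in one place.
# Today this might still read "sha1"; when a replacement is standardized
# and supported, only this line changes.
HASH_ALGORITHM = "sha256"

def fingerprint(data: bytes) -> str:
    """Return a hex digest computed with the configured algorithm."""
    return hashlib.new(HASH_ALGORITHM, data).hexdigest()

print(fingerprint(b"example document"))
```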

The conference covered a wide variety of topics (see the agenda for details) on hash function design, hash function attacks, hash function features, and so on.

Perhaps the most interesting part was a panel discussion called “SHA-256 Today and Maybe Something Else in a Few Years: Effects on Research and Design.” Moderated by Paul Hoffman (VPN Consortium) and Arjen Lenstra (Ecole Polytechnique Federale de Lausanne), the panel consisted of Niels Ferguson (Microsoft), Antoine Joux (Universite de Versailles-Saint-Quentin-en-Yvelines), Bart Preneel (Katholieke Universiteit Leuven), Ron Rivest (MIT), and Adi Shamir (Weizmann Institute of Science).

Paul Hoffman has posted a composite set of notes from the panel discussion. If you’re interested in the current state of hash function research, it’s well worth reading.

My opinion is that we need a new hash function, and that a NIST-sponsored contest is a great way to stimulate research in the area. I think we need one function and one function only, because users won’t know how to choose between different functions. (It would be smart to design the function with a couple of parameters that can be easily changed to increase security—increase the number of rounds, for example—but it shouldn’t be a variable that users have to decide whether or not to change.) And I think it needs to be secure in the broadest definitions we can come up with: hash functions are the workhorse of cryptographic protocols, and they’re used in all sorts of places for all sorts of reasons in all sorts of applications. We can’t limit the use of hash functions, so we can’t put one out there that’s only secure if used in a certain way.
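As a rough illustration of what such a built-in security dial might look like—a toy construction of my own, not any real candidate design—here is an iterated hash whose round count is a design-time constant chosen by the designers, never a knob the user has to reason about:

```python
import hashlib
import struct

# Toy illustration only: ROUNDS is a design-time security margin. Raising it
# in a future revision strengthens the function without changing how any
# caller uses it, and users never see or choose it.
ROUNDS = 64

def compress(state: bytes, block: bytes) -> bytes:
    """Toy compression function: mix the block into the state ROUNDS times."""
    for i in range(ROUNDS):
        state = hashlib.sha256(state + block + struct.pack(">I", i)).digest()
    return state

def toy_hash(message: bytes) -> bytes:
    """Merkle-Damgard-style iteration over 64-byte blocks with length padding."""
    padded = (message + b"\x80"
              + b"\x00" * ((-len(message) - 9) % 64)
              + struct.pack(">Q", len(message)))
    state = b"\x00" * 32
    for offset in range(0, len(padded), 64):
        state = compress(state, padded[offset:offset + 64])
    return state

print(toy_hash(b"The quick brown fox").hex())
```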

Posted on September 11, 2006 at 3:30 PM

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache—a.k.a. Jon Ellch—demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card—which they declined to identify—was inserted in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk—if the exploit is real—whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers—mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days—giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this, some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM

WiFi Driver Attack

In this attack, you can seize control of someone’s computer using his WiFi interface, even if he’s not connected to a network.

The two researchers used an open-source 802.11 hacking tool called LORCON (Loss of Radio Connectivity) to throw an extremely large number of wireless packets at different wireless cards. Hackers use this technique, called fuzzing, to see if they can cause programs to fail, or perhaps even run unauthorized software when they are bombarded with unexpected data.

Using tools like LORCON, Maynor and Ellch were able to discover many examples of wireless device driver flaws, including one that allowed them to take over a laptop by exploiting a bug in an 802.11 wireless driver. They also examined other networking technologies including Bluetooth, Ev-Do (EVolution-Data Only), and HSDPA (High Speed Downlink Packet Access).

The two researchers declined to disclose the specific details of their attack before the August 2 presentation, but they described it in dramatic terms.

“This would be the digital equivalent of a drive-by shooting,” said Maynor. An attacker could exploit this flaw by simply sitting in a public space and waiting for the right type of machine to come into range.

The victim would not even need to connect to a network for the attack to work.

No details yet. The researchers are presenting their results at BlackHat on August 2.
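To make the fuzzing idea above concrete, here is a rough sketch of my own—not the researchers’ tool—that builds a simplified beacon-like frame and emits randomly corrupted copies of it. Actually injecting such frames over the air requires monitor-mode hardware and tooling like LORCON; this only constructs the malformed inputs, and the frame layout is deliberately simplified.

```python
import random
import struct

def beacon_template(ssid: bytes) -> bytes:
    """Build a minimal, simplified beacon-like frame (illustrative layout only)."""
    header = struct.pack("<HH6s6s6sH",
                         0x0080, 0,                 # frame control, duration (simplified)
                         b"\xff" * 6,               # destination: broadcast
                         b"\x02" * 6,               # source address (made up)
                         b"\x02" * 6,               # BSSID (made up)
                         0)                         # sequence control
    ssid_ie = bytes([0, len(ssid)]) + ssid          # SSID information element
    return header + ssid_ie

def mutate(frame: bytes, n_flips: int = 8) -> bytes:
    """Overwrite a handful of random bytes: the crudest form of fuzzing."""
    out = bytearray(frame)
    for _ in range(n_flips):
        out[random.randrange(len(out))] = random.randrange(256)
    return bytes(out)

template = beacon_template(b"fuzz-test")
cases = [mutate(template) for _ in range(1000)]
print(f"generated {len(cases)} mutated frames, e.g. {cases[0].hex()[:32]}...")
```

A driver that parses these frames correctly should discard every malformed copy; one that crashes, or worse, executes attacker-controlled data, has the kind of flaw the researchers describe.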

Posted on July 6, 2006 at 1:52 PM

Economics and Information Security

I’m sitting in a conference room at Cambridge University, trying to simultaneously finish this article for Wired News and pay attention to the presenter onstage.

I’m in this awkward situation because 1) this article is due tomorrow, and 2) I’m attending the fifth Workshop on the Economics of Information Security, or WEIS: to my mind, the most interesting computer security conference of the year.

The idea that economics has anything to do with computer security is relatively new. Ross Anderson and I seem to have stumbled upon the idea independently. He, in his brilliant article from 2001, “Why Information Security Is Hard—An Economic Perspective” (.pdf), and me in various essays and presentations from that same period.

WEIS began a year later at the University of California at Berkeley and has grown ever since. It’s the only workshop where technologists get together with economists and lawyers and try to understand the problems of computer security.

And economics has a lot to teach computer security. We generally think of computer security as a problem of technology, but often systems fail because of misplaced economic incentives: The people who could protect a system are not the ones who suffer the costs of failure.

When you start looking, economic considerations are everywhere in computer security. Hospitals’ medical-records systems provide comprehensive billing-management features for the administrators who specify them, but are not so good at protecting patients’ privacy. Automated teller machines suffered from fraud in countries like the United Kingdom and the Netherlands, where poor regulation left banks without sufficient incentive to secure their systems, and allowed them to pass the cost of fraud along to their customers. And one reason the internet is insecure is that liability for attacks is so diffuse.

In all of these examples, the economic considerations of security are more important than the technical considerations.

More generally, many of the most basic security questions are at least as much economic as technical. Do we spend enough on keeping hackers out of our computer systems? Or do we spend too much? For that matter, do we spend appropriate amounts on police and Army services? And are we spending our security budgets on the right things? In the shadow of 9/11, questions like these have a heightened importance.

Economics can actually explain many of the puzzling realities of internet security. Firewalls are common, e-mail encryption is rare: not because of the relative effectiveness of the technologies, but because of the economic pressures that drive companies to install them. Corporations rarely publicize information about intrusions; that’s because of economic incentives against doing so. And an insecure operating system is the international standard, in part, because its economic effects are largely borne not by the company that builds the operating system, but by the customers that buy it.

Some of the most controversial cyberpolicy issues also sit squarely between information security and economics. For example, the issue of digital rights management: Is copyright law too restrictive—or not restrictive enough—to maximize society’s creative output? And if it needs to be more restrictive, will DRM technologies benefit the music industry or the technology vendors? Is Microsoft’s Trusted Computing initiative a good idea, or just another way for the company to lock its customers into Windows, Media Player and Office? Any attempt to answer these questions becomes rapidly entangled with both information security and economic arguments.

WEIS encourages papers on these and other issues in economics and computer security. We heard papers presented on the economics of digital forensics of cell phones (.pdf)—if you have an uncommon phone, the police probably don’t have the tools to perform forensic analysis—and the effect of stock spam on stock prices: It actually works in the short term. We learned that more-educated wireless network users are not more likely to secure their access points (.pdf), and that the best predictor of wireless security is the default configuration of the router.

Other researchers presented economic models to explain patch management (.pdf), peer-to-peer worms (.pdf), investment in information security technologies (.pdf) and opt-in versus opt-out privacy policies (.pdf). There was a field study that tried to estimate the cost to the U.S. economy for information infrastructure failures (.pdf): less than you might think. And one of the most interesting papers looked at economic barriers to adopting new security protocols (.pdf), specifically DNS Security Extensions.

This is all heady stuff. In the early years, there was a bit of a struggle as the economists and the computer security technologists tried to learn each other’s languages. But now it seems that there’s a lot more synergy, and more collaborations between the two camps.

I’ve long said that the fundamental problems in computer security are no longer about technology; they’re about applying technology. Workshops like WEIS are helping us understand why good security technologies fail and bad ones succeed, and that kind of insight is critical if we’re going to improve security in the information age.

This essay originally appeared on Wired.com.

Posted on June 29, 2006 at 4:31 PM
