My Talk on “Dual Use Technologies”
This is video from my talk at CPSR’s Technology in Wartime conference.
I spoke at the Educause conference this year in Seattle. There’s a podcast and video of my talk available (“Ten Trends of Information Security”; I’ve given the talk before) as well as a podcast of an interview with me.
Here’s a video of my talk at Defcon 15.
I am auctioning my DefCon speaker badge on eBay.
The curious phrasing—“upon completion of this auction, Schneier will donate an amount equal to the purchase price to the Electronic Privacy Information Center”—is because eBay has complex rules for charity auctions. So, technically, I am not donating the proceeds of the auction; I am donating a completely different pile of money equal to the proceeds of the auction.
EDITED TO ADD (8/22): Sold for $335. Thank you all.
Protegrity? Counterstorm? Authentify?
I officially declare that the industry has run out of good names for security companies.
I’ve written about the 2006 Workshop on Economics of Information Security (WEIS); I think it’s the most interesting security conference out there.
WEIS 2007 will be held at Carnegie Mellon University on June 6-7. There’s a call for papers, if you want to submit something.
Last month, NIST hosted the Second Hash Workshop, primarily as a vehicle for discussing a replacement strategy for SHA-1. (I liveblogged NIST’s first Cryptographic Hash Workshop here, here, here, here, and here.)
As I’ve written about before, there are some impressive cryptanalytic results against SHA-1. These attacks are still not practical, and the hash function is still operationally secure, but it makes sense for NIST to start looking at replacement strategies—before these attacks get worse.
The conference covered a wide variety of topics (see the agenda for details) on hash function design, hash function attacks, hash function features, and so on.
Perhaps the most interesting part was a panel discussion called “SHA-256 Today and Maybe Something Else in a Few Years: Effects on Research and Design.” Moderated by Paul Hoffman (VPN Consortium) and Arjen Lenstra (École Polytechnique Fédérale de Lausanne), the panel consisted of Niels Ferguson (Microsoft), Antoine Joux (Université de Versailles Saint-Quentin-en-Yvelines), Bart Preneel (Katholieke Universiteit Leuven), Ron Rivest (MIT), and Adi Shamir (Weizmann Institute of Science).
Paul Hoffman has posted a composite set of notes from the panel discussion. If you’re interested in the current state of hash function research, it’s well worth reading.
My opinion is that we need a new hash function, and that a NIST-sponsored contest is a great way to stimulate research in the area. I think we need one function and one function only, because users won’t know how to choose between different functions. (It would be smart to design the function with a couple of parameters that can be easily changed to increase security—increase the number of rounds, for example—but it shouldn’t be a variable that users have to decide whether or not to change.) And I think it needs to be secure in the broadest definitions we can come up with: hash functions are the workhorse of cryptographic protocols, and they’re used in all sorts of places for all sorts of reasons in all sorts of applications. We can’t limit the use of hash functions, so we can’t put one out there that’s only secure if used in a certain way.
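In the meantime, the practical advice is to move off SHA-1 and onto SHA-256, and in most code that migration is just a matter of which digest you select. A minimal Python sketch using the standard hashlib module (my own illustration, not anything from the workshop) shows both functions side by side:

```python
import hashlib

message = b"The quick brown fox jumps over the lazy dog"

# SHA-1 produces a 160-bit digest. It still works operationally,
# but the cryptanalytic results argue for moving away from it.
sha1 = hashlib.sha1(message).hexdigest()

# SHA-256 produces a 256-bit digest: the interim recommendation
# while a long-term replacement is selected.
sha256 = hashlib.sha256(message).hexdigest()

print(sha1)    # 40 hex characters = 160 bits
print(sha256)  # 64 hex characters = 256 bits
```

Note that the caller never chooses a round count or other security parameter; that kind of knob, if a future function has one, belongs inside the standard, not in application code.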
Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.
You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache—a.k.a. Jon Ellch—demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card—which they declined to identify—was inserted in a Mac, Windows, or Linux laptop.

[…]
How is that for murky and non-transparent? The whole world is at risk—if the exploit is real—whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.
It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers—mainly because Apple had not fixed the problem yet.”
That’s part of what is meant by full disclosure these days—giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.
Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this,
some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.
Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.
In this attack, you can seize control of someone’s computer using his WiFi interface, even if he’s not connected to a network.
The two researchers used an open-source 802.11 hacking tool called LORCON (Loss of Radio Connectivity) to throw an extremely large number of wireless packets at different wireless cards. Hackers use this technique, called fuzzing, to see if they can cause programs to fail, or perhaps even run unauthorized software when they are bombarded with unexpected data.
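LORCON itself is a C library for injecting raw 802.11 frames, but the fuzzing loop it enables is simple to sketch. Here is a toy Python illustration of the general idea, run against a hypothetical frame parser (every name below is made up for the example; a real target would be native driver code, not Python):

```python
import random

def parse_frame(data: bytes) -> dict:
    """Hypothetical packet parser standing in for a driver's 802.11
    frame handling. It rejects malformed input with ValueError."""
    if len(data) < 4:
        raise ValueError("frame too short")
    header, length = data[:2], data[2]
    if length > len(data) - 3:
        raise ValueError("declared length exceeds frame size")
    return {"header": header, "payload": data[3:3 + length]}

def mutate(frame: bytes, flips: int = 4) -> bytes:
    """Randomly corrupt a few bytes of a known-good frame."""
    buf = bytearray(frame)
    for _ in range(flips):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz(seed_frame: bytes, rounds: int = 10_000) -> int:
    """Throw mutated frames at the parser and count unexpected
    failures. A graceful ValueError rejection is fine; any other
    exception is the kind of bug an attacker would investigate."""
    crashes = 0
    for _ in range(rounds):
        try:
            parse_frame(mutate(seed_frame))
        except ValueError:
            pass          # expected rejection of bad input
        except Exception:
            crashes += 1  # unexpected failure: a lead worth triaging
    return crashes

valid = bytes([0x80, 0x00, 5]) + b"hello"
print(fuzz(valid))
```

The point of the technique is volume: driver code that handles millions of well-formed frames correctly may still mishandle the one malformed frame it was never tested against.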
Using tools like LORCON, Maynor and Ellch were able to discover many examples of wireless device driver flaws, including one that allowed them to take over a laptop by exploiting a bug in an 802.11 wireless driver. They also examined other networking technologies including Bluetooth, EV-DO (Evolution-Data Only), and HSDPA (High Speed Downlink Packet Access).
The two researchers declined to disclose the specific details of their attack before the August 2 presentation, but they described it in dramatic terms.
“This would be the digital equivalent of a drive-by shooting,” said Maynor. An attacker could exploit this flaw by simply sitting in a public space and waiting for the right type of machine to come into range.
The victim would not even need to connect to a network for the attack to work.
No details yet. The researchers are presenting their results at Black Hat on August 2.