Schneier on Security
A blog covering security and security technology.
July 2006 Archives
ScatterChat is unique in that it is intended for non-technical human rights activists and political dissidents operating behind oppressive national firewalls. It is an instant messaging client that provides end-to-end encryption over the Electronic Frontier Foundation-endorsed Tor network. Its security features include resiliency against partial compromise through perfect forward secrecy, immunity from replay attacks, and limited resistance to traffic analysis, all reinforced through a pro-actively secure design.
A nice application of Tor.
What happens if you distribute 50 million small, valuable, and easily sellable objects into the hands of men, women, and children all over the world, and tell them to walk around the streets with them? Why, people steal them, of course.
"Rise in crime blamed on iPods", yells the front page of London's Metro. "Muggers targeting iPod users", says ITV. This is the reaction to the government's revelation that robberies across the UK have risen by 8 per cent in the last year, from 90,747 to 98,204. The Home Secretary, John Reid, attributes this to the irresistible lure of "young people carrying expensive goods, such as mobile phones and MP3 players". A separate British Crime Survey, however, suggests robbery has risen by 22 per cent, to 311,000.
This shouldn't come as a surprise, just as it wasn't a surprise in the 1990s when there was a wave of high-priced sneaker thefts. Or that there is also a wave of laptop thefts.
What to do about it? Basically, there's not much you can do except be careful. Muggings have long been a low-risk crime, so it makes sense that we're seeing an increase in them as the value of what people are carrying on their person goes up. And people carrying portable music players have an unmistakable indicator: those ubiquitous ear buds.
The economics of this crime are such that it will continue until one of three things happens. One, portable music players become much less valuable. Two, the costs of the crime become much higher. Three, society deals with its underclass and gives them a better career option than iPod thief.
And on a related topic, here's a great essay by Cory Doctorow on how Apple's iTunes copy protection screws the music industry.
EDITED TO ADD (8/5): Eric Rescorla comments.
No explanation given, but it's annoying fishermen:
The problem took on alarming proportions in early July when fishermen netted more than 500 tons of squid bycatch in one week, Josh Keaton, a resource management specialist with the National Oceanic and Atmospheric Administration, said Friday.
Does anyone other than me see a problem with this?
Some 30 European businesses and research institutes are working to create software that would make it possible from a distance to regain control of an aircraft from hijackers, according to the German news magazine.
Unless his goal were, um, hijacking the aircraft.
It seems to me that by designing remote-control software for airplanes, you open the possibility for someone to hijack the plane without even being on board. Sure, there are going to be computer-security controls protecting this thing, but we all know how well that sort of thing has worked in the past.
The system would be designed in such a way that even a computer hacker on board could not get round it.
But what about computer hackers on the ground?
I'm not saying this is a bad idea; it might be a good idea. But this security countermeasure opens up an entirely new vulnerability, and I hope that someone is studying that new vulnerability.
In 1994, Congress passed the Communications Assistance for Law Enforcement Act (CALEA). Basically, this is the law that forces the phone companies to make your telephone calls -- including cell phone calls -- available for government wiretapping.
But now the government wants access to VoIP calls, and SMS messages, and everything else. They're doing their best to interpret CALEA as broadly as possible, but they're also pursuing a legal angle. Ars Technica has the story:
The government hopes to shore up the legal basis for the program by passing amended legislation. The EFF took a look at the amendments and didn't like what it found. According to the Administration, the proposal would "confirm [CALEA's] coverage of push-to-talk, short message service, voice mail service and other communications services offered on a commercial basis to the public," along with "confirm[ing] CALEA's application to providers of broadband Internet access, and certain types of 'Voice-Over-Internet-Protocol' (VOIP)." Many of CALEA's express exceptions and limitations are also removed. Most importantly, while CALEA's applicability currently depends on whether broadband and VOIP can be considered "substantial replacements" for existing telephone services, the new proposal would remove this limit.
This person worked as an airport security screener years before 9/11, before the TSA, so hopefully things are different now. It's a pretty fascinating read, though.
Two things pop out at me. One, as I wrote, it's a mind-numbingly boring task. And two, the screeners were trained not to find weapons, but to find the particular example weapons that the FAA would test them on.
"How do you know it's a gun?" he asked me.
Not exactly the result we're looking for, but one that makes sense given the economic incentives that were at work.
I sure hope things are different today.
In Beyond Fear, I wrote about profiling (reprinted here). I talked a lot about how smart behavioral-based profiling is much more effective than dumb characteristic-based profiling, and how well-trained people are much better than computers.
The story I used was about how U.S. customs agent Diana Dean caught Ahmed Ressam in 1999. Here's another story:
An England football shirt gave away a Senegalese man attempting to enter Cyprus on a forged French passport, police on the Mediterranean island said on Monday.
That's just not the kind of thing you're going to get a computer to pick up on, at least not until artificial intelligence actually produces a working brain.
What could you do if you controlled a network of thousands of computers -- or, at least, could use the spare processor cycles on those machines? You could perform massively parallel computations: model nuclear explosions or global weather patterns, factor large numbers or find Mersenne primes, or break cryptographic codes.
All of these are legitimate applications. And you can visit distributed.net and download software that allows you to donate your spare computer cycles to some of these projects. (You can help search for Optimal Golomb Rulers -- even if you have no idea what they are.) You've got a lot of cycles to spare. There's no reason that your computer can't help search for extraterrestrial life as it, for example, sits idly waiting for you to read this essay.
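These projects work because the computation is embarrassingly parallel: each volunteer machine tests its own candidates independently. As a toy illustration (my sketch, not code from any of these projects), here is the Lucas-Lehmer test at the heart of distributed Mersenne-prime searches -- each exponent can be checked on a different machine with no coordination at all:

```python
def lucas_lehmer(p):
    """Return True if the Mersenne number 2**p - 1 is prime.

    Valid for odd prime exponents p. Each exponent is an independent
    unit of work, which is what makes the search easy to distribute.
    """
    m = (1 << p) - 1          # the Mersenne number under test
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m   # the Lucas-Lehmer recurrence
    return s == 0

# 2**13 - 1 = 8191 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(13), lucas_lehmer(11))  # True False
```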
The reason these things work is that they are consensual; none of these projects download software onto your computer without your knowledge. None of these projects control your computer without your consent. But there are lots of software programs that do just that.
The term used for a computer remotely controlled by someone else is a "bot". A group of computers -- thousands or even millions -- controlled by someone else is a bot network. Estimates are that millions of computers on the internet today are part of bot networks, and the largest bot networks have over 1.5 million machines.
Initially, bot networks were used for just one thing: denial-of-service attacks. Hackers would use them against each other, fighting hacker feuds in cyberspace by attacking each other's computers. The first widely publicized use of a distributed intruder tool -- technically not a botnet, but practically the same thing -- was in February 2000, when Canadian hacker Mafiaboy directed an army of compromised computers to flood CNN.com, Amazon.com, eBay, Dell Computer and other sites with debilitating volumes of traffic. Every newspaper carried that story.
These days, bot networks are more likely to be controlled by criminals than by hackers. The important difference is the motive: profit. Networks are being used to send phishing e-mails and other spam. They're being used for click fraud. They're being used as an extortion tool: Pay up or we'll DDoS you!
Mostly, they're being used to collect personal data for fraud -- commonly called "identity theft." Modern bot software doesn't just attack other computers; it attacks its hosts as well. The malware is packed with keystroke loggers to steal passwords and account numbers. In fact, many bots automatically hunt for financial information, and some botnets have been built solely for this purpose -- to gather credit card numbers, online banking passwords, PayPal accounts, and so on, from compromised hosts.
Swindlers are also using bot networks for click fraud. Google's anti-fraud systems are sophisticated enough to detect thousands of clicks by one computer; it's much harder to determine if a single click by each of thousands of computers is fraud, or just popularity.
And, of course, most bots constantly search for other computers that can be infected and added to the bot network. (A 1.5 million-node bot network was discovered in the Netherlands last year. The command-and-control system was dismantled, but some of the bots are still active, infecting other computers and adding them to this defunct network.)
Modern bot networks are remotely upgradeable, so the operators can add new functionality to the bots at any time, or switch from one bot program to another. Bot authors regularly upgrade their botnets during development, or to evade detection by anti-virus and malware cleanup tools.
One application of bot networks that we haven't seen all that much of is to launch a fast-spreading worm. (Some believe the Witty worm spread this way.) Much has been written about "flash worms" that can saturate the internet in 15 minutes or less. The situation gets even worse if 10,000 bots synchronize their watches and release the worm at exactly the same time. Why haven't we seen more of this? My guess is because there isn't any profit in it.
There's no real solution to the botnet problem, because there's no single problem. There are many different bot networks, controlled in many different ways, consisting of computers infected through many different vulnerabilities. Really, a bot network is nothing more than an attacker taking advantage of 1) one or more software vulnerabilities, and 2) the economies of scale that computer networks bring. It's the same thing as distributed.net or SETI@home, only the attacker doesn't ask your permission first.
As long as networked computers have vulnerabilities -- and that'll be for the foreseeable future -- there'll be bot networks. It's a natural side-effect of a computer network with bugs.
This essay originally appeared on Wired.com.
EDITED TO ADD (7/27): DDoS extortion is a bigger problem than you might think. Right now it's primarily targeted against fringe industries -- online gaming, online gambling, online porn -- located offshore, but we're seeing more and more of it against mainstream companies in the U.S. and Europe.
EDITED TO ADD (7/27): Seems that Witty was definitely not seeded from a bot network.
CIA agents exposed due to their use of frequent-flier miles and other mistakes:
The man and woman were pretending to be American business executives on international assignments, so they did what globe-trotting executives do. While traveling abroad they used their frequent-flier cards as often as possible to gain credits toward free flights.
I'm not sure how collecting frequent-flier miles is a problem, though. Assuming they're traveling under the cover of being business executives, it makes sense for them to act just like other business executives.
It's not like there's no other way to reconstruct their travel.
Problems of reporting from a war zone:
Among broadcasters there is a concern about how our small convoys of cars full of equipment and personnel look from the air. There is a risk Israelis (eyes in the sky: drones, satellites) could mistake them for a Hezbollah convoy headed closer to the border and within striking distance of Israel. So simply being on the road with several vehicles is a risk.
One news source is reporting that sky marshals are reporting on innocent people in order to meet a quota:
The air marshals, whose identities are being concealed, told 7NEWS that they're required to submit at least one report a month. If they don't, there's no raise, no bonus, no awards and no special assignments.
This is so insane, it can't possibly be true. But I have been stunned before by the stupidity of the Department of Homeland Security.
EDITED TO ADD (7/27): This is what Brock Meeks said on David Farber's IP mailing list:
Well, it so happens that I was the one that BROKE this story... way back in 2004. There were at least two offices, Miami and Las Vegas that had this quota system for writing up and filing "SDRs."
This seems like a really clever use of RFID. The idea is to embed chips in surgical equipment, and then wave a detector over surgical patients to make sure the doctors didn't accidentally leave something inside the body.
Nepenthes is a malware-collection tool. It emulates known vulnerabilities, and then downloads malware trying to exploit these vulnerabilities. Seems like a good idea for a research project.
According to The Washington Post:
An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows....
EDITED TO ADD (7/27): It wasn't MySpace that was hacked, but a server belonging to the third-party advertising service that MySpace uses. The ad probably appeared on other websites as well, but MySpace seems to have been the biggest one.
EDITED TO ADD (8/5): Ed Felten comments.
Great article on the Humboldt squid from Outside Magazine:
I worry about these things because Cassell, 44, a world-class diver, underwater cameraman, and Special Operations vet from Escondido, California, is out to convince me -- live and up close -- that the undersea world's most intriguing predator is not one of the usual suspects (like the great white shark or killer whale) but a powerful, outsize squid that features eight snakelike arms lined with suckers full of nasty little teeth, a razor-sharp beak that can rapidly rip flesh into bite-size chunks, and an unrelenting hunger. It's called the Humboldt, or jumbo, squid, and it's not the sort of calamari you're used to forking off your dinner plate. This squid grows to seven feet or more and perhaps a couple hundred pounds. It has a rep as the outlaw biker of the marine world: intelligent and opportunistic, a stone-cold cannibal willing to attack divers with a seemingly deliberate hostility.
This is a good idea.
The built-in anti-phishing capability warns users when they come across Web forgeries, and offers to return the user to his or her home page. Meanwhile, microsummaries are regularly updated summaries of Web pages, small enough to fit in the space available to a bookmark label, but large enough to provide more useful information about pages than static page titles, and are regularly updated as new information becomes available.
This seems like a good idea, assuming it is reliable.
The introduction of voice verification was preceded by an extensive period of testing among more than 1,450 people and 25,000 test calls. These were made using both fixed-line and mobile telephones, at all times of day and also by relatives (including six twins). Special attention was devoted to people who were suffering from colds during the test period. ABN AMRO is the first major bank in the world to introduce this technology in this way.
I've long been hostile to certifications -- I've met too many bad security professionals with certifications and know many excellent security professionals without certifications. But I've come to believe that, while certifications aren't perfect, they're a decent way for a security professional to learn some of the things he's going to need to know, and a decent way for a potential employer to assess whether a job candidate has the security expertise he's going to need.
What's changed? Both the job requirements and the certification programs.
Anyone can invent a security system that he himself cannot break. I've said this so often that Cory Doctorow has named it "Schneier's Law": When someone hands you a security system and says, "I believe this is secure," the first thing you have to ask is, "Who the hell are you?" Show me what you've broken to demonstrate that your assertion of the system's security means something.
That kind of expertise can't be found in a certification. It's a combination of an innate feel for security, extensive knowledge of the academic security literature, extensive experience in existing security systems, and practice. When I've hired people to design and evaluate security systems, I've paid no attention to certifications. They are meaningless; I need a different set of skills and abilities.
But most organizations don't need to hire that kind of person. Network security has become standardized; organizations need a practitioner, not a researcher. This is good because there is so much demand for these practitioners that there aren't enough researchers to go around. Certification programs are good at churning out practitioners.
And over the years, certification programs have gotten better. They really do teach knowledge that security practitioners need. I might not want a graduate designing a security protocol or evaluating a cryptosystem, but certs are fine for any of the handful of network security jobs a large organization needs.
At my company, we encourage our security analysts to take certification courses. We find that it's the most cost-effective way to give them the skills they need to do ever-more-complex jobs.
Of course, none of this is perfect. I still meet bad security practitioners with certifications, and I still know excellent security professionals without any.
In the end, certifications are like profiling. They work, but they're sloppy. Just because someone has a particular certification doesn't mean that he has the security expertise you're looking for (in other words, there are false positives). And just because someone doesn't have a security certification doesn't mean that he doesn't have the required security expertise (false negatives). But we use them for the same reason we profile: We don't have the time, patience, or ability to test for what we're looking for explicitly.
Profiling based on security certifications is the easiest way for an organization to make a good hiring decision, and the easiest way for an organization to train its existing employees. And honestly, that's usually good enough.
This essay originally appeared as a point-counterpoint with Marcus Ranum in the July 2006 issue of Information Security Magazine. (You have to fill out an annoying survey to read Marcus's counterpoint, but 1) you can lie, and 2) it's worth it.)
EDITED TO ADD (7/21): A Guide to Information Security Certifications.
EDITED TO ADD (9/11): Here's Marcus's column.
They have a cryptanalysis contest with a $5,000 prize, but a $100 entry fee.
Sounds like a scam to me.
(My comments on cracking contests can be seen here.)
At least, according to CATO.
It's a seriously dumb list:
A federal inspector general has analyzed the nation's database of top terrorist targets. There are more than 77,000 of them -- up from 160 a few years ago, before the entire exercise morphed into a congressional porkfest.
What's going on? Pork barrel funding, that's what's going on.
We're never going to get security right if we continue to make it a parody of itself.
Symantec is reporting a zero-day PowerPoint exploit. Right now the threat assessment is low, but that could change overnight if someone writes an automatic worm that takes advantage of this vulnerability.
Note that the vulnerability appeared in the wild days after "Patch Tuesday," presumably to maximize the window of exposure before Microsoft issues a patch.
From Wired News:
Among the falsified evidence produced by the conspirators before the fraud unraveled were confidential bank records originating with the Clearstream bank in Luxembourg, which were expertly modified to make it appear that some French politicians had secretly established offshore bank accounts to receive bribes. The falsified records were then sent to investigators, with enough authentic account information left in to make them appear credible.
It's got squid:
Danna: As you can imagine, I was pleased with the strong cephalopod theme.
Good article on how complexity greatly limits the effectiveness of terror investigations. The stories of wasted resources are all from the UK, but the morals are universal.
The Committee's report accepts that the increasing number of investigations, together with their increasing complexity, will make longer detention inevitable in the future. The core calculation is essentially the one put forward by the police and accepted by the Government - technology has been an enabler for international terrorism, with email, the Internet and mobile telephony producing wide, diffuse, international networks. The data on hard drives and mobile phones needs to be examined, contacts need to be investigated and their data examined, and in the case of an incident, vast amounts of CCTV records need to be gone through. As more and more of this needs to be done, the time taken to do it will obviously climb, and as it's 'necessary' to detain the new breed of terrorist early in the investigation before he can strike, more time will be needed between arrest and charge in order to build a case.
This is a collection of "spy equipment" we have found for sale around the internet. Everything here is completely real, is sold at online stores, and almost any item listed here costs less than $500, and often can be bought for less than $200.
What's interesting to me is not so much what's available commercially today, but what we can extrapolate is available to real spies.
Authorities had also severely limited the cellular network for fear it could be used to trigger more attacks.
Some of the injured were seen frantically dialing their cell phones. The mobile phone network collapsed, adding to the sense of panic.
(Note: The story was changed online, and the second quote was deleted.)
Cell phones are useful to terrorists, but they're more useful to the rest of us.
Google's $6 billion-a-year advertising business is at risk because it can't be sure that anyone is looking at its ads. The problem is called click fraud, and it comes in two basic flavors.
With network click fraud, you host Google AdSense advertisements on your own website. Google pays you every time someone clicks on its ad on your site. It's fraud if you sit at the computer and repeatedly click on the ad or -- better yet -- write a computer program that repeatedly clicks on the ad. That kind of fraud is easy for Google to spot, so the clever network click fraudsters simulate different IP addresses, or install Trojan horses on other people's computers to generate the fake clicks.
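The "easy to spot" part of that can be sketched as a simple counting heuristic (illustrative only; Google's actual fraud-detection systems are obviously far more sophisticated than this, and the threshold here is invented):

```python
from collections import Counter

def flag_suspicious_ips(clicks, threshold=10):
    """Naive per-IP click-fraud heuristic: flag any IP address that
    clicked the same ad more than `threshold` times."""
    counts = Counter(clicks)
    return {ip for ip, n in counts.items() if n > threshold}

# One machine clicking 50 times is trivially flagged...
print(flag_suspicious_ips(["1.2.3.4"] * 50))           # {'1.2.3.4'}
# ...but 50 bots clicking once each look like ordinary popularity,
# which is exactly why fraudsters spread clicks across many machines.
print(flag_suspicious_ips([f"10.0.0.{i}" for i in range(50)]))  # set()
```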
The other kind of click fraud is competitive. You notice your business competitor has bought an ad on Google, paying Google for each click. So you use the above techniques to repeatedly click on his ads, forcing him to spend money -- sometimes a lot of money -- on nothing. (Here's a company that will commit click fraud for you.)
Click fraud has become a classic security arms race. Google improves its fraud-detection tools, so the fraudsters get increasingly clever ... and the cycle continues. Meanwhile, Google is facing multiple lawsuits from those who claim the company isn't doing enough. My guess is that everyone is right: It's in Google's interest both to solve and to downplay the importance of the problem.
But the overarching problem is both hard to solve and important: How do you tell if there's an actual person sitting in front of a computer screen? How do you tell that the person is paying attention, hasn't automated his responses, and isn't being assisted by friends? Authentication systems are big business, whether based on something you know (passwords), something you have (tokens) or something you are (biometrics). But none of those systems can secure you against someone who walks away and lets another person sit down at the keyboard, or a computer that's infected with a Trojan.
This problem manifests itself in other areas as well.
For years, online computer game companies have been battling players who use computer programs to assist their play: programs that allow them to shoot perfectly or see information they normally couldn't see.
Playing is less fun if everyone else is computer-assisted, but unless there's a cash prize on the line, the stakes are small. Not so with online poker sites, where computer-assisted players -- or even computers playing without a real person at all -- have the potential to drive all the human players away from the game.
Look around the internet, and you see this problem pop up again and again. The whole point of CAPTCHAs is to ensure that it's a real person visiting a website, not just a bot on a computer. Standard testing doesn't work online, because the tester can't be sure that the test taker doesn't have his book open, or a friend standing over his shoulder helping him. The solution in both cases is a proctor, of course, but that's not always practical and obviates the benefits of internet testing.
This problem has even come up in court cases. In one instance, the prosecution demonstrated that the defendant's computer committed some hacking offense, but the defense argued that it wasn't the defendant who did it -- that someone else was controlling his computer. And in another case, a defendant charged with a child porn offense argued that, while it was true that illegal material was on his computer, his computer was in a common room of his house and he hosted a lot of parties -- and it wasn't him who'd downloaded the porn.
Years ago, talking about security, I complained about the link between computer and chair. The easy part is securing digital information: on the desktop computer, in transit from computer to computer or on massive servers. The hard part is securing information from the computer to the person. Likewise, authenticating a computer is much easier than authenticating a person sitting in front of the computer. And verifying the integrity of data is much easier than verifying the integrity of the person looking at it -- in both senses of that word.
And it's a problem that will get worse as computers get better at imitating people.
Google is testing a new advertising model to deal with click fraud: cost-per-action ads. Advertisers don't pay unless the customer performs a certain action: buys a product, fills out a survey, whatever. It's a hard model to make work -- Google would become more of a partner in the final sale instead of an indifferent displayer of advertising -- but it's the right security response to click fraud: Change the rules of the game so that click fraud doesn't matter.
That's how to solve a security problem.
This essay appeared on Wired.com.
EDITED TO ADD (7/13): Click Monkeys is a hoax site.
EDITED TO ADD (7/25): An evaluation of Google's anti-click-fraud efforts, as part of the Lane Gifts case. I'm not sure if this expert report was done for Google, for Lane Gifts, or for the judge.
When methamphetamine proliferated more recently, the police and prosecutors at first did not associate it with a rise in other crimes. There were break-ins at mailboxes and people stealing documents from garbage, Mr. Morales said, but those were handled by different parts of the Police Department.
Supposedly meth users are ideally suited to be computer hackers:
For example, crack cocaine or heroin dealers usually set up in well-defined urban strips run by armed gangs, which stimulates gun traffic and crimes that are suited to densely populated neighborhoods, including mugging, prostitution, carjacking and robbery. Because cocaine creates a rapid craving for more, addicts commit crimes that pay off instantly, even at high risk.
And there's the illegal alien tie-in:
"Look at the states that have the highest rates of identity theft -- Arizona, Nevada, California, Texas and Colorado," Mr. Morales said. "The two things they all have in common are illegal immigration and meth."
I have no idea if any of this is actually true. But I do know that if the drug-user/identity-thief connection story has legs, Congress is likely to start paying much closer attention.
Here's a report of phishers defeating two-factor authentication using a man-in-the-middle attack.
The site asks for your user name and password, as well as the token-generated key. If you visit the site and enter bogus information to test whether the site is legit -- a tactic used by some security-savvy people -- you might be fooled. That's because this site acts as the "man in the middle" -- it submits data provided by the user to the actual Citibusiness login site. If that data generates an error, so does the phishing site, thus making it look more real.
I predicted this last year.
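The mechanics are easy to sketch. In this toy model (all names, credentials, and the token value are hypothetical), the phishing site is nothing but a real-time proxy, so the one-time token is still valid when the attacker relays it:

```python
class RealBank:
    """Stand-in for the legitimate login site."""
    def __init__(self):
        self.current_token = "492817"   # what the victim's token displays right now
    def login(self, user, password, token):
        return token == self.current_token

class PhishingProxy:
    """The man in the middle: forwards whatever the victim types,
    within the token's short validity window."""
    def __init__(self, real_site):
        self.real_site = real_site
    def login(self, user, password, token):
        # Relay immediately and mirror the real site's response --
        # including its errors, which is what fools the bogus-data test.
        return self.real_site.login(user, password, token)

bank = RealBank()
proxy = PhishingProxy(bank)
# The victim enters a perfectly valid one-time code on the fake site...
print(proxy.login("alice", "hunter2", "492817"))  # True: the attacker is in
```

The lesson is that a one-time password authenticates the session's *start*, not the channel it travels over; it does nothing against an attacker relaying it in real time.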
Members of Cornell's Global Positioning System (GPS) Laboratory have cracked the so-called pseudo random number (PRN) codes of Europe's first global navigation satellite, despite efforts to keep the codes secret. That means free access for consumers who use navigation devices -- including handheld receivers and systems installed in vehicles -- that need PRNs to listen to satellites.
Security by obscurity: it doesn't work, and it's a royal pain to recover when it fails.
One response to software liability:
Now suppose that there was a magical wand for taking snapshots of computer states just before crashes. Or that the legal system would permit claims on grounds of only the second part of the proof. Then there would be a strong positive incentive to write software that fails unreproducibly: "If our software's errors cannot be demonstrated reliably in court, we will never lose money in product liability cases."
Follow the link for examples.
According to The Guardian:
Five senior Vodafone technicians have been accused of being the operational masterminds of an elaborate eavesdropping scandal enveloping the mobile phone giant's Greek subsidiary.
Still no word on who the technicians were working for.
Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes' Theorem, to demonstrate that the NSA's surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that "NSA's surveillance system is useless for finding terrorists."
What is the probability that people are terrorists given that NSA's mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists given that the NSA surveillance system says they are, that had better be very near to one (p=1.00) and very far from zero (p=0.00).
As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it's just useless for this particular application.
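Rudmin's point is ordinary base-rate arithmetic. A quick sketch (the detector accuracy and population figures below are illustrative numbers of mine, chosen generously, not figures from his paper):

```python
def posterior(base_rate, true_positive, false_positive):
    """P(terrorist | flagged), via Bayes' Theorem."""
    flagged = base_rate * true_positive + (1 - base_rate) * false_positive
    return base_rate * true_positive / flagged

# Suppose 1,000 actual terrorists among 300 million people, and a
# detector that is 99% accurate in both directions -- far better
# than any real system.
p = posterior(base_rate=1000 / 300_000_000,
              true_positive=0.99,
              false_positive=0.01)
print(f"P(terrorist | flagged) = {p:.6f}")  # ~0.000330 -- nowhere near 1.0
```

Even with an implausibly good detector, someone the system flags is overwhelmingly likely to be innocent, because the innocent population is so enormous relative to the target population. With stolen credit cards the base rate is orders of magnitude higher, which is why the same technique works there.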
The tiny squid Gonatus onyx carries its eggs around, protecting them from predators.
Here's a chronology of data breaches since the ChoicePoint theft in February 2005.
Total identities stolen: 88,794,619, although many names are almost certainly on that list multiple times.
At least, that's what it sounds like to me:
In a communication system having a plurality of networks, a method of achieving network separation between first and second networks is described. First and second networks with respective first and second degrees of trust are defined, the first degree of trust being higher than the second degree of trust. Communication between the first and second networks is enabled via a network interface system having a protocol stack, the protocol stack implemented by the network interface system in an application layer. Data communication from the second network to the first network is enabled while data communication from the first network to the second network is minimized.
In this attack, you can seize control of someone's computer using his WiFi interface, even if he's not connected to a network.
The two researchers used an open-source 802.11 hacking tool called LORCON (Loss of Radio Connectivity) to throw an extremely large number of wireless packets at different wireless cards. Hackers use this technique, called fuzzing, to see if they can cause programs to fail, or perhaps even run unauthorized software when they are bombarded with unexpected data.
No details yet. The researchers are presenting their results at BlackHat on August 2.
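The core idea of fuzzing is simple: take a well-formed input and spray the target with thousands of slightly corrupted variants until its parser misbehaves. Here is a minimal mutation-fuzzer sketch; the frame bytes are a made-up placeholder, and the transmit step is left as a comment since actually injecting frames requires a tool like LORCON and a card in the right mode:

```python
import random

def mutate(packet, n_flips=4, seed=None):
    """Return a copy of `packet` with a few bytes overwritten at random."""
    rng = random.Random(seed)
    buf = bytearray(packet)
    for _ in range(n_flips):
        i = rng.randrange(len(buf))
        buf[i] = rng.randrange(256)
    return bytes(buf)

# A placeholder for a well-formed frame the driver expects.
frame = bytes.fromhex("80000000ffffffffffff0011223344550011223344556677")

for trial in range(10_000):
    fuzzed = mutate(frame, seed=trial)
    # In a real harness, each fuzzed frame would be transmitted at the
    # target's wireless card here (e.g., via LORCON) while watching
    # for a driver crash or other unexpected behavior.
```

Seeding each mutation makes a crashing input reproducible, which matters once a driver fault needs to be demonstrated and debugged.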
It is my duty, in this Annual Report, to present a solemn and urgent warning to every Member of Parliament and Senator, and indeed to every Canadian:
Why doesn't the United States have a Privacy Commissioner?
A popular response is: "If you have nothing to hide, you have nothing to fear."
EDITED TO ADD (7/6): That's the 2001-2002 report. This is the latest report.
No, it's not what you think. This phone has a built-in Breathalyzer:
Here's how it works: Users blow into a small spot on the phone, and if they've had too much to drink the phone issues a warning and shows a weaving car hitting traffic cones.
You can also configure the phone not to let you dial certain phone numbers if you're drunk. Think ex-lovers.
Now that's a security feature I can get behind.
For a long time, the League of Women Voters (LWV) had been on the wrong side of the electronic voting machine issue. They were in favor of electronic machines, and didn't see the need for voter-verifiable paper trails. (They used to have a horrid and misleading Q&A about the issue on their website, but it's gone now. Barbara Simons published a rebuttal, which includes their original Q&A.)
The politics of the LWV are byzantine, but basically there are local leagues under state leagues, which in turn are under the national (LWVUS) league. There is a national convention once every other year, and all sorts of resolutions are passed by the membership. But the national office can do a lot to undercut the membership and the state leagues. The politics of voting machines is an example of this.
At the 2004 convention, the LWV membership passed a resolution on electronic voting called "SARA," which stood for "Secure, Accurate, Recountable, and Accessible." Those in favor of the resolution thought that "recountable" meant auditable, which meant voter-verifiable paper trails. But the national LWV office decided to spin SARA to say that recountable does not imply paper. While they could no longer oppose paper outright, they refused to say that paper was desirable. For example, they held Georgia's system up as a model, and Georgia uses paperless Diebold DRE machines. It makes you wonder if the LWVUS leadership is in someone's pocket.
So at the 2006 convention, the LWV membership passed another resolution. This one was much more clearly worded: designed to make it impossible for the national office to pretend that the LWV was not in favor of voter-verified paper trails.
Unfortunately, the League of Women Voters has not issued a press release about this resolution. (There is a press release by VerifiedVoting.org about it.) I'm sure that the national office simply doesn't want to acknowledge the membership's position on the issue, and wishes the issue would just go away quietly. It's a pity; the resolution is a great one and worth publicizing.
Here's the text of the resolution:
Resolution Related to Program Requiring a Voter-Verifiable Paper Ballot or Paper Record with Electronic Voting Machines
By the way, the 2006 LWV membership also voted on a resolution in favor of net neutrality (the Connecticut league issued a press release, because they spearheaded the issue), and one against the death penalty. The national LWV office hasn't issued a press release about those two issues, either.
From the Executive Summary:
In 2005, the Brennan Center convened a Task Force of internationally renowned government, academic, and private-sector scientists, voting machine experts and security professionals to conduct the nation's first systematic analysis of security vulnerabilities in the three most commonly purchased electronic voting systems. The Task Force spent more than a year conducting its analysis and drafting this report. During this time, the methodology, analysis, and text were extensively peer reviewed by the National Institute of Standards and Technology ("NIST").
Voting machine vendors have dismissed many of the concerns, saying they are theoretical and do not reflect the real-life experience of running elections, such as how machines are kept in a secure environment.
I wish The Washington Post had found someone to point out that there have been many, many irregularities with electronic voting machines over the years, and the lack of convincing evidence of fraud is exactly the problem with their no-audit-possible systems. Or that the "it's all theoretical" argument is the same one that software vendors used to use to discredit security vulnerabilities before the full-disclosure movement forced them to admit that their software had problems.
O2 is a UK cell phone network. The company gives you the option of setting up a PIN on your phone. The idea is that if someone steals your phone, they can't make calls. If they type the PIN incorrectly three times, the phone is blocked. To deal with the problems of phone owners mistyping their PIN -- or forgetting it -- they can contact O2 and get a Personal Unlock Code (PUK). Presumably, the operator goes through some authentication steps to ensure that the person calling is actually the legitimate owner of the phone.
So far, so good.
But O2 has decided to automate the PUK process. Now anyone on the Internet can visit this website, type in a valid mobile telephone number, and get a valid PUK to reset the PIN -- without any authentication whatsoever.
EDITED TO ADD (7/4): A representative from O2 sent me the following:
"Yes, it does seem there is a security risk by O2 supplying such a service, but in fact we believe this risk is very small. The risk is when a customer’s phone is lost or stolen. There are two scenarios in that event:
This seems like a bad idea to me:
Microsoft is adding a brand-new feature to Windows Vista to allow businesses to load ActiveX controls on systems running without admin privileges.