Schneier on Security
A blog covering security and security technology.
May 2009 Archives
Step by step instructions on how to make squid pasta.
I am optimistic about President Obama's new cybersecurity policy and the appointment of a new "cybersecurity coordinator," though much depends on the details. What we do know is that the threats are real, from identity theft to Chinese hacking to cyberwar.
His principles were all welcome -- securing government networks, coordinating responses, working to secure the infrastructure in private hands (the power grid, the communications networks, and so on), although I think he's overly optimistic that legislation won't be required. I was especially heartened to hear his commitment to funding research. Much of the technology we currently use to secure cyberspace was developed from university research, and the more of it we finance today the more secure we'll be in a decade.
Education is also vital, although sometimes I think my parents need more cybersecurity education than my grandchildren do. I also appreciate the president's commitment to transparency and privacy, both of which are vital for security.
But the details matter. Centralizing security responsibilities has the downside of making security more brittle by instituting a single approach and a uniformity of thinking. Unless the new coordinator distributes responsibility, cybersecurity won't improve.
As the administration moves forward on the plan, two principles should apply. One, security decisions need to be made as close to the problem as possible. Protecting networks should be done by people who understand those networks, and threats need to be assessed by people close to the threats. But distributed responsibility has more risk, so oversight is vital.
Two, security coordination needs to happen at the highest level possible, whether that's evaluating information about different threats, responding to an Internet worm or establishing guidelines for protecting personal information. The whole picture is larger than any single agency.
The history of White House czars is not a glorious one, as anyone who has followed the rise and fall of the drug czars can attest. There is a lot of hype, a White House speech, and then things go back to normal. Power, the ability to cause change, depends primarily on who controls the money and who is closest to the president's ear.
Gus Hosein wrote a good essay on the need for privacy:
Of course raising barriers around computer systems is certainly a good start. But when these systems are breached, our personal information is left vulnerable. Yet governments and companies are collecting more and more of our information.
I wrote something similar in 2002 about the creation of the Department of Homeland Security:
The human body defends itself through overlapping security systems. It has a complex immune system specifically to fight disease, but disease fighting is also distributed throughout every organ and every cell. The body has all sorts of security systems, ranging from your skin to keep harmful things out of your body, to your liver filtering harmful things from your bloodstream, to the defenses in your digestive system. These systems all do their own thing in their own way. They overlap each other, and to a certain extent one can compensate when another fails. It might seem redundant and inefficient, but it's more robust, reliable, and secure. You're alive and reading this because of it.
EDITED TO ADD (6/2): Gene Spafford's opinion.
EDITED TO ADD (6/4): Good commentary from Bob Blakley.
In other biometric news, four states have banned smiling in driver's license photographs.
The serious poses are urged by DMVs that have installed high-tech software that compares a new license photo with others that have already been shot. When a new photo seems to match an existing one, the software sends alarms that someone may be trying to assume another driver's identity.
A Singapore cancer patient was held for four hours by immigration officials in the United States when they could not detect his fingerprints -- which had apparently disappeared because of a drug he was taking.
What do you do if you have too many background checks to do, and not enough time to do them? You fake them, of course:
Eight current and former security clearance investigators say they have been pressured to work faster and take on crushing workloads in recent years, as the government tried to eliminate a backlog that once topped 531,000 cases.
It's all a matter of incentives. The investigators were rewarded for completing investigations, not for doing them well.
Hiding Information in Retransmissions
I don't think these sorts of things have any large-scale applications, but they are clever.
The Dice-O-Matic is 7 feet tall, 18 inches wide and 18 inches deep. It has an aluminum frame covered with Plexiglas panels. A 6x4 inch square Plexiglas tube runs vertically up the middle almost the entire height. Inside this tube a bucket elevator carries dice from a hopper at the bottom, past a camera, and tosses them onto a ramp at the top. The ramp spirals down between the tube and the outer walls. The camera and synchronizing disk are near the top, the computer, relay board, elevator motor and power supplies are at the bottom.
Click on the link and watch the short video.
As someone who has designed random number generators professionally, I find this to be an overly complex hardware solution to a relatively straightforward software problem. But the sheer beauty of the machine cannot be denied.
What I am curious about is what kind of statistical anomalies there are in the dice themselves. At 1,330,000 rolls a day, we can certainly learn something about the randomness of commercial dice.
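With that many rolls, the question is easy to test statistically. Here's a minimal sketch of a chi-square goodness-of-fit test against a fair die; the rolls below are simulated, but the same computation run on the Dice-O-Matic's real output would flag any consistent bias in commercial dice:

```python
import random

def chi_square_uniform(counts):
    """Chi-square statistic for observed face counts against a fair die."""
    n = sum(counts)
    expected = n / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Simulate a (fair) batch of rolls; the real machine does ~1.33M a day.
rolls = [random.randrange(6) for _ in range(100_000)]
counts = [rolls.count(face) for face in range(6)]

stat = chi_square_uniform(counts)
# For 5 degrees of freedom, the 99th-percentile critical value is about 15.09.
# A fair die stays below it almost all the time; a biased die, over millions
# of rolls, will blow past it.
print(stat)
```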
Seeking to quell fears of terrorists somehow breaking out of America's top-security prisons and wreaking havoc on the defenseless heartland, President Barack Obama moved quickly to announce an Anti-Terrorist Strike Force headed by veteran counterterrorism agent Jack Bauer and mutant superhero Wolverine. Already dubbed a "dream team," their appointment is seen by experts as a crucial step in reducing the mounting incidents of national conservatives and congressional Democrats crapping their pants.
In 2004, I wrote about the prevalence of secret questions as backup passwords. The problem is that the answers to these "secret questions" are often much easier to guess than random passwords. Mother's maiden name isn't very secret. Name of first pet, name of favorite teacher: there are some common names. Favorite color: I could probably guess that in no more than five attempts.
The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.
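The arithmetic of the fallback is simple to sketch. The distribution below is entirely made up, but it illustrates the point: when answers cluster, a handful of guesses covers most accounts:

```python
# Hypothetical answer distribution for "favorite color" -- invented numbers,
# for illustration only. The remaining 17% is spread across rarer answers.
favorite_color = {
    "blue": 0.30,
    "red": 0.20,
    "green": 0.15,
    "purple": 0.10,
    "black": 0.08,
}

def guess_success(distribution, attempts):
    """Probability that the top-`attempts` most common answers
    match a randomly chosen user's answer."""
    probs = sorted(distribution.values(), reverse=True)
    return sum(probs[:attempts])

print(guess_success(favorite_color, 5))  # roughly 0.83 with these made-up numbers
```

Compare that with a random eight-character password, where five guesses succeed with probability on the order of one in a trillion.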
Here's some actual research on the issue:
It's no secret: Measuring the security and reliability of authentication via 'secret' questions
Method 2: Offer Squid a Tasty Treat
They have technology:
The FTS Patent has been acclaimed by leading cryptographic authorities around the world as the most innovative and secure protocol ever invented to manage offline and online smart card related transactions. Please see the independent report by Bruce Schneider [sic] in his book entitled Applied Cryptography, 2nd Edition published in the late 1990s.
I have no idea what this is referring to.
EDITED TO ADD (5/20): Someone, probably from the company, said in comments that this is referring to the UEPS protocol, discussed on page 589. I still don't like the hyperbole and the implied endorsement in the quote.
Four points. One: There was little danger of an actual terrorist attack:
Authorities said the four men have long been under investigation and there was little danger they could actually have carried out their plan, NBC News' Pete Williams reported.
Of course, politicians are using this incident to peddle more fear:
"This was a very serious threat that could have cost many, many lives if it had gone through," Representative Peter T. King, Republican from Long Island, said in an interview with WPIX-TV. "It would have been a horrible, damaging tragedy. There's a real threat from homegrown terrorists and also from jailhouse converts."
Two, they were caught by traditional investigation and intelligence. Not airport security. Not warrantless eavesdropping. But old-fashioned investigation and intelligence. This is what works. This is what keeps us safe. Here's an essay I wrote in 2004 that says exactly that.
The only effective way to deal with terrorists is through old-fashioned police and intelligence work -- discovering plans before they're implemented and then going after the plotters themselves.
Three, they were idiots:
The ringleader of the four-man homegrown terror cell accused of plotting to blow up synagogues in the Bronx and military planes in Newburgh admitted to a judge today that he had smoked pot before his bust last night.
Four, an "informant" helped this group a lot:
In April, Mr. Cromitie and the three other men selected the synagogues as their targets, the statement said. The informant soon helped them get the weapons, which were incapable of being fired or detonated, according to the authorities.
The warning I wrote in "Portrait of the Modern Terrorist as an Idiot" is timely again:
Despite the initial press frenzies, the actual details of the cases frequently turn out to be far less damning. Too often it's unclear whether the defendants are actually guilty, or if the police created a crime where none existed before.
Actually, that whole 2007 essay is timely again. Some things never change.
In an article on the recent arrests in New York:
On Wednesday night, they planted one of the mock improvised explosive devices in a trunk of a car outside the temple and two mock bombs in the back seat of a car outside the Jewish center, the authorities said. Shortly thereafter, police officers swooped in and broke the windows on the suspects' black sport utility vehicle and charged them with conspiracy to use weapons of mass destruction within the United States and conspiracy to acquire and use antiaircraft missiles.
I've covered this before. According to the law, almost any weapon is a weapon of mass destruction.
From the complaint:
... knowingly did combine, conspire, confederate and agree together and with each other to use a weapon of mass destruction, to wit, a surface-to-air guided missile system and an improvised explosive device ("IED") containing over 30 pounds of Composition 4 ("C-4") military grade plastic explosive material against persons and property within the United States.
Philippe Golle and Kurt Partridge of PARC have a cute paper on the anonymity of geo-location data. They analyze data from the U.S. Census and show that for the average person, knowing their approximate home and work locations -- to a block level -- identifies them uniquely.
"On the Anonymity of Home/Work Location Pairs," by Philippe Golle and Kurt Partridge:
This is all very troubling, given the number of location-based services springing up and the number of databases that are collecting location data.
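The paper's core measurement is easy to illustrate: bucket people by their (home block, work block) pair and count the size of each anonymity set. The data below is invented, but the computation is the same one the authors ran against Census data:

```python
from collections import Counter

# Made-up (home block, work block) pairs for a toy population.
people = [
    ("block-12", "block-77"),
    ("block-12", "block-77"),
    ("block-12", "block-34"),
    ("block-98", "block-77"),
    ("block-55", "block-41"),
]

# Size of each anonymity set: how many people share a given pair.
anonymity_set = Counter(people)

# Pairs shared by exactly one person uniquely identify that person.
unique = [pair for pair, n in anonymity_set.items() if n == 1]
print(len(unique))  # 3 of the 5 people are uniquely identified
```

The paper's finding is that with block-level precision, most real pairs look like the singletons here.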
I'm very happy with this quote in a CNN.com story on "whole-body imaging" at airports:
Bruce Schneier, an internationally recognized security technologist, said whole-body imaging technology "works pretty well," privacy rights aside. But he thinks the financial investment was a mistake. In a post-9/11 world, he said, he knows his position isn't "politically tenable," but he believes money would be better spent on intelligence-gathering and investigations.
I've written about "cover your ass" security in the past, but it's nice to see it in the press.
This seems smart:
Microsoft plans to formally banish the popular programming function that's been responsible for an untold number of security vulnerabilities over the years, not just in Windows but in countless other applications based on the C language. Effective later this year, Microsoft will add memcpy(), CopyMemory(), and RtlCopyMemory() to its list of function calls banned under its secure development lifecycle.
Here's the list of banned function calls. This doesn't help secure legacy code, of course, but you have to start somewhere.
For the April 09 issue of Wired Magazine, I was asked to create a cryptographic puzzle based on the television show Lost. Specifically, I was given a "clue" to encrypt.
Creating something like this is very hard. The puzzle needs to be hard enough so that people don't figure it out immediately, and easy enough so that people eventually figure it out. To make matters even more complicated, people will share their ideas on the Internet. So if the solution requires -- and I'm making this up -- expertise in Mayan history, carburetor design, algebraic topology, and Russian folk dancing, those people are likely to come together on the Internet. The puzzle has to be challenging for the group mind, not just for individual minds.
Do I need to give people a hint?
EDITED TO ADD (5/20): No hints required; there's a solution posted.
This is cool. It writes like a normal pen, but if you run a hair dryer over the written words they disappear. And if you put the paper in the freezer the words reappear. Fantastic.
This is a great movie-plot threat:
Pirates could soon find their way to the waters of the Chesapeake Bay. That's assuming that a liquefied natural gas terminal gets built at Sparrows Point.
Remember—if you don't like something, claim that it will enable, embolden, or entice terrorists. Works every time.
China has developed more secure operating software for its tens of millions of computers and is already installing it on government and military systems, hoping to make Beijing's networks impenetrable to U.S. military and intelligence agencies.
Is this real, or yet more cybersecurity hype pushed by agencies looking for funding and power? My guess is the latter. Anyone know?
ThreatPost interviewed me.
Slashdot thread on the interview.
As the law currently stands, the court said police can mount GPS on cars to track people without violating their constitutional rights -- even if the drivers aren't suspects.
The court wants the legislature to fix it:
However, the District 4 Court of Appeals said it was "more than a little troubled" by that conclusion and asked Wisconsin lawmakers to regulate GPS use to protect against abuse by police and private individuals.
I think the odds of that happening are approximately zero.
Kevin Colwell, a psychologist at Southern Connecticut State University, has advised police departments, Pentagon officials and child protection workers, who need to check the veracity of conflicting accounts from parents and children. He says that people concocting a story prepare a script that is tight and lacking in detail.
This is new research, and there are limitations to the approach, but it's interesting.
Terrorists attacking our food supply is a nightmare scenario that has been given new life during the recent swine flu outbreak. Although it seems easy to do, understanding why it hasn't happened is important. G.R. Dalziel, at the Nanyang Technological University in Singapore, has written a report chronicling every confirmed case of malicious food contamination in the world since 1950: 365 cases in all, plus 126 additional unconfirmed cases. What he found demonstrates the reality of terrorist food attacks.
It turns out 72% of the food poisonings occurred at the end of the food supply chain -- at home -- typically by a friend, relative, neighbour, or co-worker trying to kill or injure a specific person. A characteristic example is Heather Mook of York, who in 2007 tried to kill her husband by putting rat poison in his spaghetti.
Most of these cases resulted in fewer than five casualties -- Mook only injured her husband in this incident -- although 16% resulted in five or more. Of the 19 cases that claimed 10 or more lives, four involved serial killers operating over several years.
Another 23% of cases occurred at the retail or food service level. A 1998 incident in Japan, where someone put arsenic in a curry sold at a summer festival, killing four and hospitalising 63, is a typical example. Only 11% of these incidents resulted in 100 or more casualties, while 44% resulted in none.
There are very few incidents of people contaminating the actual food supply. People deliberately contaminated a water supply seven times, resulting in three deaths. There is only one example of someone deliberately contaminating a crop before harvest -- in Australia in 2006 -- and the crops were recalled before they could be sold. And in the three cases of someone deliberately contaminating food during packaging and distribution, including a 2005 case in the UK where glass and needles were baked into loaves of bread, no one died or was injured.
This isn't the stuff of bioterrorism. The closest example occurred in 1984 in the US, where members of a religious group known as the Rajneeshees contaminated several restaurant salad bars with salmonella enterica typhimurium, sickening 751, hospitalising 45, but killing no one. In fact, no one knew this was malicious until a year later, when one of the perpetrators admitted it.
Almost all of the food contaminations used conventional poisons such as cyanide, drain cleaner, mercury, or weed killer. There were nine incidents of biological agents, including salmonella, ricin, and faecal matter, and eight cases of radiological matter. The 2006 London poisoning of the former KGB agent Alexander Litvinenko with polonium-210 in his tea is an example of the latter.
And that assassination illustrates the real risk of malicious food poisonings. What is discussed in terrorist training manuals, and what the CIA is worried about, is the use of contaminated food in targeted assassinations. The quantities involved for mass poisonings are too great, the nature of the food supply too vast and the details of any plot too complicated and unpredictable to be a real threat. That becomes crystal clear as you read the details of the different incidents: it's hard to kill one person, and very hard to kill dozens. Hundreds, thousands: it's just not going to happen any time soon. The fear of bioterror is much greater, and the panic from any bioterror scare will injure more people, than bioterrorism itself.
Far more dangerous are accidental contaminations due to negligent industry practices, such as the 2006 spinach E coli and, more recently, peanut salmonella contaminations in the US, the 2008 milk contaminations in China, and the BSE-infected beef from earlier this decade. And the systems we have in place to deal with these accidental contaminations also work to mitigate any intentional ones.
In 2004, the then US secretary of health and human services, Tommy Thompson, said on Fox News: "I cannot understand why terrorists have not attacked our food supply. Because it is so easy to do."
Guess what? It's not at all easy to do.
This essay previously appeared in The Guardian.
This is an excellent lesson in the security problems inherent in trusting proprietary software:
After two years of attempting to get the computer based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc.
Draeger, the manufacturer, maintained that the system was perfect, and that revealing the source code would be damaging to its business. They were right about the second part, of course, because it turned out that the code was terrible.
2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings. Nonetheless, the comments say that the values should be averaged, and they are not.
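The flawed computation is worth seeing in code. Recursively averaging each new reading with the previous running value weights the readings unequally -- the most recent reading alone counts for half -- so the result is not the arithmetic mean the code's own comments call for. A minimal sketch of the two computations (illustrative numbers, not actual Alcotest readings):

```python
def flawed_average(readings):
    """The recursive 'average' described in the report: each new reading
    is averaged with the running value, so weights are 1/2, 1/4, 1/8, ..."""
    avg = readings[0]
    for r in readings[1:]:
        avg = (avg + r) / 2
    return avg

def true_average(readings):
    """The arithmetic mean the comments say the code should compute."""
    return sum(readings) / len(readings)

readings = [8, 10, 12, 14]
print(flawed_average(readings))  # 12.25 -- skewed toward the last reading
print(true_average(readings))    # 11.0
```

With readings that drift over the measurement series, the two numbers diverge -- which matters when the number decides a criminal case.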
Basically, the system was designed to return some sort of result regardless.
This is important. As we become more and more dependent on software for evidentiary and other legal applications, we need to be able to carefully examine that software for accuracy, reliability, etc. Every government contract for breath alcohol detectors needs to include the requirement for public source code. "You can't look at our code because we don't want you to" simply isn't good enough.
It's called "sweethearting": when cashiers pass free merchandise to friends. And some stores are using security cameras to detect it:
Mathematical algorithms embedded in the stores' new security system pick out sweethearting on their own. There's no need for a security guard watching banks of video monitors or reviewing hours of grainy footage. When the system thinks it's spotted evidence, it alerts management on a computer screen and offers up the footage.
How good is it? My guess is that it's not very good, but this is an instance where that may be good enough. As long as there aren't a lot of false positives -- as long as a person can quickly review the suspect footage and dismiss it as a false positive -- the cost savings might be worth the expense.
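A back-of-the-envelope calculation -- every number below is invented -- shows why a mediocre detector can still pay off when false positives are cheap to dismiss:

```python
# Hypothetical figures for one store, one day.
alerts_per_day = 50
false_positive_rate = 0.9            # 90% of alerts are bogus
review_seconds_per_alert = 30        # a quick glance at the flagged footage
loss_prevented_per_true_alert = 40.0 # dollars of merchandise per real catch
reviewer_wage_per_hour = 15.0

review_cost = (alerts_per_day * review_seconds_per_alert / 3600
               * reviewer_wage_per_hour)
recovered = alerts_per_day * (1 - false_positive_rate) * loss_prevented_per_true_alert

print(review_cost)  # about $6.25/day spent triaging alerts
print(recovered)    # about $200/day in prevented losses
```

The calculation flips, of course, if reviewing an alert is slow or the false-positive rate is high enough to train staff to ignore the system.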
For this contest, the goal was:
...to find an existing event somewhere in the industrialized world—Third World events are just too easy—and provide a conspiracy theory to explain how the terrorists were really responsible.
I thought it was straightforward enough but, honestly, I wasn't very impressed with the submissions. Nothing surprised me with its cleverness. There were scary entries and there were plausible entries, but hardly any were both at the same time. And I was amazed by how many people didn't bother to read the rules at all, and just submitted movie plot threats.
But, after reading through the entries, I have chosen a winner. It's HJohn, for his kidnap-blackmail-terrorist connection:
Though recent shooting sprees in churches, nursing homes, and at family outings appear unrelated, a terrifying link has been discovered. All perpetrators had small children who were abducted by terrorists, and perpetrators received a video of their children with hooded terrorists warning that their children would be beheaded if they do not engage in the suicidal rampage. The terror threat level has been raised to red as profiling, known associations, and criminal history are now useless in detecting who will be the next terrorist sniper or airline hijacker. Anyone who loves their children may be a potential terrorist.
Fairly plausible, and definitely scary. Congratulations, HJohn. E-mail me and I'll get you your fabulous prizes—as soon as I figure out what they are.
For historical purposes: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner.
One of the scarier realities about malicious software is that these programs leave ultimate control over victim machines in the hands of the attacker, who could simply decide to order all of the infected machines to self-destruct. Most security experts will tell you that while this so-called "nuclear option" is an available feature in some malware, it is hardly ever used. Disabling infected systems is counterproductive for attackers, who generally focus on hoovering as much personal and financial data as they can from the PCs they control.
This is bad. I see it as a sign that the botnet wars are heating up, and botnet designers would rather destroy their networks than have them fall into "enemy" hands.
A bunch of researchers at the University of California Santa Barbara took control of a botnet for ten days, and learned a lot about how botnets work:
The botnet in question is controlled by Torpig (also known as Sinowal), a malware program that aims to gather personal and financial information from Windows users. The researchers gained control of the Torpig botnet by exploiting a weakness in the way the bots try to locate their command-and-control servers -- the bots would generate a list of domains that they planned to contact next, but not all of those domains were registered yet. The researchers then registered the domains that the bots would resolve, and then set up servers where the bots could connect to find their commands. This method lasted for a full ten days before the botnet's controllers updated the system and cut the observation short.
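The weakness they exploited is easy to sketch. Bots derive their rendezvous domains deterministically (here, from the date), so a defender who runs the same algorithm can register a not-yet-claimed domain ahead of time and "sinkhole" the botnet's traffic. This is an illustrative domain-generation algorithm, not Torpig's actual one:

```python
import hashlib
from datetime import date

def generate_domains(seed_date, count=5):
    """Deterministically derive candidate rendezvous domains from a date.
    Both the bots and anyone studying them compute the identical list."""
    domains = []
    for i in range(count):
        material = f"{seed_date.isoformat()}-{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

# Whoever registers an unclaimed domain on tomorrow's list first
# controls the rendezvous point -- which is exactly what the
# researchers did for ten days.
print(generate_domains(date(2009, 5, 4)))
```

The botnet's later fix, as the article describes, was to update the scheme out from under the researchers.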
Here's the paper:
In the modern era, the right of privacy represents a vast array of rights that include clear legal standards, government accountability, judicial oversight, the design of techniques that are minimally intrusive and the respect for the dignity and autonomy of individuals.
The United Kingdom's MI6 agency acknowledged this week that in 2006 it had to scrap a multi-million-dollar undercover drug operation after an agent left a memory stick filled with top-secret data on a transit coach.
This is bad:
On Thursday, April 30, the secure site for the Virginia Prescription Monitoring Program (PMP) was replaced with a $10 million ransom demand:

"I have your shit! In *my* possession, right now, are 8,257,378 patient records and a total of 35,548,087 prescriptions. Also, I made an encrypted backup and deleted the original. Unfortunately for Virginia, their backups seem to have gone missing, too. Uhoh :( For $10 million, I will gladly send along the password."
Hackers last week broke into a Virginia state Web site used by pharmacists to track prescription drug abuse. They deleted records on more than 8 million patients and replaced the site's homepage with a ransom note demanding $10 million for the return of the records, according to a posting on Wikileaks.org, an online clearinghouse for leaked documents.
More. This doesn't seem like a professional extortion/ransom demand, but still....
This is worth reading:
Five years ago I wrote a Language Log post entitled "BS conditional semantics and the Pinocchio effect" about the nonsense spouted by a lie detection company, Nemesysco. I was disturbed by the marketing literature of the company, which suggested a 98% success rate in detecting evil intent of airline passengers, and included crap like this:

The LVA uses a patented and unique technology to detect "Brain activity finger prints" using the voice as a "medium" to the brain and analyzes the complete emotional structure of your subject. Using wide range spectrum analysis and micro-changes in the speech waveform itself (not micro tremors!) we can learn about any anomaly in the brain activity, and furthermore, classify it accordingly. Stress ("fight or flight" paradigm) is only a small part of this emotional structure
Most of the lie detector industry is based on, well, lies.
The Air Force, on the verge of renegotiating its desktop-software contract with Microsoft, met with Ballmer and asked the company to deliver a secure configuration of Windows XP out of the box. That way, Air Force administrators wouldn't have to spend time re-configuring, and the department would have uniform software across the board, making it easier to control and maintain patches.
Now I want Microsoft to offer this configuration to everyone.
EDITED TO ADD (5/6): Microsoft responds:
Thanks for covering this topic, but unfortunately the reporter for the original article got a lot of the major facts, which you relied upon, wrong. For instance, there isn't a special version of Windows for the Air Force. They use the same SKUs as everyone else. We didn't deliver a special settings that only the Air Force can access. The Air Force asked us to help them to create a hardened gpos and images, which the AF could use as the standard image. We agreed to assist, as we do with any company that hires us to assist in setting their own security policy as implemented in Windows.
EDITED TO ADD (5/12): FDCC policy specs.
Fascinating bit of evolutionary biology:
So how did natural selection equip men to solve the adaptive problem of other men impregnating their sexual partners? The answer, according to Gallup, is their penises were sculpted in such a way that the organ would effectively displace the semen of competitors from their partner's vagina, a well-synchronized effect facilitated by the "upsuck" of thrusting during intercourse. Specifically, the coronal ridge offers a special removal service by expunging foreign sperm. According to this analysis, the effect of thrusting would be to draw other men's sperm away from the cervix and back around the glans, thus "scooping out" the semen deposited by a sexual rival.
Evolution is the result of a struggle for survival, so you'd expect security considerations to be important.
If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender's ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization -- domestic and international.
You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service exacerbate this problem even more.
Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you're relying on that company to keep your data private. If you use Google Docs, you're relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google's security, but we don't know what it is.
This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it's on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don't even know exist.
And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation -- or maybe a conversation inside Facebook.
Remember when Facebook recently changed its terms of service to take further control over your data? They can do that whenever they want, you know.
We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, Lexis Nexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.
This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don't need one to read it from the backup tapes at your ISP.
This isn't a technological problem; it's a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don't have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant -- even though it occurred at the phone company switching office and not in the target's home or office -- the Supreme Court must recognize that reading personal e-mail at an ISP is no different.
This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.
This may be the stupidest example of risk assessment I've ever seen. It's a video clip from a recent Daily Show, about the dangers of the Large Hadron Collider. The segment starts off slow, but then there's an exchange with high school science teacher Walter L. Wagner, who insists the device has a 50-50 chance of destroying the world:
"If you have something that can happen, and something that won't necessarily happen, it's going to either happen or it's going to not happen, and so the best guess is 1 in 2."
This is followed by clips of news shows taking the guy seriously.
In related news, almost four-fifths of Americans don't know that a trillion is a million million, and most think it's less than that. Is it any wonder we're having so much trouble with national budget debates?
Not five miles from my house.
Last year, when law professor Joel Reidenberg wanted to show his Fordham University class how readily private information is available on the Internet, he assigned a group project. It was collecting personal information from the Web about himself.
Somehow, I don't think "poor judgment" is going to be much of a defense against those with agendas more malicious than Professor Reidenberg.
It's the season, I guess:
The United States has no clear military policy about how the nation might respond to a cyberattack on its communications, financial or power networks, a panel of scientists and policy advisers warned Wednesday, and the country needs to clarify both its offensive capabilities and how it would respond to such attacks.
Here's the report summary, which I have not read yet.
I was particularly disturbed by the last paragraph of the newspaper article:
Introducing the possibility of a nuclear response to a catastrophic cyberattack would be expected to serve the same purpose.
Nuclear war is not a suitable response to a cyberattack.
Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.