Blog: July 2010 Archives
The Vivos network, which offers partial ownerships similar to a timeshare in underground shelter communities, is one of several ventures touting escape from a surface-level calamity.
Radius Engineering in Terrell, Texas, has built underground shelters for more than three decades, and business has never been better, says Walton McCarthy, company president.
The company sells fiberglass shelters that can accommodate 10 to 2,000 adults to live underground for one to five years with power, food, water and filtered air, McCarthy says.
The shelters range from $400,000 to a $41 million facility Radius built and installed underground that is suitable for 750 people, McCarthy says. He declined to disclose the client or location of the shelter.
“We’ve doubled sales every year for five years,” he says.

Other shelter manufacturers include Hardened Structures of Colorado and Utah Shelter Systems, which also report increased sales.
The Vivos website features a clock counting down to Dec. 21, 2012, the date when the ancient Mayan “Long Count” calendar marks the end of a 5,126-year era, at which time some people expect an unknown apocalypse.
Vicino, whose terravivos.com website lists 11 global catastrophes ranging from nuclear war to solar flares to comets, bristles at the notion he’s profiting from people’s fears.
“You don’t think of the person who sells you a fire extinguisher as taking advantage of your fear,” he says. “The fact that you may never use that fire extinguisher doesn’t make it a waste or bad.
“We’re not creating the fear; the fear is already out there. We’re creating a solution.”
Yip Harburg commented on the subject about half a century ago, and the Chad Mitchell Trio recited it. It’s at about 0:40 on the recording, though the rest is worth listening to as well.
Hammacher Schlemmer is selling a shelter,
worthy of Kubla Khan’s Xanadu dome;
Plushy and swanky, with posh hanky panky
that affluent Yankees can really call home.
Hammacher Schlemmer is selling a shelter,
a push-button palace, fluorescent repose;
Electric devices for facing a crisis
with frozen fruit ices and cinema shows.
Hammacher Schlemmer is selling a shelter
all chromium kitchens and rubber-tiled dorms;
With waterproof portals to echo the chortles
of weatherproof mortals in hydrogen storms.
What a great come-to-glory emporium!
To enjoy a deluxe moratorium,
Where nuclear heat can beguile the elite
in a creme-de-la-creme crematorium.
EDITED TO ADD (8/9): Slate on this as a bogus trend.
Hacking ATMs to spit out money, demonstrated at the Black Hat conference:
The two systems he hacked on stage were made by Triton and Tranax. The Tranax hack was conducted using an authentication bypass vulnerability that Jack found in the system’s remote monitoring feature, which can be accessed over the Internet or dial-up, depending on how the owner configured the machine.
Tranax’s remote monitoring system is turned on by default, but Jack said the company has since begun advising customers to protect themselves from the attack by disabling the remote system.
To conduct the remote hack, an attacker would need to know an ATM’s Internet IP address or phone number. Jack said he believes about 95 percent of retail ATMs are on dial-up; a hacker could war dial for ATMs connected to telephone modems, and identify them by the cash machine’s proprietary protocol.
The Triton attack was made possible by a security flaw that allowed unauthorized programs to execute on the system. The company distributed a patch last November so that only digitally signed code can run on them.
Both the Triton and Tranax ATMs run on Windows CE.
Using a remote attack tool, dubbed Dillinger, Jack was able to exploit the authentication bypass vulnerability in Tranax’s remote monitoring feature and upload software or overwrite the entire firmware on the system. With that capability, he installed a malicious program he wrote, called Scrooge.
“Who controls the off switch?” by Ross Anderson and Shailendra Fuloria.
Abstract: We’re about to acquire a significant new cybervulnerability. The world’s energy utilities are starting to install hundreds of millions of ‘smart meters’ which contain a remote off switch. Its main purpose is to ensure that customers who default on their payments can be switched remotely to a prepay tariff; secondary purposes include supporting interruptible tariffs and implementing rolling power cuts at times of supply shortage.
The off switch creates information security problems of a kind, and on a scale, that the energy companies have not had to face before. From the viewpoint of a cyber attacker—whether a hostile government agency, a terrorist organisation or even a militant environmental group—the ideal attack on a target country is to interrupt its citizens’ electricity supply. This is the cyber equivalent of a nuclear strike; when electricity stops, then pretty soon everything else does too. Until now, the only plausible ways to do that involved attacks on critical generation, transmission and distribution assets, which are increasingly well defended.
Smart meters change the game. The combination of commands that will cause meters to interrupt the supply, of applets and software upgrades that run in the meters, and of cryptographic keys that are used to authenticate these commands and software changes, create a new strategic vulnerability, which we discuss in this paper.
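The strategic vulnerability the abstract describes comes down to remotely authenticated commands. A minimal sketch of what meter command authentication might look like, using an HMAC tag over a hypothetical disconnect message. The message format, key handling, and command names here are illustrative assumptions only; a real design would also need replay protection and key management:

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 output size

def sign_command(key: bytes, command: bytes) -> bytes:
    # Append an HMAC-SHA256 tag so the meter can verify the command
    # came from a holder of the shared key.
    return command + hmac.new(key, command, hashlib.sha256).digest()

def verify_command(key: bytes, message: bytes):
    # Split off the fixed-length tag and recompute it; constant-time
    # comparison avoids timing side channels.
    command, tag = message[:-TAG_LEN], message[-TAG_LEN:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(expected, tag) else None
```

Note what this sketch deliberately omits: without a sequence number or timestamp inside the signed message, a recorded disconnect command can simply be replayed, which is exactly the kind of subtlety the paper argues the energy companies have not had to think about before.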
The DNSSEC root key has been divided among seven people:
Part of ICANN’s security scheme is the Domain Name System Security Extensions, a security protocol that ensures Web sites are registered and “signed” (this is the security measure built into the Web that ensures when you go to a URL you arrive at a real site and not an identical pirate site). Most major servers are a part of DNSSEC, as it’s known, and during a major international attack, the system might sever connections between important servers to contain the damage.
A minimum of five of the seven keyholders—one each from Britain, the U.S., Burkina Faso, Trinidad and Tobago, Canada, China, and the Czech Republic—would have to converge at a U.S. base with their keys to restart the system and connect everything once again.
Paul Kane—who lives in the Bradford-on-Avon area—has been chosen to look after one of seven keys, which will ‘restart the world wide web’ in the event of a catastrophic event.
Dan Kaminsky is another.
I don’t know how they picked those countries.
Okay, this is just weird:
Mark S. Price, a specialist in public security, and his privately held company, Paradise Lost Antiterrorism Network of America (www.plan-a.us), have recently applied to the United States Patent and Trademark Office for a Utility Patent on their Suicide Bomb Deterrent, a security device designed, manufactured and distributed by PLAN-A. This device has been designed to warn and deter potential fanatical religious suicide bomb-wielding terrorists from otherwise detonating an explosive charge within close proximity of said device, to the intended end of successfully accomplishing its namesake purpose of Suicide Bomb Deterrent and the protecting and preserving of all life and property otherwise in mortal and destructive danger.
Reading the partial patent application on their minimal website, it appears to be a packet of pork product, combined with a big sign saying something like: “Warning. If you blow up a bomb right here, you’ll get pork stuff all over you before you die—which might be suboptimal from a religious point of view.”
This appears to not be a joke.
It’s a service:
The mechanism used involves captured network traffic, which is uploaded to the WPA Cracker service and subjected to an intensive brute force cracking effort. As advertised on the site, what would be a five-day task on a dual-core PC is reduced to a job of about twenty minutes on average. For the more “premium” price of $35, you can get the job done in about half the time. Because it is a dictionary attack using a predefined 135-million-word list, there is no guarantee that you will crack the WPA key, but such an extensive dictionary attack should be sufficient for any but the most specialized penetration testing purposes.
It gets even better. If you try the standard 135-million-word dictionary and do not crack the WPA encryption on your target network, there is an extended dictionary that contains an additional 284 million words. In short, serious brute force wireless network encryption cracking has become a retail commodity.
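The reason such a service works at all is that WPA-PSK derives its pairwise master key from the passphrase and SSID with PBKDF2 (4,096 iterations of HMAC-SHA1), which is expensive per guess but trivially parallelizable across rented machines. A toy sketch of the per-candidate work; note that a real cracker verifies candidates against the MIC in a captured four-way handshake, so comparing directly against a known PMK here is a simplification:

```python
import hashlib

def wpa_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA/WPA2-PSK derives the 256-bit pairwise master key with
    # PBKDF2-HMAC-SHA1, 4096 iterations, using the SSID as the salt.
    return hashlib.pbkdf2_hmac(
        "sha1", passphrase.encode(), ssid.encode(), 4096, 32
    )

def dictionary_attack(target_pmk: bytes, ssid: str, wordlist):
    # Derive the PMK for each candidate passphrase and compare.
    for word in wordlist:
        if wpa_pmk(word, ssid) == target_pmk:
            return word
    return None
```

Because the SSID salts the derivation, precomputed tables only help for common network names; a service like this one instead throws raw CPU at each capture, which is why the per-job pricing model makes sense.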
In related news, a man-in-the-middle attack may be possible against the WPA2 protocol. Man-in-the-middle attacks are potentially serious, but it depends on the details—and they’re not available yet.
EDITED TO ADD (8/8): Details about the MITM attack.
An article from The Economist makes a point that I have been thinking about for a while: modern technology makes life harder for spies, not easier. It used to be that technology favored spycraft—think James Bond gadgets—but more and more, technology favors spycatchers. The ubiquitous collection of personal data makes it harder to maintain a false identity, ubiquitous eavesdropping makes it harder to communicate securely, the prevalence of cameras makes it harder to avoid being seen, and so on.
I think this is an example of the general tendency of modern information and communications technology to increase power in proportion to existing power. So while technology makes the lone spy more effective, it makes an institutional counterspy organization much more powerful.
Where do these TV shows come from?
Follows the adventures of the Cuylers, an impoverished and dysfunctional family of anthropomorphic, air-breathing, redneck squids who live in a rural Appalachian community in the US state of Georgia.
The Washington Post has published a phenomenal piece of investigative journalism: a long, detailed, and very interesting expose on the U.S. intelligence industry (overall website; parts 1, 2, and 3; blog; Washington reactions; top 10 revelations; many many many blog comments and reactions; and so on).
It’s a truly excellent piece of investigative journalism. Pity people don’t care much about investigative journalism—or facts in politics, really—anymore.
EDITED TO ADD (7/26): Jay Rosen writes:
Last week, it was the Washington Post’s big series, Top Secret America, two years in the making. It reported on the massive security shadowland that has arisen since 9/11. The Post basically showed that there is no accountability, no knowledge at the center of what the system as a whole is doing, and too much “product” to make intelligent use of. We’re wasting billions upon billions of dollars on an intelligence system that does not work. It’s an explosive finding but the explosive reactions haven’t followed, not because the series didn’t do its job, but rather: the job of fixing what is broken would break the system responsible for such fixes.
The mental model on which most investigative journalism is based states that explosive revelations lead to public outcry; elites get the message and reform the system. But what if elites believe that reform is impossible because the problems are too big, the sacrifices too great, the public too distractible? What if cognitive dissonance has been insufficiently accounted for in our theories of how great journalism works…and often fails to work?
EDITED TO ADD (7/27): More.
Stuxnet is a new Internet worm that specifically targets Siemens WinCC SCADA systems, which are used to control production at industrial plants such as oil rigs, refineries, and electronics factories. The worm seems to upload plant information (schematics and production data) to an external website. Moreover, owners of these SCADA systems cannot change the default password, because doing so would cause the software to break down.
The use of profiling by ethnicity or nationality to trigger secondary security screening is a controversial social and political issue. Overlooked is the question of whether such actuarial methods are in fact mathematically justified, even under the most idealized assumptions of completely accurate prior probabilities, and secondary screenings concentrated on the highest-probability individuals. We show here that strong profiling (defined as screening at least in proportion to prior probability) is no more efficient than uniform random sampling of the entire population, because resources are wasted on the repeated screening of higher probability, but innocent, individuals. A mathematically optimal strategy would be “square-root biased sampling,” the geometric mean between strong profiling and uniform sampling, with secondary screenings distributed broadly, although not uniformly, over the population. Square-root biased sampling is a general idea that can be applied whenever a “bell-ringer” event must be found by sampling with replacement, but can be recognized (either with certainty, or with some probability) when seen.
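The square-root rule is easy to state concretely: screen individual i with probability proportional to the square root of the prior p_i, rather than to p_i itself. A small sketch of that reweighting (the prior values used in the example are made up for illustration):

```python
import math

def screening_distribution(priors):
    # Strong profiling screens individual i in proportion to p_i.
    # Square-root biased sampling instead screens in proportion to
    # sqrt(p_i): the geometric mean of profiling and uniform sampling.
    weights = [math.sqrt(p) for p in priors]
    total = sum(weights)
    return [w / total for w in weights]
```

For priors [0.5, 0.3, 0.1, 0.1] this yields roughly [0.37, 0.29, 0.17, 0.17]: still biased toward the high-prior individuals, but much flatter than the priors themselves, which is exactly what avoids wasting screenings on the same innocent high-prior people over and over.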
Two interesting research papers on website password policies.
Abstract: We examine the password policies of 75 different websites. Our goal is to understand the enormous diversity of requirements: some will accept simple six-character passwords, while others impose rules of great complexity on their users. We compare different features of the sites to find which characteristics are correlated with stronger policies. Our results are surprising: greater security demands do not appear to be a factor. The size of the site, the number of users, the value of the assets protected and the frequency of attacks show no correlation with strength. In fact we find the reverse: some of the largest, most attacked sites with greatest assets allow relatively weak passwords. Instead, we find that those sites that accept advertising, purchase sponsored links, and where the user has a choice show strong inverse correlation with strength.
We conclude that the sites with the most restrictive password policies do not have greater security concerns, they are simply better insulated from the consequences of poor usability. Online retailers and sites that sell advertising must compete vigorously for users and traffic. In contrast to government and university sites, poor usability is a luxury they cannot afford. This in turn suggests that much of the extra strength demanded by the more restrictive policies is superfluous: it causes considerable inconvenience for negligible security improvement.
Abstract: We report the results of the first large-scale empirical analysis of password implementations deployed on the Internet. Our study included 150 websites which offer free user accounts for a variety of purposes, including the most popular destinations on the web and a random sample of e-commerce, news, and communication websites. Although all sites evaluated relied on user-chosen textual passwords for authentication, we found many subtle but important technical variations in implementation with important security implications. Many poor practices were commonplace, such as a lack of encryption to protect transmitted passwords, storage of cleartext passwords in server databases, and little protection of passwords from brute force attacks. While a spectrum of implementation quality exists with a general correlation between implementation choices within more-secure and less-secure websites, we find a surprising number of inconsistent choices within individual sites, suggesting that the lack of standards is harming security. We observe numerous ways in which the technical failures of lower-security sites can compromise higher-security sites due to the well-established tendency of users to re-use passwords. Our data confirms that the worst security practices are indeed found at sites with few security incentives, such as newspaper websites, while sites storing more sensitive information such as payment details or user communication implement more password security. From an economic viewpoint, password insecurity is a negative externality that the market has been unable to correct, undermining the viability of password-based authentication. We also speculate that some sites deploying passwords do so primarily for psychological reasons, both as a justification for collecting marketing data and as a way to build trusted relationships with customers.
This theory suggests that efforts to replace passwords with more secure protocols or federated identity systems may fail because they don’t recreate the entrenched ritual of password authentication.
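As a point of contrast with the cleartext storage the second abstract describes, the standard remedy is a salted, deliberately slow hash. A minimal sketch using PBKDF2 from the Python standard library; the iteration count and other parameter choices here are illustrative, not a recommendation:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 200_000):
    # A fresh random salt per password defeats precomputed tables
    # and hides duplicate passwords across accounts.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(candidate, digest)
```

The high iteration count is the point: it makes each brute-force guess against a stolen database orders of magnitude more expensive than a single SHA hash, at negligible cost to a legitimate login.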
From the U.S. Government Accountability Office: “Cybersecurity: Key Challenges Need to Be Addressed to Improve Research and Development.” Thirty-six pages; I haven’t read it.
From Wired News:
The four Wiseguy defendants, who also operated other ticket-reselling businesses, allegedly used sophisticated programming and inside information to bypass technological measures—including CAPTCHA—at Ticketmaster and other sites that were intended to prevent such bulk automated purchases. This violated the sites’ terms of service, and according to prosecutors constituted unauthorized computer access under the anti-hacking Computer Fraud and Abuse Act, or CFAA.
But the government’s interpretation of the law goes too far, according to the policy groups, and threatens to turn what is essentially a contractual dispute into a criminal case. As in the Lori Drew prosecution last year, the case marks a dangerous precedent that could make a felon of anyone who violates a site’s terms-of-service agreement, according to the amicus brief filed last week by the Electronic Frontier Foundation, the Center for Democracy and Technology and other advocates.
“Under the government’s theory, anyone who disregards—or doesn’t read—the terms of service on any website could face computer crime charges,” said EFF civil liberties director Jennifer Granick in a press release. “Price-comparison services, social network aggregators, and users who skim a few years off their ages could all be criminals if the government prevails.”
If the crypto is good, this is less of a big deal than you might think. Good cryptography is designed to be made public; it’s only for business reasons that it remains secret.
In what creepy back room do they come up with these names?
The federal government is launching an expansive program dubbed “Perfect Citizen” to detect cyber assaults on private companies and government agencies running such critical infrastructure as the electricity grid and nuclear-power plants, according to people familiar with the program.
The surveillance by the National Security Agency, the government’s chief eavesdropping agency, would rely on a set of sensors deployed in computer networks for critical infrastructure that would be triggered by unusual activity suggesting an impending cyber attack, though it wouldn’t persistently monitor the whole system, these people said.
No reason to be alarmed, though. The NSA claims that this is just research.
I don’t think this is a good idea.
This is interesting:
Some of the scenarios where we have installed video analytics for our clients include:
- to detect someone walking in an area of the yard (veering off the main path) where they are not supposed to be;
- to send an alarm if someone is standing too close to a store window or front door after hours;
- to alert security guards about someone in a parkade during specific hours;
- to count the number of people coming into (and out of) a store during the day.
In the case of burglary prevention, getting an early warning about someone trespassing makes a huge difference for our response teams. Now, rather than waiting for a detector in the house to trip, we can receive an alarm signal while a potential burglar is still outside.
Effectiveness is going to be a question of limiting false positives.
It’s easy to access someone else’s voicemail by spoofing the caller ID. This isn’t new; what is new is that many people now have easy access to caller ID spoofing.
The spoofing only works for voicemail accounts that don’t have a password set up, but AT&T’s default is no password.
From 1955, intended as humor:
In the future when I should ever call on the telephone to make a request or issue an order I will identify myself as follows: This is Hemingway, Ernest M. Hemingway speaking and my serial number is 0-363. That is an easy number to remember and is not the correct one which a con man might have. A con character would say 364. So we will make it 363. Any character can then ask how many shares I own and I will reply truly to the best of my knowledge. If the bank has made any once contemplated mergers or there has been a split that I had not been informed of I might give an inaccurate answer.
The Chaocipher is a mechanical encryption algorithm invented in 1918. No one was able to reverse-engineer the algorithm, given sets of plaintexts and ciphertexts—at least, nobody publicly. On the other hand, I don’t know how many people tried, or even knew about the algorithm. I’d never heard of it before now. Anyway, for the first time, the algorithm has been revealed. Of course, it’s not able to stand up to computer cryptanalysis.
Try to keep up:
Leslie Van Houten, a one-time member of Charles Manson’s infamous ‘family’, is up for parole for the 17th time today….
“These are serial killers,” she said. “These would be domestic terrorists if it was today. So these are very dangerous people.”
Last month, Sen. Joe Lieberman, I-Conn., introduced a bill (text here) that might—we’re not really sure—give the president the authority to shut down all or portions of the Internet in the event of an emergency. It’s not a new idea. Sens. Jay Rockefeller, D-W.Va., and Olympia Snowe, R-Maine, proposed the same thing last year, and some argue that the president can already do something like this. If this or a similar bill ever passes, the details will change considerably and repeatedly. So let’s talk about the idea of an Internet kill switch in general.
It’s a bad one.
Security is always a trade-off: costs versus benefits. So the first question to ask is: What are the benefits? There is only one possible use of this sort of capability, and that is in the face of a warfare-caliber enemy attack. It’s the primary reason lawmakers are considering giving the president a kill switch. They know that shutting off the Internet, or even isolating the U.S. from the rest of the world, would cause damage, but they envision a scenario where not doing so would cause even more.
That reasoning is based on several flawed assumptions.
The first flawed assumption is that cyberspace has traditional borders, and we could somehow isolate ourselves from the rest of the world using an electronic Maginot Line. We can’t.
Yes, we can cut off almost all international connectivity, but there are lots of ways to get out onto the Internet: satellite phones, obscure ISPs in Canada and Mexico, long-distance phone calls to Asia.
The Internet is the largest communications system mankind has ever created, and it works because it is distributed. There is no central authority. No nation is in charge. Plugging all the holes isn’t possible.
Even if the president ordered all U.S. Internet companies to block, say, all packets coming from China, or restrict non-military communications, or just shut down access in the greater New York area, it wouldn’t work. You can’t figure out what packets do just by looking at them; if you could, defending against worms and viruses would be much easier.
And packets that come with return addresses are easy to spoof. Remember the cyberattack July 4, 2009, that probably came from North Korea, but might have come from England, or maybe Florida? On the Internet, disguising traffic is easy. And foreign cyberattackers could always have dial-up accounts via U.S. phone numbers and make long-distance calls to do their misdeeds.
The second flawed assumption is that we can predict the effects of such a shutdown. The Internet is the most complex machine mankind has ever built, and shutting down portions of it would have all sorts of unforeseen ancillary effects.
Would ATMs work? What about the stock exchanges? Which emergency services would fail? Would trucks and trains be able to route their cargo? Would airlines be able to route their passengers? How much of the military’s logistical system would fail?
That’s to say nothing of the variety of corporations that rely on the Internet to function, let alone the millions of Americans who would need to use it to communicate with their loved ones in a time of crisis.
Even worse, these effects would spill over internationally. The Internet is international in complex and surprising ways, and it would be impossible to ensure that the effects of a shutdown stayed domestic and didn’t cause similar disasters in countries we’re friendly with.
The third flawed assumption is that we could build this capability securely. We can’t.
Once we engineered a selective shutdown switch into the Internet, and implemented a way to do what Internet engineers have spent decades making sure never happens, we would have created an enormous security vulnerability. We would make the job of any would-be terrorist intent on bringing down the Internet much easier.
Computer and network security is hard, and every Internet system we’ve ever created has security vulnerabilities. It would be folly to think this one wouldn’t as well. And given how unlikely the risk is, any actual shutdown would be far more likely to be a result of an unfortunate error or a malicious hacker than of a presidential order.
But the main problem with an Internet kill switch is that it’s too coarse a hammer.
Yes, the bad guys use the Internet to communicate, and they can use it to attack us. But the good guys use it, too, and the good guys far outnumber the bad guys.
Shutting the Internet down, either the whole thing or just a part of it, even in the face of a foreign military attack would do far more damage than it could possibly prevent. And it would hurt others whom we don’t want to hurt.
For years we’ve been bombarded with scare stories about terrorists wanting to shut the Internet down. They’re mostly fairy tales, but they’re scary precisely because the Internet is so critical to so many things.
Why would we want to terrorize our own population by doing exactly what we don’t want anyone else to do? And a national emergency is precisely the worst time to do it.
Just implementing the capability would be very expensive; I would rather see that money going toward securing our nation’s critical infrastructure from attack.
Defending his proposal, Sen. Lieberman pointed out that China has this capability. It’s debatable whether or not it actually does, but it’s actively pursuing the capability because the country cares less about its citizens.
Here in the U.S., it is both wrong and dangerous to give the president the power and ability to commit Internet suicide and terrorize Americans in this way.
This essay was originally published on AOL.com News.
Riddles of squid sex:
All cephalopods are hindered by their body shape, which comprises a closed hood-type structure called a mantle, which forms most of what appear to be a cephalopod’s body and head.
The animals use this mantle to move via jet propulsion, they must ventilate it to breathe, and they must also hide their excretory and sexual organs within its structure.
That poses a challenge to male cephalopods: how do they get their sperm past this mantle, and how does the sperm stay there when water is being forcibly passed through the mantle cavity so females can move and breathe?
I wonder if my blog counts.
EDITED TO ADD (7/13): The TSA reversed itself. Or, at least, they now claim that isn’t what they meant.
The measures used to prevent cheating during tests remind me of casino security measures:
No gum is allowed during an exam: chewing could disguise a student’s speaking into a hands-free cellphone to an accomplice outside.
The 228 computers that students use are recessed into desk tops so that anyone trying to photograph the screen—using, say, a pen with a hidden camera, in order to help a friend who will take the test later—is easy to spot.
Scratch paper is allowed—but it is stamped with the date and must be turned in later.
When a proctor sees something suspicious, he records the student’s real-time work at the computer and directs an overhead camera to zoom in, and both sets of images are burned onto a CD for evidence.
Lots of information on detecting cheating in homework and written papers.
The upshot of these reflections is that the relation between surveillance and moral edification is complicated. In some contexts, surveillance helps keep us on track and thereby reinforces good habits that become second nature. In other contexts, it can hinder moral development by steering us away from or obscuring the saintly ideal of genuinely disinterested action. And that ideal is worth keeping alive.
Some will object that the saintly ideal is utopian. And it is. But utopian ideals are valuable. It’s true that they do not help us deal with specific, concrete, short-term problems, such as how to keep drunk drivers off the road, or how to ensure that people pay their taxes. Rather, like a distant star, they provide a fixed point that we can use to navigate by. Ideals help us to take stock every so often of where we are, of where we’re going, and of whether we really want to head further in that direction.
Ultimately, the ideal college is one in which every student is genuinely interested in learning and needs neither extrinsic motivators to encourage study, nor surveillance to deter cheating. Ultimately, the ideal society is one in which, if taxes are necessary, everyone pays them as freely and cheerfully as they pay their dues to some club of which they are devoted members—a society in which citizen and state can trust each other perfectly. We know our present society is a long way from such ideals, yet we should be wary of practices that take us ever further from them. One of the goals of moral education is to cultivate a conscience: the little voice inside telling us that we should do what is right because it is right. As surveillance becomes increasingly ubiquitous, however, the chances are reduced that conscience will ever be anything more than the little voice inside telling us that someone, somewhere, may be watching.
Read the whole thing.
There’s a power struggle going on in the U.S. government right now.
It’s about who is in charge of cyber security, and how much control the government will exert over civilian networks. And by beating the drums of war, the military is coming out on top.
“The United States is fighting a cyberwar today, and we are losing,” said former NSA director—and current cyberwar contractor—Mike McConnell. “Cyber 9/11 has happened over the last ten years, but it happened slowly so we don’t see it,” said former National Cyber Security Division director Amit Yoran. Richard Clarke, whom Yoran replaced, wrote an entire book hyping the threat of cyberwar.
General Keith Alexander, the current commander of the U.S. Cyber Command, hypes it every chance he gets. This isn’t just rhetoric of a few over-eager government officials and headline writers; the entire national debate on cyberwar is plagued with exaggerations and hyperbole.
Googling those names and terms—as well as “cyber Pearl Harbor,” “cyber Katrina,” and even “cyber Armageddon”—gives some idea how pervasive these memes are. Prefix “cyber” to something scary, and you end up with something really scary.
Cyberspace has all sorts of threats, day in and day out. Cybercrime is by far the largest: fraud, through identity theft and other means, extortion, and so on. Cyber-espionage is another, both government- and corporate-sponsored. Traditional hacking, without a profit motive, is still a threat. So is cyber-activism: people, most often kids, playing politics by attacking government and corporate websites and networks.
These threats cover a wide variety of perpetrators, motivations, tactics, and goals. You can see this variety in what the media has mislabeled as “cyberwar.” The attacks against Estonian websites in 2007 were simple hacking attacks by ethnic Russians angry at anti-Russian policies; these were denial-of-service attacks, a normal risk in cyberspace and hardly unprecedented.
A real-world comparison might be if an army invaded a country, then all got in line in front of people at the DMV so they couldn’t renew their licenses. If that’s what war looks like in the 21st century, we have little to fear.
Similar attacks against Georgia, which accompanied an actual Russian invasion, were also probably the responsibility of citizen activists or organized crime. A series of power blackouts in Brazil was caused by criminal extortionists—or was it sooty insulators? China is engaging in espionage, not war, in cyberspace. And so on.
One problem is that there’s no clear definition of “cyberwar.” What does it look like? How does it start? When is it over? Even cybersecurity experts don’t know the answers to these questions, and it’s dangerous to broadly apply the term “war” unless we know a war is going on.
Yet recent news articles have claimed that China declared cyberwar on Google, that Germany attacked China, and that a group of young hackers declared cyberwar on Australia. (Yes, cyberwar is so easy that even kids can do it.) Clearly we’re not talking about real war here, but a rhetorical war: like the war on terror.
We have a variety of institutions that can defend us when attacked: the police, the military, the Department of Homeland Security, various commercial products and services, and our own personal or corporate lawyers. The legal framework for any particular attack depends on two things: the attacker and the motive. Those are precisely the two things you don’t know when you’re being attacked on the Internet. We saw this on July 4 last year, when U.S. and South Korean websites were attacked by unknown perpetrators from North Korea—or perhaps England. Or was it Florida?
We surely need to improve our cybersecurity. But words have meaning, and metaphors matter. There’s a power struggle going on for control of our nation’s cybersecurity strategy, and the NSA and DoD are winning. If we frame the debate in terms of war, if we accept the military’s expansive cyberspace definition of “war,” we feed our fears.
We reinforce the notion that we’re helpless—what person or organization can defend itself in a war?—and others need to protect us. We invite the military to take over security, and to ignore the limits on power that often get jettisoned during wartime.
If, on the other hand, we use the more measured language of cybercrime, we change the debate. Crime fighting requires both resolve and resources, but it’s done within the context of normal life. We willingly give our police extraordinary powers of investigation and arrest, but we temper these powers with a judicial system and legal protections for citizens.
We need to be prepared for war, and a Cyber Command is just as vital as an Army or a Strategic Air Command. And because kid hackers and cyber-warriors use the same tactics, the defenses we build against crime and espionage will also protect us from more concerted attacks. But we’re not fighting a cyberwar now, and the risks of a cyberwar are no greater than the risks of a ground invasion. We need peacetime cyber-security, administered within the myriad structure of public and private security institutions we already have.
This essay previously appeared on CNN.com.
EDITED TO ADD (7/7): Earlier this month, I participated in a debate: “The Cyberwar Threat has been Grossly Exaggerated.” (Transcript here, video here.) Marc Rotenberg of EPIC and I were for the motion; Mike McConnell and Jonathan Zittrain were against. We lost.
We lost fair and square, for a bunch of reasons—we didn’t present our case very well, Jonathan Zittrain is a way better debater than we were—but basically the vote came down to the definition of “cyberwar.” If you believed in an expansive definition of cyberwar, one that encompassed a lot more types of attacks than traditional war, then you voted against the motion. If you believed in a limited definition of cyberwar, one that is a subset of traditional war, then you voted for it.
This continues to be an important debate.
EDITED TO ADD (7/7): Last month the Senate Homeland Security Committee held hearings on “Protecting Cyberspace as a National Asset: Comprehensive Legislation for the 21st Century.” Unfortunately, the DHS is getting hammered at these hearings, and the NSA is consolidating its power.
This sign is from a gas station in the U.K.
<img alt="sign saying 'Police Notice: Don't Commit Crime'" src="/images/dont-commit-crime.jpg" width="500" height="400">
My first reaction was to laugh, but then I started thinking about it. We know that signs like “No Shoplifting” reduce shoplifting in the area around the sign, but those are warnings against a specific crime. Could a sign this general be effective? Clearly some comparative studies are needed.
EDITED TO ADD (7/7): This is part of a larger sign. Presumably, whoever put up the sign I saw cut the top and bottom off.
From the National Academies in 2009: Technology, Policy, Law, and Ethics Regarding U.S. Acquisition and Use of Cyberattack Capabilities. It’s 390 pages.
…water molecules differ slightly in their isotope ratios depending on the minerals at their source. …researchers found that water samples from 33 cities across the United States could be reliably traced back to their origin based on their isotope ratios. And because the human body breaks down water’s constituent atoms of hydrogen and oxygen to construct the proteins that make hair cells, those cells can preserve the record of a person’s travels.
Here’s the paper.
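The tracing described above amounts to matching a sample’s isotope ratios against a reference table and picking the closest entry. A minimal sketch of that idea, with invented city names and ratio values (not data from the actual study):

```python
# Illustrative only: the cities and ratio values below are made up,
# not data from the paper. Each city is characterized by a pair of
# stable-isotope ratios (e.g. deuterium/hydrogen and O-18/O-16).
REFERENCE_RATIOS = {
    "Salt Lake City": (-120.0, -16.0),
    "Dallas": (-20.0, -3.0),
    "Seattle": (-80.0, -11.0),
}

def match_city(sample, references=REFERENCE_RATIOS):
    """Return the reference city whose isotope ratios are closest
    (Euclidean distance) to the sample's ratios."""
    def distance(city):
        d, o = references[city]
        return ((sample[0] - d) ** 2 + (sample[1] - o) ** 2) ** 0.5
    return min(references, key=distance)

print(match_city((-78.5, -10.7)))  # closest to the "Seattle" entry
```

The real analysis is statistical rather than a simple nearest-neighbor lookup, but the principle is the same: regional water leaves a measurable signature, and a sample is assigned to the region whose signature it best matches.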
This is from Atomic Bombing: How to Protect Yourself, published in 1950:
Of course, millions of us will go through our lives never seeing a spy or a saboteur going about his business. Thousands of us may, at one time or another, think we see something like that. Only hundreds will be right. It would be foolish for all of us to see enemy agents lurking behind every tree, to become frightened of our own shadows and report them to the F.B.I.
But we are citizens; we might see something which might be useful to the F.B.I., and it is our duty to report what we see. It is also our duty to know what is useful to the F.B.I. and what isn’t.
If you think your neighbor has “radical” views—that is none of your or the F.B.I.’s business. After all, it is the difference in views of our citizens, from the differences between Jefferson and Hamilton to the differences between Truman and Dewey, which have made our country strong.
But if you see your neighbor—and the views he expresses might seem to agree with yours completely—commit an act which might lead you to suspect that he might be committing espionage, sabotage or subversion, then report it to the F.B.I.
After that, forget about it. Mr. Hoover also said: “Do not circulate rumors about subversive activities, or draw conclusions from information you furnish the F.B.I. The data you possess might be incomplete or only partially accurate. By drawing conclusions based on insufficient evidence grave injustices might result to innocent persons.”
In other words, you might be wrong. In our system, it takes a court, a trial and a jury to say a man is guilty.
It would be nice if this advice didn’t seem as outdated as the rest of the book.
By Russian spies:
Ricci said the steganographic program was activated by pressing control-alt-E and then typing in a 27-character password, which the FBI found written down on a piece of paper during one of its searches.
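For readers unfamiliar with steganography, the basic trick is hiding message bits in the least-significant bits of innocuous carrier data, such as image pixels. The sketch below is purely illustrative and has no connection to the actual program described in the article:

```python
# Minimal least-significant-bit (LSB) steganography sketch, for
# illustration only. Real tools add length prefixes, encryption,
# and spreading; here the receiver is assumed to know the length.

def embed(cover_bytes, message):
    """Hide each bit of `message` in the low bit of one cover byte."""
    bits = [(byte >> shift) & 1
            for byte in message.encode()
            for shift in range(7, -1, -1)]
    if len(bits) > len(cover_bytes):
        raise ValueError("cover too small for message")
    out = bytearray(cover_bytes)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def extract(stego_bytes, length):
    """Recover a `length`-byte message from the low bits."""
    bits = [b & 1 for b in stego_bytes[: length * 8]]
    data = bytearray()
    for i in range(0, len(bits), 8):
        value = 0
        for bit in bits[i : i + 8]:
            value = (value << 1) | bit
        data.append(value)
    return data.decode()

cover = bytes(range(256))   # stand-in for raw image pixel data
stego = embed(cover, "hi")
print(extract(stego, 2))    # prints "hi"
```

Because only the low bit of each byte changes, the carrier looks essentially unaltered; that invisibility is what makes a hotkey-plus-password activation scheme, as described in the article, plausible tradecraft.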