Blog: November 2007 Archives
Someone drove a truck through the front gate of the Guinness brewery in Dublin, loaded the trailer with 450 kegs of beer, and drove out the gate. Security presumed it was just another legitimate contractor coming to pick up beer for distribution, and ignored him.
Moral: look like you belong.
EDITED TO ADD (12/5): Looks like they were caught before they drank all the beer.
The lead paragraphs:
The plot was like something from a Hollywood blockbuster: dozens of foreign terrorists working with a Mexican drug cartel to attack a Southern Arizona Army post with anti-tank missiles and grenade launchers.
Paying one of Mexico's most ruthless drug cartels $20,000 apiece, 60 Afghan and Iraqi terrorists would be smuggled into Texas and hole up at a safe house.
Their weapons, Soviet-made and easily acquired on the black market, were funneled through Arizona and New Mexico in hand-dug tunnels that cut across the border.
Their target: 13,500 military personnel and civilians working at Fort Huachuca, roughly 75 miles southeast of Tucson.
But (no surprise):
But the plot, widely reported by local stations and national TV networks and The Washington Times, turned out to be nothing more than fiction, an FBI spokesman said Monday.
Just put up a password strength meter and encourage people to submit their passwords for testing. You might want to collect names and e-mail addresses, too.
For the record, here's how to choose a secure password:
So if you want your password to be hard to guess, you should choose something not on any of the root or appendage lists. You should mix upper and lowercase in the middle of your root. You should add numbers and symbols in the middle of your root, not as common substitutions. Or drop your appendage in the middle of your root. Or use two roots with an appendage in the middle.
Even something lower down on PRTK's dictionary list -- the seven-character phonetic pattern dictionary -- together with an uncommon appendage, is not going to be guessed. Neither is a password made up of the first letters of a sentence, especially if you throw numbers and symbols in the mix. And yes, these passwords are going to be hard to remember, which is why you should use a program like the free and open-source Password Safe to store them all in.
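The first-letters-of-a-sentence scheme described above can be sketched in a few lines of Python. The sentence and the resulting string here are purely illustrative; don't use this exact phrase.

```python
def sentence_password(sentence: str) -> str:
    """Build a password from the first character of each word,
    keeping any digits the sentence itself contains."""
    return "".join(word[0] for word in sentence.split())

# A memorable sentence with a number mixed into it:
phrase = "When I was 7, my sister threw my stuffed rabbit in the toilet!"
pw = sentence_password(phrase)
print(pw)  # WIw7mstmsritt
```

The result mixes case and digits and appears on no root or appendage list, yet the sentence that generates it is easy to remember; adding a symbol or two in the middle strengthens it further.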
EDITED TO ADD (12/5): Note that I am not actually accusing them of harvesting passwords, only pointing out that you could harvest passwords that way.
In the UK:
In early November about 30 animal rights activists are understood to have received letters from the Crown Prosecution Service in Hampshire inviting them to provide passwords that will decrypt material held on seized computers.
The letter is the first stage of a process set out under RIPA which governs how the authorities handle requests to examine encrypted material.
Once a request has been issued the authorities can then issue what is known as a Section 49 notice demanding that a person turn the data into an "intelligible" form or, under Section 51, hand over keys.
Although much of RIPA came into force many years ago, the part governing the handing over of keys only passed into law on 1 October 2007. This is why the CPS is only now asking for access to files on the seized machines.
Alongside a S49 notice, the authorities can also issue a Section 54 notice that prevents a person revealing that they are subject to this part of RIPA.
We don't yet know whether the activists actually handed the police their encryption keys. More about the law here.
If you remember, this was sold to the public as essential for fighting terrorism. It's already being misused.
I've been saying this for a while now:
Since the outbreak of a cybercrime epidemic that has cost the American economy billions of dollars, the federal government has failed to respond with enough resources, attention and determination to combat the cyberthreat, a Mercury News investigation reveals.
"The U.S. government has not devoted the leadership and energy that this issue needs," said Paul Kurtz, a former administration homeland and cybersecurity adviser. "It's been neglected."
Even as the White House asked last week for $154 million toward a new cybersecurity initiative expected to reach billions of dollars over the next several years, security experts complain the administration remains too focused on the risks of online espionage and information warfare, overlooking the international criminals who are stealing a fortune through the Internet.
Unlike police, firefighters and emergency medical personnel don't need warrants to access hundreds of thousands of homes and buildings each year, putting them in a position to spot behavior that could indicate terrorist activity or planning.
When going to private residences, for example, they are told to be alert for a person who is hostile, uncooperative or expressing hate or discontent with the United States; unusual chemicals or other materials that seem out of place; ammunition, firearms or weapons boxes; surveillance equipment; still and video cameras; night-vision goggles; maps, photos, blueprints; police manuals, training manuals, flight manuals; and little or no furniture other than a bed or mattress.
Because it's such a good idea for people to start fearing firefighters....
I didn't write about this story at first because we've seen it so many times before: a disk with lots of personal information is lost. Encryption is the simple and obvious solution, and that's the end of it.
But the UK's loss of 25 million child benefit records -- including dates of birth, addresses, bank account information, and national insurance numbers -- is turning into a privacy disaster, threatening to derail plans for a national ID card.
Why is it such a big deal? Certainly the scope: 40% of the British population. Also the data: bank account details; plus information about children. There's already a larger debate on the issue of a database on kids that this feeds into. And it's a demonstration of government incompetence (think Hurricane Katrina).
In any case, this issue isn't going away anytime soon. Prime Minister Gordon Brown has apologized. The head of the Revenue and Customs office has resigned. More is certainly coming.
And this is an easy security problem to solve! Disk and file encryption software is cheap, easy to use, and effective.
Excellent article by John Tehranian: "Infringement Nation: Copyright Reform and the Law/Norm Gap":
By the end of the day, John has infringed the copyrights of twenty emails, three legal articles, an architectural rendering, a poem, five photographs, an animated character, a musical composition, a painting, and fifty notes and drawings. All told, he has committed at least eighty-three acts of infringement and faces liability in the amount of $12.45 million (to say nothing of potential criminal charges). There is nothing particularly extraordinary about John’s activities. Yet if copyright holders were inclined to enforce their rights to the maximum extent allowed by law, he would be indisputably liable for a mind-boggling $4.544 billion in potential damages each year. And, surprisingly, he has not even committed a single act of infringement through P2P file sharing. Such an outcome flies in the face of our basic sense of justice. Indeed, one must either irrationally conclude that John is a criminal infringer -- a veritable grand larcenist -- or blithely surmise that copyright law must not mean what it appears to say. Something is clearly amiss. Moreover, the troublesome gap between copyright law and norms has grown only wider in recent years.
The point of the article is how, simply by acting normally, all of us are technically lawbreakers many times over every day. When laws are this far outside the social norms, it's time to change them.
It's in Portuguese, but the photo is good.
EDITED TO ADD (11/23): Title corrected.
...I thought it would be interesting to find out the account password. WordPress stores raw MD5 hashes in the user database.... As with any respectable hash function, it is believed to be computationally infeasible to discover the input of MD5 from an output. Instead, someone would have to try out all possible inputs until the correct output is discovered.
Instead, I asked Google. I found, for example, a genealogy page listing people with the surname "Anthony", and an advert for a house, signing off "Please Call for showing. Thank you, Anthony". And indeed, the MD5 hash of "Anthony" was the database entry for the attacker. I had discovered his password.
Makes no sense:
Passengers at Liverpool's Lime Street station face airport-style searches and bag-screening, under swingeing new anti-terror measures unveiled yesterday.
And security barriers, vehicle exclusion zones and blast-resistant buildings will be introduced at airports, ports and up to 250 of the busiest train stations, Gordon Brown announced.
Of course, less busy train stations are only a few minutes away by car.
No two-person control or complicated safety features: until 1998, you could arm British nukes with a bicycle lock key.
To arm the weapons you just open a panel held by two captive screws -- like a battery cover on a radio -- using a thumbnail or a coin.
Inside are the arming switch and a series of dials which you can turn with an Allen key to select high yield or low yield, air burst or groundburst and other parameters.
The Bomb is actually armed by inserting a bicycle lock key into the arming switch and turning it through 90 degrees. There is no code which needs to be entered or dual key system to prevent a rogue individual from arming the Bomb.
Certainly most of the security was procedural. But still....
The "War on the Unexpected" is being fought everywhere.
Bouncers kicked a Melbourne man out of a Cairns pub after paranoid patrons complained that he was reading a book called The Unknown Terrorist.
At the U.S. border with Canada:
A Canadian firetruck responding with lights and sirens to a weekend fire in Rouses Point, New York, was stopped at the U.S. border for about eight minutes, U.S. border officials said Tuesday.
The Canadian firefighters "were asked for IDs," Trombley said. "I believe they even ran the license plate on the truck to make sure it was legal."
In the UK:
A man who had gone into a diabetic coma on a bus in Leeds was shot twice with a Taser gun by police who feared he may have been a security threat.
A powdered substance that led to a baggage claim being shut down for nearly six hours at the Portland International Jetport was a mixture of flour and sugar, airport officials said Thursday.
Fear is winning. Refuse to be terrorized, people.
I don't know if this story is true:
Portable hard discs sold locally and produced by US disk-drive manufacturer Seagate Technology have been found to carry Trojan horse viruses that automatically upload to Beijing Web sites anything the computer user saves on the hard disc, the Investigation Bureau said.
Around 1,800 of the portable Maxtor hard discs, produced in Thailand, carried two Trojan horse viruses: autorun.inf and ghost.pif, the bureau under the Ministry of Justice said.
The tainted portable hard disc uploads any information saved on the computer automatically and without the owner's knowledge to www.nice8.org and www.we168.org, the bureau said.
EDITED TO ADD (12/14): A first-hand account.
A 2003 "Camp Delta Standard Operating Procedures" manual has been leaked to the Internet. This is the same manual that the ACLU has unsuccessfully sued the government to get a copy of. Others can debate the legality of some of the procedures; I'm interested in comments about the security.
See, for example, this quote on page 27.3:
(b) Upon arrival will enter the gate by entering the number (1998) in the combination lock
(c) Proceed to the junction box with the number (7012-83) Breaker Box and open the box. The number for the lock on the breaker box is (224).
The idea is simple: prevent the machine from completing an action and place it in an error state, and then exploit that state. In this instance, the hacker prevents the machine from dispensing the drink bottle. The machine refunds the money, but the bottle stays on the conveyor belt. Then the hacker purchases a second bottle, and receives them both.
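The flaw can be modeled as a toy state machine. The class below is a hypothetical reconstruction of the bug, not the actual firmware logic: a blocked dispense triggers a refund but never clears the bottle from the conveyor.

```python
class VendingMachine:
    """Toy model of the error-state exploit: refund on failure,
    but the queued bottle is never rolled back."""
    def __init__(self):
        self.queued_bottles = 0
        self.refunds = 0

    def purchase(self, blocked: bool) -> int:
        self.queued_bottles += 1           # bottle moves onto the conveyor
        if blocked:                        # hacker jams the delivery slot
            self.refunds += 1              # machine refunds the money...
            return 0                       # ...but never clears the queue
        delivered = self.queued_bottles    # a normal vend flushes the queue
        self.queued_bottles = 0
        return delivered

m = VendingMachine()
m.purchase(blocked=True)          # refunded; bottle stays queued
print(m.purchase(blocked=False))  # 2 -- both bottles drop
```

The correct design would treat "dispense failed" as a transaction rollback, removing the bottle from the queue before refunding; the exploit exists because the error path and the happy path disagree about state.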
I previously wrote about Dan Egerstad, a security researcher who ran a Tor anonymity network and was able to sniff some pretty impressive usernames and passwords.
Swedish police arrested him:
About 9am Egerstad walked downstairs to move his car when he was accosted by the officers in a scene "taken out of a bad movie", he said in an email interview.
"I got a couple of police IDs in my face while told that they are taking me in for questioning," he said.
But not before the agents, who had staked out his house in undercover blue and grey Saabs ("something that screams cop to every person in Sweden from miles away"), searched his apartment and confiscated computers, CDs and portable hard drives.
"They broke my wardrobe, short cutted my electricity, pulled out my speakers, phone and other cables having nothing to do with this and been touching my bookkeeping, which they have no right to do," he said.
While questioning Egerstad at the station, the police "played every trick in the book, good cop, bad cop and crazy mysterious guy in the corner not wanting to tell his name and just staring at me".
"Well, if they want to try to manipulate, I can play that game too. [I] gave every known body signal there is telling of lies ... covered my mouth, scratched my elbow, looked away and so on."
No charges have been filed. I'm not sure there's anything wrong with what he did.
Here's a good article on what he did; it was published just before the arrest.
The case is clearly a major embarrassment for both the FBI and CIA and has already raised a host of questions. Chief among them: how did an illegal alien from Lebanon who was working as a waitress at a shish kabob restaurant in Detroit manage to slip through extensive security background checks, including polygraphs, to land highly sensitive positions with the nation's top law enforcement and intelligence agencies?
Here's another article.
Dan Bernstein wrote an interesting paper on the security lessons he's learned from qmail.
My views of security have become increasingly ruthless over the years. I see a huge amount of money and effort being invested in security, and I have become convinced that most of that money and effort is being wasted. Most "security" efforts are designed to stop yesterday's attacks but fail completely to stop tomorrow's attacks and are of no use in building invulnerable software. These efforts are a distraction from work that does have long-term value.
Very interesting stuff, some counter to conventional security wisdom.
I have become convinced that this "principle of least privilege" is fundamentally wrong. Minimizing privilege might reduce the damage done by some security holes but almost never fixes the holes. Minimizing privilege is not the same as minimizing the amount of trusted code, does not have the same benefits as minimizing the amount of trusted code, and does not move us any closer to a secure computer system.
From the AP:
...government experts and intelligence officials say such a threat gets vastly more attention than it deserves. These officials said a true suitcase nuke would be highly complex to produce, require significant upkeep and cost a small fortune.
Counterproliferation authorities do not completely rule out the possibility that these portable devices once existed. But they do not think the threat remains.
"The suitcase nuke is an exciting topic that really lends itself to movies," said Vahid Majidi, the assistant director of the FBI's Weapons of Mass Destruction Directorate. "No one has been able to truly identify the existence of these devices."
Interesting technical details in the article.
Random numbers are critical for cryptography: for encryption keys, random authentication challenges, initialization vectors, nonces, key-agreement schemes, generating prime numbers and so on. Break the random-number generator, and most of the time you break the entire security system. Which is why you should worry about a new random-number standard that includes an algorithm that is slow, badly designed and just might contain a backdoor for the National Security Agency.
Generating random numbers isn't easy, and researchers have discovered lots of problems and attacks over the years. A recent paper found a flaw in the Windows 2000 random-number generator. Another paper found flaws in the Linux random-number generator. Back in 1996, an early version of SSL was broken because of flaws in its random-number generator. With John Kelsey and Niels Ferguson in 1999, I co-authored Yarrow, a random-number generator based on our own cryptanalysis work. I improved this design four years later -- and renamed it Fortuna -- in the book Practical Cryptography, which I co-authored with Ferguson.
The U.S. government released a new official standard for random-number generators this year, and it will likely be followed by software and hardware developers around the world. Called NIST Special Publication 800-90 (.pdf), the 130-page document contains four different approved techniques, called DRBGs, or "Deterministic Random Bit Generators." All four are based on existing cryptographic primitives. One is based on hash functions, one on HMAC, one on block ciphers and one on elliptic curves. It's smart cryptographic design to use only a few well-trusted cryptographic primitives, so building a random-number generator out of existing parts is a good thing.
But one of those generators -- the one based on elliptic curves -- is not like the others. Called Dual_EC_DRBG, it is not only a mouthful to say but also three orders of magnitude slower than its peers. It's in the standard only because it's been championed by the NSA, which first proposed it years ago in a related standardization project at the American National Standards Institute.
The NSA has always been intimately involved in U.S. cryptography standards -- it is, after all, expert in making and breaking secret codes. So the agency's participation in the NIST (the U.S. Commerce Department's National Institute of Standards and Technology) standard is not sinister in itself. It's only when you look under the hood at the NSA's contribution that questions arise.
Problems with Dual_EC_DRBG were first described in early 2006. The math is complicated, but the general point is that the random numbers it produces have a small bias. The problem isn't large enough to make the algorithm unusable -- and Appendix E of the NIST standard describes an optional work-around to avoid the issue -- but it's cause for concern. Cryptographers are a conservative bunch: We don't like to use algorithms that have even a whiff of a problem.
But today there's an even bigger stink brewing around Dual_EC_DRBG. In an informal presentation (.pdf) at the CRYPTO 2007 conference in August, Dan Shumow and Niels Ferguson showed that the algorithm contains a weakness that can only be described as a backdoor.
This is how it works: There are a bunch of constants -- fixed numbers -- in the standard used to define the algorithm's elliptic curve. These constants are listed in Appendix A of the NIST publication, but nowhere is it explained where they came from.
What Shumow and Ferguson showed is that these numbers have a relationship with a second, secret set of numbers that can act as a kind of skeleton key. If you know the secret numbers, you can predict the output of the random-number generator after collecting just 32 bytes of its output. To put that in real terms, you only need to monitor one TLS internet encryption connection in order to crack the security of that protocol. If you know the secret numbers, you can completely break any instantiation of Dual_EC_DRBG.
The researchers don't know what the secret numbers are. But because of the way the algorithm works, the person who produced the constants might know; he had the mathematical opportunity to produce the constants and the secret numbers in tandem.
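In the notation of the Shumow-Ferguson presentation, the relationship looks roughly like this (a simplified sketch that ignores the truncation of each output block):

```latex
% Dual_EC_DRBG state update and output, with fixed curve points P and Q:
\[
  s_{i+1} = x(s_i P), \qquad r_i = x(s_i Q).
\]
% If the constants were chosen so that $P = dQ$ for a secret integer $d$,
% then an attacker who recovers the point $R_i$ with $x(R_i) = r_i$ computes
\[
  x(d\,R_i) = x(d\,s_i Q) = x(s_i P) = s_{i+1},
\]
% i.e. the next internal state -- and with it every future output --
% from a single block of observed output.
\]
```

Without knowledge of $d$, computing it from $P$ and $Q$ is an elliptic-curve discrete-logarithm problem, which is why the backdoor is useless to everyone except whoever generated the constants.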
Of course, we have no way of knowing whether the NSA knows the secret numbers that break Dual_EC_DRBG. We have no way of knowing whether an NSA employee working on his own came up with the constants -- and has the secret numbers. We don't know if someone from NIST, or someone in the ANSI working group, has them. Maybe nobody does.
We don't know where the constants came from in the first place. We only know that whoever came up with them could have the key to this backdoor. And we know there's no way for NIST -- or anyone else -- to prove otherwise.
This is scary stuff indeed.
Even if no one knows the secret numbers, the fact that the backdoor is present makes Dual_EC_DRBG very fragile. If someone were to solve just one instance of the algorithm's elliptic-curve problem, he would effectively have the keys to the kingdom. He could then use it for whatever nefarious purpose he wanted. Or he could publish his result, and render every implementation of the random-number generator completely insecure.
It's possible to implement Dual_EC_DRBG in such a way as to protect it against this backdoor, by generating new constants with another secure random-number generator and then publishing the seed. This method is even in the NIST document, in Appendix A. But the procedure is optional, and my guess is that most implementations of the Dual_EC_DRBG won't bother.
If this story leaves you confused, join the club. I don't understand why the NSA was so insistent about including Dual_EC_DRBG in the standard. It makes no sense as a trap door: It's public, and rather obvious. It makes no sense from an engineering perspective: It's too slow for anyone to willingly use it. And it makes no sense from a backwards-compatibility perspective: Swapping one random-number generator for another is easy.
My recommendation, if you're in need of a random-number generator, is not to use Dual_EC_DRBG under any circumstances. If you have to use something in SP 800-90, use CTR_DRBG or Hash_DRBG.
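To illustrate the hash-based approach, here is a toy sketch. It is emphatically not an SP 800-90 compliant Hash_DRBG -- the real construction adds counters, reseed intervals, nonces, and personalization strings -- but it shows the core idea: the state advances through a one-way hash, so outputs are deterministic given the seed but unpredictable without it.

```python
import hashlib

class ToyHashDRBG:
    """Toy hash-chain generator (illustration only, not SP 800-90)."""
    def __init__(self, seed: bytes):
        self.state = hashlib.sha256(seed).digest()

    def generate(self, n: int) -> bytes:
        out = b""
        while len(out) < n:
            # Domain-separate the output stream from the state update,
            # so seeing outputs doesn't reveal the next state:
            out += hashlib.sha256(b"out" + self.state).digest()
            self.state = hashlib.sha256(b"upd" + self.state).digest()
        return out[:n]

drbg = ToyHashDRBG(b"32 bytes of real OS entropy go here")
block = drbg.generate(32)
```

Note the contrast with Dual_EC_DRBG: here every constant is a plain ASCII tag, and there is no secret trapdoor value anyone could have generated "in tandem" with the design.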
In the meantime, both NIST and the NSA have some explaining to do.
This essay originally appeared on Wired.com.
This kind of thinking can do enormous damage to a free society:
As Congress debates new rules for government eavesdropping, a top intelligence official says it is time that people in the United States change their definition of privacy.
Privacy no longer can mean anonymity, says Donald Kerr, the principal deputy director of national intelligence. Instead, it should mean that government and businesses properly safeguard people's private communications and financial information.
"Our job now is to engage in a productive debate, which focuses on privacy as a component of appropriate levels of security and public safety," Kerr said. "I think all of us have to really take stock of what we already are willing to give up, in terms of anonymity, but [also] what safeguards we want in place to be sure that giving that doesn't empty our bank account or do something equally bad elsewhere."
Anonymity, privacy, and security are intertwined; you can't just separate them out like that. And privacy isn't opposed to security; privacy is part of security. And the value of privacy in a free society is enormous.
Malcolm Gladwell makes a convincing case that criminal profiling is nothing more than a "cold reading" magic trick.
A few years ago, Alison went back to the case of the teacher who was murdered on the roof of her building in the Bronx. He wanted to know why, if the F.B.I.'s approach to criminal profiling was based on such simplistic psychology, it continues to have such a sterling reputation. The answer, he suspected, lay in the way the profiles were written, and, sure enough, when he broke down the rooftop-killer analysis, sentence by sentence, he found that it was so full of unverifiable and contradictory and ambiguous language that it could support virtually any interpretation.
Astrologers and psychics have known these tricks for years. The magician Ian Rowland, in his classic "The Full Facts Book of Cold Reading," itemizes them one by one, in what could easily serve as a manual for the beginner profiler. First is the Rainbow Ruse -- the "statement which credits the client with both a personality trait and its opposite." ("I would say that on the whole you can be rather a quiet, self effacing type, but when the circumstances are right, you can be quite the life and soul of the party if the mood strikes you.") The Jacques Statement, named for the character in "As You Like It" who gives the Seven Ages of Man speech, tailors the prediction to the age of the subject. To someone in his late thirties or early forties, for example, the psychic says, "If you are honest about it, you often get to wondering what happened to all those dreams you had when you were younger." There is the Barnum Statement, the assertion so general that anyone would agree, and the Fuzzy Fact, the seemingly factual statement couched in a way that "leaves plenty of scope to be developed into something more specific." ("I can see a connection with Europe, possibly Britain, or it could be the warmer, Mediterranean part?") And that's only the start: there is the Greener Grass technique, the Diverted Question, the Russian Doll, Sugar Lumps, not to mention Forking and the Good Chance Guess -- all of which, when put together in skillful combination, can convince even the most skeptical observer that he or she is in the presence of real insight.
They had been at it for almost six hours. The best minds in the F.B.I. had given the Wichita detectives a blueprint for their investigation. Look for an American male with a possible connection to the military. His I.Q. will be above 105. He will like to masturbate, and will be aloof and selfish in bed. He will drive a decent car. He will be a "now" person. He won't be comfortable with women. But he may have women friends. He will be a lone wolf. But he will be able to function in social settings. He won't be unmemorable. But he will be unknowable. He will be either never married, divorced, or married, and if he was or is married his wife will be younger or older. He may or may not live in a rental, and might be lower class, upper lower class, lower middle class or middle class. And he will be crazy like a fox, as opposed to being mental. If you're keeping score, that's a Jacques Statement, two Barnum Statements, four Rainbow Ruses, a Good Chance Guess, two predictions that aren't really predictions because they could never be verified -- and nothing even close to the salient fact that BTK was a pillar of his community, the president of his church and the married father of two.
Stoddart told inquiry Commissioner John Major she is concerned that people could be placed on the list in error and face dire consequences if their identities are then disclosed to the RCMP or passed on to police agencies in other countries.
And she questioned why, if people are so dangerous that they can't get on a plane, they are deemed safe to travel by other means in Canada.
Between June 28, when the program came into effect, and the end of September, no passengers were turned away because of the list, the inquiry heard. Stoddart said that information only increases her suspicion about the value of the program.
"I think it only deepens the mystery of the rationale, the usefulness of this," Stoddart said. "The program is totally opaque."
Major suggested that perhaps less extreme measures could be taken. For example, individuals on the list might be able to undergo extra screening so they could be allowed to travel.
"We are looking to avoid what happened in 9/11. Presumably it's to keep dangerous people capable of blowing planes up -- or capturing them -- off the plane. It seems difficult that you can do that by a name (on a list) alone."
Other members of Stoddart's staff added there are concerns someone could be stranded in Canada after arriving without incident, only to be prevented from boarding their return flight.
Okay, so it was a stupid (and dangerous) stunt:
A 17-year-old Hopewell High student was apparently acting on a dare when he did a fly-over prank at a Hopewell High football game Friday, at one point dipping below the stadium lights.
Charlotte-Mecklenburg Schools officials said Sunday that the teen pilot and two teen passengers flew the length of the field three times around 8 p.m. The plane reportedly came within feet of a flag pole.
On the final pass, a pair of tennis shoes and a football dropped from the single-engine Cessna 172 into the end zone, officials said.
But this is just funny:
"My immediate reaction was that we were going to have a terrorist act of some sort," said Vincent "Bud" Cesena, head of CMS law enforcement, who was among the 4,000 people in the stands.
Yeah, because the terrorists are going to target high-school football games.
Interesting and thoughtful article about suicide attacks in the online video game Halo 3:
Whenever I find myself under attack by a wildly superior player, I stop trying to duck and avoid their fire. Instead, I turn around and run straight at them. I know that by doing so, I'm only making it easier for them to shoot me -- and thus I'm marching straight into the jaws of death. Indeed, I can usually see my health meter rapidly shrinking to zero.
But at the last second, before I die, I'll whip out a sticky plasma grenade -- and throw it at them. Because I've run up so close, I almost always hit my opponent successfully. I'll die -- but he'll die too, a few seconds later when the grenade goes off. (When you pull off the trick, the game pops up a little dialog box noting that you killed someone "from beyond the grave.")
It was after pulling this maneuver a couple of dozen times that it suddenly hit me: I had, quite unconsciously, adopted the tactics of a suicide bomber -- or a kamikaze pilot.
It's not just that I'm willing to sacrifice my life to kill someone else. It's that I'm exploiting the psychology of asymmetrical warfare.
Because after all, the really elite Halo players don't want to die. If they die too often, they won't win the round, and if they don't win the round, they won't advance up the Xbox Live rankings. And for the elite players, it's all about bragging rights.
I, however, have a completely different psychology. I know I'm the underdog; I know I'm probably going to get killed anyway. I am never going to advance up the Halo 3 rankings, because in the political economy of Halo, I'm poor.
The biggest problems in discussing cyberwar are the definitions. The things most often described as cyberwar are really cyberterrorism, and the things most often described as cyberterrorism are more like cybercrime, cybervandalism or cyberhooliganism--or maybe cyberespionage.
At first glance there's nothing new about these terms except the "cyber" prefix. War, terrorism, crime and vandalism are old concepts. What's new is the domain; it's the same old stuff occurring in a new arena. But because cyberspace is different, there are differences worth considering.
Of course, the terms overlap. Although the goals are different, many tactics used by armies, terrorists and criminals are the same. Just as they use guns and bombs, they can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime or even--if done by some 14-year-old who doesn't really understand what he's doing--cyberhooliganism. Which it is depends on the attacker's motivations and the surrounding circumstances--just as in the real world.
For it to be cyberwar, it must first be war. In the 21st century, war will inevitably include cyberwar. Just as war moved into the air with the development of kites, balloons and aircraft, and into space with satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics and defenses.
I have no doubt that smarter and better-funded militaries are planning for cyberwar. They have Internet attack tools: denial-of-service tools; exploits that would allow military intelligence to penetrate military systems; viruses and worms similar to what we see now, but perhaps country- or network-specific; and Trojans that eavesdrop on networks, disrupt operations, or allow an attacker to penetrate other networks. I believe militaries know of vulnerabilities in operating systems, generic or custom military applications, and code to exploit those vulnerabilities. It would be irresponsible for them not to.
The most obvious attack is the disabling of large parts of the Internet, although in the absence of global war, I doubt a military would do so; the Internet is too useful an asset and too large a part of the world economy. More interesting is whether militaries would disable national pieces of it. For a surgical approach, we can imagine a cyberattack against a military headquarters, or networks handling logistical information.
Destruction is the last thing a military wants to accomplish with a communications network. A military only wants to shut down an enemy's network if it isn't acquiring useful information. The best thing is to infiltrate enemy computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, perform traffic analysis: analyze the characteristics of communications. Only if a military can't do any of this would it consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh the advantages of eavesdropping on it.
Cyberwar is certainly not a myth. But you haven't seen it yet, despite the attacks on Estonia. Cyberwar is warfare in cyberspace. And warfare involves massive death and destruction. When you see it, you'll know it.
This is the second half of a point/counterpoint with Marcus Ranum; it appeared in the November issue of Information Security Magazine. Marcus's half is here.
I wrote a longer essay on cyberwar here.
At least that's what they said two weeks ago:
On Sunday, Nov. 11, al Qaeda's electronic experts will start attacking Western, Jewish, Israeli, Muslim apostate and Shiite Web sites. On Day One, they will test their skills against 15 targeted sites and expand the operation from day to day thereafter until hundreds of thousands of Islamist hackers are in action against untold numbers of anti-Muslim sites.
I think this is nonsense. We'll see who's right next week.
This is a very moving story about a foreign tourist being removed from a train for taking pictures:
The train is a half hour west of New Haven when the conductor, having finished her original rounds, reappears. She moves down the aisle, looks, stops between our seats, faces the person taking pictures. "Sir, in the interest of national security, we do not allow pictures to be taken of or from this train." He starts, "I……." but, without English, his response trails off into silence. The conductor, speaking louder, forcefully: "Sir, I will confiscate that camera if you don’t put it away." Again, little response. "Sir, this is a security matter! We cannot allow pictures." She turns away abruptly and, as she moves down the aisle, calls over her shoulder, in a very loud voice, "Put. It. Away!" He packs his camera.
Within a minute after our arrival in New Haven, two armed police officers entered the car, approached my neighbor’s seat. "Sir, we're removing you from this train." "I….;" "I……" "Sir, you have breached security regulations. We must remove you from this train." "I…," "I….." "Sir, we are not going to delay this train because of you. You will get off, or we will remove you physically." "I….."
Nearby passengers stir. One says, "It’s obvious he doesn’t speak English. There are people here who speak more than one language. Perhaps we can help." Different ones ask about the traveler’s language; learn he speaks Japanese. For me, a sudden flash of memory -- a student at International Christian University in Japan, I took countless pictures without arousing suspicion.
The police speak through the interpreter, with the impatience of authority. "The conductor asked this man three times to discontinue. We must remove him from the train." The traveler hears the translation, is befuddled. Hidden beneath the commotion is a cross-cultural drama. With the appearance of police officers, this quiet visitor is embarrassed to find he is the center of attention. The officers explain, "After we remove him from the train, when we are through our investigation, we will put him on the next train." The woman translates. The passenger replies, "I’m meeting relatives in Boston. They cannot be reached by phone. They expect me and will be worried when I do not arrive on schedule." "Our task," the police repeat, "is to remove you from this train. If necessary, we will do so by force. After we have finished the investigation, we’ll put you on another train." The woman translates. The traveler gathers his belongings and departs.
My earlier suggestion that you imagine being in his place leaves you free to respond and draw your conclusions. Remember: you’ve been removed from the train, are being interrogated, perhaps having your equipment confiscated; while I continue to do what I take for granted traveling unimpeded, on to Providence.
The more I replay the scene, the more troublesome it is. It is the stuff of nightmares. Relations between people and countries lie at the heart of the issue. The abstract terms that inform political and social debate appear, as if in person, unexpectedly, near enough to hear, touch, feel. Taking no position is not an option. As an educator, I would prepare and deliver a lecture on how others perceive America in the world community, then seek an audience. I'll spare you. But -- I just watched armed police officers remove a visitor from the train for taking pictures. I don't understand this. I'm disturbed -- no, shaken -- to bear witness to these events.
EDITED TO ADD (11/13): A response from the writer of the original article, after people questioned the veracity of the story.
Salesforce.com has finally acknowledged what security experts have suspected for weeks: a Salesforce.com employee had his company credentials stolen in a phishing scam, and criminals have been using names and e-mail addresses from Salesforce's customer list to conduct other highly targeted phishing attacks, including the recent round of fake e-mails apparently from the Federal Trade Commission. In such highly targeted attacks, the AV companies are at a loss -- they have little chance of quickly developing signatures for threats that only reach a few thousand victims.
Interesting study: "Identity Fraud Trends and Patterns: Building a Data-Based Foundation for Proactive Enforcement," October 2007. It's long, but at least read the executive summary. Or, even shorter, this Associated Press story:
Researchers reviewed 517 cases closed by the Secret Service between 2000 and 2006. Two-thirds of the cases were concentrated in the Northeast and South, and there were 933 defendants. The Federal Trade Commission has said about 3 million Americans have their identities stolen annually.
The study found that 42.5 percent of offenders were between the ages of 25 and 34. Another 18 percent were between the ages of 18 and 24. Two-thirds of the identity thieves were male.
Nearly a quarter of the offenders were born outside the United States.
Eighty percent of the cases involved an offender working solo or with a single partner, the report found.
While identity thieves used a wide combination of methods, fewer than 20 percent of the crimes involved the Internet. The most frequently used non-technological method was the rerouting of mail through change of address cards. Other prevalent non-technological methods were mail theft and dumpster diving.
Of the 933 offenders, 609 said they initiated their crime by stealing fragments of personal identifying information, as opposed to stealing entire documents, such as bank cards or driver's licenses.
Most of the offenses were committed by non-employees who victimized strangers. Employee insiders were the offenders in just one-third of the 517 cases. When an employee did commit identity theft, the offenders were employed in a retail business in two out of every five instances, the report said. Stores, gas stations, car dealerships, casinos, restaurants, hotels, doctors and hospitals were all considered retail operations in the study.
In about a fifth of the cases, the employee worked in the financial services industry.
This was accidental, but it could certainly be done on purpose:
Some cars failed to start on Tuesday in Parrock Street car park, in Gravesend, Kent, while others would not unlock.
A spokesman said "weeks of sleuthing" by council officers had them looking for a rogue transmitter or wireless broadband unit in nearby offices.
Staff also checked all transmissions in and around the car park, because of nearby communications at the town's Civic Centre and police station.
Ofcom was finally called and a survey found a small family car was intermittently sending out signals blocking other fobs in a 164ft (50 m) radius.
Mad at someone? Turn him in as a terrorist:
A man in Sweden who was angry with his daughter's husband has been charged with libel for telling the FBI that the son-in-law had links to al-Qaeda, Swedish media reported on Friday.
The man, who admitted sending the email, said he did not think the US authorities would be stupid enough to believe him.
The 40-year-old son-in-law and his wife were in the process of divorcing when the husband had to travel to the United States for business.
The wife didn't want him to travel since she was sick and wanted him to help care for their children, regional daily Sydsvenska Dagbladet said without disclosing the couple's names.
When the husband refused to stay home, his father-in-law wrote an email to the FBI saying the son-in-law had links to al-Qaeda in Sweden and that he was travelling to the US to meet his contacts.
He provided information on the flight number and date of arrival in the US.
The son-in-law was arrested upon landing in Florida. He was placed in handcuffs, interrogated and placed in a cell for 11 hours before being put on a flight back to Europe, the paper said.
EDITED TO ADD (11/6): Businesses do this too:
In May 2005 Jet's application for a licence to fly to America was held up after a firm based in Maryland, also called Jet Airways, accused Mr Goyal's company of being a money-laundering outfit for al-Qaeda. Mr Goyal says some of his local competitors were behind the claim, which was later withdrawn.
Interesting GAO testimony/report: "Internet Infrastructure: Challenges in Developing a Public/Private Recovery Plan," Gregory C. Wilshusen, Director, Information Security Issues, Government Accountability Office (GAO), October 23, 2007.
Synthetic identity theft is poised to become a bigger problem than regular identity theft:
Unlike traditional identity thieves, who purloin people's information to get loans or make purchases, fraudsters like Mr. Rose mix legitimate and phony data to create synthetic identities. This kind of fraud doesn't usually directly affect consumers. The big losers are banks, which get stuck with loan defaults and unpaid credit-card bills that identity thieves leave behind.
Actually, real people do get harmed:
The men paired fake names with Social Security numbers of real people. Adam Gregory, the purported Las Vegas resident, had the Social Security number of a real California resident.
The conspirators needed addresses for their synthetic identities and for a dozen or so shell companies that helped to facilitate the scam. Eventually they rented 200-odd apartments in 14 states. They kept binders of data in their Phoenix headquarters to keep the details straight.
The duo acquired business licenses, usually online, for the dummy businesses. A few had real offices with furniture; others rented "virtual" office space. After Messrs. Rose and Newton triggered the credit bureaus to set up no-hit files for their synthetic identities, their shell companies fed false data to credit bureaus.
Okay, this is clever.
Basically, someone arrested as a homicide suspect walked out of jail after identifying himself as someone else. The biometric system worked, but human error overrode it:
But Sauceda's fingerprints, taken by a jail employee to verify his identity, were smudged and couldn't be matched to those on file for Garcia, said Brian Menges, director of jail administration.
So Sauceda was taken for an additional fingerprint check using the jail's Live Scan technology. Menges said Sauceda's Live Scan fingerprints were never compared to those on record for Garcia.
It's a neat scam: find someone else who's been arrested, have a friend come and post bail for that person, and then claim his identity when the jailers come into the cellblock.
Joe Bennett in New Zealand:
An officer frisks me with hands like questing butterflies. Up my legs they flutter, then over my buttocks, my back, my chest and along my arms, but not, I notice, over my crotch. So there's the answer. When my anger at being pointlessly searched in airports finally reaches such incandescence that I feel compelled to act, I'll tape a bomblet behind my scrotum with the detonator clenched between my cheeks. It will kill no one except myself and I won't make a pretty corpse, but I will make damn sure I take out a particular notice. You know the one I mean. It's the only notice in human history to forbid, on pain of imprisonment, the making of jokes. I am not allowed to crack a joke about bombs.
Jokes are essential to mental well-being. But all authorities hate them because jokes pierce to the truth. Jokes see through bogus seriousness and say, "oh come off it". The instinct to make jokes is a natural reaction to overweening authority.
The authorities have an obvious response. Airport security, they will say, is no laughing matter. Do I want planes to be blown up?
Spammers have created a Windows game which shows a woman in a state of undress when people correctly type in text shown in an accompanying image.
The scrambled text images come from sites which use them to stop computers automatically signing up for accounts that can be put to illegal use.
By getting people to type in the text the spammers can take over the accounts and use them to send junk mail.
I've been saying that spammers would start doing this for years. I'm actually surprised it took this long.
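The trick described above is a classic human-in-the-loop relay: the spammer fetches a CAPTCHA from the site he wants to abuse, shows it to an unwitting human who solves it for a reward, and submits the answer back. Here is a minimal, self-contained simulation of that pattern; the `SignupSite`, `human_player`, and `relay_signup` names are all illustrative stand-ins, and no real site or network traffic is involved.

```python
# Toy simulation of a human-in-the-loop CAPTCHA relay.
# All classes and functions here are hypothetical stand-ins.

import random
import string


class SignupSite:
    """Stands in for a site that gates account creation with a CAPTCHA."""

    def __init__(self):
        self._answer = None

    def get_captcha(self):
        # Issue a fresh challenge. A real site would serve a distorted
        # image; here the challenge is just the plain text itself.
        self._answer = "".join(random.choices(string.ascii_lowercase, k=6))
        return self._answer

    def create_account(self, solution):
        # The site accepts the signup only if the CAPTCHA was solved.
        return solution == self._answer


def human_player(challenge):
    """Stands in for the person playing the game: they read the
    challenge and type it back, expecting a reward."""
    return challenge


def relay_signup(site, solver):
    # The relay: fetch the target site's challenge, hand it to an
    # unwitting human, and submit their answer back to the site.
    challenge = site.get_captcha()
    return site.create_account(solver(challenge))


if __name__ == "__main__":
    site = SignupSite()
    print(relay_signup(site, human_player))  # True: the human solved it
```

The point of the sketch is that the CAPTCHA does its job -- a human really does solve it -- but the wrong human: the puzzle's difficulty is irrelevant once solving it can be outsourced.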
This is really interesting:
(In)Security explores a new design vocabulary in direct response to the climate of fear and paranoia that currently drives the program and aesthetic of much contemporary urban design. The project addresses the current and future state of security in and around the Wall Street financial district, creating viable security alternatives while simultaneously questioning our nation's current philosophy that security = freedom.
Full paper here.
We've opened up a new front on the war on terror. It's an attack on the unique, the unorthodox, the unexpected; it's a war on different. If you act different, you might find yourself investigated, questioned, and even arrested -- even if you did nothing wrong, and had no intention of doing anything wrong. The problem is a combination of citizen informants and a CYA attitude among police that results in a knee-jerk escalation of reported threats.
This isn't the way counterterrorism is supposed to work, but it's happening everywhere. It's a result of our relentless campaign to convince ordinary citizens that they're the front line of terrorism defense. "If you see something, say something" is how the ads read in the New York City subways. "If you suspect something, report it" urges another ad campaign in Manchester, UK. The Michigan State Police have a seven-minute video. Administration officials from then-attorney general John Ashcroft to DHS Secretary Michael Chertoff to President Bush have asked us all to report any suspicious activity.
The problem is that ordinary citizens don't know what a real terrorist threat looks like. They can't tell the difference between a bomb and a tape dispenser, electronic name badge, CD player, bat detector, or trash sculpture; or the difference between terrorist plotters and imams, musicians, or architects. All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.
Even worse: after someone reports a "terrorist threat," the whole system is biased towards escalation and CYA instead of a more realistic threat assessment.
Watch how it happens. Someone sees something, so he says something. The person he says it to -- a policeman, a security guard, a flight attendant -- now faces a choice: ignore or escalate. Even though he may believe that it's a false alarm, it's not in his best interests to dismiss the threat. If he's wrong, it'll cost him his career. But if he escalates, he'll be praised for "doing his job" and the cost will be borne by others. So he escalates. And the person he escalates to also escalates, in a series of CYA decisions. And before we're done, innocent people have been arrested, airports have been evacuated, and hundreds of police hours have been wasted.
This story has been repeated endlessly, both in the U.S. and in other countries. Someone -- these are all real -- notices a funny smell, or some white powder, or two people passing an envelope, or a dark-skinned man leaving boxes at the curb, or a cell phone in an airplane seat; the police cordon off the area, make arrests, and/or evacuate airplanes; and in the end the cause of the alarm is revealed as a pot of Thai chili sauce, or flour, or a utility bill, or an English professor recycling, or a cell phone in an airplane seat.
Of course, by then it's too late for the authorities to admit that they made a mistake and overreacted, that a sane voice of reason at some level should have prevailed. What follows is the parade of police and elected officials praising each other for doing a great job, and prosecuting the poor victim -- the person who was different in the first place -- for having the temerity to try to trick them.
For some reason, governments are encouraging this kind of behavior. It's not just the publicity campaigns asking people to come forward and snitch on their neighbors; they're asking certain professions to pay particular attention: truckers to watch the highways, students to watch campuses, and scuba instructors to watch their students. The U.S. wanted meter readers and telephone repairmen to snoop around houses. There's even a new law protecting people who turn in their travel mates based on some undefined "objectively reasonable suspicion," whatever that is.
If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security.
We need to do two things. The first is to stop urging people to report their fears. People have always come forward to tell the police when they see something genuinely suspicious, and should continue to do so. But encouraging people to raise an alarm every time they're spooked only squanders our security resources and makes no one safer.
We don't want people to never report anything. A store clerk's tip led to the unraveling of a plot to attack Fort Dix last May, and in March an alert Southern California woman foiled a kidnapping by calling the police about a suspicious man carting around a person-sized crate. But these incidents only reinforce the need to realistically assess, not automatically escalate, citizen tips. In criminal matters, law enforcement is experienced in separating legitimate tips from unsubstantiated fears, and allocating resources accordingly; we should expect no less from them when it comes to terrorism.
Equally important, politicians need to stop praising and promoting the officers who get it wrong. And everyone needs to stop castigating, and prosecuting, the victims just because they embarrassed the police by their innocence.
Causing a city-wide panic over blinking signs, a guy with a pellet gun, or stray backpacks, is not evidence of doing a good job: it's evidence of squandering police resources. Even worse, it causes its own form of terror, and encourages people to be even more alarmist in the future. We need to spend our resources on things that actually make us safer, not on chasing down and trumpeting every paranoid threat anyone can come up with.
This essay originally appeared on Wired.com.
EDITED TO ADD (11/1): Some links didn't make it into the original article. There's this creepy "if you see a father holding his child's hands, call the cops" campaign, this story of an iPod found on an airplane, and this story of an "improvised electronics device" trying to get through airport security. This is a good essay on the "war on electronics."
Unlike police, firefighters and emergency medical personnel don't need warrants to access hundreds of thousands of homes and buildings each year, putting them in a position to spot behavior that could indicate terrorist activity or planning.