Schneier on Security
A blog covering security and security technology.
April 2007 Archives
In East Belfast, burglars called in a bomb threat. Residents evacuated their homes, and then the burglars proceeded to rob eight empty houses on the block.
I've written about this sort of thing before: sometimes security procedures themselves can be exploited by attackers. It was Step 4 of my "five-step process" from Beyond Fear (pages 14-15). A national ID card makes identity theft more lucrative; forcing people to remove their laptops at airport security checkpoints makes laptop theft more common.
Moral: you can't just focus on one threat. You need to look at the broad spectrum of threats, and pay attention to how security against one affects the others.
If you want your security technology to be considered for the London Olympics, you have to be a major sponsor of the event.
...he casually revealed that because neither of these companies was a ‘major sponsor’ of the Olympics their technology could not be used.
I have repeatedly said that security is generally only part of a larger context, but this borders on ridiculous.
The MP3 of my March 21 talk at the British Computer Society -- on information security trends and economic considerations -- is on the Internet.
EDITED TO ADD (4/30): Ogg file here.
"Get Fuzzy" is one of my favorite comic strips. Tuesday's was about security.
Excellent op ed, by someone who actually knows about this stuff:
How is this odd terrorist puppy dog behavior supposed to work? The President must believe that terrorists are playing by some odd rules of chivalry. Would this be the "only one slaughter ground at a time" rule of terrorism?
This is right:
As Dan Geer has been saying for years, Microsoft has a bit of a problem. Either it stonewalls and pretends there is no security problem, which is what Vista does by taking over your computer to force patches (and DRM) down its throat, or it actually changes the basic design and produces a secure operating system, which risks people wondering why they're sticking with Windows and Microsoft at all. It turns out the former course may also produce the latter result: If you fit Microsoft's somewhat convoluted definition of poor, it still wants to lock you in; you might get rich enough to afford the full-priced stuff someday. It is at a dangerous crossroads: if its software bumps up the price of a computer by 100 per cent, people might look to alternatives.
I regularly read articles about terrorists using cell phones to trigger bombs. The Thai government seems to be particularly worried about this; two years ago I blogged about a particularly bizarre movie-plot threat along these lines. And last year I blogged about the cell phone network being restricted after the Mumbai terrorist bombings.
Efforts to restrict cell phone usage because of this threat are ridiculous. It's a perfect example of a "movie-plot threat": by focusing on the specifics of a particular tactic rather than the broad threat, we simply force the bad guys to modify their tactics. Lots of money spent: no security gained.
And that's exactly what happened in Thailand:
Authorities said yesterday that police are looking for 40 Daihatsu keyless remote entry devices, some of which they believe were used to set off recent explosions in the deep South.
On the subject of people noticing and reporting suspicious actions, I have been espousing two views that some find contradictory. One, we are all safer if police, guards, security screeners, and the like ignore traditional profiling and instead pay attention to people acting hinky: not right. And two, if we encourage people to contact the authorities every time they see something suspicious, we're going to waste our time chasing false alarms: foreigners whose customs are different, people who are disliked by someone, and so on.
The key difference is expertise. People trained to be alert for something hinky will do much better than any profiler, but people who have no idea what to look for will do no better than random.
Here's a story that illustrates this: Last week, a student at the Rochester Institute of Technology was arrested with two illegal assault weapons and 320 rounds of ammunition in his dorm room and car:
The discovery of the weapons was made only by chance. A conference center worker who served in the military was walking past Hackenburg's dorm room. The door was shut, but the worker heard the all-too-familiar racking sound of a weapon, said the center's director Bill Gunther.
Notice how expertise made the difference. The "conference center worker" had the right knowledge to recognize the sound and to understand that it was out of place in the environment in which he heard it. He wasn't primed to be on the lookout for suspicious people and things; his trained awareness kicked in automatically. He recognized hinky, and he acted on that recognition. A random person simply can't do that; he won't recognize hinky when he sees it. He'll report imams for praying, a neighbor he's pissed at, or people at random. He'll see an English professor recycling paper, and report a Middle-Eastern-looking man leaving a box on the sidewalk.
We all have some experience with this. Each of us has some expertise in some topic, and will occasionally recognize that something is wrong even though we can't fully explain what or why. An architect might feel that way about a particular structure; an artist might feel that way about a particular painting. I might look at a cryptographic system and intuitively know something is wrong with it, well before I figure out exactly what. Those are all examples of a subliminal recognition that something is hinky -- in our particular domain of expertise.
Good security people have the knowledge, skill, and experience to do that in security situations. It's the difference between a good security person and an amateur.
This is why behavioral assessment profiling is a good idea, while the Terrorist Information and Prevention System (TIPS) isn't. This is why training truckers to look out for suspicious things on the highways is a good idea, while a vague list of things to watch out for isn't. It's why this Israeli driver recognized a passenger as a suicide bomber, while an American driver probably wouldn't.
This kind of thing isn't easy to train. (Much has been written about it, though; Malcolm Gladwell's Blink discusses this in detail.) You can't learn it from watching a seven-minute video. But the more we focus on this -- the more we stop wasting our airport security resources on screeners who confiscate rocks and snow globes, and instead focus them on well-trained screeners walking through the airport looking for hinky -- the more secure we will be.
EDITED TO ADD (4/26): Jim Harper makes an important clarification.
This is just awful:
Because of my recycling, the bomb squad came, then the state police. Because of my recycling, buildings were evacuated, classes were canceled, the campus was closed. No. Not because of my recycling. Because of my dark body. No. Not even that. Because of his fear. Because of the way he saw me. Because of the culture of fear, mistrust, hatred and suspicion that is carefully cultivated in the media, by the government, by people who claim to want to keep us "safe."
People use policemen as props in their personal disputes:
Noon, it's 59 degrees and I get a call from a guy whose neighbor's dog has been left in a car. I get there, the windows are cracked, and the dog has only been in there 20 minutes. It's 59 degrees! It's not summer and if it were the dead of winter I'd say the car is a $20,000 dog house. But it turns out this guy has a running dispute with his neighbor so guess who he calls to irritate the guy a little more? Me. When I go to leave, the asshole that called this in yells, "hey, aren't you gonna do anything?" I explain why I am not and he says "great, I'm writing a letter to the paper." Holy shit. Now I'm the bad guy because I didn't embarrass your target enough for you? Grow the hell up.
When the police implement programs to let ordinary citizens report suspected terrorists, this is the kind of thing that will result.
I wish I could make a joke about security theater at the theater, but this is just basic stupidity:
Dean of Student Affairs Betty Trachtenberg has limited the use of stage weapons in theatrical productions.
Not only does this not make anyone safer, it doesn't even make anyone feel safer.
EDITED TO ADD (4/25): The order has been rescinded, without any demonstration of common sense:
"I think people should start thinking about other people rather than trying to feel sorry for themselves and thinking that the administration is trying to thwart their creativity," Trachtenberg said. "They're not using their own intelligence. … We have to think of the people who might be affected by seeing real-life weapons."
According to the Internet Crime Complaint Center and reported in U.S. News and World Report, auction fraud and non-delivery of items purchased are far and away the most common Internet crimes. Identity theft is way down near the bottom.
Although the number of complaints last year (207,492) fell by 10 percent, the overall losses hit a record $198 million. By far the most reported crime: Internet auction fraud, garnering 45 percent of all complaints. Also big was nondelivery of merchandise or payment, which notched second at 19 percent. The biggest money losers: those omnipresent Nigerian scam letters, which fleeced victims out of an average of $5,100, followed by check fraud at $3,744 and investment fraud at $2,694.
Watch the video of how the Australian authorities react when someone -- dressed either as an American or Arab tourist -- films the Sydney Harbor Bridge and a nuclear reactor.
The synopsis: The Arab is intercepted within three minutes both times, while the U.S. tourist is given instructions on how to get inside the nuclear facility.
Moral for terrorists: dress like an American.
By the way, Lucas Heights is a research reactor. It produces medical isotopes and performs research, and doesn't produce power.
This is clever:
Many USA ecommerce shops don’t send their goods to Russia or to the countries of the Ex-USSR.
The technology, which measures the time for which keys are held down, as well as the length between strokes, takes advantage of the fact that most computer users evolve a method of typing which is both consistent and idiosyncratic, especially for words used frequently such as a user name and password.
I wouldn't want to automatically block users unless they get this right, and the false-positive/false-negative ratio would have to be jiggered properly, but if they can get it working right, it's an extra layer of authentication for "free."
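Mechanically, a scheme like this can be sketched in a few lines. Everything below (the profile format, the averaging, the tolerance threshold) is my own illustrative choice, not the vendor's actual algorithm:

```python
# Illustrative keystroke-dynamics matcher. A "sample" is a list of
# per-keystroke timings in milliseconds: (hold time, gap to next key).

def enroll(samples):
    """Average several typing samples into a reference profile."""
    n = len(samples)
    profile = []
    for i in range(len(samples[0])):
        hold = sum(s[i][0] for s in samples) / n
        gap = sum(s[i][1] for s in samples) / n
        profile.append((hold, gap))
    return profile

def matches(profile, sample, tolerance_ms=30.0):
    """True if the sample's timings stay close to the profile on average."""
    diffs = []
    for (ph, pg), (sh, sg) in zip(profile, sample):
        diffs.append(abs(ph - sh))
        diffs.append(abs(pg - sg))
    return sum(diffs) / len(diffs) <= tolerance_ms
```

The `tolerance_ms` knob is exactly the false-positive/false-negative dial: loosen it and impostors slip through; tighten it and legitimate users get locked out, which is why it works better as an extra signal than as a hard gate.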
It contains high levels of arsenic.
Pity, I kind of like squid jerky.
This fascinating tidbit is from Aviation Week and Space Technology (April 9, 2007, p. 21), in David Bond's "Washington Outlook" column (unfortunately, not online).
Need to Know
I'd never heard of an LRBL before, but the FAA has publicly proposed guidelines on them. Apparently flight crews are trained to stash suspicious objects there.
But liability seems to be getting in the way of security and common sense here. It seems reasonable that an airline's engineering director should be allowed to understand the technical reasoning behind the choice of LRBL, and maybe even give the manufacturer feedback on it.
EDITED TO ADD (4/21): Comment (below) from a pilot: The designation of a "least risk bomb location" is nothing new. All planes have a designated area where potentially dangerous packages should be placed. Usually it's in the back, adjacent to a door. There are a slew of procedures to be followed if an explosive device is found on board: depressurizing the plane, moving the item to the LRBL, and bracing/smothering it with luggage and other dense materials so that the force of the blast is directed outward, through the door.
This is a fantastic story of a major prank pulled off at the Super Bowl this year. Basically, five people smuggled more than a quarter of a ton of material into Dolphin Stadium in order to display their secret message on TV. A summary:
Just days after the Boston bomb scare, another team of Boston-based pranksters smuggled and distributed 2,350 suspicious light-up devices into the Super Bowl. Due to its attractiveness as a terrorist target, Dolphin Stadium was on a Level One security alert, a level usually reserved for Presidential inaugurations. By posing as media reporters, the pranksters were able to navigate 95 boxes through federal marshals, Homeland Security agents, bomb squads, police dogs, and a five-ton X-ray crane.
Given all the security, it's amazing how easy it was for them to become part of the security perimeter with all that random stuff. But to those of us who follow this sort of thing, it shouldn't be. His observations are spot on:
1. Wear a suit.
Someone who crashed the Oscars last year gave similar advice:
Show up at the theater, dressed as a chef carrying a live lobster, looking really concerned.
On a much smaller scale, here's someone's story of social engineering a bank branch:
I enter the first branch at approximately 9:00AM. Dressed in Dickies coveralls, a baseball cap, work boots and sunglasses I approach the young lady at the front desk.
Social engineering is surprisingly easy. As I said in Beyond Fear (page 144):
Social engineering will probably always work, because so many people are by nature helpful and so many corporate employees are naturally cheerful and accommodating. Attacks are rare, and most people asking for information or help are legitimate. By appealing to the victim’s natural tendencies, the attacker will usually be able to cozen what she wants.
All it takes is a good cover story.
EDITED TO ADD (4/20): The first commenter suggested that the Zug story is a hoax. I think he makes a good argument, and I have no evidence to refute it. Does anyone know for sure?
From the Michigan State Police. The seven signs, according to the video:
I especially like the scenes of concerned citizens calling the police. Anyone care to guess what the false alarm rate would be if everyone started making phone calls like this?
More than a year ago, I wrote about the increasing risks of data loss because more and more data fits in smaller and smaller packages. Today I use a 4-GB USB memory stick for backup while I am traveling. I like the convenience, but if I lose the tiny thing I risk all my data.
Encryption is the obvious solution for this problem -- I use PGPdisk -- but Secustick sounds even better: It automatically erases itself after a set number of bad password attempts. The company makes a bunch of other impressive claims: The product was commissioned, and eventually approved, by the French intelligence service; it is used by many militaries and banks; its technology is revolutionary.
Unfortunately, the only impressive aspect of Secustick is its hubris, which was revealed when Tweakers.net completely broke its security. There's no data self-destruct feature. The password protection can easily be bypassed. The data isn't even encrypted. As a secure storage device, Secustick is pretty useless.
On the surface, this is just another snake-oil security story. But there's a deeper question: Why are there so many bad security products out there? It's not just that designing good security is hard -- although it is -- and it's not just that anyone can design a security product that he himself cannot break. Why do mediocre security products beat the good ones in the marketplace?
In 1970, American economist George Akerlof wrote a paper called "The Market for 'Lemons'" (abstract and article for pay here), which established asymmetrical information theory. He eventually won a Nobel Prize for his work, which looks at markets where the seller knows a lot more about the product than the buyer.
Akerlof illustrated his ideas with a used car market. A used car market includes both good cars and lousy ones (lemons). The seller knows which is which, but the buyer can't tell the difference -- at least until he's made his purchase. I'll spare you the math, but what ends up happening is that the buyer bases his purchase price on the value of a used car of average quality.
This means that the best cars don't get sold; their prices are too high. Which means that the owners of these best cars don't put their cars on the market. And then this starts spiraling. The removal of the good cars from the market reduces the average price buyers are willing to pay, and then the very good cars no longer sell, and disappear from the market. And then the good cars, and so on until only the lemons are left.
In a market where the seller has more information about the product than the buyer, bad products can drive the good ones out of the market.
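The spiral is easy to see in a toy simulation (entirely my own simplification, not Akerlof's actual model): each round, buyers offer the average quality of what's still for sale, every seller whose car is worth more than the offer withdraws, and the loop bottoms out at the lemons.

```python
# Toy version of Akerlof's lemons spiral.

def lemons_market(values):
    """values: each seller's true car value. Returns what's left for sale."""
    for_sale = sorted(values)
    while for_sale:
        offer = sum(for_sale) / len(for_sale)          # buyers pay average quality
        staying = [v for v in for_sale if v <= offer]  # better cars withdraw
        if len(staying) == len(for_sale):
            break
        for_sale = staying
    return for_sale
```

Run on five cars worth $1,000 through $5,000, the market collapses round by round until only the $1,000 lemon remains.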
The computer security market has a lot of the same characteristics as Akerlof's lemons market. Take the market for encrypted USB memory sticks. Several companies make encrypted USB drives -- Kingston Technology sent me one in the mail a few days ago -- but even I couldn't tell you if Kingston's offering is better than Secustick. Or if it's better than any other encrypted USB drives. They use the same encryption algorithms. They make the same security claims. And if I can't tell the difference, most consumers won't be able to either.
Of course, it's more expensive to make an actually secure USB drive. Good security design takes time, and necessarily means limiting functionality. Good security testing takes even more time, especially if the product is any good. This means the less-secure product will be cheaper, sooner to market and have more features. In this market, the more-secure USB drive is going to lose out.
I see this kind of thing happening over and over in computer security. In the late 1980s and early 1990s, there were more than a hundred competing firewall products. The few that "won" weren't the most secure firewalls; they were the ones that were easy to set up, easy to use and didn't annoy users too much. Because buyers couldn't base their buying decisions on the relative security merits, they based them on these other criteria. The intrusion detection system, or IDS, market evolved the same way, and before that the antivirus market. The few products that succeeded weren't the most secure, because buyers couldn't tell the difference.
How do you solve this? You need what economists call a "signal," a way for buyers to tell the difference. Warranties are a common signal. Alternatively, an independent auto mechanic can tell good cars from lemons, and a buyer can hire his expertise. The Secustick story demonstrates this. If there is a consumer advocate group that has the expertise to evaluate different products, then the lemons can be exposed.
Secustick, for one, seems to have been withdrawn from sale.
But security testing is both expensive and slow, and it just isn't possible for an independent lab to test everything. Unfortunately, the exposure of Secustick is an exception. It was a simple product, and easily exposed once someone bothered to look. A complex software product -- a firewall, an IDS -- is very hard to test well. And, of course, by the time you have tested it, the vendor has a new version on the market.
In reality, we have to rely on a variety of mediocre signals to differentiate the good security products from the bad. Standardization is one signal. The widely used AES encryption standard has reduced, although not eliminated, the number of lousy encryption algorithms on the market. Reputation is a more common signal; we choose security products based on the reputation of the company selling them, the reputation of some security wizard associated with them, magazine reviews, recommendations from colleagues or general buzz in the media.
All these signals have their problems. Even product reviews, which should be as comprehensive as the Tweakers' Secustick review, rarely are. Many firewall comparison reviews focus on things the reviewers can easily measure, like packets per second, rather than how secure the products are. In IDS comparisons, you can find the same bogus "number of signatures" comparison. Buyers lap that stuff up; in the absence of deep understanding, they happily accept shallow data.
With so many mediocre security products on the market, and the difficulty of coming up with a strong quality signal, vendors don't have strong incentives to invest in developing good products. And the vendors that do tend to die a quiet and lonely death.
This essay originally appeared in Wired.
EDITED TO ADD (4/22): Slashdot thread.
New developments from surveillance-camera-happy England:
The £7,000 device, nicknamed "the Bug", consists of a ring of eight cameras scanning in all directions. It uses software to detect whether anybody is walking or loitering in a way that marks them out from the crowd. A ninth camera then zooms in to follow them if it thinks they are behaving suspiciously.
This is interesting. It moves us further along the continuum into thoughtcrimes, but near as I can tell, the system just collects evidence on people it thinks suspicious, just in case. Assuming the data is erased immediately after, it's much less invasive than actually accosting someone for thoughtcrime; the cost of false alarms is minimal.
I doubt it works nearly as well as the article claims, but that's likely to change in 5 to 10 years. For example, there's a lot of research being done in the area of microfacial expressions to detect lying and other thoughts. This is the sort of technological advance that we need to be talking about in terms of security, privacy, and liberty.
These are not the sorts of matters the police should be getting involved in. The police aren't trained to handle children this age, and children this age don't benefit by being fingerprinted and thrown in jail.
EDITED TO ADD (4/18): Another example:
Unfortunately, the school forgot that the clocks had switched to Daylight Saving Time that morning. The time stamps left on the hotline were adjusted by an hour after the switch, causing Webb's call to be logged at the same time the bomb threat was placed. Webb, who's never even had a detention in his life, had actually made his call an hour before the bomb threat was placed.
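This is the classic local-time logging bug. The sketch below (the offsets and times are illustrative, not from the actual hotline logs) shows how, on the 2007 US spring-forward day, two events an hour apart in real time sit two hours apart on the wall clock; logging UTC instead of local time avoids the confusion entirely.

```python
from datetime import datetime, timedelta, timezone

# Fixed offsets for the day DST began in the US in 2007 (March 11, 2 a.m.).
EST = timezone(timedelta(hours=-5), "EST")  # before the switch
EDT = timezone(timedelta(hours=-4), "EDT")  # after the switch

call = datetime(2007, 3, 11, 1, 30, tzinfo=EST)    # wall clock reads 1:30
threat = datetime(2007, 3, 11, 3, 30, tzinfo=EDT)  # wall clock reads 3:30

true_gap = threat - call  # aware subtraction is done in UTC: 1 hour
wall_gap = threat.replace(tzinfo=None) - call.replace(tzinfo=None)  # 2 hours
```

A system that "corrects" stored wall-clock times by an hour after the switch, as the school's hotline apparently did, merges the two readings and makes distinct events look simultaneous.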
Seems to work:
The method is a sharp contrast to the traditional training for bank employees confronted with a suspicious person, which advises not approaching the person, and at most, activating an alarm or dropping an exploding dye pack into the cash.
What I like about this security system is that it fails really well in the event of a false alarm. There's nothing wrong with being extra nice to a legitimate customer.
Actually, there are lots of them.
Are these people trying to be stupid?
Sofia Loginova, 17, a Quincy High senior, said she didn't hire anyone to hang four backpacks emblazoned with her Web site's name, www.B4Class.com, and filled with newspapers and some dollar bills, from tree limbs and on a fence near the school. The unexplained backpacks sparked a panic.
Terrorism used to be hard. Now all you have to do is hang backpacks from trees near schools.
Refuse to be terrorized, people!
They got a D.
The rest of the U.S. government didn't do very well. Eight of twenty-four departments (including the Department of Defense) failed. Overall, the federal government received a C- (up from a D+ last year).
Amazing video of squid giving birth.
This is just a frightening story. Basically, a contractor with a top secret security clearance was able to inject malicious code and sabotage computers used to track Navy submarines.
Yeah, it was annoying to find and fix the problem, but hang on. How is it possible for a single disgruntled idiot to damage a multi-billion-dollar weapons system? Why aren't there any security systems in place to prevent this? I'll bet anything that there was absolutely no control or review over who put what code in where. I'll bet that if this guy had been just a little bit cleverer, he could have done a whole lot more damage without ever getting caught.
One of the ways to deal with the problem of trusted individuals is by making sure they're trustworthy. The clearance process is supposed to handle that. But given the enormous damage that a single person can do here, it makes a lot of sense to add a second security mechanism: limiting the degree to which each individual must be trusted. A decent system of code reviews, or change auditing, would go a long way to reduce the risk of this sort of thing.
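One cheap form of change auditing is a tamper-evident log, where each entry commits to the one before it, so a disgruntled insider can't silently rewrite history after the fact. This is a sketch of my own, not the Navy's system; all the field names are made up:

```python
import hashlib
import json

def append_entry(log, author, change):
    """Append a change record whose digest covers the previous entry's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    body = {"author": author, "change": change, "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "digest": digest})

def verify(log):
    """Recompute the chain; any altered or reordered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        body = {k: entry[k] for k in ("author", "change", "prev")}
        if entry["prev"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True
```

By itself this only detects after-the-fact tampering; it's a complement to, not a substitute for, reviewing code before it goes in.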
I'll also bet you anything that Microsoft has more security around its critical code than the U.S. military does.
From their press release:
The computer was protected by two layers of security, a unique user-identifier and a multiple-character, alpha-numeric password.
Um, hello? Having a username and a password -- even if they're both secret -- does not count as two factors, two layers, or two of anything. You need to have two different authentication systems: a password and a biometric, or a password and a token.
I wouldn't trust the New Horizons Community Credit Union with my money.
Another example of how we get the risks wrong:
Although statistics show that rates of child abduction and sexual abuse have marched steadily downward since the early 1990s, fear of these crimes is at an all-time high. Even the panic-inducing Megan's Law Web site says stranger abduction is rare and that 90 percent of child sexual-abuse cases are committed by someone known to the child. Yet we still suffer a crucial disconnect between perception of crime and its statistical reality. A child is almost as likely to be struck by lightning as kidnapped by a stranger, but it's not fear of lightning strikes that parents cite as the reason for keeping children indoors watching television instead of out on the sidewalk skipping rope.
German Interior Minister Wolfgang Schaeuble has confirmed plans to seek a change to the constitution to allow the state secret access to the computers of private individuals, in an interview published Thursday.
Supposedly Switzerland is also considering a similar law.
If there's only a few large gangs operating -- and other people are detecting these huge swings of activity as well -- then that's very significant for public policy. One can have sympathy for police officers and regulators faced with the prospect of dealing with hundreds or thousands of spammers; dealing with them all would take many (rather boring and frustrating) lifetimes. But if there are, say, five, big gangs at most -- well that's suddenly looking like a tractable problem.
Count the security lessons: bad password management, protocol failures, poor authentication, check fraud, and -- I suppose -- an attack made possible by poor bounds checking. What else?
By law, every business has to check their customers against a list of "specially designated nationals," and not do business with anyone on that list.
Of course, the list is riddled with bad names and many innocents get caught up in the net. And many businesses decide that it's easier to turn away potential customers whose names are on the list, creating -- well -- a shunned class:
Tom Kubbany is neither a terrorist nor a drug trafficker, has average credit and has owned homes in the past, so the Northern California mental-health worker was baffled when his mortgage broker said lenders were not interested in him. Reviewing his loan file, he discovered something shocking. At the top of his credit report was an OFAC alert provided by credit bureau TransUnion that showed that his middle name, Hassan, is an alias for Ali Saddam Hussein, purportedly a "son of Saddam Hussein."
This is the same problem as the no-fly list, only in a larger context. And it's no way to combat terrorism. Thankfully, many businesses don't know to check this list and people whose names are similar to suspected terrorists' can still lead mostly normal lives. But the trend here is not good.
The UK police are considering mandating the quality of commercial CCTV cameras to ensure that the images meet their evidence standards.
The shortcomings of the present DNS have been known for years but difficulties in devising a system that offers backward compatibility while scaling to millions of nodes on the net have slowed down the implementation of its successor, Domain Name System Security Extensions (DNSSEC). DNSSEC ensures that domain name requests are digitally signed and authenticated, a defence against forged DNS data, a product of attacks such as DNS cache poisoning used to trick surfers into visiting bogus websites that pose as the real thing.
Another news report.
Seems like there's some hysteria in the making:
They are deadly, huge and fast moving. Their tentacles can suck the life out of a human being and they've arrived in Northern California.
The whole article is like that.
I had nothing to do with it.
EDITED TO ADD (4/6): Or this.
This paper, from February's International Journal of Health Geographics (abstract here), analyzes the consequences of a nuclear attack on several American cities and points out that burn unit capacity nationwide is far too small to accommodate the victims. It says just training people to flee crosswind could greatly reduce deaths from fallout.
I've long said that emergency response is something we should be spending money on. This kind of analysis is both interesting and helpful.
And 17 is the most random number between 1 and 20.
RU Sirius interviewed me for his podcast show.
EDITED TO ADD (4/23): Blogged on BoingBoing.
It's 2007, and I can't believe that people are still using homebrewed encryption algorithms. This one looks pretty easy to break.
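To make the point concrete, here is how quickly a classic homebrew scheme falls. This is a generic example (single-byte XOR "encryption", not the specific algorithm in the linked story): try all 256 keys and score each candidate plaintext for English-looking characters.

```python
def xor_bytes(data: bytes, key: int) -> bytes:
    """'Encrypt' or 'decrypt' by XORing every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

def crack_single_byte_xor(ciphertext: bytes) -> tuple[int, bytes]:
    """Brute-force the key by scoring for common English characters."""
    def score(text: bytes) -> int:
        common = b"etaoin shrdlu"  # frequent English letters plus space
        return sum(text.lower().count(c) for c in common)
    best_key = max(range(256), key=lambda k: score(xor_bytes(ciphertext, k)))
    return best_key, xor_bytes(ciphertext, best_key)
```

A 256-entry keyspace and a statistical tell in the plaintext: that's the level of analysis most homebrew ciphers can't survive, and it's why "use a published, reviewed algorithm" is the standard advice.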
Last month Marine General James Cartwright told the House Armed Services Committee that the best cyber defense is a good offense.
As reported in Federal Computer Week, Cartwright said: "History teaches us that a purely defensive posture poses significant risks," and that if "we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests."
The general isn't alone. In 2003, the entertainment industry tried to get a law passed giving them the right to attack any computer suspected of distributing copyrighted material. And there probably isn't a sys-admin in the world who doesn't want to strike back at computers that are blindly and repeatedly attacking their networks.
Of course, the general is correct. But his reasoning illustrates perfectly why peacetime and wartime are different, and why generals don't make good police chiefs.
A cyber-security policy that condones both active deterrence and retaliation -- without any judicial determination of wrongdoing -- is attractive, but it's wrongheaded, not least because it ignores the line between war, where those involved are permitted to determine when counterattack is required, and crime, where only impartial third parties (judges and juries) can impose punishment.
In warfare, the notion of counterattack is extremely powerful. Going after the enemy -- its positions, its supply lines, its factories, its infrastructure -- is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to an attorney, and to be presumed innocent until proven guilty.
Both vigilante counterattacks and pre-emptive attacks fly in the face of these rights. They punish people who haven't been found guilty. It's the same whether it's an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.
In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the internet it's even harder to know who's attacking you. Just because my computer looks like the source of an attack doesn't mean that it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government's legal system is justice; the goal of a vigilante is expediency.
I understand the frustrations of General Cartwright, just as I do the frustrations of the entertainment industry, and the world's sys-admins. Justice in cyberspace can be difficult. It can be hard to figure out who is attacking you, and it can take a long time to make them stop. It can be even harder to prove anything in court. The international nature of many attacks exacerbates the problems; more and more cybercriminals are jurisdiction shopping: attacking from countries with ineffective computer crime laws, easily bribable police forces and no extradition treaties.
Revenge is appealingly straightforward, and treating the whole thing as a military problem is easier than working within the legal system.
But that doesn't make it right. In 1789, the Declaration of the Rights of Man and of the Citizen declared: "No person shall be accused, arrested, or imprisoned except in the cases and according to the forms prescribed by law. Any one soliciting, transmitting, executing, or causing to be executed any arbitrary order shall be punished."
I'm glad General Cartwright thinks about offensive cyberwar; it's how generals are supposed to think. I even agree with Richard Clarke's threat of military-style reaction in the event of a cyber-attack by a foreign country or a terrorist organization. But short of an act of war, we're far safer with a legal system that respects our rights.
This essay originally appeared in Wired.
WEP (Wired Equivalent Privacy) is the original protocol used to secure wireless networks. It's known to be insecure and has been replaced by Wi-Fi Protected Access (WPA), but it's still in use.
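The structural flaw the attacks exploit is WEP's key construction: each packet's RC4 key is just a 24-bit cleartext IV prepended to the static key, so related keys and repeated keystreams are unavoidable. A minimal sketch of that construction (this is an illustration of the weakness, not the key-recovery attack in the paper; the key and IV values are made up):

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    """Plain RC4: key-scheduling algorithm followed by keystream generation."""
    S = list(range(256))
    j = 0
    for i in range(256):  # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for _ in range(length):  # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, key: bytes, plaintext: bytes) -> bytes:
    """WEP-style encryption: RC4 keyed with IV || static key (CRC omitted)."""
    ks = rc4_keystream(iv + key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

static_key = b"\x01\x02\x03\x04\x05"  # hypothetical 40-bit WEP key
iv = b"\xaa\xbb\xcc"                  # 24-bit IV, transmitted in the clear

c1 = wep_encrypt(iv, static_key, b"attack at dawn")
c2 = wep_encrypt(iv, static_key, b"ATTACK AT NOON")

# Same IV means the same keystream, so XORing the two ciphertexts
# cancels the keystream and leaks the XOR of the two plaintexts --
# information disclosure without ever recovering the key.
xor = bytes(a ^ b for a, b in zip(c1, c2))
assert xor == bytes(a ^ b for a, b in zip(b"attack at dawn", b"ATTACK AT NOON"))
```

With only 2^24 IVs, collisions are inevitable in busy networks; the related-key structure is what lets attacks like the one below recover the static key itself from captured packets.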
This paper, "Breaking 104 bit WEP in less than 60 seconds," is the best attack against WEP to date:
A two-part story from The Guardian: an excerpt from Other People's Money: The Rise And Fall Of Britain's Most Audacious Credit Card Fraudster.
The first time I did the WTS, it was on a man from London who was staying in a £400 hotel room in Glasgow. I used my hotel phone trick to get his card and personal information -- fortunately, he was a trusting individual. I then called his card company and explained that I was the gentleman concerned, in Glasgow on business, and had suffered the theft of my wallet and passport. I was understandably distraught, lying on my bed in Battlefield and speaking quietly so my parents couldn't hear, and wondered what the company suggested I do. The sympathetic woman at the other end proposed I take a cash advance set against my account, which they could have ready for collection within a couple of hours at a wire transfer operator.
Experts say the fundamental problem this highlights is that every stage in Vista's booting process works on blind faith that everything prior to it ran cleanly. The boot kit is therefore able to copy itself into the memory image even before Vista has booted and capture interrupt 13h, which operating systems use for read access to hard-drive sectors, among other things.
This is not theoretical; VBootkit is actual code that demonstrates this.
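The missing defense is for each boot stage to verify the next one before handing over control. A toy sketch of that idea, using hashes (this is not how Vista actually boots; the stage names and bytes are made up for illustration):

```python
import hashlib

# Toy model of a verified boot chain. The expected hash of each stage is
# recorded at install time; a stage is only executed if its image still
# matches. VBootkit works precisely because Vista's real chain performs
# no such check -- each stage blindly trusts whatever loaded it.

EXPECTED = {}  # stage name -> hex digest recorded at install time

def measure(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

def install(name: str, blob: bytes) -> None:
    EXPECTED[name] = measure(blob)

def load_stage(name: str, blob: bytes) -> bytes:
    """Refuse to hand off control to a tampered stage."""
    if measure(blob) != EXPECTED[name]:
        raise RuntimeError(f"boot stage {name!r} failed verification")
    return blob  # a real loader would jump to this code

bootmgr = b"\x90" * 16          # stand-in bytes for a boot-manager image
install("bootmgr", bootmgr)
load_stage("bootmgr", bootmgr)  # clean image: accepted

tampered = b"\xeb" + bootmgr[1:]  # a bootkit patches a single byte
try:
    load_stage("bootmgr", tampered)
except RuntimeError:
    pass  # tampering detected, boot halted
```

Real-world versions of this idea anchor the first measurement in hardware (a TPM or ROM), since a chain of hashes is only as trustworthy as its first link.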
Great essay from 1990 by Bill Holm:
No papers, no pay. It's an interesting equation, and I think it has not surfaced before in Minnesota. Neither of my Icelandic grandfathers, for instance, had papers enough to work in Marshall, and if you're an old Minnesotan, it's unlikely that your grandfathers did, either. Viking wetbacks, they were.
I'm curious what he's thinking today.
I'm not sure which is more important -- the news or the fact that no one is surprised:
Sources told 9NEWS the Red Team was able to sneak about 90 percent of simulated weapons past checkpoint screeners in Denver. In the baggage area, screeners caught one explosive device that was packed in a suitcase. However, screeners in the baggage area later missed a book bomb, according to sources.
My favorite so far: "Window Transparency Information Disclosure."
An information disclosure attack can be launched against buildings that make use of windows made of glass or other transparent materials by observing externally-facing information through the window.
There's also "Technology retrieves sounds in the wall":
Every wall in a room is made up of millions and millions of atoms. Each atom is a collection of electrons, protons and neutrons - all electrically charged and constantly moving.
If you find any others, please post them in the comments. This is the canonical list of April Fool's jokes on the web.
EDITED TO ADD (4/1): "Threat Alert" Jesus.
EDITED TO ADD (4/2): And this by Jim Harper.
The first Movie-Plot Threat Contest asked you to invent a horrific and completely ridiculous, but plausible, terrorist plot. All the entrants were worth reading, but Tom Grant won with his idea to crash an explosive-filled plane into the Grand Coulee Dam.
This year the contest is a little different. We all know that a good plot to blow up an airplane will cause the banning, or at least screening, of something innocuous. If you stop and think about it, it's a stupid response. We screened for guns and bombs, so the terrorists used box cutters. We took away box cutters and small knives, so they hid explosives in their shoes. We started screening shoes, so they planned to use liquids. We now confiscate liquids (even though experts agree the plot was implausible)...and they're going to do something else. We can't win this game, so why are we playing?
Well, we are playing. And now you can, too. Your goal: invent a terrorist plot to hijack or blow up an airplane with a commonly carried item as a key component. The component should be so critical to the plot that the TSA will have no choice but to ban the item once the plot is uncovered. I want to see a plot horrific and ridiculous, but just plausible enough to take seriously.
Make the TSA ban wristwatches. Or laptop computers. Or polyester. Or zippers over three inches long. You get the idea.
Your entry will be judged on the common item that the TSA has no choice but to ban, as well as the cleverness of the plot. It has to be realistic; no science fiction, please. And the write-up is critical; last year the best entries were the most entertaining to read.
As before, assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.
Post your movie plots here on this blog.
Judging will be by me, swayed by popular acclaim in the blog comments section. The prize will be an autographed copy of Beyond Fear (in both English and Japanese) and the adulation of your peers. And, if I can swing it -- I couldn't last year -- a phone call with a real live movie producer.
Entries close at the end of the month -- April 30 -- so Crypto-Gram readers can also play.
This is not an April Fool's joke, although it's in the spirit of the season. The purpose of this contest is absurd humor, but I hope it also makes a point. Terrorism is a real threat, but we're not any safer through security measures that require us to correctly guess what the terrorists are going to do next.
EDITED TO ADD (6/15): Winner here.