Schneier on Security
A blog covering security and security technology.
January 2008 Archives
Yet another article on the topic. An excerpt:
We substitute one risk for another.
...federal law enforcement officials who need to know have already learned the identities of those responsible for running the Storm worm network, but that U.S. authorities have thus far been prevented from bringing those responsible to justice due to a lack of cooperation from officials in St. Petersburg, Russia, where the Storm worm authors are thought to reside.
I've written about Storm here.
Cory Doctorow has a new metaphor:
We should treat personal electronic data with the same care and respect as weapons-grade plutonium -- it is dangerous, long-lasting, and once it has leaked there's no getting it back.
I said something similar two years ago:
In some ways, this tidal wave of data is the pollution problem of the information age. All information processes produce it. If we ignore the problem, it will stay around forever. And the only way to successfully deal with it is to pass laws regulating its generation, use and eventual disposal.
They're checking IDs more carefully, looking for forgeries:
Black lights will help screeners inspect the ID cards by illuminating holograms, typically of government seals, that are found in licenses and passports. Screeners also are getting magnifying glasses that highlight tiny inscriptions found in borders of passports and other IDs. About 2,100 of each are going to the nation's 800 airport checkpoints.
ID checks have nothing to do with airport security. And even if they did, anyone can fly on a fake ID. And enforcing immigration laws is not what the TSA does.
In related news, look at this page from the TSA's website:
We screen every passenger; we screen every bag so that your memories are from where you went, not how you got there. We're here to help your travel plans be smooth and stress free. Please take a moment to become familiar with some of our security measures. Doing so now will help save you time once you arrive at the airport.
I know they don't mean it that way, but doesn't it sound like it's saying "We know it doesn't help, but it might make you feel better"?
And why is this even news?
So Jason -- looking every bit the middle-aged man on an uneventful trip to anywhere -- shows a boarding pass and an ID to a TSA document checker, and he is directed to a checkpoint where, unbeknown to the security officer on site, the real test begins.
Also relevant: "Confessions of a TSA Agent":
The traveling public has no idea that the changes the TSA makes come as orders sent down directly from Washington D.C. Those orders may have reasons, but we little screeners at a screening checkpoint will never be told what the background might be. We get told to do something, and just as in the military, we are expected to make it happen -- no ifs, ands or buts about it. Perhaps the changes are as a result of some event occurring in the nation or the world, perhaps it's based on some newly received information or interrogation. What the traveling public needs to understand [is] the necessity for flexibility. If a passenger asks us why we're doing something, in all likelihood we couldn't tell them even if we really did know the answer. This is a business of sensitive information that is used to make choices that can have life changing effects if the information is divulged to the wrong person(s). Just trust that we must know something that prompts us to be doing something.
I have no idea why Kip Hawley is surprised that the TSA is as unpopular with Americans as the IRS.
EDITED TO ADD (1/30): The TSA has a blog, and Kip Hawley wrote the first post. This could be interesting....
EDITED TO ADD (1/31): There is some speculation that the "Confessions of a TSA Agent" is a hoax. I don't know.
Two Ethiopian cabin cleaners were found hiding in the ceiling of an aircraft after it landed at Dulles. Presumably they were allowed on the plane at Addis Ababa, but no one checked to make sure they got off.
If there's a debate that sums up post-9/11 politics, it's security versus privacy. Which is more important? How much privacy are you willing to give up for security? Can we even afford privacy in this age of insecurity? Security versus privacy: It's the battle of the century, or at least its first decade.
In a Jan. 21 New Yorker article, Director of National Intelligence Michael McConnell discusses a proposed plan to monitor all -- that's right, all -- internet communications for security purposes, an idea so extreme that the word "Orwellian" feels too mild.
In order for cyberspace to be policed, internet activity will have to be closely monitored. Ed Giorgio, who is working with McConnell on the plan, said that would mean giving the government the authority to examine the content of any e-mail, file transfer or Web search. "Google has records that could help in a cyber-investigation," he said. Giorgio warned me, "We have a saying in this business: 'Privacy and security are a zero-sum game.'"
I'm sure they have that saying in their business. And it's precisely why, when people in their business are in charge of government, it becomes a police state. If privacy and security really were a zero-sum game, we would have seen mass immigration into the former East Germany and modern-day China. While it's true that police states like those have less street crime, no one argues that their citizens are fundamentally more secure.
We've been told we have to trade off security and privacy so often -- in debates on security versus privacy, writing contests, polls, reasoned essays and political rhetoric -- that most of us don't even question the fundamental dichotomy.
Security and privacy are not opposite ends of a seesaw; you don't have to accept less of one to get more of the other. Think of a door lock, a burglar alarm and a tall fence. Think of guns, anti-counterfeiting measures on currency and that dumb liquid ban at airports. Security affects privacy only when it's based on identity, and there are limitations to that sort of approach.
Since 9/11, approximately three things have potentially improved airline security: reinforcing the cockpit doors, passengers realizing they have to fight back and -- possibly -- sky marshals. Everything else -- all the security measures that affect privacy -- is just security theater and a waste of effort.
By the same token, many of the anti-privacy "security" measures we're seeing -- national ID cards, warrantless eavesdropping, massive data mining and so on -- do little to improve, and in some cases harm, security. And government claims of their success are either wrong or measured against fake threats.
The debate isn't security versus privacy. It's liberty versus control.
You can see it in comments by government officials: "Privacy no longer can mean anonymity," says Donald Kerr, principal deputy director of national intelligence. "Instead, it should mean that government and businesses properly safeguard people's private communications and financial information." Did you catch that? You're expected to give up control of your privacy to others, who -- presumably -- get to decide how much of it you deserve. That's what loss of liberty looks like.
It should be no surprise that people choose security over privacy: 51 to 29 percent in a recent poll. Even if you don't subscribe to Maslow's hierarchy of needs, it's obvious that security is more important. Security is vital to survival, not just of people but of every living thing. Privacy is unique to humans, but it's a social need. It's vital to personal dignity, to family life, to society -- to what makes us uniquely human -- but not to survival.
If you set up the false dichotomy, of course people will choose security over privacy -- especially if you scare them first. But it's still a false dichotomy. There is no security without privacy. And liberty requires both security and privacy. The famous quote attributed to Benjamin Franklin reads: "Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety." It's also true that those who would give up privacy for security are likely to end up with neither.
This essay originally appeared on Wired.com.
Remember the "cyberwar" in Estonia last year? When asked about it, I generally say that it's unclear that it wasn't just kids playing politics.
The reality is even more mundane:
...the attacker convicted today isn't a member of the Russian military, nor is he an embittered cyber warrior in Putin's secret service. He doesn't even live in Russia. He's a [20-year-old] ethnic Russian who lives in Estonia, who was pissed off over that whole statue thing.
So much for all of that hype.
Ronald C. Arkin, "Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture," Technical Report GIT-GVU-07011. Fascinating (and, at 117 pages, long) paper on the ethical implications of robots in war.
Summary, Conclusions, and Future Work
In 1987, CPSR began a tradition to recognize outstanding contributions for social responsibility in computing technology. The organization wanted to cite people who recognize the importance of a science-educated public, who take a broader view of the social issues of computing. We aimed to share concerns that lead to action in arenas of the power, promise, and limitations of computer technology.
It's also dead:
Heavier than even giant squid, colossal squid (Mesonychoteuthis hamiltoni) have eyes as wide as dinner plates and sharp hooks on some of their suckers. The new specimen weighs in at an estimated 990 pounds (450 kilograms).
New York City's plan to secure its subways with a next-generation surveillance network is getting more expensive by the second, and slipping further and further behind schedule. A new report by the New York State Comptroller's office reveals that "the cost of the electronic security program has grown from $265 million to $450 million, an increase of $185 million or 70 percent." An August 2008 deadline has been pushed back to December 2009, and further delays may be just ahead.
A suitcase in New Zealand.
An American photographing all 50 state capitols.
Maybe this would be a good idea.
And when the owner reports his mistake, he's arrested.
What is this supposed to teach?
This is an awful fear-mongering story about non-Muslims being recruited in the UK:
As many as 1,500 white Britons are believed to have converted to Islam for the purpose of funding, planning and carrying out surprise terror attacks inside the UK, according to one MI5 source.
This quote is particularly telling:
One British security source last night told Scotland on Sunday: “There could be anything up to 1,500 converts to the fundamentalist cause across Britain. They pose a real potential danger to our domestic security because, obviously, these people blend in and do not raise any flags.”
Because the only "flag" that can possibly identify terrorists is that they're Muslim, right?
Using software, of course. The context is shredded and torn East German Stasi documents, but the technology is much more general:
The machine-shredded stuff is confetti, largely unrecoverable. But in May 2007, a team of German computer scientists in Berlin announced that after four years of work, they had completed a system to digitally tape together the torn fragments. Engineers hope their software and scanners can do the job in less than five years, even taking into account the varying textures and durability of paper, the different sizes and shapes of the fragments, the assortment of printing (from handwriting to dot matrix) and the range of edges (from razor-sharp to ragged and handmade). "The numbers are tremendous. If you imagine putting together a jigsaw puzzle at home, you have maybe 1,000 pieces and a picture of what it should look like at the end," project manager Jan Schneider says. "We have many millions of pieces and no idea what they should look like when we're done."
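The core matching problem is easy to sketch, even though the real system is vastly more sophisticated. Here's a toy illustration (entirely hypothetical -- not the German team's actual algorithm): sample each torn edge as a short vector of grayscale values, then greedily pair fragments whose edge profiles are most similar.

```python
# Toy sketch of fragment matching by edge similarity -- a hypothetical
# illustration, not the actual reconstruction software described above.
# Each fragment's torn edge is sampled as a short vector of grayscale
# values; two edges that were once joined should have similar profiles.

def edge_distance(edge_a, edge_b):
    """Sum of squared differences between two sampled edge profiles."""
    return sum((a - b) ** 2 for a, b in zip(edge_a, edge_b))

def best_matches(fragments):
    """Greedily pair each fragment's right edge with the closest left edge.

    `fragments` maps fragment id -> (left_edge, right_edge).
    Returns (right_fragment_id, left_fragment_id) pairs, in iteration order.
    """
    pairs, used = [], set()
    for rid, (_, right) in fragments.items():
        candidates = [(edge_distance(right, left), lid)
                      for lid, (left, _) in fragments.items()
                      if lid != rid and lid not in used]
        if candidates:
            _, lid = min(candidates)
            pairs.append((rid, lid))
            used.add(lid)
    return pairs

# Three fragments; fragment 0's right edge lines up with fragment 1's left.
fragments = {
    0: ([10, 10, 10], [55, 60, 58]),
    1: ([54, 61, 57], [90, 91, 92]),
    2: ([200, 10, 40], [5, 5, 5]),
}
print(best_matches(fragments)[0])  # (0, 1)
```

With millions of fragments, no reference picture, and edges degraded by decades of storage, the real problem is enormously harder -- which is exactly Schneider's point about the jigsaw puzzle.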
I have absolutely no doubt that there will be security flaws in remotely controllable thermostats, allowing hackers to seize control of them. Do this on a too-hot day, and you might even cause a large blackout.
The CIA unleashed a big one at a SANS conference:
On Wednesday, in New Orleans, US Central Intelligence Agency senior analyst Tom Donahue told a gathering of 300 US, UK, Swedish, and Dutch government officials and engineers and security managers from electric, water, oil & gas and other critical industry asset owners from all across North America, that "We have information, from multiple regions outside the United States, of cyber intrusions into utilities, followed by extortion demands. We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyber attacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities. We do not know who executed these attacks or why, but all involved intrusions through the Internet."
SANS's Alan Paller is happy to add details:
In the past two years, hackers have in fact successfully penetrated and extorted multiple utility companies that use SCADA systems, says Alan Paller, director of the SANS Institute, an organization that hosts a crisis center for hacked companies. "Hundreds of millions of dollars have been extorted, and possibly more. It's difficult to know, because they pay to keep it a secret," Paller says. "This kind of extortion is the biggest untold story of the cybercrime industry."
And to up the fear factor:
The prospect of cyberattacks crippling multicity regions appears to have prompted the government to make this information public. The issue "went from 'we should be concerned about to this' to 'this is something we should fix now,' " said Paller. "That's why, I think, the government decided to disclose this."
An attendee of the meeting said that the attack was not well-known through the industry and came as a surprise to many there. Said the person who asked to remain anonymous, "There were apparently a couple of incidents where extortionists cut off power to several cities using some sort of attack on the power grid, and it does not appear to be a physical attack."
And more hyperbole from someone in the industry:
Over the past year to 18 months, there has been "a huge increase in focused attacks on our national infrastructure networks, . . . and they have been coming from outside the United States," said Ralph Logan, principal of the Logan Group, a cybersecurity firm.
I'm more than a bit skeptical here. To be sure -- fake staged attacks aside -- there are serious risks to SCADA systems (Ganesh Devarajan gave a talk at DefCon this year about some potential attack vectors), although at this point I think they're more a future threat than a present danger. But this CIA tidbit tells us nothing about how the attacks happened. Were they against SCADA systems? Were they against general-purpose computers, maybe Windows machines? Insiders may have been involved, so was this a computer security vulnerability at all? We have no idea.
Cyber-extortion is certainly on the rise; we see it at Counterpane. Primarily it's against fringe industries -- online gambling, online gaming, online porn -- operating offshore in countries like Bermuda and the Cayman Islands. It is going mainstream, but this is the first I've heard of it targeting power companies. Certainly possible, but is that part of the CIA rumor or was it tacked on afterwards?
And here's a list of power outages. Which ones were hacker-caused? Some details would be nice.
I'd like a little bit more information before I start panicking.
EDITED TO ADD (1/23): Slashdot thread.
Not a joke, apparently.
Almost three years ago I blogged about SmartWater: liquid imbued with a uniquely identifiable DNA-style code. In my post I made the snarky comment:
The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your valuables, and then call the police.
That remark aside, a new university study concludes that it works:
The study of over 100 criminals revealed that simply displaying signs that goods and premises were protected by SmartWater was sufficient to put off most of the criminals the team interviewed.
Of course, we don't know if the study was sponsored by SmartWater the company, and we don't know the methodology -- interviewing criminals about what deters them is fraught with potential biases -- but it's still interesting.
Also note that SmartWater is not only sprayed on valuables, but also sprayed on burglars and criminals -- tying them to the crime scene.
The Dutch RFID public transit card, which has already cost the government $2B -- no, that's not a typo -- has been hacked even before it has been deployed:
The first reported attack was designed by two students at the University of Amsterdam, Pieter Siekerman and Maurits van der Schee. They analyzed the single-use ticket and showed its vulnerabilities in a report. They also showed how a used single-use card could be given eternal life by resetting it to its original "unused" state.
Most of the links are in Dutch; there isn't a whole lot of English-language press about this. But the Dutch Parliament recently invited the students to give testimony; they're more than a little bit interested in how $2B could be wasted.
My guess is the system was designed by people who don't understand security, and therefore thought it was easy.
EDITED TO ADD (2/13): More info.
Up to three American Airlines jets carrying passengers will be outfitted with anti-missile technology this spring in the latest phase of testing technology to protect commercial planes from attack.
I have several feelings about this. One, it's security theater against a movie-plot threat. Two, given that that's true, attaching an empty box to the belly of the plane and writing "Laser Anti-Missile System" on it would be just as effective a deterrent at a fraction of the cost. And three, how do we know that's not what they're doing?
More news here.
Fire Engineering magazine points out that fire alarms used to be kept locked to prevent false alarms:
Q: Prior to 1870, street corner fire alarm pull boxes were kept locked. Why were they kept locked and how did a person gain access to 'pull the box?'
According to Robert Cromie in The Great Chicago Fire (Thomas Nelson: 1994, p. 33), this may have been one reason for the slow response to the fire:
William Lee, the O'Leary's neighbor, rushed into Goll's drugstore, and gasped out a request for the key to the alarm box. The new boxes were attached to the walls of stores or other convenient locations. To prevent false alarms and crank calls, the boxes were locked, and the keys given to trustworthy citizens nearby.
Apparently, Lee said that Goll refused to give him the key because he'd already seen a fire engine go past; Goll said he actually did pull the alarm, twice, but if so it must not have worked.
(There's more about what sounds like a really bad communications failure, but it's a little too hard for me to read on the Amazon website.)
But did you know that the fire burned for over half an hour before an alarm was ever sounded? Alarm boxes were actually kept locked in those days, to prevent false alarms!
Compare this with a proposed law in New York City that will require people to get a license before they can buy chemical, biological, or radiological attack detectors:
The legislation — which was proposed by the Bloomberg administration and would be the first of its kind in the nation — would empower the police commissioner to decide whether to grant a free five-year permit to individuals and companies seeking to "possess or deploy such detectors." Common smoke alarms and carbon monoxide detectors would not be covered by the law, the Police Department said. Violations of the law would be considered a misdemeanor.
False positives are a problem with any detection system, and certainly putting Geiger counters in the hands of everyone will mean a lot of amateurs calling false alarms into the police. But the way to handle that isn't to ban Geiger counters. (Just as the way to deal with false fire alarms 100 years ago wasn't to lock the alarm boxes.) The way to deal with it is by 1) putting a system in place to quickly separate the real alarms from the false alarms, and 2) prosecuting those who maliciously sound false alarms.
We don't want to encourage people to report everything; that's too many false alarms. Nor do we want to discourage them from reporting things they feel are serious. In the end, it's the job of the police to figure out what's what. I said this in an essay last year:
...these incidents only reinforce the need to realistically assess, not automatically escalate, citizen tips. In criminal matters, law enforcement is experienced in separating legitimate tips from unsubstantiated fears, and allocating resources accordingly; we should expect no less from them when it comes to terrorism.
A 14-year-old modified a TV remote control to switch trams between tracks in the Polish city of Lodz:
Transport command and control systems are commonly designed by engineers with little exposure or knowledge about security using commodity electronics and a little native wit. The apparent ease with which Lodz's tram network was hacked, even by these low standards, is still a bit of an eye opener.
Here's Steve Bellovin:
The device is described in the original article as a modified TV remote control. Presumably, this means that the points are normally controlled by IR signals; what he did was learn the coding and perhaps the light frequency and amplitude needed. This makes a lot of sense; it lets tram drivers control where their trains go, rather than relying on an automated system or some such. Indeed, the article notes "a city tram driver tried to steer his vehicle to the right, but found himself helpless to stop it swerving to the left instead."
The lesson here is that security by obscurity, combined with physical security of the equipment, wasn't enough. This kid jumped whatever fences there were, and reverse-engineered the IR control protocol. Then he was able to play "trains" with real trains.
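The tram system's actual protocol was never published, but consumer IR remotes typically use pulse-distance coding: every bit starts with a fixed-length infrared burst ("mark"), and the length of the gap ("space") that follows distinguishes a 0 from a 1, as in the widely documented NEC scheme. A minimal sketch of what "learning the coding" amounts to (the timings and threshold here are illustrative, not Lodz's):

```python
# Decoding pulse-distance-coded IR timings into bits -- an illustrative
# sketch of the kind of reverse engineering described above, based on
# the NEC-style scheme common in consumer remotes. The tram system's
# real protocol is not public; these numbers are assumptions.

# NEC-style nominal timings, in microseconds.
MARK = 560        # every bit starts with a ~560 us infrared burst
SPACE_ZERO = 560  # short gap -> bit 0
SPACE_ONE = 1690  # long gap  -> bit 1

def decode_bits(timings, threshold=1000):
    """Turn a list of (mark, space) microsecond pairs into a bit string.

    A space shorter than `threshold` reads as 0, longer as 1.
    """
    return "".join("1" if space > threshold else "0"
                   for _mark, space in timings)

# A captured (hypothetical) 8-bit command.
captured = [(560, 1690), (560, 560), (560, 1690), (560, 1690),
            (560, 560), (560, 560), (560, 1690), (560, 560)]
print(decode_bits(captured))  # -> "10110010"
```

Once the codes are recovered, replaying them is just a matter of driving an IR LED with the same timings -- which is why obscurity alone was never going to hold.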
The measures -- details here -- won't do anything to stop child predators on MySpace. But, on the other hand, there isn't really any problem with child predators on MySpace -- just a tiny handful of highly publicized stories. It's just security theater against a movie-plot threat. But we humans have a well-established cognitive bias that overestimates threats against our children, so it all makes sense.
The New York Times writes about a plausible connection between fear and heart disease:
Which is more of a threat to your health: Al Qaeda or the Department of Homeland Security?
It doesn't surprise me that fear of terrorism is more harmful than actual terrorism. That's the whole point of terrorism: an amplification of fear through the mass media.
The point of terrorism is to cause terror, sometimes to further a political goal and sometimes out of sheer hatred. The people terrorists kill are not the targets; they are collateral damage. And blowing up planes, trains, markets or buses is not the goal; those are just tactics. The real targets of terrorism are the rest of us: the billions of us who are not killed but are terrorized because of the killing. The real point of terrorism is not the act itself, but our reaction to the act.
A longish article by Rudy Giuliani on his philosophy for securing the nation from terrorism. I may write a long blog post on it after I read it, but I wanted to post the link as soon as I saw it.
This is a good article on a new trend in corporate spying: companies like Wal-Mart and Sears have resorted to covert surveillance of employees, partners, journalists, and even Internet users to protect themselves from "global threats."
"Like most major corporations, it is our corporate responsibility to have systems in place, including software systems, to monitor threats to our network, intellectual property and our people," Wal-Mart spokeswoman Sarah Clark said in a statement in April. Following the Gabbard firing, Wal-Mart said it conducted a review of its monitoring activities. "There have been changes in leadership, and we have strengthened our practices and protocols in this area," Clark said.
And this article talks about ex-CIA agents working for corporations:
The best estimate is that several hundred former intelligence agents now work in corporate espionage, including some who left the C.I.A. during the agency turmoil that followed 9/11. They quickly joined private-investigation firms whose U.S. corporate clients were planning to expand into Russia, China, and other countries with opaque business practices and few public records, and who needed the skinny on international partners or rivals.
All interesting. It seems that corporate espionage has gone mainstream, and the debate is more about how and when.
On a related note, this paragraph disturbed me:
On occasion, Diligence investigators were dispatched to collect garbage from a target's home or office. In some cases, two former employees said, Diligence hired off-duty or retired police officers to take trash so that they could wave their badges and fend off any awkward questions.
It's public authority being used for private interests. We see it a lot -- off-duty police officers guarding private businesses, for example -- and it erodes public trust in authority. In the case above, I'm not even sure it's legal.
On Wednesday, a man dressed as an armored truck employee with the company AT Systems walked into a BB&T bank in Wheaton about 11 a.m., was handed more than $500,000 in cash and walked out, a source familiar with the case said.
Social engineering at its finest.
EDITED TO ADD (1/16): Seems to be an inside job.
How to cheat on a test by replacing a soft-drink-bottle label with a replica that includes your crib notes. Certainly more clever than hiding a small piece of paper inside your pen.
Whenever I talk or write about my own security setup, the one thing that surprises people -- and attracts the most criticism -- is the fact that I run an open wireless network at home. There's no password. There's no encryption. Anyone with wireless capability who can see my network can use it to access the internet.
To me, it's basic politeness. Providing internet access to guests is kind of like providing heat and electricity, or a hot cup of tea. But to some observers, it's both wrong and dangerous.
I'm told that uninvited strangers may sit in their cars in front of my house, and use my network to send spam, eavesdrop on my passwords, and upload and download everything from pirated movies to child pornography. As a result, I risk all sorts of bad things happening to me, from seeing my IP address blacklisted to having the police crash through my door.
While this is technically true, I don't think it's much of a risk. I can count five open wireless networks in coffee shops within a mile of my house, and any potential spammer is far more likely to sit in a warm room with a cup of coffee and a scone than in a cold car outside my house. And yes, if someone did commit a crime using my network the police might visit, but what better defense is there than the fact that I have an open wireless network? If I enabled wireless security on my network and someone hacked it, I would have a far harder time proving my innocence.
This is not to say that the new wireless security protocol, WPA, isn't very good. It is. But there are going to be security flaws in it; there always are.
I spoke to several lawyers about this, and in their lawyerly way they outlined several other risks with leaving your network open.
While none thought you could be successfully prosecuted just because someone else used your network to commit a crime, any investigation could be time-consuming and expensive. You might have your computer equipment seized, and if you have any contraband of your own on your machine, it could be a delicate situation. Also, prosecutors aren't always the most technically savvy bunch, and you might end up being charged despite your innocence. The lawyers I spoke with say most defense attorneys will advise you to reach a plea agreement rather than risk going to trial on child-pornography charges.
In a less far-fetched scenario, the Recording Industry Association of America is known to sue copyright infringers based on nothing more than an IP address. The accuser's chance of winning is higher than in a criminal case, because in civil litigation the burden of proof is lower. And again, lawyers argue that even if you win it's not worth the risk or expense, and that you should settle and pay a few thousand dollars.
I remain unconvinced of this threat, though. The RIAA has conducted about 26,000 lawsuits, and there are more than 15 million music downloaders. Mark Mulligan of Jupiter Research said it best: "If you're a file sharer, you know that the likelihood of you being caught is very similar to that of being hit by an asteroid."
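The back-of-the-envelope arithmetic supports Mulligan's point (using only the two figures above, and ignoring that lawsuits accumulated over several years and that targets aren't chosen uniformly):

```python
# Rough odds of an individual file sharer being sued, using the numbers
# in the text above. A ballpark only: it treats all downloaders as
# equally likely targets, which the RIAA's actual selection wasn't.
lawsuits = 26_000
downloaders = 15_000_000

odds = lawsuits / downloaders
print(f"{odds:.4%}")                                  # about 0.17%
print(f"about 1 in {round(downloaders / lawsuits)}")  # about 1 in 577
```

Roughly one chance in six hundred, total, over the entire campaign -- small enough that it's easy to see why it deterred so few people.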
I'm also unmoved by those who say I'm putting my own data at risk, because hackers might park in front of my house, log on to my open network and eavesdrop on my internet traffic or break into my computers. This is true, but my computers are much more at risk when I use them on wireless networks in airports, coffee shops and other public places. If I configure my computer to be secure regardless of the network it's on, then it simply doesn't matter. And if my computer isn't secure on a public network, securing my own network isn't going to reduce my risk very much.
Yes, computer security is hard. But if your computers leave your house, you have to solve it anyway. And any solution will apply to your desktop machines as well.
Finally, critics say someone might steal bandwidth from me. Despite isolated court rulings that this is illegal, my feeling is that they're welcome to it. I really don't mind if neighbors use my wireless network when they need it, and I've heard several stories of people who have been rescued from connectivity emergencies by open wireless networks in the neighborhood.
Similarly, I appreciate an open network when I am otherwise without bandwidth. If someone were using my network to the point that it affected my own traffic or if some neighbor kid was dinking around, I might want to do something about it; but as long as we're all polite, why should this concern me? Pay it forward, I say.
Certainly this does concern ISPs. Running an open wireless network will often violate your terms of service. But despite the occasional cease-and-desist letter and providers getting pissy at people who exceed some secret bandwidth limit, this isn't a big risk either. The worst that will happen to you is that you'll have to find a new ISP.
A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either "Bill" or "Linus" mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network. In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It's a really clever idea.
Security is always a trade-off. I know people who rarely lock their front door, who drive in the rain (sometimes while using a cell phone) and who talk to strangers. In my opinion, securing my wireless network isn't worth it. And I appreciate everyone else who keeps an open wireless network, including all the coffee shops, bars and libraries I have visited in the past, the Dayton International Airport where I started writing this and the Four Points Sheraton where I finished. You all make the world a better place.
This essay originally appeared on Wired.com, and has since generated a lot of controversy. There's a Slashdot thread. And here are three opposing essays and three supporting essays. Presumably there will be a lot of back and forth in the comments section here as well.
EDITED TO ADD (1/18): Another. In the beginning, comments agreeing with me and disagreeing with me were about tied. By now, those that disagree with me are firmly in the lead.
"The goal of this project is to develop a reusable and behaviorally founded computer model of pedestrian movement and crowd behavior amid dense urban environments, to serve as a test-bed for experimentation," says Torrens. "The idea is to use the model to test hypotheses, real-world plans and strategies that are not very easy, or are impossible to test in practice."
Their special report from December includes a bunch of different articles.
Excellent essay from The New York Times:
In the end, I'm not sure which is more troubling, the inanity of the existing regulations, or the average American's acceptance of them and willingness to be humiliated. These wasteful and tedious protocols have solidified into what appears to be indefinite policy, with little or no opposition. There ought to be a tide of protest rising up against this mania. Where is it? At its loudest, the voice of the traveling public is one of grumbled resignation. The op-ed pages are silent, the pundits have nothing meaningful to say.
This story made the rounds in European newspapers about ten years ago -- mostly stories in German, if I remember correctly -- but it wasn't covered much here in the U.S.
For half a century, Crypto AG, a Swiss company located in Zug, has sold to more than 100 countries the encryption machines their officials rely upon to exchange their most sensitive economic, diplomatic and military messages. Crypto AG was founded in 1952 by the legendary (Russian-born) Swedish cryptographer Boris Hagelin. During World War II, Hagelin sold 140,000 of his machines to the US Army.
We don't know the truth here, but the article lays out the evidence pretty well.
See this essay of mine on how the NSA might have been able to read Iranian encrypted traffic.
It's not on their website yet, and you'd have to pay to read it in any case, but the February 2008 issue of Consumer Reports has an article on aviation security. Much of it you've all heard before, but there are some new bits:
Larry Tortorich, a TSA training officer and former representative to the Joint Terrorism Task Force who retired in 2006, also says he saw problems from the inside. "There was a facade of security. There were numerous security flaws and vulnerabilities I identified. The response was, it wasn't apparent to the public, so there would not be any corrective action."
I've regularly pointed to reinforcing the cockpit doors as something that was a good idea, and should have been done years earlier.
Critics, however, say a stronger door is only half of the solution. "People have this illusion that hardened cockpit doors work, and they don't," Dzakovic says. "If you want to have a secure door, you need to have a double-hulled door."
Most of them weren't really security issues: locking mechanisms failing, doors popping open in flight, and so on. But this was more interesting:
A 2006 study of aviation security by DFI International, a Washington, D.C. security consultancy, found that a drunken passenger kicked a hole in a door panel and that aircraft cleaners "broke a fortified door off its hinges by running a heavy snack cart into it on a bet."
The article also talks about how poor the screeners actually are, but I've covered all that already.
His name is similar to someone on the "no fly" list:
A five-year-old boy was taken into custody and thoroughly searched at Sea-Tac because his name is similar to a possible terrorist alias. As the Consumerist reports, "When his mother went to pick him up and hug him and comfort him during the proceedings, she was told not to touch him because he was a national security risk. They also had to frisk her again to make sure the little Dillinger hadn't passed any dangerous weapons or materials to his mother when she hugged him."
The explanation is simple: to the TSA, following procedure is more important than common sense. But unfortunately, catching the next terrorist will require more common sense than it will following proper procedure.
If I ever get to interview Kip Hawley again, I'll ask him about this.
EDITED TO ADD (1/12): Another kid on the no-fly list.
Canada comes in first.
Individual privacy is best protected in Canada but under threat in the United States and the European Union as governments introduce sweeping surveillance and information-gathering measures in the name of security and border control, an international rights group said in a report released Saturday.
EDITED TO ADD (1/10): Actually, Canada comes in second.
The daily newspaper, Aftonbladet, turned the stick over to the Armed Forces on Thursday. The paper's editorial office obtained the memory stick from an individual who discovered it in a public computer center in Stockholm.
I wrote about this sort of thing two years ago:
The point is that it's now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I'd never know it.
Interesting article from Newsweek:
The evolutionary primacy of the brain's fear circuitry makes it more powerful than the brain's reasoning faculties. The amygdala sprouts a profusion of connections to higher brain regions -- neurons that carry one-way traffic from amygdala to neocortex. Few connections run from the cortex to the amygdala, however. That allows the amygdala to override the products of the logical, thoughtful cortex, but not vice versa. So although it is sometimes possible to think yourself out of fear ("I know that dark shape in the alley is just a trash can"), it takes great effort and persistence. Instead, fear tends to overrule reason, as the amygdala hobbles our logic and reasoning circuits. That makes fear "far, far more powerful than reason," says neurobiologist Michael Fanselow of the University of California, Los Angeles. "It evolved as a mechanism to protect us from life-threatening situations, and from an evolutionary standpoint there's nothing more important than that."
I've already written about this sort of thing.
Investigative report on passport fraud worldwide.
Six years after 9/11, an NBC News undercover investigation has found that the black market in fraudulent passports is thriving. On the streets of South America, NBC documented the sale of stolen and doctored passports, and travel papers prized by terrorists: genuine passports issued under false names. For a few thousand dollars, an undercover investigator was able to purchase several entirely new identities from organized criminal networks with access to corrupt government employees. The investigator obtained passports from Spain, Peru, and Venezuela and used the Peruvian and Venezuelan passports to travel widely in the Western Hemisphere, with practically no scrutiny.
All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.
Yesterday The New York Times wrote about New York City's campaign:
Now, an overview of police data relating to calls to the hot line over the past two years reveals the answer and provides a unique snapshot of post-9/11 New York, part paranoia and part well-founded caution. Indeed, no terrorists were arrested, but a wide spectrum of other activity was reported.
And as long as we're on the topic, read about the couple branded as terrorists in the UK for taking photographs in a mall. And this about a rail fan being branded a terrorist for trying to film a train. (Note that the member of the train's crew was trying to incite the other passengers to do something about the filmer.) And about this Icelandic woman's experience with U.S. customs because she overstayed a visa in 1995.
And lastly, this funny piece of (I trust) fiction.
Remember that every one of these incidents requires police resources to investigate, resources that almost certainly could be better spent keeping us actually safe.
I was interviewed by Computerworld Australia.
The news articles are pretty sensational:
The computer network in the Dreamliner's passenger compartment, designed to give passengers in-flight internet access, is connected to the plane's control, navigation and communication systems, an FAA report reveals.
According to the U.S. Federal Aviation Administration, the new Boeing 787 Dreamliner aeroplane may have a serious security vulnerability in its on-board computer networks that could allow passengers to access the plane's control systems.
If this is true, this is a very serious security vulnerability. And it's not just terrorists trying to control the airplane, but the more common software flaw that causes some unforeseen interaction with something else and cascades into a bigger problem. However, the FAA document in the Federal Register is not as clear as all that. It does say:
The proposed architecture of the 787 is different from that of existing production (and retrofitted) airplanes. It allows new kinds of passenger connectivity to previously isolated data networks connected to systems that perform functions required for the safe operation of the airplane. Because of this new passenger connectivity, the proposed data network design and integration may result in security vulnerabilities from intentional or unintentional corruption of data and systems critical to the safety and maintenance of the airplane. The existing regulations and guidance material did not anticipate this type of system architecture or electronic access to aircraft systems that provide flight critical functions. Furthermore, 14 CFR regulations and current system safety assessment policy and techniques do not address potential security vulnerabilities that could be caused by unauthorized access to aircraft data buses and servers. Therefore, special conditions are imposed to ensure that security, integrity, and availability of the aircraft systems and data networks are not compromised by certain wired or wireless electronic connections between airplane data buses and networks.
But, honestly, this isn't nearly enough information to work with. Normally, the aviation industry is really good about this sort of thing, and it doesn't make sense that they'd do something as risky as this. I'd like more definitive information.
EDITED TO ADD (1/16): The FAA responds. Seems like there's more hype than story here. Still, it's worth paying attention to.
Because they're harder to hack:
Though Apple machines are still pricier than their Windows counterparts, the added security they offer might be worth the cost, says Wallington. He points out that Apple's Xserve servers, which are gradually becoming more commonplace in Army data centers, are proving their mettle. "Those are some of the most attacked computers there are. But the attacks used against them are designed for Windows-based machines, so they shrug them off," he says.
This one is pretty funny, too.
Good article about the Ft. Dix terrorist plotters: the challenges of going after terrorism more proactively, and the risks of using informants.
I wrote about some of these issues here.
I'm generally a fan of behavioral profiling. While it sounds weird and creepy and has been likened to Orwell's "facecrime", there's no doubt that -- when done properly -- it works at catching common criminals:
On Dec. 4, Juan Carlos Berriel-Castillo, 22, and Bernardo Carmona-Olivares, 20, were planning to fly to Maui but were instead arrested on suspicion of forgery.
TSA press release here.
Security is a trade-off. The question is whether the expense of the Screening Passengers by Observation Techniques (SPOT) program, given the minor criminals it catches, is worth it. (Remember, it's supposed to catch terrorists, not people with outstanding misdemeanor warrants.) Especially with the 99% false alarm rate:
Since January 2006, behavior-detection officers have referred about 70,000 people for secondary screening, Maccario said. Of those, about 600 to 700 were arrested on a variety of charges, including possession of drugs, weapons violations and outstanding warrants.
And the other social costs, including loss of liberty, restriction of fundamental freedoms, and the creation of a thoughtcrime. Is this the sort of power we want to give a police force in a constitutional democracy, or does it feel more like a police-state sort of thing?
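The quoted referral and arrest figures make the false alarm rate easy to check. A quick back-of-the-envelope calculation, treating every referral that didn't end in an arrest as a false alarm:

```python
# SPOT program figures quoted above: ~70,000 secondary-screening
# referrals since January 2006, of which roughly 600-700 led to arrest.
referrals = 70_000

for arrests in (600, 700):
    # Every referral that did not end in an arrest counts as a false alarm.
    false_alarm_rate = 1 - arrests / referrals
    print(f"{arrests} arrests -> {false_alarm_rate:.1%} false alarm rate")
```

And measured against the program's stated goal -- catching terrorists rather than people with outstanding warrants -- the false alarm rate is effectively 100%, since none of the arrests were terrorism-related.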
This "Bizarro" cartoon sums it up nicely.
Join "My SHC Community" on Sears.com, and the company will install some pretty impressive spyware on your computer:
Sears.com is distributing spyware that tracks all your Internet usage - including banking logins, email, and all other forms of Internet usage - all in the name of "community participation." Every website visitor that joins the Sears community installs software that acts as a proxy to every web transaction made on the compromised computer. In other words, if you have installed Sears software ("the proxy") on your system, all data transmitted to and from your system will be intercepted. This extreme level of user tracking is done with minimal, inconspicuous notice about the true nature of the software. In fact, while registering to join the "community," very little mention is made of software or tracking. Furthermore, after the software is installed, there is no indication on the desktop that the proxy exists on the system, so users are tracked silently.
If a kid with a scary hacker name did this sort of thing, he'd be arrested. But this is Sears, so who knows what will happen to them. But what should happen is that the anti-spyware companies should treat this as the malware it is, and not ignore it because it's done by a Fortune 500 company.
"National Security for the Twenty-First Century," by Charlie Edwards at the British think-tank Demos. It's long -- 121 pages -- but there's some good stuff in it.
The British Government changes their rhetoric:
The words "war on terror" will no longer be used by the British government to describe attacks on the public, the country's chief prosecutor said Dec. 27.
This is excellent. The only war has been rhetorical, and using that language only served to scare people and legitimize the terrorists. Someday the U.S. will follow suit.
While standard commercial software vendors sell software as a service, malware vendors sell malware as a service, which is advertised and distributed like standard software. Communicating via internet relay chat (IRC) and forums, hackers advertise Iframe exploits, pop-unders, click fraud, posting and spam. "If you don't have it, you can rent it here," boasts one post, which also offers online video tutorials. Prices for services vary by as much as 100-200 percent across sites, while prices for non-Russian sites are often higher: "If you want the discount rate, buy via Russian sites," says Genes.
This kind of thing is also discussed here.