Blog: January 2008 Archives
Yet another article on the topic. An excerpt:
We substitute one risk for another.
Insurers in the United Kingdom used to offer discounts to drivers who purchased cars with safer brakes. “They don’t anymore,” says John Adams, a risk analyst and emeritus professor of geography at University College. “There weren’t fewer accidents, just different accidents.”
Why? For the same reason that the vehicles most likely to go out of control in snowy conditions are those with four-wheel drive. Buoyed by a false sense of safety that comes with the increased control, drivers of four-wheel-drive vehicles take more risks. “These vehicles are bigger and heavier, which should keep them on the road,” says Ropeik. “But police report that these drivers go faster, even when roads are slippery.”
Both are cases of risk compensation: People have a preferred level of risk, and they modulate their behavior to keep risk at that constant level. Features designed to increase safety—four-wheel drive, seat belts, or air bags—wind up making people drive faster. The safety features may reduce risks associated with weather, but they don’t cut overall risk. “If I drink a diet soda with dinner,” quips Slovic, “I have ice cream for dessert.”
…federal law enforcement officials who need to know have already learned the identities of those responsible for running the Storm worm network, but that U.S. authorities have thus far been prevented from bringing those responsible to justice due to a lack of cooperation from officials in St. Petersburg, Russia, where the Storm worm authors are thought to reside.
I’ve written about Storm here.
Cory Doctorow has a new metaphor:
We should treat personal electronic data with the same care and respect as weapons-grade plutonium—it is dangerous, long-lasting and once it has leaked there’s no getting it back.
I said something similar two years ago:
In some ways, this tidal wave of data is the pollution problem of the information age. All information processes produce it. If we ignore the problem, it will stay around forever. And the only way to successfully deal with it is to pass laws regulating its generation, use and eventual disposal.
They’re checking IDs more carefully, looking for forgeries:
Black lights will help screeners inspect the ID cards by illuminating holograms, typically of government seals, that are found in licenses and passports. Screeners also are getting magnifying glasses that highlight tiny inscriptions found in borders of passports and other IDs. About 2,100 of each are going to the nation’s 800 airport checkpoints.
The closer scrutiny of passenger IDs is the latest Transportation Security Administration effort to check passengers more thoroughly than simply having them walk through metal detectors.
More than 40 passengers have been arrested since June in cases when TSA screeners spotted altered passports, fraudulent visas and resident ID cards, and forged driver’s licenses. Many of them were arrested on immigration charges.
ID checks have nothing to do with airport security. And even if they did, anyone can fly on a fake ID. And enforcing immigration laws is not what the TSA does.
In related news, look at this page from the TSA’s website:
We screen every passenger; we screen every bag so that your memories are from where you went, not how you got there. We’re here to help your travel plans be smooth and stress free. Please take a moment to become familiar with some of our security measures. Doing so now will help save you time once you arrive at the airport.
I know they don’t mean it that way, but doesn’t it sound like it’s saying “We know it doesn’t help, but it might make you feel better”?
And why is this even news?
So Jason—looking every bit the middle-aged man on an uneventful trip to anywhere—shows a boarding pass and an ID to a TSA document checker, and he is directed to a checkpoint where, unbeknown to the security officer on site, the real test begins.
He gets through, which in real life would mean a terrorist was headed toward a plane with a bomb.
To be clear, the TSA allowed CNN to see and record this test, and the agency is not concerned with CNN showing it. The TSA says techniques such as the one used in Tampa are known to terrorists and openly discussed on known terror Web sites.
Also relevant: “Confessions of a TSA Agent“:
The traveling public has no idea that the changes the TSA makes come as orders sent down directly from Washington D.C. Those orders may have reasons, but we little screeners at a screening checkpoint will never be told what the background might be. We get told to do something, and just as in the military, we are expected to make it happen—no ifs, ands or buts about it. Perhaps the changes are as a result of some event occurring in the nation or the world, perhaps it’s based on some newly received information or interrogation. What the traveling public needs to understand is the necessity for flexibility. If a passenger asks us why we’re doing something, in all likelihood we couldn’t tell them even if we really did know the answer. This is a business of sensitive information that is used to make choices that can have life-changing effects if the information is divulged to the wrong person(s). Just trust that we must know something that prompts us to be doing something.
I have no idea why Kip Hawley is surprised that the TSA is as unpopular with Americans as the IRS.
EDITED TO ADD (1/30): The TSA has a blog, and Kip Hawley wrote the first post. This could be interesting….
EDITED TO ADD (1/31): There is some speculation that the “Confessions of a TSA Agent” is a hoax. I don’t know.
If there’s a debate that sums up post-9/11 politics, it’s security versus privacy. Which is more important? How much privacy are you willing to give up for security? Can we even afford privacy in this age of insecurity? Security versus privacy: It’s the battle of the century, or at least its first decade.
In a Jan. 21 New Yorker article, Director of National Intelligence Michael McConnell discusses a proposed plan to monitor all—that’s right, all—internet communications for security purposes, an idea so extreme that the word “Orwellian” feels too mild.
In order for cyberspace to be policed, internet activity will have to be closely monitored. Ed Giorgio, who is working with McConnell on the plan, said that would mean giving the government the authority to examine the content of any e-mail, file transfer or Web search. “Google has records that could help in a cyber-investigation,” he said. Giorgio warned me, “We have a saying in this business: ‘Privacy and security are a zero-sum game.'”
I’m sure they have that saying in their business. And it’s precisely why, when people in their business are in charge of government, it becomes a police state. If privacy and security really were a zero-sum game, we would have seen mass immigration into the former East Germany and modern-day China. While it’s true that police states like those have less street crime, no one argues that their citizens are fundamentally more secure.
We’ve been told we have to trade off security and privacy so often—in debates on security versus privacy, writing contests, polls, reasoned essays and political rhetoric—that most of us don’t even question the fundamental dichotomy.
Security and privacy are not opposite ends of a seesaw; you don’t have to accept less of one to get more of the other. Think of a door lock, a burglar alarm and a tall fence. Think of guns, anti-counterfeiting measures on currency and that dumb liquid ban at airports. Security affects privacy only when it’s based on identity, and there are limitations to that sort of approach.
Since 9/11, approximately three things have potentially improved airline security: reinforcing the cockpit doors, passengers realizing they have to fight back and—possibly—sky marshals. Everything else—all the security measures that affect privacy—is just security theater and a waste of effort.
By the same token, many of the anti-privacy “security” measures we’re seeing—national ID cards, warrantless eavesdropping, massive data mining and so on—do little to improve, and in some cases harm, security. And government claims of their success are either wrong, or against fake threats.
The debate isn’t security versus privacy. It’s liberty versus control.
You can see it in comments by government officials: “Privacy no longer can mean anonymity,” says Donald Kerr, principal deputy director of national intelligence. “Instead, it should mean that government and businesses properly safeguard people’s private communications and financial information.” Did you catch that? You’re expected to give up control of your privacy to others, who—presumably—get to decide how much of it you deserve. That’s what loss of liberty looks like.
It should be no surprise that people choose security over privacy: 51 to 29 percent in a recent poll. Even if you don’t subscribe to Maslow’s hierarchy of needs, it’s obvious that security is more important. Security is vital to survival, not just of people but of every living thing. Privacy is unique to humans, but it’s a social need. It’s vital to personal dignity, to family life, to society—to what makes us uniquely human—but not to survival.
If you set up the false dichotomy, of course people will choose security over privacy—especially if you scare them first. But it’s still a false dichotomy. There is no security without privacy. And liberty requires both security and privacy. The famous quote attributed to Benjamin Franklin reads: “Those who would give up essential liberty to purchase a little temporary safety, deserve neither liberty nor safety.” It’s also true that those who would give up privacy for security are likely to end up with neither.
This essay originally appeared on Wired.com.
Remember the “cyberwar” in Estonia last year? When asked about it, I generally say that it’s unclear that it wasn’t just kids playing politics.
The reality is even more mundane:
…the attacker convicted today isn’t a member of the Russian military, nor is he an embittered cyber warrior in Putin’s secret service. He doesn’t even live in Russia. He’s a [20-year-old] ethnic Russian who lives in Estonia, who was pissed off over that whole statue thing.
The court fined him 17,500 kroons, or $1,620, and sent him on his way.
So much for all of that hype.
Ronald C. Arkin, “Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture,” Technical Report GIT-GVU-07-11. Fascinating (and long: 117 pages) paper on the ethical implications of robots in war.
Summary, Conclusions, and Future Work
This report has provided the motivation, philosophy, formalisms, representational requirements, architectural design criteria, recommendations, and test scenarios to design and construct an autonomous robotic system architecture capable of the ethical use of lethal force. These first steps toward that goal are very preliminary and subject to major revision, but at the very least they can be viewed as the beginnings of an ethical robotic warfighter. The primary goal remains to enforce the International Laws of War in the battlefield in a manner that is believed achievable, by creating a class of robots that not only conform to International Law but outperform human soldiers in their ethical capacity.
It is too early to tell whether this venture will be successful. There are daunting problems:
- The transformation of International Protocols and battlefield ethics into machine usable representations and real-time reasoning capabilities for bounded morality using modal logics.
- Mechanisms to ensure that the design of intelligent behaviors only provide responses within rigorously defined ethical boundaries.
- The creation of techniques to permit the adaptation of an ethical constraint set and underlying behavioral control parameters that will ensure moral performance, should those norms be violated in any way, involving reflective and affective processing.
- A means to make responsibility assignment clear and explicit for all concerned parties regarding the deployment of a machine with a lethal potential on its mission.
Over the next two years, this architecture will be slowly fleshed out in the context of the specific test scenarios outlined in this article. Hopefully the goals of this effort will fuel other scientists’ interest to assist in ensuring that the machines that we as roboticists create fit within international and societal expectations and requirements.
My personal hope would be that they will never be needed in the present or the future. But mankind’s tendency toward war seems overwhelming and inevitable. At the very least, if we can reduce civilian casualties according to what the Geneva Conventions have promoted and the Just War tradition subscribes to, the result will have been a humanitarian effort, even while staring directly at the face of war.
In 1987, CPSR began a tradition to recognize outstanding contributions for social responsibility in computing technology. The organization wanted to cite people who recognize the importance of a science-educated public, who take a broader view of the social issues of computing. We aimed to share concerns that lead to action in arenas of the power, promise, and limitations of computer technology.
It’s also dead:
Heavier than even giant squid, colossal squid (Mesonychoteuthis hamiltoni) have eyes as wide as dinner plates and sharp hooks on some of their suckers. The new specimen weighs in at an estimated 990 pounds (450 kilograms).
New York City’s plan to secure its subways with a next-generation surveillance network is getting more expensive by the second, and slipping further and further behind schedule. A new report by the New York State Comptroller’s office reveals that “the cost of the electronic security program has grown from $265 million to $450 million, an increase of $185 million or 70 percent.” An August 2008 deadline has been pushed back to December 2009, and further delays may be just ahead.
I’ve spent the last few months, on and off, reporting on New York’s counter-terror programs for the magazine. One major problem with the subway surveillance program has been wedging a modern security network into a 5,000 square-mile system that recently celebrated its hundredth birthday. Getting the power and air-conditioning needed for the cameras’ servers has been a nightmare. In many stations, there’s literally no place to put the things. Plus, the ceilings in most of the subway stations are only nine feet high, and there are columns every few yards. Which makes it very hard to get a good look at the passengers.
This is an awful fear-mongering story about non-Muslims being recruited in the UK:
As many as 1,500 white Britons are believed to have converted to Islam for the purpose of funding, planning and carrying out surprise terror attacks inside the UK, according to one MI5 source.
This quote is particularly telling:
One British security source last night told Scotland on Sunday: “There could be anything up to 1,500 converts to the fundamentalist cause across Britain. They pose a real potential danger to our domestic security because, obviously, these people blend in and do not raise any flags.”
Because the only “flag” that can possibly identify terrorists is that they’re Muslim, right?
Using software, of course. The context is shredded and torn East German Stasi documents, but the technology is more general:
The machine-shredded stuff is confetti, largely unrecoverable. But in May 2007, a team of German computer scientists in Berlin announced that after four years of work, they had completed a system to digitally tape together the torn fragments. Engineers hope their software and scanners can do the job in less than five years, even taking into account the varying textures and durability of paper, the different sizes and shapes of the fragments, the assortment of printing (from handwriting to dot matrix) and the range of edges (from razor sharp to ragged and handmade). “The numbers are tremendous. If you imagine putting together a jigsaw puzzle at home, you have maybe 1,000 pieces and a picture of what it should look like at the end,” project manager Jan Schneider says. “We have many millions of pieces and no idea what they should look like when we’re done.”
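The matching step the engineers describe is essentially a giant jigsaw solver. Here's a toy sketch of the combinatorial core (nothing here reflects the actual German system's internals): each scanned tear becomes a profile of edge measurements, and two fragments fit when one profile is approximately the complement of the other.

```python
# Toy fragment matching: a tear produces two jagged edges that are exact
# complements of each other. Real scans are noisy, so we match by minimum
# distance rather than exact equality. (Illustrative only; the real system
# also uses paper texture, print style, and color.)

import random

def tear(width, n_points, rng):
    """Simulate a tear: a jagged edge profile and its exact complement."""
    profile = [rng.randint(0, width) for _ in range(n_points)]
    complement = [width - p for p in profile]
    return profile, complement

def edge_distance(right_edge, left_edge, width=10):
    """How well two edges interlock: 0 means a perfect fit."""
    return sum(abs((width - r) - l) for r, l in zip(right_edge, left_edge))

rng = random.Random(42)
# make three torn pairs; imagine the right-hand pieces arrived scrambled
tears = [tear(10, 8, rng) for _ in range(3)]
rights = [t[0] for t in tears]
lefts = [t[1] for t in tears]

# for each right edge, pick the left edge that interlocks best
matches = [min(range(len(lefts)), key=lambda j: edge_distance(r, lefts[j]))
           for r in rights]
print(matches)  # each right piece finds its own complement: [0, 1, 2]
```

The hard part, which this sketch skips entirely, is scale: with millions of fragments, you can't compare every pair, so the real system first clusters fragments by paper type and print before attempting edge matching.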
I have absolutely no doubt that there will be security flaws in remotely controllable thermostats, allowing hackers to seize control of them. Do this on a too-hot day, and you might even cause a large blackout.
The CIA unleashed a big one at a SANS conference:
On Wednesday, in New Orleans, US Central Intelligence Agency senior analyst Tom Donahue told a gathering of 300 US, UK, Swedish, and Dutch government officials and engineers and security managers from electric, water, oil & gas and other critical industry asset owners from all across North America, that “We have information, from multiple regions outside the United States, of cyber intrusions into utilities, followed by extortion demands. We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyber attacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities. We do not know who executed these attacks or why, but all involved intrusions through the Internet.”
According to Mr. Donahue, the CIA actively and thoroughly considered the benefits and risks of making this information public, and came down on the side of disclosure.
SANS’s Alan Paller is happy to add details:
In the past two years, hackers have in fact successfully penetrated and extorted multiple utility companies that use SCADA systems, says Alan Paller, director of the SANS Institute, an organization that hosts a crisis center for hacked companies. “Hundreds of millions of dollars have been extorted, and possibly more. It’s difficult to know, because they pay to keep it a secret,” Paller says. “This kind of extortion is the biggest untold story of the cybercrime industry.”
And to up the fear factor:
The prospect of cyberattacks crippling multicity regions appears to have prompted the government to make this information public. The issue “went from ‘we should be concerned about this’ to ‘this is something we should fix now,’” said Paller. “That’s why, I think, the government decided to disclose this.”
An attendee of the meeting said that the attack was not well-known throughout the industry and came as a surprise to many there. Said the person, who asked to remain anonymous, “There were apparently a couple of incidents where extortionists cut off power to several cities using some sort of attack on the power grid, and it does not appear to be a physical attack.”
And more hyperbole from someone in the industry:
Over the past year to 18 months, there has been “a huge increase in focused attacks on our national infrastructure networks, . . . and they have been coming from outside the United States,” said Ralph Logan, principal of the Logan Group, a cybersecurity firm.
It is difficult to track the sources of such attacks, because they are usually made by people who have disguised themselves by worming into three or four other computer networks, Logan said. He said he thinks the attacks were launched from computers belonging to foreign governments or militaries, not terrorist groups.
I’m more than a bit skeptical here. To be sure—fake staged attacks aside—there are serious risks to SCADA systems (Ganesh Devarajan gave a talk at DefCon this year about some potential attack vectors), although at this point I think they’re more a future threat than a present danger. But this CIA tidbit tells us nothing about how the attacks happened. Were they against SCADA systems? Were they against general-purpose computers, maybe Windows machines? Insiders may have been involved, so was this a computer security vulnerability at all? We have no idea.
Cyber-extortion is certainly on the rise; we see it at Counterpane. Primarily it’s against fringe industries—online gambling, online gaming, online porn—operating offshore in countries like Bermuda and the Cayman Islands. It is going mainstream, but this is the first I’ve heard of it targeting power companies. Certainly possible, but is that part of the CIA rumor or was it tacked on afterwards?
And here’s a list of power outages. Which ones were hacker caused? Some details would be nice.
I’d like a little bit more information before I start panicking.
EDITED TO ADD (1/23): Slashdot thread.
Almost three years ago I blogged about SmartWater: liquid imbued with a uniquely identifiable DNA-style code. In my post I made the snarky comment:
The idea is for me to paint this stuff on my valuables as proof of ownership. I think a better idea would be for me to paint it on your valuables, and then call the police.
That remark aside, a new university study concludes that it works:
The study of over 100 criminals revealed that simply displaying signs that goods and premises were protected by SmartWater was sufficient to put off most of the criminals the team interviewed.
Professor Gill said: “According to our sample, SmartWater provided a strong projected deterrent value in that 74 per cent of the offenders interviewed reported that they would in the future be put off from breaking into a building with a SmartWater poster/sign displayed.
“Overall, the findings indicate that crime reduction strategies using SmartWater products have a strong deterrent effect. In particular, one notable finding of the study was that whilst ‘property marking’ in general acts as a reasonable deterrent, the combination of forensic products which SmartWater uses in its holistic approach increases the deterrent factor substantially.”
When scored out of ten by respondents in regard to deterrent value, SmartWater was awarded the highest average score (8.3 out of a score of 10) compared to a range of other crime deterrents. CCTV scored 6.2, Burglar Alarms scored 6.0 and security guards scored 4.9.
Of course, we don’t know if the study was sponsored by SmartWater the company, and we don’t know the methodology—interviewing criminals about what deters them is fraught with potential biases—but it’s still interesting.
Also note that SmartWater is not only sprayed on valuables, but also sprayed on burglars and criminals—tying them to the crime scene.
The Dutch RFID public transit card, which has already cost the government $2B—no, that’s not a typo—has been hacked even before it has been deployed:
The first reported attack was designed by two students at the University of Amsterdam, Pieter Siekerman and Maurits van der Schee. They analyzed the single-use ticket and showed its vulnerabilities in a report. They also showed how a used single-use card could be given eternal life by resetting it to its original “unused” state.
The next attack was on the Mifare Classic chip, used on the normal ticket. Two German hackers, Karsten Nohl and Henryk Plotz, were able to remove the coating on the Mifare chip and photograph the internal circuitry. By studying the circuitry, they were able to deduce the secret cryptographic algorithm used by the chip. While this alone does not break the chip, it certainly gives future hackers a stepping stone on which to stand. On Jan. 8, 2008, they released a statement about their work.
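Recovering the algorithm matters because the Mifare Classic's cipher, Crypto-1, uses only a 48-bit key. Once the formerly secret cipher is public, plain brute force becomes realistic. A rough back-of-envelope (the keys-per-second figure is an assumed round number, not a benchmark):

```python
# Crypto-1 (Mifare Classic) uses a 48-bit secret key. Once the cipher is
# known, an attacker can simply try every key; the search rate below is an
# assumption for illustration, not a measured figure.

keyspace = 2 ** 48                    # 281,474,976,710,656 possible keys
keys_per_second = 1_000_000_000       # assumed: ~1 billion keys/sec on dedicated hardware
worst_case_days = keyspace / keys_per_second / 86_400
print(f"worst case: {worst_case_days:.1f} days")  # ~3.3 days
print(f"average:    {worst_case_days / 2:.1f} days")
```

Compare that with a modern 128-bit key, where the same arithmetic gives a number of years vastly exceeding the age of the universe: the weakness is the key size, and obscurity was the only thing hiding it.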
Most of the links are in Dutch; there isn’t a whole lot of English-language press about this. But the Dutch Parliament recently invited the students to give testimony; they’re more than a little bit interested in how $2B could be wasted.
My guess is the system was designed by people who don’t understand security, and therefore thought it was easy.
EDITED TO ADD (2/13): More info.
Up to three American Airlines jets carrying passengers will be outfitted with anti-missile technology this spring in the latest phase of testing technology to protect commercial planes from attack.
The technology is intended to stop a missile attack by detecting heat given off from the rocket, then firing a laser beam that jams the missile’s guidance system.
I have several feelings about this. One, it’s security theater against a movie-plot threat. Two, given that that’s true, attaching an empty box to the belly of the plane and writing “Laser Anti-Missile System” on it would be just as effective a deterrent at a fraction of the cost. And three, how do we know that’s not what they’re doing?
More news here.
Fire Engineering magazine points out that fire alarms used to be kept locked to prevent false alarms:
Q: Prior to 1870, street corner fire alarm pull boxes were kept locked. Why were they kept locked and how did a person gain access to ‘pull the box?’
A: They were kept locked due to false alarms. Nearby shopkeepers or beat cops carried the keys.
According to Robert Cromie in The Great Chicago Fire (Thomas Nelson: 1994, p. 33), this may have been one reason for the slow response to the fire:
William Lee, the O’Learys’ neighbor, rushed into Goll’s drugstore, and gasped out a request for the key to the alarm box. The new boxes were attached to the walls of stores or other convenient locations. To prevent false alarms and crank calls, the boxes were locked, and the keys given to trustworthy citizens nearby.
What happened when Lee made his request is not clear. Only one fact emerges from the confusion: No alarm was registered from any box in the vicinity of the fire until it was too late to do any good.
Apparently, Lee said that Goll refused to give him the key because he’d already seen a fire engine go past; Goll said he actually did pull the alarm, twice, but if so it must not have worked.
(There’s more about what sounds like a really bad communications failure, but it’s a little too hard for me to read on the Amazon website.)
But did you know that the fire burned for over half an hour before an alarm was ever sounded? Alarm boxes were actually kept locked in those days, to prevent false alarms!
When the first alarm box was finally opened and the lever pulled, the alarm somehow did not get through. The fire dispatcher was playing a guitar for a couple of girls at the time and he kept on serenely strumming, completely unawares. After the fire had been growing and blazing for nearly an hour a watchman screamed at the dispatcher to sound an alarm, which he did, and the first three engines, two hose wagons, and two hook and ladders were sent out—but in the wrong direction!
At first the dispatcher refused to sound another alarm, hoping to avoid further confusion.
Compare this with a proposed law in New York City that would require people to get a license before they can buy chemical, biological, or radiological attack detectors:
The legislation—which was proposed by the Bloomberg administration and would be the first of its kind in the nation—would empower the police commissioner to decide whether to grant a free five-year permit to individuals and companies seeking to “possess or deploy such detectors.” Common smoke alarms and carbon monoxide detectors would not be covered by the law, the Police Department said. Violations of the law would be considered a misdemeanor.
Why does the administration think such a law is necessary? Richard A. Falkenrath, the Police Department’s deputy commissioner for counterterrorism, told the Council’s Public Safety Committee at a hearing today, “Our mutual goal is to prevent false alarms and unnecessary public concern by making sure that we know where these detectors are located and that they conform to standards of quality and reliability.”
The law would also require anyone using such a detector—regardless of whether they have obtained the required permit—to notify the Police Department if the detector alerted them to a biological, chemical or radiological agent. “In this way, emergency response personnel will be able to assess threats and take appropriate action based on the maximum information available,” Dr. Falkenrath said.
False positives are a problem with any detection system, and certainly putting Geiger counters in the hands of everyone will mean a lot of amateurs calling false alarms into the police. But the way to handle that isn’t to ban Geiger counters. (Just as the way to deal with false fire alarms 100 years ago wasn’t to lock the alarm boxes.) The way to deal with it is by 1) putting a system in place to quickly separate the real alarms from the false alarms, and 2) prosecuting those who maliciously sound false alarms.
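The false-alarm problem is just base-rate arithmetic: when real attacks are vanishingly rare, even a fairly accurate detector's alarms are almost all false, which is exactly why triage beats prohibition. A quick illustration with made-up numbers:

```python
# Base-rate arithmetic for amateur radiological detectors. All numbers are
# illustrative assumptions, not from the article: a one-in-a-million prior
# that an alarm-worthy reading reflects a real attack, 99% sensitivity,
# and a 1% false-positive rate.

def posterior(prior, sensitivity, false_positive_rate):
    """P(real event | alarm) via Bayes' theorem."""
    true_alarms = prior * sensitivity
    false_alarms = (1 - prior) * false_positive_rate
    return true_alarms / (true_alarms + false_alarms)

p = posterior(prior=1e-6, sensitivity=0.99, false_positive_rate=0.01)
print(f"{p:.6f}")  # ~0.0001: roughly 1 alarm in 10,000 is real
```

Note that no realistic improvement in detector quality fixes this: the prior dominates. That's why the sensible response is a system for sorting real alarms from false ones, not a licensing regime for the detectors.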
We don’t want to encourage people to report everything; that’s too many false alarms. Nor do we want to discourage them from reporting things they feel are serious. In the end, it’s the job of the police to figure out what’s what. I said this in an essay last year:
…these incidents only reinforce the need to realistically assess, not automatically escalate, citizen tips. In criminal matters, law enforcement is experienced in separating legitimate tips from unsubstantiated fears, and allocating resources accordingly; we should expect no less from them when it comes to terrorism.
A 14-year-old modified a TV remote control to switch trams between tracks in the Polish city of Lodz:
Transport command and control systems are commonly designed by engineers with little exposure or knowledge about security using commodity electronics and a little native wit. The apparent ease with which Lodz’s tram network was hacked, even by these low standards, is still a bit of an eye opener.
Problems with the signalling system on Lodz’s tram network became apparent on Tuesday when a driver attempting to steer his vehicle to the right was involuntarily taken to the left. As a result the rear wagon of the train jumped the rails and collided with another passing tram. Transport staff immediately suspected outside interference.
Here’s Steve Bellovin:
The device is described in the original article as a modified TV remote control. Presumably, this means that the points are normally controlled by IR signals; what he did was learn the coding and perhaps the light frequency and amplitude needed. This makes a lot of sense; it lets tram drivers control where their trains go, rather than relying on an automated system or some such. Indeed, the article notes “a city tram driver tried to steer his vehicle to the right, but found himself helpless to stop it swerving to the left instead.”
The lesson here is that security by obscurity, combined with physical security of the equipment, wasn’t enough. This kid jumped whatever fences there were, and reverse-engineered the IR control protocol. Then he was able to play “trains” with real trains.
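We don't know what coding the Lodz system actually used. As an illustration of what "learning the coding" involves, here is how a captured frame in the widely used NEC consumer-IR scheme is decoded from its mark/space timings; the tram protocol was surely different, but the reverse-engineering task has the same shape.

```python
# NEC consumer-IR coding, as an illustration (NOT the Lodz tram protocol):
# a ~9ms leading mark and ~4.5ms space, then 32 bits sent LSB-first as
# (560us mark, space) pairs, where a ~560us space is a 0 bit and a ~1690us
# space is a 1 bit. The frame carries addr, ~addr, cmd, ~cmd.

def decode_nec(durations):
    """Decode a captured NEC frame given alternating mark/space durations (us)."""
    assert durations[0] > 8000 and durations[1] > 4000, "no NEC leader"
    # after the leader at indices 0-1, spaces sit at odd indices >= 3
    bits = [1 if space > 1000 else 0 for space in durations[3::2]]
    value = 0
    for i, b in enumerate(bits[:32]):
        value |= b << i
    addr = value & 0xFF
    cmd = (value >> 16) & 0xFF
    assert (value >> 8) & 0xFF == addr ^ 0xFF, "address check failed"
    assert (value >> 24) & 0xFF == cmd ^ 0xFF, "command check failed"
    return addr, cmd

def encode_nec(addr, cmd):
    """Build a synthetic capture, for testing the decoder against itself."""
    value = addr | ((addr ^ 0xFF) << 8) | (cmd << 16) | ((cmd ^ 0xFF) << 24)
    out = [9000, 4500]
    for i in range(32):
        out += [560, 1690 if (value >> i) & 1 else 560]
    return out

print(decode_nec(encode_nec(0x04, 0x2A)))  # (4, 42)
```

In practice the attacker records real frames with an IR receiver instead of synthesizing them, works out the timing scheme from the waveform, and then replays or forges commands, which is presumably what the teenager did with far less formality.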
The measures—details here—won’t do anything to stop child predators on MySpace. But, on the other hand, there isn’t really a child-predator problem on MySpace—just a tiny handful of highly publicized stories. It’s just security theater against a movie-plot threat. But we humans have a well-established cognitive bias that overestimates threats against our children, so it all makes sense.
The New York Times writes about a plausible connection between fear and heart disease:
Which is more of a threat to your health: Al Qaeda or the Department of Homeland Security?
An intriguing new study suggests the answer is not so clear-cut. Although it’s impossible to calculate the pain that terrorist attacks inflict on victims and society, when statisticians look at cold numbers, they have variously estimated the chances of the average person dying in America at the hands of international terrorists to be comparable to the risk of dying from eating peanuts, being struck by an asteroid or drowning in a toilet.
But worrying about terrorism could be taking a toll on the hearts of millions of Americans. The evidence, published last week in the Archives of General Psychiatry, comes from researchers who began tracking the health of a representative sample of more than 2,700 Americans before September 2001. After the attacks of Sept. 11, the scientists monitored people’s fears of terrorism over the next several years and found that the most fearful people were three to five times more likely than the rest to receive diagnoses of new cardiovascular ailments.
After controlling for various factors (age, obesity, smoking, other ailments and stressful life events), the researchers found that the people who were acutely stressed after the 9/11 attacks and continued to worry about terrorism—about 6 percent of the sample—were at least three times more likely than the others in the study to be given diagnoses of new heart problems.
If you extrapolate that percentage to the adult population of America, it works out to more than 10 million people. No one knows what fraction of them might consequently die of a stroke or heart attack—plenty of other factors affect heart disease—but if it were merely 0.03 percent, that would be higher than the 9/11 death toll.
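The arithmetic behind that extrapolation is easy to check (a quick sketch; the adult-population figure and the commonly cited 9/11 death toll of about 2,974 are approximations):

```python
# Checking the extrapolation above. Figures are approximate.
adult_population = 220_000_000       # rough U.S. adult population
fearful = 0.06 * adult_population    # "about 6 percent of the sample"
print(f"{fearful:,.0f}")             # 13,200,000 -- "more than 10 million"

# What fraction of 10 million fearful people would have to die of
# heart disease to exceed the 9/11 death toll?
nine_eleven_toll = 2_974
fraction = nine_eleven_toll / 10_000_000
print(f"{fraction:.2%}")             # 0.03%
```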
Of course, statistics of any sort, even when the numbers are rock solid, don’t mean much to people when they’re assessing threats. Risk researchers have found that even when people know the numbers, they’re less worried about death tolls than about how the deaths occur. They have good reasons—called “rival rationalities”—for fearing catastrophes that kill large numbers at once because these events affect the whole community and damage the social fabric.
It doesn’t surprise me that fear of terrorism is more harmful than actual terrorism. That’s the whole point of terrorism: an amplification of fear through the mass media.
The point of terrorism is to cause terror, sometimes to further a political goal and sometimes out of sheer hatred. The people terrorists kill are not the targets; they are collateral damage. And blowing up planes, trains, markets or buses is not the goal; those are just tactics. The real targets of terrorism are the rest of us: the billions of us who are not killed but are terrorized because of the killing. The real point of terrorism is not the act itself, but our reaction to the act.
And we’re doing exactly what the terrorists want.
The surest defense against terrorism is to refuse to be terrorized. Our job is to recognize that terrorism is just one of the risks we face, and not a particularly common one at that. And our job is to fight those politicians who use fear as an excuse to take away our liberties and promote security theater that wastes money and doesn’t make us any safer.
A longish article by Rudy Giuliani on his philosophy for securing the nation against terrorism. I may write a long blog post on the article after I read it, but I wanted to post the link as soon as I saw it.
This is a good article on a new trend in corporate spying: companies like Wal-Mart and Sears have resorted to covert surveillance of employees, partners, journalists, and even Internet users to protect themselves from “global threats.”
“Like most major corporations, it is our corporate responsibility to have systems in place, including software systems, to monitor threats to our network, intellectual property and our people,” Wal-Mart spokeswoman Sarah Clark said in a statement in April. Following the Gabbard firing, Wal-Mart said it conducted a review of its monitoring activities. “There have been changes in leadership, and we have strengthened our practices and protocols in this area,” Clark said.
At a gathering of security specialists in New York City in January of 2006, David Harrison, the former Army military intelligence officer who was hired by Senser to head Wal-Mart’s analytical security research center, provided a rare glimpse into the company’s monitoring operations. Harrison told the gathering Wal-Mart faces a wide range of threats: “A bombing in China, an armed robbery in Brazil, an armed robbery in Las Vegas, another bomb threat, and that was just yesterday,” Harrison said.
To safeguard its employees and operations Wal-Mart has tapped its massive data warehouse of information, now believed to be larger than 4 petabytes (4,000 terabytes), to look for potential threats. It tracks customers who buy propane tanks, for example, or anyone who has fraudulently cashed a check, or anyone making bulk purchases of pre-paid cell phones, which could be tied to criminal activities. “If you try to buy more than three cell phones at one time, it will be tracked,” he reportedly told the audience.
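A rule like the one Harrison describes is trivial to express over transaction records. This is purely illustrative — Wal-Mart's actual systems, record formats, and SKU names are not public, so everything below is an assumption except the "more than three cell phones" threshold quoted above:

```python
# Illustrative sketch of a bulk-purchase flagging rule. The record
# format, SKU name, and function names are invented; only the
# "more than three at one time" threshold comes from the article.

PREPAID_PHONE_SKU = "PREPAID_CELL"
BULK_THRESHOLD = 3  # "more than three cell phones at one time"

def flag_bulk_phone_purchases(transactions):
    """Return IDs of customers whose single transaction exceeds the threshold."""
    flagged = []
    for tx in transactions:
        qty = sum(item["qty"] for item in tx["items"]
                  if item["sku"] == PREPAID_PHONE_SKU)
        if qty > BULK_THRESHOLD:
            flagged.append(tx["customer_id"])
    return flagged

txs = [
    {"customer_id": "A", "items": [{"sku": "PREPAID_CELL", "qty": 4}]},
    {"customer_id": "B", "items": [{"sku": "PREPAID_CELL", "qty": 2}]},
]
print(flag_bulk_phone_purchases(txs))  # ['A']
```

The point is less the code than the scale: run a handful of rules like this over a four-petabyte warehouse and you have a surveillance system, not a loss-prevention tool.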
Gabbard, the Wal-Mart employee fired for recording reporters’ phone calls, said in his interview with The Wall Street Journal that Wal-Mart uses software from Raytheon Oakley Networks to monitor activity on its network. The Oakley product was originally developed for the U.S. Department of Defense.
The Oakley software is so sophisticated it can allow administrators to visually see what types of information are moving across the network, from Excel spreadsheets to job searches on Monster.com, or photos with flesh tones that might indicate a user is viewing pornography.
And this article talks about ex-CIA agents working for corporations:
The best estimate is that several hundred former intelligence agents now work in corporate espionage, including some who left the C.I.A. during the agency turmoil that followed 9/11. They quickly joined private-investigation firms whose U.S. corporate clients were planning to expand into Russia, China, and other countries with opaque business practices and few public records, and who needed the skinny on international partners or rivals.
These ex-spies apply a higher level of expertise, honed by government service, to the cruder tactics already practiced by private investigators. One such ploy is pretexting—obtaining information by pretending to be somebody else. While private detectives have long posed as freelance reporters or job recruiters to get people to talk, former agents have elevated pretexting to an art.
Similarly, ex-agents have helped popularize the use of G.P.S.-based monitoring devices and long-range cameras for following people around. One corporate-espionage technique comes straight from the C.I.A. playbook. In the constant search for the slightest edge, some hedge funds and investment companies have turned to a handful of private-investigation firms for a tactic that seems to fall between science and voodoo. Called tactical behavior assessment, it relies on dozens of verbal and nonverbal cues to determine whether someone is lying. Signs of potential deception include meandering off topic rather than sticking to the facts and excessive personal grooming, such as nervously picking lint off a jacket. This method was developed by former lie-detector experts from the C.I.A.’s Office of Security, which administers polygraph tests to keep agents honest and verify the stories of would-be defectors.
Most of the ex-agents’ activities, from surveillance to lie detection, are perfectly legal. In the wake of the 2006 Hewlett-Packard scandal—in which detectives used pretexting to obtain the private telephone records of company directors, employees, and journalists in an effort to track leaks to the media—federal law was tightened to prohibit using fraudulent means to obtain telephone records. Financial records were already off-limits. But federal law doesn’t forbid assuming a false identity to get other information—an area that ex-spies exploit.
Still, a few techniques favored by the spies-for-hire do appear to violate privacy statutes. One of these involves using “data haunts,” extreme methods of electronic monitoring such as tracking cell-phone calls and gathering emails by relying on secretly installed software to record computer keystrokes. An ex-C.I.A. agent described a group of his former colleagues who set up shop offshore so that they could tap into telephone calls—a practice prohibited by federal law—outside U.S. jurisdiction. “They call themselves the bad boys in the Bahamas,” he said.
Even some of the legal methods are controversial within the industry. Certain old-school firms won’t stoop to dumpster diving or stealing garbage—which is usually legal as long as the trash is on a curb or other public property—because they consider it unethical. They say that the prevalence of former intelligence agents in the field and the rise of unscrupulous tactics have tarnished a business that often struggles with its reputation. One longtime investigator complained that he recently lost business to some ex-C.I.A. officers who promised a potential client that they could obtain the phone and bank records of a target—something that is illegal in most cases.
Current and former employees said Diligence’s ex-spies also held classes in using false identities to obtain confidential information. Ex-employees said it wasn’t unusual for an investigator to have five or six cell phones, each representing a different identity, on his or her desk. And while ex-C.I.A. and former MI5 agents were old hands at such deception, the new initiates sometimes got confused and answered a phone with the wrong name.
All interesting. It seems that corporate espionage has gone mainstream, and the debate is more about how and when.
On a related note, this paragraph disturbed me:
On occasion, Diligence investigators were dispatched to collect garbage from a target’s home or office. In some cases, two former employees said, Diligence hired off-duty or retired police officers to take trash so that they could wave their badges and fend off any awkward questions.
It’s public authority being used for private interests. We see it a lot—off-duty police officers guarding private businesses, for example—and it erodes public trust of authority. In the case above, I’m not even sure it’s legal.
On Wednesday, a man dressed as an armored truck employee with the company AT Systems walked into a BB&T bank in Wheaton about 11 a.m., was handed more than $500,000 in cash and walked out, a source familiar with the case said.
It wasn’t until the actual AT Systems employees arrived at the bank, at 11501 Georgia Ave., the next day that bank officials realized they’d been had.
And on Thursday, about 9:30 a.m., a man dressed as an employee of the security company Brink’s walked into a Wachovia branch in downtown Washington and walked out with more than $350,000.
The man had a badge and a gun holster on his belt, said Debbie Weierman, a spokeswoman for the FBI’s Washington field office. He told officials at the bank, at 801 Pennsylvania Ave. NW, that he was filling in for the regular courier.
About 4 p.m., when the real guard showed up, a bank official told him that someone had picked up the cash, D.C. police said. The guard returned to his office and told a supervisor that he did not make the pickup at the bank. The supervisor called a Wachovia manager, who in turn notified authorities. Police were called nearly 11 hours after the heist.
Social engineering at its finest.
EDITED TO ADD (1/16): Seems to be an inside job.
Whenever I talk or write about my own security setup, the one thing that surprises people—and attracts the most criticism—is the fact that I run an open wireless network at home. There’s no password. There’s no encryption. Anyone with wireless capability who can see my network can use it to access the internet.
To me, it’s basic politeness. Providing internet access to guests is kind of like providing heat and electricity, or a hot cup of tea. But to some observers, it’s both wrong and dangerous.
I’m told that uninvited strangers may sit in their cars in front of my house, and use my network to send spam, eavesdrop on my passwords, and upload and download everything from pirated movies to child pornography. As a result, I risk all sorts of bad things happening to me, from seeing my IP address blacklisted to having the police crash through my door.
While this is technically true, I don’t think it’s much of a risk. I can count five open wireless networks in coffee shops within a mile of my house, and any potential spammer is far more likely to sit in a warm room with a cup of coffee and a scone than in a cold car outside my house. And yes, if someone did commit a crime using my network the police might visit, but what better defense is there than the fact that I have an open wireless network? If I enabled wireless security on my network and someone hacked it, I would have a far harder time proving my innocence.
This is not to say that the new wireless security protocol, WPA, isn’t very good. It is. But there are going to be security flaws in it; there always are.
I spoke to several lawyers about this, and in their lawyerly way they outlined several other risks with leaving your network open.
While none thought you could be successfully prosecuted just because someone else used your network to commit a crime, any investigation could be time-consuming and expensive. You might have your computer equipment seized, and if you have any contraband of your own on your machine, it could be a delicate situation. Also, prosecutors aren’t always the most technically savvy bunch, and you might end up being charged despite your innocence. The lawyers I spoke with say most defense attorneys will advise you to reach a plea agreement rather than risk going to trial on child-pornography charges.
In a less far-fetched scenario, the Recording Industry Association of America is known to sue copyright infringers based on nothing more than an IP address. The accuser’s chance of winning is higher than in a criminal case, because in civil litigation the burden of proof is lower. And again, lawyers argue that even if you win it’s not worth the risk or expense, and that you should settle and pay a few thousand dollars.
I remain unconvinced of this threat, though. The RIAA has conducted about 26,000 lawsuits, and there are more than 15 million music downloaders. Mark Mulligan of Jupiter Research said it best: “If you’re a file sharer, you know that the likelihood of you being caught is very similar to that of being hit by an asteroid.”
I’m also unmoved by those who say I’m putting my own data at risk, because hackers might park in front of my house, log on to my open network and eavesdrop on my internet traffic or break into my computers. This is true, but my computers are much more at risk when I use them on wireless networks in airports, coffee shops and other public places. If I configure my computer to be secure regardless of the network it’s on, then it simply doesn’t matter. And if my computer isn’t secure on a public network, securing my own network isn’t going to reduce my risk very much.
Yes, computer security is hard. But if your computers leave your house, you have to solve it anyway. And any solution will apply to your desktop machines as well.
Finally, critics say someone might steal bandwidth from me. Despite isolated court rulings that this is illegal, my feeling is that they’re welcome to it. I really don’t mind if neighbors use my wireless network when they need it, and I’ve heard several stories of people who have been rescued from connectivity emergencies by open wireless networks in the neighborhood.
Similarly, I appreciate an open network when I am otherwise without bandwidth. If someone were using my network to the point that it affected my own traffic or if some neighbor kid was dinking around, I might want to do something about it; but as long as we’re all polite, why should this concern me? Pay it forward, I say.
Certainly this does concern ISPs. Running an open wireless network will often violate your terms of service. But despite the occasional cease-and-desist letter and providers getting pissy at people who exceed some secret bandwidth limit, this isn’t a big risk either. The worst that will happen to you is that you’ll have to find a new ISP.
A company called Fon has an interesting approach to this problem. Fon wireless access points have two wireless networks: a secure one for you, and an open one for everyone else. You can configure your open network in either “Bill” or “Linus” mode: In the former, people pay you to use your network, and you have to pay to use any other Fon wireless network. In Linus mode, anyone can use your network, and you can use any other Fon wireless network for free. It’s a really clever idea.
Security is always a trade-off. I know people who rarely lock their front door, who drive in the rain (and while using a cell phone) and who talk to strangers. In my opinion, securing my wireless network isn’t worth it. And I appreciate everyone else who keeps an open wireless network, including all the coffee shops, bars and libraries I have visited in the past, the Dayton International Airport where I started writing this and the Four Points Sheraton where I finished. You all make the world a better place.
This essay originally appeared on Wired.com, and has since generated a lot of controversy. There’s a Slashdot thread. And here are three opposing essays and three supporting essays. Presumably there will be a lot of back and forth in the comments section here as well.
EDITED TO ADD (1/18): Another. In the beginning, comments agreeing with me and disagreeing with me were about tied. By now, those that disagree with me are firmly in the lead.
“The goal of this project is to develop a reusable and behaviorally founded computer model of pedestrian movement and crowd behavior amid dense urban environments, to serve as a test-bed for experimentation,” says Torrens. “The idea is to use the model to test hypotheses, real-world plans and strategies that are not very easy, or are impossible to test in practice.”
Such as the following: 1) simulate how a crowd flees from a burning car toward a single evacuation point; 2) test out how a pathogen might be transmitted through a mobile pedestrian population over a short period of time; 3) see how the existing urban grid facilitates or does not facilitate mass evacuation prior to a hurricane landfall or in the event of a dirty bomb detonation; 4) design a mall which can compel customers to shop to the point of bankruptcy, to walk obliviously for miles and miles and miles, endlessly to the point of physical exhaustion and even death; 5) identify, if possible, the tell-tale signs of a peaceful crowd about to metamorphosize into a hellish mob; 6) determine how various urban typologies, such as plazas, parks, major arterial streets and banlieues, can be reconfigured in situ into a neutralizing force when crowds do become riotous; and 7) conversely, figure out how one could, through spatial manipulation, inflame a crowd, even a very small one, to set in motion a series of events that culminates in a full scale Revolution or just your average everyday Southeast Asian coup d’état—regime change through landscape architecture.
Their special report from December includes a bunch of different articles.
Excellent essay from The New York Times:
In the end, I’m not sure which is more troubling, the inanity of the existing regulations, or the average American’s acceptance of them and willingness to be humiliated. These wasteful and tedious protocols have solidified into what appears to be indefinite policy, with little or no opposition. There ought to be a tide of protest rising up against this mania. Where is it? At its loudest, the voice of the traveling public is one of grumbled resignation. The op-ed pages are silent, the pundits have nothing meaningful to say.
The airlines, for their part, are in something of a bind. The willingness of our carriers to allow flying to become an increasingly unpleasant experience suggests a business sense of masochistic capitulation. On the other hand, imagine the outrage among security zealots should airlines be caught lobbying for what is perceived to be a dangerous abrogation of security and responsibility—even if it’s not. Carriers caught plenty of flack, almost all of it unfair, in the aftermath of September 11th. Understandably, they no longer want that liability.
As for Americans themselves, I suppose that it’s less than realistic to expect street protests or airport sit-ins from citizen fliers, and maybe we shouldn’t expect too much from a press and media that have had no trouble letting countless other injustices slip to the wayside. And rather than rethink our policies, the best we’ve come up with is a way to skirt them—for a fee, naturally—via schemes like Registered Traveler. Americans can now pay to have their personal information put on file just to avoid the hassle of airport security. As cynical as George Orwell ever was, I doubt he imagined the idea of citizens offering up money for their own subjugation.
How we got to this point is an interesting study in reactionary politics, fear-mongering and a disconcerting willingness of the American public to accept almost anything in the name of “security.” Conned and frightened, our nation demands not actual security, but security spectacle. And although a reasonable percentage of passengers, along with most security experts, would concur such theater serves no useful purpose, there has been surprisingly little outrage. In that regard, maybe we’ve gotten exactly the system we deserve.
This story made the rounds in European newspapers about ten years ago—mostly stories in German, if I remember—but it wasn’t covered much here in the U.S.
For half a century, Crypto AG, a Swiss company located in Zug, has sold to more than 100 countries the encryption machines their officials rely upon to exchange their most sensitive economic, diplomatic and military messages. Crypto AG was founded in 1952 by the legendary (Russian-born) Swedish cryptographer Boris Hagelin. During World War II, Hagelin sold 140,000 of his machines to the US Army.
“In the meantime, the Crypto AG has built up long standing cooperative relations with customers in 130 countries,” states a prospectus of the company. The home page of the company Web site says, “Crypto AG is the preferred top-security partner for civilian and military authorities worldwide. Security is our business and will always remain our business.”
And for all those years, US eavesdroppers could read these messages without the least difficulty. A decade after the end of WWII, the NSA, also known as No Such Agency, had rigged the Crypto AG machines in various ways according to the targeted countries. It is probably no exaggeration to state that this 20th century version of the “Trojan horse” is quite likely the greatest sting in modern history.
We don’t know the truth here, but the article lays out the evidence pretty well.
See this essay of mine on how the NSA might have been able to read Iranian encrypted traffic.
It’s not on their website yet, and you’d have to pay to read it in any case, but the February 2008 issue of Consumer Reports has an article on aviation security. Much of it you’ve all heard before, but there are some new bits:
Larry Tortorich, a TSA training officer and former representative to the Joint Terrorism Task Force who retired in 2006, also says he saw problems from the inside. “There was a facade of security. There were numerous security flaws and vulnerabilities I identified. The response was, it wasn’t apparent to the public, so there would not be any corrective action.”
I’ve regularly pointed to reinforcing the cockpit doors as something that was a good idea, and should have been done years earlier.
Critics, however, say a stronger door is only half of the solution. “People have this illusion that hardened cockpit doors work, and they don’t,” Dzakovic says. “If you want to have a secure door, you need to have a double-hulled door.”
Consumer Reports searched ASRS, the Aviation Safety Reporting System, and found 51 incidents since April 2002 in which flight crews reported problems with the hardened doors.
Most of them weren’t really security issues: locking mechanisms failing, doors popping open in flight, and so on. But this was more interesting:
A 2006 study of aviation security by DFI International, a Washington, D.C. security consultancy, found that a drunken passenger kicked a hole in a door panel and that aircraft cleaners “broke a fortified door off its hinges by running a heavy snack cart into it on a bet.”
El Al, of course, has double doors. But since the cost is between $5K and $10K per aircraft, the airline industry has fought the measure in the U.S.
The article also talks about how poor the screeners actually are, but I’ve covered all that already.
His name is similar to someone on the “no fly” list:
A five-year-old boy was taken into custody and thoroughly searched at Sea-Tac because his name is similar to a possible terrorist alias. As the Consumerist reports, “When his mother went to pick him up and hug him and comfort him during the proceedings, she was told not to touch him because he was a national security risk. They also had to frisk her again to make sure the little Dillinger hadn’t passed any dangerous weapons or materials to his mother when she hugged him.”
The explanation is simple: to the TSA, following procedure is more important than common sense. But unfortunately, catching the next terrorist will require more common sense than proper procedure.
If I ever get to interview Kip Hawley again, I’ll ask him about this.
EDITED TO ADD (1/12): Another kid on the no-fly list.
Canada comes in first.
Individual privacy is best protected in Canada but under threat in the United States and the European Union as governments introduce sweeping surveillance and information-gathering measures in the name of security and border control, an international rights group said in a report released Saturday.
Canada, Greece and Romania had the best privacy records of 47 countries surveyed by London-based watchdog Privacy International. Malaysia, Russia and China were ranked worst.
Both Britain and the United States fell into the lowest-performing group of “endemic surveillance societies.”
EDITED TO ADD (1/10): Actually, Canada comes in second.
The daily newspaper, Aftonbladet, turned the stick over to the Armed Forces on Thursday. The paper’s editorial office obtained the memory stick from an individual who discovered it in a public computer center in Stockholm.
An employee of the Armed Forces has reported that the misplaced USB memory stick belongs to him. The employee contacted his superior on Friday and divulged that he had forgotten the memory stick in a public computer. A preliminary technical investigation confirms that the stick belongs to the employee.
The stick contained both unclassified and classified information such as information regarding IED and mine threats in Afghanistan.
I wrote about this sort of thing two years ago:
The point is that it’s now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I’d never know it.
Interesting article from Newsweek:
The evolutionary primacy of the brain’s fear circuitry makes it more powerful than the brain’s reasoning faculties. The amygdala sprouts a profusion of connections to higher brain regions—neurons that carry one-way traffic from amygdala to neocortex. Few connections run from the cortex to the amygdala, however. That allows the amygdala to override the products of the logical, thoughtful cortex, but not vice versa. So although it is sometimes possible to think yourself out of fear (“I know that dark shape in the alley is just a trash can”), it takes great effort and persistence. Instead, fear tends to overrule reason, as the amygdala hobbles our logic and reasoning circuits. That makes fear “far, far more powerful than reason,” says neurobiologist Michael Fanselow of the University of California, Los Angeles. “It evolved as a mechanism to protect us from life-threatening situations, and from an evolutionary standpoint there’s nothing more important than that.”
I’ve already written about this sort of thing.
Investigative report on passport fraud worldwide.
Six years after 9/11, an NBC News undercover investigation has found that the black market in fraudulent passports is thriving. On the streets of South America, NBC documented the sale of stolen and doctored passports, and travel papers prized by terrorists: genuine passports issued under false names. For a few thousand dollars, an undercover investigator was able to purchase several entirely new identities from organized criminal networks with access to corrupt government employees. The investigator obtained passports from Spain, Peru, and Venezuela and used the Peruvian and Venezuelan passports to travel widely in the Western Hemisphere, with practically no scrutiny.
All they know is that something makes them uneasy, usually based on fear, media hype, or just something being different.
If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.
Yesterday The New York Times wrote about New York City’s campaign:
Now, an overview of police data relating to calls to the hot line over the past two years reveals the answer and provides a unique snapshot of post-9/11 New York, part paranoia and part well-founded caution. Indeed, no terrorists were arrested, but a wide spectrum of other activity was reported.
In all, the hot line received 8,999 calls in 2006, including calls that were transferred from 911 and the 311 help line, Mr. Browne said. They included a significant number of calls about suspicious packages, many in the transit system. Most involved backpacks, briefcases or other items accidentally left behind by their owners. None of them, Mr. Browne said, were bombs.
There were, however, 816 calls to the hot line in 2006 that were deemed serious enough to require investigation by the department’s intelligence division or its joint terrorism task force with the F.B.I. Mr. Browne said that 109 of those calls had a connection to the transit system and included reports of suspicious people in tunnels and yards, and of people taking pictures of the tracks.
The hot line received many more calls in 2007, possibly because of the authority’s advertising campaign, Mr. Browne said. Through early December, the counterterrorism hot line received 13,473 calls, with 644 of those meriting investigation. Of that group, 45 calls were transit related.
Then there were the 11 calls about people counting.
Mr. Browne said several callers reported seeing men clicking hand-held counting devices while riding on subway trains or waiting on platforms.
The callers said that the men appeared to be Muslims and that they seemed to be counting the number of people boarding subway trains or the number of trains passing through a station. They feared the men might be collecting data to maximize the casualties in a terror attack.
But when the police looked into the claims, they determined that the men were counting prayers with the devices, essentially a modern version of rosary beads.
None of those calls led to arrests, but several others did. At least three calls resulted in arrests for trying to sell false identification, including driver’s licenses and Social Security cards. One informer told the police about a Staten Island man who was later found to have a cache of firearms. A Queens man was charged with having an illegal gun and with unlawful dealing in fireworks.
A Brooklyn man was charged with making anti-Semitic threats against his landlord and threatening to use sarin gas on him. At least two men arrested on tips from the hot line were turned over to immigration officials for deportation, Mr. Browne said.
And as long as we’re on the topic, read about the couple branded as terrorists in the UK for taking photographs in a mall. And this about a rail fan being branded a terrorist for trying to film a train. (Note that the member of the train’s crew was trying to incite the other passengers to do something about the filmer.) And about this Icelandic woman’s experience with U.S. customs because she overstayed a visa in 1995.
And lastly, this funny piece of (I trust) fiction.
Remember that every one of these incidents requires police resources to investigate, resources that almost certainly could be better spent keeping us actually safe.
The news articles are pretty sensational:
The computer network in the Dreamliner’s passenger compartment, designed to give passengers in-flight internet access, is connected to the plane’s control, navigation and communication systems, an FAA report reveals.
According to the U.S. Federal Aviation Administration, the new Boeing 787 Dreamliner aeroplane may have a serious security vulnerability in its on-board computer networks that could allow passengers to access the plane’s control systems.
If this is true, it’s a very serious security vulnerability. And the threat isn’t just terrorists trying to take control of the airplane; it’s the more common case of a software flaw causing some unforeseen interaction with another system and cascading into a bigger problem. However, the FAA document in the Federal Register is not as clear as all that. It does say:
The proposed architecture of the 787 is different from that of existing production (and retrofitted) airplanes. It allows new kinds of passenger connectivity to previously isolated data networks connected to systems that perform functions required for the safe operation of the airplane. Because of this new passenger connectivity, the proposed data network design and integration may result in security vulnerabilities from intentional or unintentional corruption of data and systems critical to the safety and maintenance of the airplane. The existing regulations and guidance material did not anticipate this type of system architecture or electronic access to aircraft systems that provide flight critical functions. Furthermore, 14 CFR regulations and current system safety assessment policy and techniques do not address potential security vulnerabilities that could be caused by unauthorized access to aircraft data buses and servers. Therefore, special conditions are imposed to ensure that security, integrity, and availability of the aircraft systems and data networks are not compromised by certain wired or wireless electronic connections between airplane data buses and networks.
But, honestly, this isn’t nearly enough information to work with. Normally, the aviation industry is really good about this sort of thing, and it doesn’t make sense that they’d do something as risky as this. I’d like more definitive information.
EDITED TO ADD (1/16): The FAA responds. Seems like there’s more hype than story here. Still, it’s worth paying attention to.
Because they’re harder to hack:
Though Apple machines are still pricier than their Windows counterparts, the added security they offer might be worth the cost, says Wallington. He points out that Apple’s X Serve servers, which are gradually becoming more commonplace in Army data centers, are proving their mettle. “Those are some of the most attacked computers there are. But the attacks used against them are designed for Windows-based machines, so they shrug them off,” he says.
I’m generally a fan of behavioral profiling. While it sounds weird and creepy and has been likened to Orwell’s “facecrime”, there’s no doubt that—when done properly—it works at catching common criminals:
On Dec. 4, Juan Carlos Berriel-Castillo, 22, and Bernardo Carmona-Olivares, 20, were planning to fly to Maui but were instead arrested on suspicion of forgery.
They tried to pass through a Terminal 4 security checkpoint with suspicious documents, Phoenix police spokeswoman Stacie Derge said.
The pair had false permanent-resident identification, and authorities also found false Social Security cards, officials say.
While the pair were questioned about the papers, a TSA official who had received behavior-recognition training observed a third man in the area who appeared to be connected to Berriel-Castillo and Carmona-Olivares, Melendez said.
As a result, police later arrested Samuel Gonzalez, 32. A background check revealed that Gonzalez was wanted on two misdemeanor warrants.
TSA press release here.
Security is a trade-off. The question is whether the expense of the Screening Passengers by Observation Techniques (SPOT) program, given the minor criminals it catches, is worth it. (Remember, it’s supposed to catch terrorists, not people with outstanding misdemeanor warrants.) Especially with the 99% false alarm rate:
Since January 2006, behavior-detection officers have referred about 70,000 people for secondary screening, Maccario said. Of those, about 600 to 700 were arrested on a variety of charges, including possession of drugs, weapons violations and outstanding warrants.
And then there are the other social costs: loss of liberty, restriction of fundamental freedoms, and the creation of a thoughtcrime. Is this the sort of power we want to give a police force in a constitutional democracy, or does it feel more like a police-state sort of thing?
This “Bizarro” cartoon sums it up nicely.
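For reference, the false-alarm figure falls straight out of the numbers quoted above. A quick back-of-the-envelope calculation (using the upper end of the quoted arrest range):

```python
# Figures from the article: ~70,000 secondary-screening referrals since
# January 2006, of which roughly 600-700 ended in an arrest of any kind.
referrals = 70_000
arrests = 700  # upper end of the quoted range

hit_rate = arrests / referrals
false_alarm_rate = 1 - hit_rate

print(f"hit rate:         {hit_rate:.1%}")          # 1.0%
print(f"false alarm rate: {false_alarm_rate:.1%}")  # 99.0%
```

And that 1% counts any arrest at all, mostly drug possession and outstanding warrants; the terrorist hit rate is zero.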
Join “My SHC Community” on Sears.com, and the company will install some pretty impressive spyware on your computer:
Sears.com is distributing spyware that tracks all your Internet usage – including banking logins, email, and all other forms of Internet usage – all in the name of “community participation.” Every website visitor that joins the Sears community installs software that acts as a proxy to every web transaction made on the compromised computer. In other words, if you have installed Sears software (“the proxy”) on your system, all data transmitted to and from your system will be intercepted. This extreme level of user tracking is done with little and inconspicuous notice about the true nature of the software. In fact, while registering to join the “community,” very little mention is made of software or tracking. Furthermore, after the software is installed, there is no indication on the desktop that the proxy exists on the system, so users are tracked silently.
Here is a summary of what the software does and how it is used. The proxy:
- Monitors and transmits a copy of all Internet traffic going from and coming to the compromised system.
- Monitors secure sessions (websites beginning with ‘https’), which may include shopping or banking sites.
- Records and transmits “the pace and style with which you enter information online…”
- Parses the header section of personal emails.
- May combine any data intercepted with additional information like “select credit bureau information” and other sources like “consumer preference reporting companies or credit reporting agencies”.
If a kid with a scary hacker name did this sort of thing, he’d be arrested. But this is Sears, so who knows what will happen to the company. What should happen is that the anti-spyware companies treat this as the malware it is, rather than ignoring it because it’s distributed by a Fortune 500 company.
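The mechanics described above are simple: a proxy sits between the browser and the network, records a copy of every transaction, and passes it through unchanged so the user notices nothing. Here’s a minimal sketch of the technique in Python. This is hypothetical illustration, not the actual Sears software; the `intercept` helper and its fields are invented for the example:

```python
# Hypothetical sketch of a logging pass-through proxy -- an illustration
# of the interception technique, not the actual Sears software.
captured = []  # everything the "proxy" has seen

def intercept(method, url, headers, body=b""):
    """Record a copy of the transaction, then forward it unchanged."""
    captured.append({
        "method": method,
        "url": url,              # would include https:// URLs if TLS is terminated locally
        "headers": dict(headers),
        "body_bytes": len(body),
    })
    return method, url, headers, body  # passed through as-is

# The user sees a normal web session; the log sees everything,
# including a banking login.
intercept("POST", "https://bank.example/login",
          {"Cookie": "session=abc"}, b"user=alice&pw=hunter2")
print(captured[0]["url"])  # https://bank.example/login
```

The key property is that line marked “passed through as-is”: because nothing about the session changes from the user’s point of view, there is no visible indication that the interception is happening.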
The British government changes its rhetoric:
The words “war on terror” will no longer be used by the British government to describe attacks on the public, the country’s chief prosecutor said Dec. 27.
Sir Ken Macdonald said terrorist fanatics were not soldiers fighting a war but simply members of an aimless “death cult.”
The Director of Public Prosecutions said: “We resist the language of warfare, and I think the government has moved on this. It no longer uses this sort of language.”
London is not a battlefield, he said.
“The people who were murdered on July 7 were not the victims of war. The men who killed them were not soldiers,” Macdonald said. “They were fantasists, narcissists, murderers and criminals and need to be responded to in that way.”
This is excellent. The only war has been rhetorical, and using that language only served to scare people and legitimize the terrorists. Someday the U.S. will follow suit.
While standard commercial software vendors sell software as a service, malware vendors sell malware as a service, which is advertised and distributed like standard software. Communicating via internet relay chat (IRC) and forums, hackers advertise Iframe exploits, pop-unders, click fraud, posting and spam. “If you don’t have it, you can rent it here,” boasts one post, which also offers online video tutorials. Prices for services vary by as much as 100-200 percent across sites, while prices for non-Russian sites are often higher: “If you want the discount rate, buy via Russian sites,” says Genes.
In March the price quoted on malware sites for the Gozi Trojan, which steals data and sends it to hackers in an encrypted form, was between $1,000 (£500) and $2,000 for the basic version. Buyers could purchase add-on services at varying prices starting at $20.
This kind of thing is also discussed here.