Schneier on Security
A blog covering security and security technology.
October 2007 Archives
From the UK:
"We continuously monitor the effectiveness of, in particular, the liquid security measures..."
They use high-tech data-mining algorithms to scan through the huge daily logs of every call made on the AT&T network; then they use sophisticated algorithms to analyze the connections between phone numbers: who is talking to whom? The paper literally uses the term "Guilt by Association" to describe what they're looking for: what phone numbers are in contact with other numbers that are in contact with the bad guys?
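As a toy sketch of what "guilt by association" means in graph terms, the following Python snippet walks a call graph outward from known bad numbers, flagging everything within a few hops. The numbers, graph shape, and hop limit are all invented for illustration; the real NSA/AT&T analysis is obviously far more sophisticated.

```python
from collections import deque

def numbers_within_hops(call_graph, bad_numbers, max_hops):
    """BFS outward from known 'bad' numbers, collecting every
    number reachable within max_hops calls and its distance."""
    seen = {n: 0 for n in bad_numbers}
    queue = deque(bad_numbers)
    while queue:
        number = queue.popleft()
        if seen[number] == max_hops:
            continue  # don't expand past the hop limit
        for neighbor in call_graph.get(number, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[number] + 1
                queue.append(neighbor)
    return seen

# Toy call graph: each number maps to the numbers it called.
calls = {
    "555-0001": ["555-0002"],               # known bad number
    "555-0002": ["555-0003", "555-0004"],
    "555-0005": ["555-0006"],               # unconnected cluster
}
flagged = numbers_within_hops(calls, ["555-0001"], max_hops=2)
```

The trouble, of course, is the false positives: at two or three hops, the flagged set sweeps in pizza parlors and wrong numbers along with anyone genuinely suspicious.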
A specialized printer used to print Missouri driver's licenses was stolen and recovered.
It's a funny story, actually. Turns out the thief couldn't get access to the software needed to run the printer; a lockout on the control computer apparently thwarted him. When he called tech support, they tipped off the Secret Service.
On the one hand, this probably won't deter a more sophisticated thief. On the other hand, you can make pretty good forgeries with off-the-shelf equipment.
Oh, the stupid:
State officials have decided not to publicize their list of polling places in Pennsylvania, citing concerns that terrorists could disrupt elections in the commonwealth.
A few days later the governor rescinded the order.
This otherwise amusing story has some serious lessons:
John: Yes, I'm calling to find out why request number 48931258 to transfer somedomain.com was rejected.
Ha ha. The idiot ISP guy doesn't realize how easy it is for anyone with a word processor and a laser printer to fake a letterhead. But what this story really shows is how hard it is for people to change their security intuition. Security-by-letterhead was fairly robust when printing was hard, and faking a letterhead was real work. Today it's easy, but people -- especially people who grew up under the older paradigm -- don't act as if it is. They would if they thought about it, but most of the time our security runs on intuition and not on explicit thought.
This kind of thing bites us all the time. Mother's maiden name is no longer a good password. An impressive-looking storefront on the Internet is not the same as an impressive-looking storefront in the real world. The headers on an e-mail are not a good authenticator of its origin. It's an effect of technology moving faster than our ability to develop a good intuition about that technology.
And, as the pace of technological change keeps accelerating, this will only get worse.
Here's an interesting paper from Carnegie Mellon University: "An Inquiry into the Nature and Causes of the Wealth of Internet Miscreants."
The paper focuses on the large illicit market that specializes in the commoditization of activities in support of Internet-based crime. The main goal of the paper was to understand and measure how these markets function, and discuss the incentives of the various market entities. Using a dataset collected over seven months and comprising over 13 million messages, they were able to categorize the market's participants, the goods and services advertised, and the asking prices for selected interesting goods.
Really cool stuff.
Unfortunately, the data is extremely noisy and so far the authors have no way to cross-validate it, so it is difficult to make any strong conclusions.
Basically, the Swiss company ID Quantique convinced the Swiss government to use quantum cryptography to protect vote transmissions during their October 21 election. It was a great publicity stunt, and the news articles were filled with hyperbole: how the "unbreakable" encryption will ensure the integrity of the election, how this will protect the election against hacking, and so on.
Complete idiocy. There are many serious security threats to voting systems, especially paperless touch-screen voting systems, but they're not centered around the transmission of votes from the voting site to the central tabulating office. The software in the voting machines themselves is a much bigger threat, one that quantum cryptography doesn't solve in the least.
Moving data from point A to point B securely is one of the easiest security problems we have. Conventional encryption works great. PGP, SSL, SSH could all be used to solve this problem, as could pretty much any good VPN software package; there's no need to use quantum crypto for this at all. Software security, OS security, network security, and user security are much harder security problems; and quantum crypto doesn't even begin to address them.
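To underline how routine this problem is, here is a toy authenticated channel in Python using only the standard library: a throwaway counter-mode keystream (standing in for a vetted cipher like AES-GCM) plus an HMAC-SHA256 tag. Everything here is illustrative; a real deployment would simply use TLS, SSH, or VPN software, exactly as the essay says.

```python
import hashlib
import hmac
import secrets

def keystream(key, nonce, length):
    """Toy keystream: counter-mode SHA-256. Illustrative only --
    real systems would use a vetted cipher such as AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key, mac_key, message):
    """Encrypt-then-MAC: XOR with the keystream, then tag nonce+ciphertext."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(message, keystream(enc_key, nonce, len(message))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_sealed(enc_key, mac_key, blob):
    """Verify the tag before decrypting; reject anything tampered with."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
blob = seal(ek, mk, b"precinct 12: 1,482 votes")
```

Thirty-odd lines of commodity cryptography -- no quantum physics required -- and any flipped bit in transit is detected before decryption.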
So, congratulations to ID Quantique for a nice publicity stunt. But did they actually increase the security of the Swiss election? Doubtful.
Just the thing for Hallowe'en.
The Utah company AccessData has been doing this sort of thing much longer, and has way better technology.
So, this pedophile posts photos of himself with young boys, but obscures his face with the Photoshop "twirl" tool. Turns out that the transformation isn't lossy, and that you can untwirl his face.
He was caught in Thailand.
Moral: Don't blindly trust technology; you need to really know what it's doing.
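The underlying lesson is that a deterministic, lossless transform can always be run backwards by anyone who can reconstruct its parameters. As a toy stand-in for the "twirl" (which I'm not reimplementing here), this sketch scrambles data with a keyed permutation and then inverts it perfectly -- the data is rearranged, but nothing is destroyed:

```python
import random

def scramble(pixels, key):
    """Apply a keyed, invertible permutation to a pixel list --
    analogous to a reversible 'twirl': it rearranges data but
    destroys nothing."""
    rng = random.Random(key)
    order = list(range(len(pixels)))
    rng.shuffle(order)
    return [pixels[i] for i in order], order

def unscramble(scrambled, order):
    """Invert the permutation: put each value back where it came from."""
    original = [None] * len(scrambled)
    for new_pos, old_pos in enumerate(order):
        original[old_pos] = scrambled[new_pos]
    return original

face = [10, 20, 30, 40, 50, 60]   # stand-in for pixel values
hidden, order = scramble(face, key="twirl")
recovered = unscramble(hidden, order)
```

If you actually want a face to be unrecoverable, you have to throw information away -- a solid black box, heavy pixelation, or a blur -- not just rearrange it.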
The Colorado Rockies will try again to sell World Series tickets through their Web site starting on Tuesday at noon.
There was a presale that "went well":
The Colorado Rockies had a chance Sunday to test their online-sales operation in advance.
Certainly scalpers have an incentive to attack this system.
EDITED TO ADD (10/28): The FBI is investigating.
Brandon Mayfield, the Oregon man who was arrested because his fingerprint "matched" that of an Algerian who handled one of the Madrid bombs, now has a legacy: a judge has ruled partial prints cannot be used in a murder case.
"The repercussions are terrifically broad," said David L. Faigman, a professor at the University of California's Hastings College of the Law and an editor of Modern Scientific Evidence: The Law and Science of Expert Testimony.
A school in the UK is using RFID chips in school uniforms to track attendance.
So now it's easy to cut class; just ask someone to carry your shirt around the building while you're elsewhere.
Yet another movie-plot threat to worry about:
One of the cheapest and most destructive weapons available to terrorists today is also one of the most widely ignored: insects. These biological warfare agents are easy to sneak across borders, reproduce quickly, spread disease, and devastate crops in an indefatigable march. Our stores of grain could be ravaged by the khapra beetle, cotton and soybean fields decimated by the Egyptian cottonworm, citrus and cotton crops stripped by the false codling moth, and vegetable fields pummeled by the cabbage moth. The costs could easily escalate into the billions of dollars, and the resulting disruption of our food supply - and our sense of well-being - could be devastating. Yet the government focuses on shoe bombs and anthrax while virtually ignoring insect insurgents.
I made my own as a kid. These are much nicer. You can even order your hollow book by topic, the better to blend it into the rest of your library.
Politicians of both major parties wield this as the ultimate political threat. Its invocation typically predicts that if a certain piece of legislation is passed (or not passed) Americans will die. Variations may warn that children will die or troops will die. Any version is difficult for the target to combat.
Clever technique to put a checksum into the bill total when you add a tip at a restaurant.
I don't know how common tip fraud is. This thread implies that it's pretty common, but I use my credit card in restaurants all the time all over the world and I've never been the victim of this sort of fraud. On the other hand, I'm not a lousy tipper. And maybe I don't frequent the right sort of restaurants.
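Here's a minimal sketch of the checksummed-tip idea. The rule I've picked -- make the total's digit sum divisible by 9 -- is just one arbitrary choice among many; the linked technique may use a different rule. Adjust your tip until the total passes, then a padded charge on your statement (almost always) fails the check:

```python
def digit_sum(n):
    return sum(int(d) for d in str(n))

def checksummed_tip(bill_cents, tip_rate=0.15):
    """Pick the smallest tip at or above tip_rate whose resulting
    total has a digit sum divisible by 9 -- an assumed checksum
    rule, chosen here purely for illustration."""
    tip = round(bill_cents * tip_rate)
    while digit_sum(bill_cents + tip) % 9 != 0:
        tip += 1  # at most 8 extra cents, since residues mod 9 cycle
    return tip

def total_looks_valid(total_cents):
    return digit_sum(total_cents) % 9 == 0

bill = 4350                      # $43.50
tip = checksummed_tip(bill)
total = bill + tip
```

The obvious limitation: a waiter who pads the tip by an exact multiple of 9 cents still passes, so this detects careless fraud, not a fraudster who knows your rule.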
He cites a key advantage to bringing in lawyers up front: "If you hire a law firm to supervise the process, even if there are technical engineers involved, then the process will be covered by attorney-client privilege," Cunningham said.
Gregory Engel has some good comments about this:
This isn't a "prevention initiative" for data security, it's a preemptive initiative for corporate irresponsibility.
I'm not sure it will work, though. I don't think you can run all of your data past your attorney and then magically have it imbued with the un-subpoena-able power of "attorney-client privilege."
EDITED TO ADD (10/22): This talk from Defcon this year is related.
From The Onion:
"Your outdated ideas of what terrorism is have been challenged," an unidentified, disembodied voice announces following the video's first 45 minutes of random imagery set to minimalist techno music. "It is not your simple bourgeois notion of destructive explosions and weaponized biochemical agents. True terror lies in the futility of human existence."
A photo of the still unnamed species:
A 0.2-inch-long (0.5-centimeter-long) larval squid is seen through a microscope's lens in this handout image released October 16, 2007, by WHOI.
There's good news:
This year, the TSA for the first time began running covert tests every day at every checkpoint at every airport. That began partly in response to the classified TSA report showing that screeners at San Francisco International Airport were tested several times a day and found about 80% of the fake bombs.
Repeated testing is good, for a whole bunch of reasons.
There's bad news:
Howe said the increased difficulty explains why screeners at Los Angeles and Chicago O'Hare airports failed to find more than 60% of fake explosives that TSA agents tried to get through checkpoints last year.
Sure, the tests are harder. But those are miserable numbers.
And there's unexplainable news:
At San Diego International Airport, tests are run by passengers whom local TSA managers ask to carry a fake bomb, said screener Cris Soulia, an official in a screeners union.
Someone please tell me this doesn't actually happen. "Hi Mr. Passenger. I'm a TSA manager. You know I'm not lying to you because of this official-looking laminated badge I have. We need you to help us test airport security. Here's a 'fake' bomb that we'd like you to carry through security in your luggage. Another TSA manager will, um, meet you at your destination. Give the fake bomb to him when you land. And, by the way, what's your mother's maiden name?"
How in the world is this a good idea? And how hard is it to dress real TSA managers up like vacationers?
EDITED TO ADD (10/24): Here's a story of someone being asked to carry an item through airport security at Dulles Airport.
EDITED TO ADD (10/26): TSA claims that this doesn't happen:
TSA officials do not ask random passengers to carry fake bombs through checkpoints for testing at San Diego International Airport, or any other airport.
Is there anyone else who has had this happen to them?
Fascinating story of insider cheating:
Some opponents became suspicious of how a certain player was playing. He seemed to know what the opponents' hole cards were. The suspicious players provided examples of these hands, which were so outrageous that virtually all serious poker players were convinced that cheating had occurred. One of the players who'd been cheated requested that Absolute Poker provide hand histories from the tournament (which is standard practice for online sites). In this case, Absolute Poker "accidentally" did not send the usual hand histories, but instead sent a file that contained all sorts of private information that the poker site would never release. The file contained every player's hole cards, observations of the tables, and even the IP addresses of every person playing. (I put "accidentally" in quotes because the mistake seems like too great a coincidence when you learn what followed.) I suspect that someone at Absolute knew about the cheating and how it happened, and was acting as a whistleblower by sending these data. If that is the case, I hope whoever "accidentally" sent the file gets their proper hero's welcome in the end.
More details here.
EDITED TO ADD (10/20): More information.
EDITED TO ADD (11/13): This graph of players' river aggression is a great piece of evidence. Note the single outlying point.
There are no details of what the "hacking" was, or whether it was anything more than spoofing the Caller ID:
Randal T. Ellis, 19, allegedly impersonated a caller from the Lake Forest home shortly before midnight March 29, saying he had murdered someone in the house and threatened to shoot others.
It's not true that no one worries about terrorists attacking chemical plants; it's just that our politics seem to leave us unable to deal with the threat.
Toxins such as ammonia, chlorine, propane and flammable mixtures are constantly being produced or stored in the United States as a result of legitimate industrial processes. Chlorine gas is particularly toxic; in addition to bombing a plant, someone could hijack a chlorine truck or blow up a railcar. Phosgene is even more dangerous. According to the Environmental Protection Agency, there are 7,728 chemical plants in the United States where an act of sabotage -- or an accident -- could threaten more than 1,000 people. Of those, 106 facilities could threaten more than a million people.
The problem of securing chemical plants against terrorism -- or even accidents -- is actually simple once you understand the underlying economics. Normally, we leave the security of something up to its owner. The basic idea is that the owner of each chemical plant 1) best understands the risks, and 2) is the one who loses out if security fails. Any outsider -- i.e., regulatory agency -- is just going to get it wrong. It's the basic free-market argument, and in most instances it makes a lot of sense.
And chemical plants do have security. They have fences and guards (which might or might not be effective). They have fail-safe mechanisms built into their operations. For example, many large chemical companies use hazardous substances like phosgene, methyl isocyanate and ethylene oxide in their plants, but don't ship them between locations. They minimize the amounts that are stored as process intermediates. In rare cases of extremely hazardous materials, no significant amounts are stored; instead they are only present in pipes connecting the reactors that make them with the reactors that consume them.
This is all good and right, and what free-market capitalism dictates. The problem is, that isn't enough.
Any rational chemical plant owner will only secure the plant up to its value to him. That is, if the plant is worth $100 million, then it makes no sense to spend $200 million on securing it. If the odds of it being attacked are less than 1 percent, it doesn't even make sense to spend $1 million on securing it. The math is more complicated than this, because you have to factor in such things as the reputational cost of having your name splashed all over the media after an incident, but that's the basic idea.
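The arithmetic behind that paragraph is worth making explicit. All of the numbers below are illustrative assumptions, not real figures, but they show how the same security program can be irrational for the owner and a bargain for society:

```python
# Rough expected-cost comparison from the owner's point of view.
plant_value = 100_000_000       # assumed worst-case loss to the owner
attack_probability = 0.01       # assumed annual odds of attack

owner_expected_loss = plant_value * attack_probability

# Society's exposure is far larger, but none of it lands on the owner.
societal_damage = 5_000_000_000  # assumed: deaths, cleanup, indirect losses
societal_expected_loss = societal_damage * attack_probability

# A $2M/year security program is irrational for the owner alone,
# yet clearly worthwhile from society's perspective.
security_cost = 2_000_000
owner_should_pay = security_cost < owner_expected_loss      # False
society_should_pay = security_cost < societal_expected_loss  # True
```

The gap between those two booleans is the externality the essay goes on to describe.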
But to society, the cost of an actual attack can be much, much greater. If a terrorist blows up a particularly toxic plant in the middle of a densely populated area, deaths could be in the tens of thousands and damage could be in the hundreds of millions. Indirect economic damage could be in the billions. The owner of the chlorine plant would pay none of these potential costs.
Sure, the owner could be sued. But he's not at risk for more than the value of his company, and -- in any case -- he'd probably be smarter to take the chance. Expensive lawyers can work wonders, courts can be fickle, and the government could step in and bail him out (as it did with airlines after Sept. 11). And a smart company can often protect itself by spinning off the risky asset in a subsidiary company, or selling it off completely. The overall result is that our nation's chemical plants are secured to a much smaller degree than the risk warrants.
In economics, this is called an externality: an effect of a decision not borne by the decision maker. The decision maker in this case, the chemical plant owner, makes a rational economic decision based on the risks and costs to him.
If we -- whether we're the community living near the chemical plant or the nation as a whole -- expect the owner of that plant to spend money for increased security to account for those externalities, we're going to have to pay for it. And we have three basic ways of doing that. One, we can do it ourselves, stationing government police or military or contractors around the chemical plants. Two, we can pay the owners to do it, subsidizing some sort of security standard.
Or three, we could regulate security and force the companies to pay for it themselves. There's no free lunch, of course. "We," as in society, still pay for it in increased prices for whatever the chemical plants are producing, but the cost is paid for by the product's consumers rather than by taxpayers in general.
Personally, I don't care very much which method is chosen: that's politics, not security. But I do know we'll have to pick one, or some combination of the three. Asking nicely just isn't going to work. It can't; not in a free-market economy.
We taxpayers pay for airport security, and not the airlines, because the overall effects of a terrorist attack against an airline are far greater than their effects to the particular airline targeted. We pay for port security because the effects of bringing a large weapon into the country are far greater than the concerns of the port's owners. And we should pay for chemical plant, train and truck security for exactly the same reasons.
Thankfully, after years of hoping the chemical industry would do it on its own, this April the Department of Homeland Security started regulating chemical plant security. Some complain that the regulations don't go far enough, but at least it's a start.
This essay previously appeared on Wired.com.
When Jackson logged in, the genius of 76service became immediately clear. 76service customers weren't paying for already-stolen credentials. Instead, 76service sold subscriptions or "projects" to Gozi-infected machines. Usually, projects were sold in 30-day increments because that's a billing cycle, enough time to guarantee that the person who owns the machine with Gozi on it will have logged in to manage their finances, entering data into forms that could be grabbed.
And about banks not caring:
As much as the HangUp Team has relied on distributed pain for its success, financial institutions have relied on transferred risk to keep the Internet crime problem from becoming a consumer cause and damaging their businesses. So far, it has been cheaper to follow regulations enough to pass audits and then pay for the fraud rather than implement more serious security. "If you look at the volume of loss versus revenue, it's not horribly bad yet," says Chris Hoff, with a nod to the criminal hacker's strategy of distributed pain. "The banks say, 'Regulations say I need to do these seven things, so I do them and let's hope the technology to defend against this catches up.'"
The whole thing is worth reading.
If I could only install one "offensive" extension, it would absolutely be Tamper Data. In the past, I used Paros Proxy and Burp Suite for intercepting requests and responses between my Web browser and the Web server. These tasks can now be done within Firefox via Tamper Data -- without configuring the proxy settings.
The presidential campaigns' tactic of relying on impulsive giving spurred by controversial news events and hyped-up deadlines, combined with a number of other factors such as inconsistent Web addresses and a muddle of payment mechanisms creates a conducive environment for fraud, says Soghoian.
He has a point, but it's not new to online contributions. Fake charities and political organizations have long been problems. When you get a solicitation in the mail for "Concerned Citizens for a More Perfect Country" -- insert whatever personal definition you have for "more perfect" and "country" -- you don't know if the money is going to your cause or into someone's pocket. When you give money on the street to someone soliciting contributions for this cause or that one, you have no idea what will happen to the money at the end of the day.
In the end, contributing money requires trust. While the Internet certainly makes frauds like this easier -- anyone can set up a webpage that accepts PayPal and send out a zillion e-mails -- it's nothing new.
A handful of prominent security researchers have published a report on the security risks of the large-scale eavesdropping made temporarily legal by the "Protect America Act" passed in the U.S. in August, and which may be made permanently legal soon. "Risking Communications Security: Potential Hazards of the 'Protect America Act'" -- dated October 1, 2007, and marked "draft" -- is well worth reading:
The civil-liberties concern is whether the new law puts Americans at risk of spurious -- and invasive -- surveillance by their own government. The security concern is whether the new law puts Americans at risk of illegitimate surveillance by others. We focus on security. How will the collection system determine that communications have one end outside the United States? How will the surveillance be secured? We examine the risks and put forth recommendations to address them.
Not surprisingly, the risks are considerable. And difficult to address.
We see three serious security risks that have not been adequately addressed (or perhaps not even addressed at all): the danger that the system can be exploited by unauthorized users, the danger of criminal misuse by a trusted insider, and the danger of misuse by the U.S. government. Our recommendations are based on these concerns.
The group has two basic recommendations: data minimization, and oversight:
Minimization is critical. Allowing collection of calls on U.S. territory necessarily entails greater access to the communications of U.S. persons; the architecture must minimize collection of both the call details and the content of these communications. The best way to prevent problems is to intercept as early as possible: at the cableheads; such a solution, by decreasing the number of interception points, will simplify the security problem. Surveilling at the cableheads will help minimize collection but it is not sufficient. Intercepted traffic should be studied (by geo-location and any other available techniques) to determine whether it comes from non-targeted U.S. persons and if so, discarded before any further processing is done.
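The minimization step the report recommends can be sketched in a few lines. This is a toy, of course -- the record format and attribution test are invented for illustration; real attribution by geo-location is the hard part -- but the discipline it encodes is the point: drop non-targeted U.S.-person traffic before it is stored or analyzed.

```python
def minimize(intercepts, is_us_person):
    """Discard traffic attributed to non-targeted U.S. persons
    before any further processing. `is_us_person` stands in for
    geo-location or any other attribution technique."""
    kept, discarded = [], 0
    for record in intercepts:
        if is_us_person(record):
            discarded += 1       # drop immediately; never store or analyze
        else:
            kept.append(record)
    return kept, discarded

# Invented example records, for illustration only.
traffic = [
    {"src": "203.0.113.7", "country": "US"},
    {"src": "198.51.100.2", "country": "FR"},
]
kept, dropped = minimize(traffic, lambda r: r["country"] == "US")
```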
More in the report, of course.
EDITED TO ADD (2/4/08): Here's the final report.
Now this is a good idea:
In a letter sent Thursday to the Payment Card Industry (PCI) Security Standards Council, the group responsible for setting data-security guidelines for merchants and vendors, the National Retail Federation requested that member companies be allowed to instead keep only the authorization code and a truncated receipt, the NRF said in a statement.
Erasing the data is the easiest way to secure it from theft. But, of course, the issue is more complicated than that, and there's lots of politics. See the article for details.
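What the NRF is asking for amounts to a one-way transformation at the point of sale: keep the authorization code and the last four digits, discard the rest. A minimal sketch (field names are mine, not any actual PCI schema):

```python
def truncate_receipt(card_number, auth_code):
    """Retain only what the NRF proposal would keep: the
    authorization code and a truncated card number. Once the
    full number is discarded, there is nothing left to steal."""
    return {
        "auth_code": auth_code,
        "card_last4": card_number[-4:],
    }

record = truncate_receipt("4111111111111111", auth_code="A1B2C3")
```

The security argument is exactly the essay's: data you never stored can never be breached.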
It's nice to see the Palisades Medical Center take this kind of action. I wish places would do the same when the personal data of non-celebrities is exposed.
Computer and behavioral scientists at the University at Buffalo are developing automated systems that track faces, voices, bodies and other biometrics against scientifically tested behavioral indicators to provide a numerical score of the likelihood that an individual may be about to commit a terrorist act.
I am generally in favor of funding all sorts of research, no matter how outlandish -- you never know when you'll discover something really good -- and I am generally in favor of this sort of behavioral assessment profiling.
But I wish reporters would approach these topics with something resembling skepticism. The false-positive rate matters far more than the false-negative rate, and I doubt something like this will be ready for fielding any time soon.
EDITED TO ADD (10/13): Another comment.
And don't forget "How to be Happy."
Okay, this xkcd cartoon is really funny.
Magic fingers and an unerring eye gave "Hologram Tam," one of the best forgers in Europe, the skills to produce counterfeit banknotes so authentic that when he was arrested nearly £700,000 worth were in circulation.
This is too funny:
Fear that terrorists could poison children has led three Dover aldermen to begin inspecting gumball machines.
Here's another article.
This is simply too stupid for words.
Here's a video of my talk at Defcon 15.
I'm not sure this is a good idea:
Starting with about 20 models for 2009, the service will be able to slowly halt a car that is reported stolen, and the radio may even speak up and tell the thief to pull over because police are watching.
Anyone want to take a guess on how soon this system will be hacked?
At least, for now, you can opt out:
Those who want OnStar but don't like police having the ability to slow down their car can opt out of the service, Huber said. But he said their research shows that 95 percent of subscribers would like that feature.
This is a tough trade-off. Giving the good guys the ability to disable a car, as long as it can be done safely, is a good idea. But giving the bad guys the same ability is a really bad idea. Can we do the former without also doing the latter?
I'm not sure of the point of this law. Certainly it will have the effect of spooking businesses, who now have to worry about the police demanding their encryption keys and exposing their entire operations.
Cambridge University security expert Richard Clayton said in May of 2006 that such laws would only encourage businesses to house their cryptography operations out of the reach of UK investigators, potentially harming the country's economy. "The controversy here [lies in] seizing keys, not in forcing people to decrypt. The power to seize encryption keys is spooking big business," Clayton said.
But if you're guilty of something that can only be proved by the decrypted data, you might be better off refusing to divulge the key (and facing the maximum five-year penalty the statute provides) instead of being convicted for whatever more serious charge you're actually guilty of.
I think this is just another skirmish in the "war on encryption" that has been going on for the past fifteen years. (Anyone remember the Clipper chip?) The police have long maintained that encryption is an insurmountable obstacle to law and order:
The Home Office has steadfastly proclaimed that the law is aimed at catching terrorists, pedophiles, and hardened criminals -- all parties which the UK government contends are rather adept at using encryption to cover up their activities.
We heard the same thing from FBI Director Louis Freeh in 1993. I called them "The Four Horsemen of the Information Apocalypse" -- terrorists, drug dealers, kidnappers, and child pornographers -- and they have been used to justify all sorts of new police powers.
I flew through Orlando today, and saw an automatic shoe-scanner in the lane for Clear passengers.
Poking around on the TSA website, I found this undated page. It seems they didn't pass the TSA tests, and will be discontinued:
The shoe scanning feature on the machine presented for testing on August 20 does not meet minimum detection standards. While significant improvements were made, (in fact a new machine was submitted) the shoe scanner still does not meet standards to ensure detection of explosives.
The idea of drawing cipher DAGs certainly isn't new; DAGs are common in cryptographic research and even more common in cryptographic education. What's new here is the level of automation, minimizing the amount of cipher-specific effort required to build a DAG from a cipher (starting from a typical reference implementation in C or C++) and to visualize the DAG.
Only $166. It's the size of a cell phone, has a 5-10 meter range, and blocks GSM 850, 900, 1800, and 1900 MHz.
I want one.
Pity they're illegal to use in the U.S.:
In the United States, United Kingdom, Australia and many other countries, blocking cell-phone services (as well as any other electronic transmissions) is against the law. In the United States, cell-phone jamming is covered under the Communications Act of 1934, which prohibits people from "willfully or maliciously interfering with the radio communications of any station licensed or authorized" to operate. In fact, the "manufacture, importation, sale or offer for sale, including advertising, of devices designed to block or jam wireless transmissions is prohibited" as well.
EDITED TO ADD (10/12): Here's an even cheaper model. I've been told that Deal Extreme ships the unit with a label that says it's an LED flashlight -- with a value of HKD 45 -- so it will just slip through customs.
EDITED TO ADD (11/6): A video demo.
Hawaiian alleged that Murnane -- who was placed on a 90-day leave by Mesa's board last week -- deleted hundreds of pages of computer records that would have shown that Mesa misappropriated the Hawaiian information.
Burma's ruling junta is attempting to seize United Nations computers containing information on opposition activists in the latest stage of its brutal crackdown on pro-democracy demonstrations, The Times has learnt.
Another reason why law enforcement's demand that e-mails be traceable is a bad idea.
Methanol fuel cells are now allowed on airplanes. This paragraph sums up the inconsistency nicely:
In some sense, though, that's missing the point. Read the last restriction again. So now, innocuous gels/liquids/shampoos are deemed too hazardous to bring inside the airplane cabin, but a known volatile liquid (however safe it may be) is required to be stored inside your carryon baggage? I'm not criticizing the technology here, but I have a feeling that this DOT logic is going to be questioned repeatedly by frazzled flyers.
This is all strange:
In a telephone interview, Fischvogt also told me, "we received word from the pilot about the suspicious activity before the flight landed." Fischvogt explained that when Flight 518 landed, it sat on the tarmac for 45 minutes before FBI "took jurisdiction," boarded the plane and arrested two people. DHS and local law enforcement were also present on the tarmac but "FBI took over the site and the situation," Fischvogt said.
Of course the threat was a false alarm, but still....
EDITED TO ADD (10/9): Read the comments. The author of this blog seems to be a fear-mongering nutcase. (I should have read more about the source before posting this.)
If you've seen a Hollywood caper movie in the last 20 years you know the old video-camera-spoofing trick. That's where the criminal mastermind taps into a surveillance camera system and substitutes his own video stream, leaving hapless security guards watching an endless loop of absolutely-nothing-happening while the bank robber empties the vault.
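One countermeasure to the looped-feed trick is almost embarrassingly simple: hash each incoming frame and watch for a repeating period. This toy sketch uses byte strings in place of real video frames, and it's easily defeated by a loop with natural frame-to-frame noise -- it's an illustration of the idea, not a fielded detector:

```python
import hashlib

def looks_looped(frames, max_period=4):
    """Flag a feed whose recent frames repeat with a fixed short
    period -- the signature of a spliced-in loop. Real detectors
    would hash actual video frames; here frames are byte strings."""
    digests = [hashlib.sha256(f).hexdigest() for f in frames]
    for period in range(1, max_period + 1):
        if len(digests) > period and all(
            digests[i] == digests[i - period]
            for i in range(period, len(digests))
        ):
            return True
    return False

live = [b"frame-%d" % i for i in range(10)]     # every frame distinct
loop = [b"vault-empty", b"guard-walks"] * 5     # two-frame loop
```

Defenses like authenticated camera links or cryptographically timestamped frames attack the same problem more robustly, but even the cheap check beats an endless unexamined loop.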
Sixth one down; only in Japan.
In a startling discovery, officials of the Kalutara Prison on Horana Road have found a tunnel nearly 200 metres long and eight feet below the prison ground leading to the Kalu Ganga complete with electricity and light bulbs, dug by LTTE suspects in custody over a period of one year.
The tunnel was uncompleted. And the article fails to answer the most important question about this sort of thing: What did they do with the dirt?
"We also suspect that they would have daubed their bodies with soil and had later washed it away to prevent detection of their clandestine project," the official said.
I don't see that method being able to dispose of 200 meters worth of dirt over the course of a year, even assuming a small tunnel.
Amber Alerts are general notifications in the first few hours after a child has been abducted. The idea is that if you get the word out quickly, you have a better chance of recovering the child.
There's an interesting social dynamic here, though. If you issue too many of these, the public starts ignoring them. This is doubly true if the alerts turn out to be false.
Out of 233 Amber Alerts issued last year, at least 46 were made for children who were lost, had run away or were the subjects of hoaxes and misunderstandings, according to the Scripps Howard study, which used records from the National Center for Missing and Exploited Children.
Think of it as a denial-of-service attack against the real world.
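The Scripps Howard figures quoted above translate into a substantial false-alarm fraction, which is easy to compute:

```python
# False-alarm fraction from the Scripps Howard figures quoted above.
total_alerts = 233
not_abductions = 46   # lost children, runaways, hoaxes, misunderstandings

false_rate = not_abductions / total_alerts
print(f"At least {false_rate:.0%} of Amber Alerts were not abductions")
```

One in five alerts crying wolf is more than enough to train the public to tune them out.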
Now this seems to be a great idea:
Security officials at Los Angeles International Airport now have a new weapon in their fight against terrorism: complete, baffling randomness. Anxious to thwart future terror attacks in the early stages while plotters are casing the airport, LAX security patrols have begun using a new software program called ARMOR, NEWSWEEK has learned, to make the placement of security checkpoints completely unpredictable. Now all airport security officials have to do is press a button labeled "Randomize," and they can throw a sort of digital cloak of invisibility over where they place the cops' antiterror checkpoints on any given day.
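The real ARMOR system reportedly solves a Bayesian Stackelberg game to compute its schedules; the sketch below is a vastly simplified illustration of the underlying idea only -- placement that is biased toward high-value locations but never perfectly predictable. The locations, weights, and function names are all made up for illustration:

```python
import random

# Simplified illustration of randomized checkpoint placement. The real
# ARMOR system computes its probabilities game-theoretically; here the
# weights are just hypothetical risk values.
locations = ["Terminal 1", "Terminal 4", "Parking C", "Cargo Road"]
weights = [0.4, 0.3, 0.2, 0.1]  # hypothetical value/risk weights

def todays_checkpoints(k=2, seed=None):
    """Pick k distinct checkpoint sites, biased toward higher-weight
    locations but unpredictable from day to day."""
    rng = random.Random(seed)
    pool, w = list(locations), list(weights)
    picks = []
    for _ in range(k):
        choice = rng.choices(pool, weights=w, k=1)[0]
        i = pool.index(choice)
        pool.pop(i)
        w.pop(i)
        picks.append(choice)
    return picks

print(todays_checkpoints())
```

The point is that an attacker casing the airport can observe as many days as he likes and still not learn a schedule he can route around.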
Your tax dollars at work:
Frustrated by press leaks about its most sensitive electronic surveillance work, the secretive National Security Agency convened an unprecedented series of off-the-record "seminars" in recent years to teach reporters about the damage caused by such leaks and to discourage reporting that could interfere with the agency's mission to spy on America's enemies.
In California, if you want to buy a police uniform, you'll need to prove you're a policeman:
Assembly Bill 1448 by Assemblyman Roger Niello, R-Fair Oaks, makes it a misdemeanor punishable by up to a $1,000 fine for vendors who do not verify the identification of those purchasing law enforcement uniforms. Previous law made it illegal to impersonate police but did not require an ID check at the point of purchase. The measure takes effect Jan. 1.
Remote controlled toys are getting more scrutiny:
Airport screeners are giving additional scrutiny to remote-controlled toys because terrorists could use them to trigger explosive devices, the Transportation Security Administration said Monday.
Okay, let's think this through. The one place where you don't need a modified remote-controlled toy is in the passenger cabin, because you have your hands available to push any required buttons. But a remote-controlled toy in checked luggage, now that's a clever idea. I put my modified remote-controlled toy bomb in my checked suitcase, and use the controller to detonate it once I'm in the air.
So maybe we want the remote-controlled toy in carry-on luggage, where there's a greater chance of detecting it (at the security checkpoint). And maybe we want to require the remote controller to be in checked luggage.
In any case, it's a great movie plot.
The Storm worm first appeared at the beginning of the year, hiding in e-mail attachments with the subject line: "230 dead as storm batters Europe." Those who opened the attachment became infected, their computers joining an ever-growing botnet.
Although it's most commonly called a worm, Storm is really more: a worm, a Trojan horse and a bot all rolled into one. It's also the most successful example we have of a new breed of worm, and I've seen estimates that between 1 million and 50 million computers have been infected worldwide.
Old-style worms -- Sasser, Slammer, Nimda -- were written by hackers looking for fame. They spread as quickly as possible (Slammer infected 75,000 computers in 10 minutes) and garnered a lot of notice in the process. The onslaught made it easier for security experts to detect the attack, but required a quick response by antivirus companies, sysadmins and users hoping to contain it. Think of this type of worm as an infectious disease that shows immediate symptoms.
Worms like Storm are written by hackers looking for profit, and they're different. These worms spread more subtly, without making noise. Symptoms don't appear immediately, and an infected computer can sit dormant for a long time. If it were a disease, it would be more like syphilis, whose symptoms may be mild or disappear altogether, but which will eventually come back years later and eat your brain.
Storm represents the future of malware. Let's look at its behavior:
Not that we really have any idea how to mess with Storm. Storm has been around for almost a year, and the antivirus companies are pretty much powerless to do anything about it. Inoculating infected machines individually is simply not going to work, and I can't imagine forcing ISPs to quarantine infected hosts. A quarantine wouldn't work in any case: Storm's creators could easily design another worm -- and we know that users can't keep themselves from clicking on enticing attachments and links.
Redesigning the Microsoft Windows operating system would work, but that's ridiculous to even suggest. Creating a counterworm would make a great piece of fiction, but it's a really bad idea in real life. We simply don't know how to stop Storm, except to find the people controlling it and arrest them.
Unfortunately we have no idea who controls Storm, although there's some speculation that they're Russian. The programmers are obviously very skilled, and they're continuing to work on their creation.
Oddly enough, Storm isn't doing much, so far, except gathering strength. Aside from continuing to infect other Windows machines and attacking particular sites that are attacking it, Storm has only been implicated in some pump-and-dump stock scams. There are rumors that Storm is leased out to other criminal groups. Other than that, nothing.
Personally, I'm worried about what Storm's creators are planning for Phase II.
This essay originally appeared on Wired.com.
EDITED TO ADD (10/17): Storm is being partitioned, presumably so parts can be sold off. If that's true, we should expect more malicious activity out of Storm in the future; anyone buying a botnet will want to use it.
Slashdot thread on Storm.
EDITED TO ADD (10/22): Here's research that suggests Storm is shrinking.
EDITED TO ADD (10/24): Another article about Storm striking back at security researchers.
When you build a surveillance system, you invite trusted insiders to abuse that system:
According to the indictment, Robinson began a relationship with an unidentified woman in 2002 that ended acrimoniously seven months later. After the breakup, federal authorities allege Robinson accessed a government database known as the TECS (Treasury Enforcement Communications System) at least 163 times to track the travel patterns of the woman and her family.
What I want to know is how he got caught. It can be very hard to catch insiders like this; good audit systems are essential, but often overlooked in the design process.
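A minimal sketch of the kind of audit check that would catch this pattern: count how often each user looks up each subject, and flag the outliers. This is purely illustrative -- the log format, threshold, and names below are my own assumptions, not anything about how TECS actually logs access:

```python
from collections import Counter

# Illustrative insider-abuse audit: flag users who repeatedly query the
# same subject. Log format and threshold are assumptions.
def flag_repeat_lookups(access_log, threshold=25):
    """access_log: iterable of (user_id, subject_id) tuples.
    Returns (user, subject, count) for pairs queried more than
    `threshold` times."""
    counts = Counter(access_log)
    return [(user, subj, n) for (user, subj), n in counts.items()
            if n > threshold]

# Hypothetical log: one user querying the same traveler 163 times.
log = [("agent_17", "traveler_A")] * 163 + [("agent_17", "traveler_B")] * 3
print(flag_repeat_lookups(log))
```

Even a check this crude would surface 163 lookups of one person; the hard part organizationally is making sure someone actually reviews the flags.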
A high school bans backpacks as a security measure. This also includes purses, which inconveniences girls who need to carry menstrual supplies. So now, girls who are carrying purses get asked by police: "Are you on your period?" The predictable uproar follows.
Three streets were closed and people evacuated from the area as the search was carried out. After locating the source at about 7pm, emergency crews smashed their way into the Thai Cottage restaurant in D'Arblay Street only to emerge with a 9lb pot of smouldering dried chillies.
Were this the U.S., that restaurant would be charged with terrorism, or creating a fake bomb, or anything to make the authorities feel better. On the other hand, at least the cook wasn't shot.
EDITED TO ADD (10/4): Common sense:
The police spokesman said no arrests were made in the case.
EDITED TO ADD (10/11): The BBC has a recipe, in case you need to create your own chemical weapon scare.
This story has been percolating around for a few days. Basically, Unisys was hired by the U.S. Department of Homeland Security to manage and monitor the department's network security. After data breaches were discovered, DHS blamed Unisys -- and I figured that everyone would be in serious CYA mode and that we'd never know what really happened. But it seems that there was a cover-up at Unisys, and that's a big deal:
As part of the contract, Unisys, based in Blue Bell, Pa., was to install network-intrusion detection devices on the unclassified computer systems for the TSA and DHS headquarters and monitor the networks. But according to evidence gathered by the House Homeland Security Committee, Unisys's failure to properly install and monitor the devices meant that DHS was not aware for at least three months of cyber-intrusions that began in June 2006. Through October of that year, Thompson said, 150 DHS computers -- including one in the Office of Procurement Operations, which handles contract data -- were compromised by hackers, who sent an unknown quantity of information to a Chinese-language Web site that appeared to host hacking tools.
What interests me the most (as someone with a company that does network security management and monitoring) is that there might be some liability here:
"For the hundreds of millions of dollars that have been spent on building this system within Homeland, we should demand accountability by the contractor," [Congressman] Thompson said in an interview. "If, in fact, fraud can be proven, those individuals guilty of it should be prosecuted."
And, as an aside, we see how useless certifications can be:
She said that Unisys has provided DHS "with government-certified and accredited security programs and systems, which were in place throughout 2006 and remain so today."
This article about the arms race between the U.S. military and jihadi Improvised Explosive Device (IED) makers in Iraq illustrates that more technology isn't always an effective security solution:
Insurgents have deftly leveraged consumer electronics technology to build explosive devices that are simple, cheap and deadly: Almost anything that can flip a switch at a distance can detonate a bomb. In the past five years, bombmakers have developed six principal detonation triggers -- pressure plates, cellphones, command wire, low-power radio-controlled, high-power radio-controlled and passive infrared -- that have prompted dozens of U.S. technical antidotes, some successful and some not.
Great article from The Economist on data collection, privacy, surveillance, and the future.
Here's the conclusion:
If the erosion of individual privacy began long before 2001, it has accelerated enormously since. And by no means always to bad effect: suicide-bombers, by their very nature, may not be deterred by a CCTV camera (even a talking one), but security wonks say many terrorist plots have been foiled, and lives saved, through increased eavesdropping, computer profiling and "sneak and peek" searches. But at what cost to civil liberties?
I assume you've all seen the news:
A government video shows the potential destruction caused by hackers seizing control of a crucial part of the U.S. electrical grid: an industrial turbine spinning wildly out of control until it becomes a smoking hulk and power shuts down.
I haven't written much about SCADA security, except to say that I think the risk is overblown today but is getting more serious all the time -- and we need to deal with the security before it's too late. I didn't know quite what to make of the Idaho National Laboratory video; it seemed like hype, but I couldn't find any details. (The CNN headline, "Mouse click could plunge city into darkness, experts say," was definitely hype.)
Then, I received this anonymous e-mail:
I was one of the industry technical folks the DHS consulted in developing the "immediate and required" mitigation strategies for this problem.
Remember the TJX hack from May 2007?
Seems that the credit card information was stolen by eavesdropping on wireless traffic at two Marshalls stores in Miami. More details from the Canadian privacy commissioner:
"The company collected too much personal information, kept it too long and relied on weak encryption technology to protect it -- putting the privacy of millions of its customers at risk," said Stoddart, who serves as an ombudsman and advocate to protect Canadians' privacy rights.
This is an excellent series of blog posts by Microsoft's Larry Osterman about threat modeling, using the PlaySound API as an example. Long, detailed, and complicated, but well worth reading. The last post is particularly good.
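Osterman's posts use Microsoft's STRIDE methodology. As a toy illustration of what that enumeration looks like in practice, here's a STRIDE pass over a hypothetical PlaySound-like API that takes a filename and plays the sound -- these threats are my own illustrative guesses, not Osterman's actual analysis:

```python
# Toy STRIDE-style threat enumeration for a hypothetical PlaySound-like
# API. The threats listed are illustrative guesses only.
STRIDE = {
    "Spoofing":               "Attacker substitutes a malicious sound file path",
    "Tampering":              "File modified on disk between check and play",
    "Repudiation":            "No log of which process requested playback",
    "Information disclosure": "Error messages reveal file-system layout",
    "Denial of service":      "Malformed .wav crashes or hangs the parser",
    "Elevation of privilege": "Parser bug exploited to run attacker code",
}

for category, threat in STRIDE.items():
    print(f"{category}: {threat}")
```

The value of the exercise is less any single entry than the discipline of walking every data flow through all six categories.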
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.