Schneier on Security
A blog covering security and security technology.
July 2008 Archives
This is an engaging and fascinating video presentation by Professor James Duane of the Regent University School of Law, explaining why -- in a criminal matter -- you should never, ever, ever talk to the police or any other government agent. It doesn't matter if you're guilty or innocent, if you have an alibi or not -- it isn't possible for anything you say to help you, and it's very possible that innocuous things you say will hurt you.
Definitely worth half an hour of your time.
And this is a video of Virginia Beach Police Department Officer George Bruch, who basically says that Duane is right.
Video demonstrating how easy it is to social engineer your way into clubs by pretending you're the DJ.
This is just sad. The TSA confiscated a battery pack not because it's dangerous, but because other passengers might think it's dangerous. And they're proud of the fact.
"We must treat every suspicious item the same and utilize the tools we have available to make a final determination," said Federal Security Director David Wynn. "Procedures are in place for a reason and this is a clear indication our workforce is doing a great job."
My guess is that if Kip Hawley were allowed to comment on my blog, he would say something like this: "It's not just bombs that are prohibited; it's things that look like bombs. This looks enough like a bomb to fool the other passengers, and that in itself is a threat."
Okay, that's fair. But the average person doesn't know what a bomb looks like; all he knows is what he sees on television and the movies. And this rule means that all homemade electronics are confiscated, because anything homemade with wires can look like a bomb to someone who doesn't know better. The rule just doesn't work.
And in today's passengers-fight-back world, do you think anyone is going to successfully do anything with a fake bomb?
Great security story from an obituary of former OSS agent Roger Hall:
One of his favorite OSS stories involved a colleague sent to occupied France to destroy a seemingly impenetrable German tank at a key crossroads. The French resistance found that grenades were no use.
Hall's book about his OSS days, You're Stepping on My Cloak and Dagger, is a must-read.
Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven't already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.
The details of the vulnerability aren't important, but basically it's a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like www.schneier.com, to IP addresses computers understand: 188.8.131.52. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com -- there's no way for you to tell the difference -- and that allows the criminals at www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
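The general shape of cache poisoning can be sketched in a few lines. This is a toy model only, with made-up names and IPs; real DNS involves UDP sockets, bailiwick checking, and many other details the model ignores. The point is that a resolver that accepts the first answer matching a 16-bit transaction ID is racing against an attacker who only has 65,536 guesses to cover:

```python
import random

# Toy model of classic DNS cache poisoning (illustration only).
cache = {}  # name -> cached IP


def resolver_query(name):
    """The resolver tags its query with a random 16-bit transaction ID."""
    return random.getrandbits(16)


def accept_response(name, query_txid, response_txid, ip):
    """A response is accepted (and cached) only if its TXID matches."""
    if response_txid == query_txid and name not in cache:
        cache[name] = ip
        return True
    return False


# The attacker floods forged answers, guessing TXIDs, racing the real server.
txid = resolver_query("www.goodsite.com")
for guess in range(2 ** 16):  # only 65,536 possibilities to cover
    if accept_response("www.goodsite.com", txid, guess, "10.6.6.6"):
        break

print(cache["www.goodsite.com"])  # the attacker's IP is now cached
```

Once the forged answer is cached, every subsequent lookup of that name returns the attacker's address until the cache entry expires.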
Here's the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There's a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability -- but not the details -- and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the BlackHat conference early next month.
Of course, the details leaked. How isn't important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I'm kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.
What's the moral here? It's easy to condemn Kaminsky: If he had shut up about the problem, we wouldn't be in this mess. But that's just wrong. Kaminsky found the vulnerability by accident. There's no reason to believe he was the first one to find it, and it's ridiculous to believe he would be the last. Don't shoot the messenger. The problem is with the DNS protocol; it's insecure.
The real lesson is that the patch treadmill doesn't work, and it hasn't for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won't prevent every vulnerability, but it's much more secure -- and cheaper -- than the patch treadmill we're all on now.
What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It's not that he discovers all possible attacks before the bad guys do; it's more that he anticipates potential types of attacks, and defends against them even if he doesn't know their details. I see this all the time in good cryptographic designs. It's over-engineering based on intuition, but if the security engineer has good intuition, it generally works.
Kaminsky's vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That's exactly the work-around being rolled out now following Kaminsky's discovery. Bernstein didn't discover Kaminsky's attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn't need to be patched; it's already immune to Kaminsky's attack.
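The back-of-the-envelope arithmetic shows why source port randomization is such an effective hardening step. With a fixed source port, a spoofed answer only has to match the 16-bit transaction ID; randomizing the port adds roughly another 16 bits the forged packet must also match. (The ephemeral port count below is an assumption; exact ranges vary by implementation.)

```python
# Search space a blind-spoofing attacker must cover, before and after
# source port randomization.
txid_space = 2 ** 16       # possible DNS transaction IDs
port_space = 64512         # ephemeral ports, roughly 1024-65535 (assumed)

fixed_port_guesses = txid_space
randomized_guesses = txid_space * port_space

print(fixed_port_guesses)   # 65536
print(randomized_guesses)   # 4227858432 -- about 64,000 times harder
```

It doesn't make the attack impossible, just vastly more expensive, which is exactly the kind of margin a good defensive design buys.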
That's what a good design looks like. It's not just secure against known attacks; it's also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards ... everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.
This essay previously appeared on Wired.com.
EDITED TO ADD (8/7): Seems like the flaw is much worse than we thought.
EDITED TO ADD (8/13): Someone else discovered the vulnerability first.
Whenever I write about software liabilities, many people ask about free and open source software. The worry is that if people who write free software, like Password Safe, are forced to assume liabilities, they simply won't be able to, and free software will disappear.
Don't worry, they won't be.
The key to understanding this is that this sort of contractual liability is part of a contract, and with free software -- or free anything -- there's no contract. Free software wouldn't fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer. I would hope the courts would realize this without any prompting, but we could always pass a Good Samaritan-like law that would protect people who distribute free software. (The opposite would be an Attractive Nuisance-like law -- that would be bad.)
There would be an industry of companies that provide liability protection for free software. If Red Hat, for example, sold a free Linux distribution, they would have to provide some liability protection. Yes, this means they would charge more for Linux; the extra would go toward insurance premiums. The same sort of insurance protection would be available to companies that use other free software packages.
The insurance industry is key to making this work. Luckily, they're good at protecting people against liabilities. There's no reason to think they won't be able to do it here.
I've written more about liabilities and the insurance industry here.
SanDisk has introduced write-once read-many (WORM) memory cards for forensic applications.
At the June Apogaea regional Burning Man event in Colorado, they burned a wooden/cloth giant squid. Before the burn, participants could crawl into the base of the body and turn a massive kaleidoscope with sun shining in the top. (Pictures of the squid and its demise. A picture from the inside.)
From this article, published last April:
Batiste confided, somewhat fantastically, that he wanted to blow up the Sears Tower in Chicago, which would then fall into a nearby prison, freeing Muslim prisoners who would become the core of his Moorish army. With them, he would establish his own country.
Somewhat fantastically? What would the Washington Post consider to be truly fantastic? A plan involving Godzilla? Clearly they have some very high standards.
I'm sick of people taking these idiots seriously. This plot is beyond fantastic, it's delusional.
They're confiscating sunscreen at Yankee Stadium:
The team contends that sunscreen has long been on the list of stadium contraband, but there is no mention of it on the Yankee Web site.
Next, I suppose, is confiscating liquids at pools.
We've collectively lost our minds.
This story has a happy ending, though. A day after The New York Post published this story, Yankee Stadium reversed its ban. Now, if only the Post had that same effect on airport security.
In my fourth column for the Guardian last Thursday, I talk about information security and liabilities:
Last summer, the House of Lords Science and Technology Committee issued a report on "Personal Internet Security." I was invited to give testimony for that report, and one of my recommendations was that software vendors be held liable when they are at fault. Their final report included that recommendation. The government rejected the recommendations in that report last autumn, and last week the committee issued a report on their follow-up inquiry, which still recommends software liabilities.
In this article about British speed cameras, and a trick to avoid them that does not work, is this sentence:
As vehicles pass between the entry and exit camera points their number plates are digitally recorded, whether speeding or not.
Without knowing more, I can guarantee that those records are kept forever.
EDITED TO ADD (7/25): As pointed out by Pete Darby in comments: Passenger moons speeding camera and gets his picture published even though the car was not speeding.
Police may take action against the man for public order offences and not wearing a seat belt.
How did they even know to look at the picture in the first place?
Thieves took a legitimate paper Farecard with $40 in value, sliced the card's magnetic strip into four lengthwise pieces, and then reattached one piece each to four separate defunct paper Farecards. The thieves then took the doctored Farecards to a Farecard machine and added fare, typically a nickel. By doing so, the doctored Farecard would go into the machine and a legitimate Farecard with the new value, $40.05, would come out.
My guess is that the thieves were caught not through some fancy technology, but because they had to monetize their attack. They sold Farecards on the street at half face value.
A high-level British government employee had his BlackBerry stolen by Chinese intelligence:
The aide, a senior Downing Street adviser who was with the prime minister on a trip to China earlier this year, had his BlackBerry phone stolen after being picked up by a Chinese woman who had approached him in a Shanghai hotel disco.
That can't look good on your annual employee review.
But it's this part of the article that has me confused:
Experts say that even if the aide’s device did not contain anything top secret, it might enable a hostile intelligence service to hack into the Downing Street server, potentially gaining access to No 10’s e-mail traffic and text messages.
Um, what? I assume the IT department just turned off the guy's password. Was this nonsense peddled to the press by the UK government, or is some "expert" trying to sell us something? The article doesn't say.
EDITED TO ADD (7/22): The first commenter makes a good point, which I didn't think of. The article says that it's Chinese intelligence:
A senior official said yesterday that the incident had all the hallmarks of a suspected honeytrap by Chinese intelligence.
But Chinese intelligence would be far more likely to clone the BlackBerry and then return it. Much better information that way. This is much more likely to be petty theft.
EDITED TO ADD (7/23): The more I think about this story, the less sense it makes. If you're a Chinese intelligence officer and you manage to get an aide to the British Prime Minister to have sex with one of your agents, you're not going to immediately burn him by stealing his BlackBerry. That's just stupid.
Who cannot feel a little chill of fear after reading this: "Britain on alert for deadly new knife with exploding tip that freezes victims' organs."
Yes, it's real. The knife is designed for people who need to drop large animals quickly: sharks, bears, etc.
I have no idea why Britain is on alert for it, though.
EDITED TO ADD (7/24): Knife crime is rising in the UK.
This report, "Assessing the risks, costs and benefits of United States aviation security measures" by Mark Stewart and John Mueller, is excellent reading:
The United States Office of Management and Budget has recommended the use of cost-benefit assessment for all proposed federal regulations. Since 9/11 government agencies in Australia, United States, Canada, Europe and elsewhere have devoted much effort and expenditure to attempt to ensure that a 9/11 type attack involving hijacked aircraft is not repeated. This effort has come at considerable cost, running in excess of US$6 billion per year for the United States Transportation Security Administration (TSA) alone. In particular, significant expenditure has been dedicated to two aviation security measures aimed at preventing terrorists from hijacking and crashing an aircraft into buildings and other infrastructure: (i) Hardened cockpit doors and (ii) Federal Air Marshal Service. These two security measures cost the United States government and the airlines nearly $1 billion per year. This paper seeks to discover whether aviation security measures are cost-effective by considering their effectiveness, their cost and expected lives saved as a result of such expenditure. An assessment of the Federal Air Marshal Service suggests that the annual cost is $180 million per life saved. This is greatly in excess of the regulatory safety goal of $1-$10 million per life saved. As such, the air marshal program would seem to fail a cost-benefit analysis. In addition, the opportunity cost of these expenditures is considerable, and it is highly likely that far more lives would have been saved if the money had been invested instead in a wide range of more cost-effective risk mitigation programs. On the other hand, hardening of cockpit doors has an annual cost of only $800,000 per life saved, showing that this is a cost-effective security measure.
From the body:
Hardening cockpit doors has the highest risk reduction (16.67%) at lowest additional cost of $40 million. On the other hand, the Federal Air Marshal Service costs $900 million pa but reduces risk by only 1.67%. The Federal Air Marshal Service may be more cost-effective if it is able to show extra benefit over the cheaper measure of hardening cockpit doors. However, the Federal Air Marshal Service seems to have significantly less benefit which means that hardening cockpit doors is the more cost-effective measure.
Cost-benefit analysis is definitely the way to look at these security measures. It's hard for people to do, because it requires putting a dollar value on a human life -- something we can't possibly do with our own. But as a society, it is something we do again and again: when we raise or lower speed limits, when we ban a certain pesticide, when we enact building codes. Insurance companies do it all the time. We do it implicitly, because we can't talk about it explicitly. I think there is considerable value in talking about it.
(Note the table on page 5 of the report, which lists the cost per lives saved for a variety of safety and security measures.)
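The arithmetic behind the report's headline figures is simple: annual cost divided by expected lives saved per year. Here's a sketch using the report's numbers; the implied lives-saved counts are back-calculated from the abstract's figures and are my assumption, not quoted from the paper.

```python
# Cost-per-life-saved arithmetic, using figures from the Stewart/Mueller
# report. The lives-saved-per-year inputs are inferred, not quoted.

def cost_per_life_saved(annual_cost, lives_saved_per_year):
    return annual_cost / lives_saved_per_year


air_marshals = cost_per_life_saved(900_000_000, 5)    # ~$180M per life saved
cockpit_doors = cost_per_life_saved(40_000_000, 50)   # ~$800K per life saved

regulatory_goal = (1_000_000, 10_000_000)  # typical $1M-$10M safety range

print(air_marshals > regulatory_goal[1])    # air marshals fail the test
print(cockpit_doors < regulatory_goal[0])   # hardened doors pass easily
```

The point of the exercise isn't precision; it's that even rough numbers separate the two measures by more than two orders of magnitude.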
The final paper will eventually be published in the Journal of Transportation Security. I never even knew there was such a thing.
EDITED TO ADD (8/13): New York Times op-ed on the subject.
I sure want to know more:
Giants have very strange sexual behaviour where the male has a metre-long muscular penis that he uses a bit like a nail gun and shoots cords of sperm under the skin of the female's arms and she carries the sperm around with her until she is ready to lay her big jelly mass of a million eggs.
By Mitchell & Webb.
Did you know that, in some jurisdictions, police can inject midazolam (better known as Versed) into suspects to subdue them?
"There is no research guideline. There is no validated protocol for this. There's not even a clear set of indications for when this is to be used except when people are agitated. By saying that it's done by the emergency medical personnel, they basically are trying to have it both ways. That is, they’re trying to use a medical protocol that is not validated, not for a police function, arrest and detention," Miles said.
The biggest side effect is amnesia, which makes it harder for any defendant to defend himself in court.
Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.
ABSTRACT: We examine the security requirements for creating a Deniable File System (DFS), and the efficacy with which the TrueCrypt disk-encryption software meets those requirements. We find that the Windows Vista operating system itself, Microsoft Word, and Google Desktop all compromise the deniability of a TrueCrypt DFS. While staged in the context of TrueCrypt, our research highlights several fundamental challenges to the creation and use of any DFS: even when the file system may be deniable in the pure, mathematical sense, we find that the environment surrounding that file system can undermine its deniability, as well as its contents. Finally, we suggest approaches for overcoming these challenges on modern operating systems like Windows.
The students did most of the actual work. I helped with the basic ideas, and contributed the threat model. Deniability is a very hard feature to achieve.
There are several threat models against which a DFS could potentially be secure:
We analyzed the most current version of TrueCrypt available at the writing of the paper, version 5.1a. We shared a draft of our paper with the TrueCrypt development team in May 2008. TrueCrypt version 6.0 was released in July 2008. We have not analyzed version 6.0, but observe that TrueCrypt v6.0 does take new steps to improve TrueCrypt’s deniability properties (e.g., via the creation of deniable operating systems, which we also recommend in Section 5). We suggest that the breadth of our results for TrueCrypt v5.1a highlight the challenges to creating deniable file systems. Given these potential challenges, we encourage the users not to blindly trust the deniability of such systems. Rather, we encourage further research evaluating the deniability of such systems, as well as research on new yet light-weight methods for improving deniability.
So we cannot break the deniability feature in TrueCrypt 6.0. But, honestly, I wouldn't trust it.
The paper also talks about a generalization to encrypted partitions. If you don't encrypt the entire drive, there is the possibility -- and it seems very probable -- that information about the encrypted partition will leak onto the unencrypted rest of the drive. Whole disk encryption is the smartest option.
Hobby groups throughout North America have cracked supposedly unbeatable locks. Mr. Nekrep, who maintains a personal collection of more than 300 locks, has demonstrated online how to open a Kensington laptop lock using Scotch tape and a Post-it note. Another Lockpicking101.com member discovered the well-publicized method of opening Kryptonite bike locks with a ball-point pen, a revelation that prompted Kryptonite to replace all of its compromised locks.
This is an excellent paper by Ohio State political science professor John Mueller. Titled "The Quixotic Quest for Invulnerability: Assessing the Costs, Benefits, and Probabilities of Protecting the Homeland," it lays out some commonsense premises and policy implications.
1. The number of potential terrorist targets is essentially infinite.
The policy implications:
1. Any protective policy should be compared to a "null case": do nothing, and use the money saved to rebuild and to compensate any victims.
Here's the abstract:
This paper attempts to set out some general parameters for coming to grips with a central homeland security concern: the effort to make potential targets invulnerable, or at least notably less vulnerable, to terrorist attack. It argues that protection makes sense only when protection is feasible for an entire class of potential targets and when the destruction of something in that target set would have quite large physical, economic, psychological, and/or political consequences. There are a very large number of potential targets where protection is essentially a waste of resources and a much more limited one where it may be effective.
The whole paper is worth reading.
Tomorrow, in Australia.
EDITED TO ADD (7/22): A final note:
...the poor museum volunteers, the hardy souls who showed the members of the public around, explained what was going to happen, and led them to their seats. Raise your glass to ...
Trusted insiders can do a lot of damage:
Childs created a password that granted him exclusive access to the system, authorities said. He initially gave pass codes to police, but they didn't work. When pressed, Childs refused to divulge the real code even when threatened with arrest, they said.
EDITED TO ADD (8/10): According to another article, "officials say the network so far has been humming along just fine without admin access by the city." So it's not a complete shutdown as much as an admin lock out.
EDITED TO ADD (8/13): This is getting weirder. Terry Childs gave the right passwords, but only to the mayor personally.
The U.S. terrorist watch list has hit one million names. I sure hope we're giving our millionth terrorist a prize of some sort.
Is this idiotic, or what?
Some people are saying fix it, but there seems to be no motivation to do so. I'm sure the career incentives aren't aligned that way. You probably get promoted by putting people on the list. But taking someone off the list...if you're wrong, no matter how remote that possibility is, you can probably lose your career. This is why in civilized societies we have a judicial system, to be an impartial arbiter between law enforcement and the accused. But that system doesn't apply here.
Kafka would be proud.
EDITED TO ADD (7/16): More information:
There are only 400,000 on it, and 95 percent are not U.S. "persons." (Persons = citizens plus others with a legal right to be in the U.S.)
Not that 400,000 terrorists is any less absurd.
Screening and law enforcement agencies encountered the actual people on the watch list (not false matches) more than 53,000 times from December 2003 to May 2007, according to a Government Accountability Office report last fall.
Okay, so I have a question. How many of those 53,000 were arrested? Of those who were not, why not? How many have we taken off the list after we've investigated them?
EDITED TO ADD (7/17): Bob Blakely runs the numbers.
EDITED TO ADD (8/13): The Daily Show's Jon Stewart on the subject.
By a California court:
The designer, Carter Bryant, has been accused by Mattel of using Evidence Eliminator on his laptop computer just two days before investigators were due to copy its hard drive.
I have often recommended that people use file erasure tools regularly, especially when crossing international borders with their computers. Now we have one more reason to use them regularly: plausible deniability if you're accused of erasing data to keep it from the police.
Last week's dramatic rescue of 15 hostages held by the guerrilla organization FARC was the result of months of intricate deception on the part of the Colombian government. At the center was a classic man-in-the-middle attack.
In a man-in-the-middle attack, the attacker inserts himself between two communicating parties. Both believe they're talking to each other, and the attacker can delete or modify the communications at will.
The Wall Street Journal reported how this gambit played out in Colombia:
"The plan had a chance of working because, for months, in an operation one army officer likened to a 'broken telephone,' military intelligence had been able to convince Ms. Betancourt's captor, Gerardo Aguilar, a guerrilla known as 'Cesar,' that he was communicating with his top bosses in the guerrillas' seven-man secretariat. Army intelligence convinced top guerrilla leaders that they were talking to Cesar. In reality, both were talking to army intelligence."
This ploy worked because Cesar and his guerrilla bosses didn't know one another well. They didn't recognize one another's voices, and didn't have a friendship or shared history that could have tipped them off about the ruse. Man-in-the-middle is defeated by context, and the FARC guerrillas didn't have any.
And that's why man-in-the-middle, abbreviated MITM in the computer-security community, is such a problem online: Internet communication is often stripped of any context. There's no way to recognize someone's face. There's no way to recognize someone's voice. When you receive an e-mail purporting to come from a person or organization, you have no idea who actually sent it. When you visit a website, you have no idea if you're really visiting that website. We all like to pretend that we know who we're communicating with -- and for the most part, of course, there isn't any attacker inserting himself into our communications -- but in reality, we don't. And there are lots of hacker tools that exploit this unjustified trust, and implement MITM attacks.
Even with context, it's still possible for MITM to fool both sides -- because electronic communications are often intermittent. Imagine that one of the FARC guerrillas became suspicious about who he was talking to. So he asks a question about their shared history as a test: "What did we have for dinner that time last year?" or something like that. On the telephone, the attacker wouldn't be able to answer quickly, so his ruse would be discovered. But e-mail conversation isn't synchronous. The attacker could simply pass that question through to the other end of the communications, and when he got the answer back, he would be able to reply.
This is the way MITM attacks work against web-based financial systems. A bank demands authentication from the user: a password, a one-time code from a token or whatever. The attacker sitting in the middle receives the request from the bank and passes it to the user. The user responds to the attacker, who passes that response to the bank. Now the bank assumes it is talking to the legitimate user, and the attacker is free to send transactions directly to the bank. This kind of attack completely bypasses any two-factor authentication mechanisms, and is becoming a more popular identity-theft tactic.
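The relay trick is worth making concrete. In this toy model (all names and values are illustrative), the attacker never learns the user's token secret; he just forwards the bank's challenge to the user and the user's valid answer back to the bank, and the bank authenticates the attacker's session:

```python
import hashlib
import hmac

# Toy model of a MITM relay defeating challenge-response 2FA.
SECRET = b"user-token-seed"  # shared between the user's token and the bank


def bank_challenge():
    """The bank issues a challenge (fixed here for simplicity)."""
    return b"nonce-1234"


def user_answer(challenge):
    """The user's token computes a valid one-time response."""
    return hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()


def bank_verify(challenge, response):
    expected = hmac.new(SECRET, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)


# The man in the middle owns neither the secret nor the token; he relays.
challenge = bank_challenge()        # attacker receives the bank's challenge
response = user_answer(challenge)   # ...forwards it; the user answers
print(bank_verify(challenge, response))  # True: the bank accepts
```

Note what the model shows: the one-time code is perfectly valid and correctly verified, yet the attacker ends up holding the authenticated session. The authentication mechanism isn't broken; it's authenticating the wrong channel.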
There are cryptographic solutions to MITM attacks, and there are secure web protocols that implement them. Many of them require shared secrets, though, making them useful only in situations where people already know and trust one another.
The NSA-designed STU-III and STE secure telephones solve the MITM problem by embedding the identity of each phone together with its key. (The NSA creates all keys and is trusted by everyone, so this works.) When two phones talk to each other securely, they exchange keys and display the other phone's identity on a screen. Because the phone is in a secure location, the user now knows who he is talking to, and if the phone displays another organization -- as it would if there were a MITM attack in progress -- he should hang up.
Zfone, a secure VoIP system, protects against MITM attacks with a short authentication string. After two Zfone terminals exchange keys, both computers display a four-character string. The users are supposed to manually verify that both strings are the same -- "my screen says 5C19; what does yours say?" -- to ensure that the phones are communicating directly with each other and not with an MITM. The AT&T TSD-3600 worked similarly.
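The idea behind a short authentication string can be sketched simply: each end hashes the key material it actually negotiated and displays a few characters of the digest. This is a simplified illustration, not the real ZRTP derivation; if a MITM negotiated separate keys with each side, the two displayed strings would (almost certainly) differ.

```python
import hashlib

# Sketch of a Zfone-style short authentication string (SAS).
def sas(key_material: bytes) -> str:
    """Derive a 4-character string from the negotiated key material."""
    return hashlib.sha256(key_material).hexdigest()[:4].upper()


alice_key = b"shared-key-after-exchange"
bob_key = b"shared-key-after-exchange"    # same key: direct connection
mitm_key = b"attacker-negotiated-key"     # what a MITM'd endpoint would hold

print(sas(alice_key), sas(bob_key))  # identical strings: no MITM
print(sas(mitm_key))                 # what one victim of a MITM would read out
```

Four hex characters is only 16 bits, but that's enough: an active attacker gets one chance per call to collide the strings, not an offline search.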
This sort of protection is embedded in SSL, although no one uses it. As it is normally used, SSL provides an encrypted communications link to whoever is at the other end: bank and phishing site alike. And the better phishing sites create valid SSL connections, so as to more effectively fool users. But if the user wanted to, he could manually check the SSL certificate to see if it was issued to "National Bank of Trustworthiness" or "Two Guys With a Computer in Nigeria."
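The manual check amounts to reading the subject of the certificate you were actually handed. Here's a sketch: the dict mimics the structure Python's `ssl.SSLSocket.getpeercert()` returns, with made-up names, since a live TLS connection isn't needed to show the idea:

```python
# Inspect who a server's certificate was actually issued to.
def cert_organization(cert: dict) -> str:
    """Pull the organizationName out of a getpeercert()-style dict."""
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "organizationName":
                return value
    return "(no organization listed)"


# A phishing site can get a perfectly valid certificate -- just not one
# issued to your bank. (All names below are invented for illustration.)
phishing_cert = {
    "subject": (
        (("commonName", "www.bank-0f-trust.example"),),
        (("organizationName", "Two Guys With a Computer"),),
    ),
}

print(cert_organization(phishing_cert))
```

The encryption works either way; the question the certificate answers is who is on the other end, and almost nobody asks it.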
This essay originally appeared on Wired.com.
Impressive. Be sure to watch the video.
From his blog:
Future presidents can learn a lot from all this -- do exactly what the Bush Administration did! If the law holds you back, don't first go to Congress and try to work something out. Secretly violate that law, and then when you get caught, staunchly demand that Congress change the law to your liking and then immunize any company that might have illegally cooperated with you. That's the lesson. You spit in Congress's face, and they'll give you what you want.
The popular media conception is that there is a coordinated attempt by the Chinese government to hack into U.S. computers -- military, government, corporate -- and steal secrets. The truth is a lot more complicated.
There certainly is a lot of hacking coming out of China. Any company that does security monitoring sees it all the time.
These hacker groups seem not to be working for the Chinese government. They don't seem to be coordinated by the Chinese military. They're basically young, male, patriotic Chinese citizens, trying to demonstrate that they're just as good as everyone else. As well as the American networks the media likes to talk about, their targets also include pro-Tibet, pro-Taiwan, Falun Gong and pro-Uyghur sites.
The hackers are in this for two reasons: fame and glory, and an attempt to make a living. The fame and glory comes from their nationalistic goals. Some of these hackers are heroes in China. They're upholding the country's honor against both anti-Chinese forces like the pro-Tibet movement and larger forces like the United States.
And the money comes from several sources. The groups sell owned computers, malware services, and data they steal on the black market. They sell hacker tools and videos to others wanting to play. They even sell T-shirts, hats and other merchandise on their Web sites.
This is not to say that the Chinese military ignores the hacker groups within their country. Certainly the Chinese government knows the leaders of the hacker movement and chooses to look the other way. They probably buy stolen intelligence from these hackers. They probably recruit for their own organizations from this self-selecting pool of experienced hacking experts. They certainly learn from the hackers.
And some of the hackers are good. Over the years, they have become more sophisticated in both tools and techniques. They're stealthy. They do good network reconnaissance. My guess is what the Pentagon thinks is the problem is only a small percentage of the actual problem.
And they discover their own vulnerabilities. Earlier this year, one security company noticed a unique attack against a pro-Tibet organization. That same attack was also used two weeks earlier against a large multinational defense contractor.
They also hoard vulnerabilities. During the 1999 conflict over the two-states theory conflict, in a heated exchange with a group of Taiwanese hackers, one Chinese group threatened to unleash multiple stockpiled worms at once. There was no reason to disbelieve this threat.
If anything, the fact that these groups aren't being run by the Chinese government makes the problem worse. Without central political coordination, they're likely to take more risks, do more stupid things and generally ignore the political fallout of their actions.
In this regard, they're more like a non-state actor.
So while I'm perfectly happy that the U.S. government is using the threat of Chinese hacking as an impetus to get their own cybersecurity in order, and I hope they succeed, I also hope that the U.S. government recognizes that these groups are not acting under the direction of the Chinese military and doesn't treat their actions as officially approved by the Chinese government.
It's due to rising sea temperatures.
"You ain't takin' this through," she says. "No knives. You can't bring a knife through here."
Here's the video of a panel I was on at Supernova; the topic was security and privacy.
As they were walking around, Jeff saw some interesting looking produce and pulled out his Canon G-9 Point-and-Shoot and took a few pictures. Within a few minutes a man came up dressed in plain clothes, flashed a badge, and told him he couldn't take photos in the store. My brother said "no problem" (after all, it's a private store, right?), but then the guy demanded my brother's memory card.
The Rail Tram and Bus Union (RTBU) said today it was planning a 24-hour strike by rail workers on July 17, the busiest day of the Catholic event.
That's Morris Iemma, the Premier of New South Wales.
Terrorism is a heinous crime, and a serious international problem. It's not a catchall word to describe anything you don't like or don't agree with, or even anything that adversely affects a large number of people. By using the word more broadly than its actual meaning, we muddy the already complicated popular conceptions of the issue. The word "terrorism" has a specific meaning, and we shouldn't debase it.
They work by mounting two small infrared lights on the front. The wearer is completely inconspicuous to the human eye, but cameras only see a big white blur where your face should be.
EDITED TO ADD (7/8): Doubts have been raised about whether this works as advertised against paparazzi cameras. I can't tell for sure one way or the other.
Automated passenger profiling is rubbish, the Home Office has conceded in an amusing -- and we presume inadvertent -- blurt. "Attempts at automated profiling have been used in trial operations [at UK ports of entry] and has proved [sic] that the systems and technology available are of limited use," says home secretary Jacqui Smith in her response to Lord Carlile's latest terror legislation review.
The U.S. wants to do it anyway:
The Justice Department is considering letting the FBI investigate Americans without any evidence of wrongdoing, relying instead on a terrorist profile that could single out Muslims, Arabs or other racial or ethnic groups.
I've written about profiling before.
It's twenty-five feet long, with tentacles the size of human legs.
Not recommended to wear at the airport.
The UK is learning:
The Scottish Ambulance Service confirmed today that a package containing contact information from its Paisley Emergency Medical Dispatch Centre (EMDC) has been lost by the courier, TNT, while in transit to one of its IT suppliers.
News story here.
That's what you want to do. There is no problem if encrypted disks are lost. You can mail them directly to your worst enemy and there's no problem. Well, assuming you've implemented the encryption properly and chosen a good key.
This is much better than what the HM Revenue & Customs office did in November.
I wrote about disk and laptop encryption previously.
This is a weird statistic:
Some of the largest and medium-sized U.S. airports report close to 637,000 laptops lost each year, according to the Ponemon Institute survey released Monday. Laptops are most commonly lost at security checkpoints, according to the survey.
I don't know how to generalize that to a total number of lost laptops in the U.S.; let's call it 750,000. At $1,000 per laptop -- a very conservative estimate -- that's $750 million in lost laptops annually. Most are lost at security checkpoints, and I'm sure the numbers went up considerably since those checkpoints got more annoying after 9/11.
There aren't a lot of real numbers about the costs of increased airport security. We pay in time, in anxiety, in inconvenience. But we also pay in goods. TSA employees steal out of suitcases. And opportunists steal hundreds of millions of dollars of laptops annually.
EDITED TO ADD (7/14): Seems like this is not a story.
An air traveler in Canada is first told by an airline employee that it is "illegal" to say certain words, and then that if she raised a fuss she would be falsely accused:
When we boarded a little later, I asked for the ninny's name. He refused and hissed, "If you make a scene, I'll call the pilot and you won't be flying tonight."
More on the British war on photographers.
A British man is forced to give up his hobby of photographing buses due to harrassment.
The credit controller, from Gloucester, says he now suffers "appalling" abuse from the authorities and public who doubt his motives.
Is everything illegal and damaging now terrorism?
Israeli authorities are investigating why a Palestinian resident of Jerusalem rammed his bulldozer into several cars and buses Wednesday, killing three people before Israeli police shot him dead.
New Jersey public school locked down after someone saw a ninja:
Turns out the ninja was actually a camp counselor dressed in black karate garb and carrying a plastic sword.
And finally, not terrorism-related but a fine newspaper headline: "Giraffe helps camels, zebras escape from circus":
Amsterdam police say 15 camels, two zebras and an undetermined number of llamas and potbellied swine briefly escaped from a traveling Dutch circus after a giraffe kicked a hole in their cage.
Are llamas really that hard to count?
EDITED TO ADD (7/2): Errors fixed.
This excellent paper measures insecurity in the global population of browsers, using Google's web server logs. Why is this important? Because browsers are an increasingly popular attack vector.
The results aren't good.
...at least 45.2%, or 637 million users, were not using the most secure Web browser version on any working day from January 2007 to June 2008. These browsers are an easy target for drive-by download attacks as they are potentially vulnerable to known exploits.
That number breaks down as 577 million users of Internet Explorer, 38 million of Firefox, 17 million of Safari, and 5 million of Opera. Lots more detail in the paper, including some ideas for technical solutions.
EDITED TO ADD (7/2): More commentary.
It's been a while since I've written about electronic voting machines, but Dan Wallach has an excellent blog post about the current line of argument from the voting machine companies and why it's wrong.
Unsurprisingly, the vendors and their trade organization are spinning the results of these studies, as best they can, in an attempt to downplay their significance. Hopefully, legislators and election administrators are smart enough to grasp the vendors’ behavior for what it actually is and take appropriate steps to bolster our election integrity.
It used to be that just the entertainment industries wanted to control your computers -- and televisions and iPods and everything else -- to ensure that you didn't violate any copyright rules. But now everyone else wants to get their hooks into your gear.
OnStar will soon include the ability for the police to shut off your engine remotely. Buses are getting the same capability, in case terrorists want to re-enact the movie Speed. The Pentagon wants a kill switch installed on airplanes, and is worried about potential enemies installing kill switches on their own equipment.
Microsoft is doing some of the most creative thinking along these lines, with something it's calling "Digital Manners Policies." According to its patent application, DMP-enabled devices would accept broadcast "orders" limiting their capabilities. Cellphones could be remotely set to vibrate mode in restaurants and concert halls, and be turned off on airplanes and in hospitals. Cameras could be prohibited from taking pictures in locker rooms and museums, and recording equipment could be disabled in theaters. Professors finally could prevent students from texting one another during class.
The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That's a difficult security problem even in its simplest form. Distributing that system among a variety of different devices -- computers, phones, PDAs, cameras, recorders -- with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.
Once we go down this path -- giving one device authority over other devices -- the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?
How do we prevent this from being abused? Can a burglar, for example, enforce a "no photography" rule and prevent security cameras from working? Can the police enforce the same rule to avoid another Rodney King incident? Do the police get "superuser" devices that cannot be limited, and do they get "supercontroller" devices that can limit anything? How do we ensure that only they get them, and what do we do when the devices inevitably fall into the wrong hands?
It's comparatively easy to make this work in closed specialized systems -- OnStar, airplane avionics, military hardware -- but much more difficult in open-ended systems. If you think Microsoft's vision could possibly be securely designed, all you have to do is look at the dismal effectiveness of the various copy-protection and digital-rights-management systems we've seen over the years. That's a similar capabilities-enforcement mechanism, albeit simpler than these more general systems.
And that's the key to understanding this system. Don't be fooled by the scare stories of wireless devices on airplanes and in hospitals, or visions of a world where no one is yammering loudly on their cellphones in posh restaurants. This is really about media companies wanting to exert their control further over your electronics. They not only want to prevent you from surreptitiously recording movies and concerts, they want your new television to enforce good "manners" on your computer, and not allow it to record any programs. They want your iPod to politely refuse to copy music to a computer other than your own. They want to enforce their legislated definition of manners: to control what you do and when you do it, and to charge you repeatedly for the privilege whenever possible.
"Digital Manners Policies" is a marketing term. Let's call this what it really is: Selective Device Jamming. It's not polite, it's dangerous. It won't make anyone more secure -- or more polite.
This essay originally appeared in Wired.com.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.