Schneier on Security
A blog covering security and security technology.
January 2006 Archives
US-VISIT is the program to fingerprint and otherwise keep tabs on foreign visitors to the U.S. This article talks about how the program is being rolled out, but the last paragraph is the most interesting:
Since January 2004, US-VISIT has processed more than 44 million visitors. It has spotted and apprehended nearly 1,000 people with criminal or immigration violations, according to a DHS press release.
I wrote about US-VISIT in 2004, and back then I said that it was too expensive and a bad trade-off. The price tag for "the next phase" was $15B; I'm sure the total cost is much higher.
But take that $15B number. One thousand bad guys, most of them not very bad, caught through US-VISIT. That's $15M per bad guy caught.
Surely there's a more cost-effective way to catch bad guys?
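The back-of-the-envelope math above is simple enough to sketch (a rough calculation using the post's own figures, not official DHS numbers):

```python
# Cost per apprehension, using the figures quoted in the post:
# $15B for "the next phase" of US-VISIT, ~1,000 people apprehended.
total_cost = 15e9     # dollars
apprehended = 1000    # "people with criminal or immigration violations"

cost_per_apprehension = total_cost / apprehended
print(f"${cost_per_apprehension:,.0f}")  # $15,000,000 per person caught
```

And that's an underestimate, since the $15B covers only one phase of the program.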
Seems that the censorship service that Google has set up at China's request suffers from a trivial bug: if you type your searches using capital letters, you bypass the censor.
This'll be fixed real soon, I'm sure.
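The class of bug is a classic one: a filter that compares raw input against a lowercase blocklist. Here's a minimal hypothetical sketch (the term and function names are illustrative; Google's actual implementation isn't public):

```python
# A naive keyword censor, vulnerable to the capitalization bypass.
BLOCKLIST = {"tiananmen"}  # hypothetical censored term, stored lowercase

def is_censored(query: str) -> bool:
    # Buggy: compares the raw query against lowercase blocklist entries,
    # so "TIANANMEN" never matches "tiananmen".
    return any(term in query for term in BLOCKLIST)

def is_censored_fixed(query: str) -> bool:
    # The one-line fix: normalize case before matching.
    return any(term in query.lower() for term in BLOCKLIST)

assert is_censored("tiananmen protests")        # lowercase is caught
assert not is_censored("TIANANMEN protests")    # capitals slip through
assert is_censored_fixed("TIANANMEN protests")  # normalized matching catches it
```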
There's a good write-up from The Register.
Two points stand out. One, the RFID chip in the passport can be read from ten meters. Two, lots of predictability in the encryption key -- sloppy, sloppy -- makes the brute-force attack much easier.
But the references are from last summer. Why is this being reported now?
Dead drops have gone high tech:
Russia's Federal Security Service (FSB) has opened an investigation into a spying device discovered in Moscow, the service said Monday.
BBC had this to say:
The old idea of the dead-drop ('letterboxes' the British tend to call them) - by the oak tree next to the lamppost in such-and-such a park etc - has given way to hand-held computers and short-range transmitters.
Transferring information to and from spies has always been risky. It's interesting to see modern technology help with this problem.
Phil Karn wrote to me in e-mail:
My first reaction: what a clever idea! It's about time spycraft went hi-tech. I'd like to know if special hardware was used, or if it was good old 802.11. Special forms of spread-spectrum modulation and oddball frequencies could make the RF hard to detect, but then your spies run the risk of being caught with highly specialized hardware. 802.11 is almost universal, so it's inherently less suspicious. Randomize your MAC address, change the SSID frequently and encrypt at multiple layers. Store sensitive files encrypted, without headers, in the free area of a laptop's hard drive so they're not likely to be found in forensic analysis. Keep all keys physically separate from encrypted data.
I am reminded of a dead drop technique used by, I think, the 9/11 terrorists. They used Hotmail (or some other anonymous e-mail service) accounts, but instead of e-mailing messages to each other, one would save a message as "draft" and the recipient would retrieve it from the same account later. I thought that was pretty clever, actually.
The 2005 Information Security Salary and Career Advancement Survey is interesting to read.
I get e-mail, occasionally weird e-mail. Every once in a while I get an e-mail like this:
I know this is going to sound like a plot from a movie. It isn't. A very good friend of mine Linda Rayburn and her son Michael Berry were brutally murdered by her husband...the son's stepfather.
I have no idea if any of this is true, but here's a news blip from 2004:
Feb. 2: Linda Rayburn, 44, and Michael Berry, 23, of Saugus, both killed at home. According to police, Rayburn's husband, David Rayburn, killed his wife and stepson with a hammer. Their bodies were found in adjacent bedrooms. David Rayburn left a suicide note, went to the basement, and hanged himself.
And here is the cryptogram:
The rectangle drawn over the top two lines was not done by the murderer. It was done by a family member afterwards.
Assuming this is all real, it's a real-world puzzle with no solution. No one knows what the message is, or even if there is a message.
If anyone figures it out, please let me know.
This is 100% right.
In 2002, a 60-foot long giant squid washed up on the beach in Tasmania.
Because of the low number of observations, scientists have struggled to build up a profile of the giant squid, discovering only in the last five years how it reproduces.
The National Security Agency has established a formal technology transfer mechanism for openly sharing technologies with the external community. Our scientists and engineers, along with our academic and research partners, have developed cutting-edge technologies, which have not only satisfied our mission requirements, but have also served to improve the global technological leadership of the United States. In addition, these technical advances have contributed to the creation and improvement of many commercial products in America.
Look at their 44 Technology Profile Fact Sheets.
This person didn't even land in the U.S. His plane flew from Canada to Mexico over U.S. airspace:
Fifteen minutes after the plane left Toronto's Pearson International Airport, the airline provided customs officials in the United States with a list of passengers. Agents ran the list through a national data base and up popped a name matching Mr. Kahil's.
Just another case of mistaken identity.
And here's a story of a four-year-old boy on the watch list.
This program has been a miserable failure in every respect. Not one terrorist caught, ever. (I say this because I believe 100% that if this administration caught anyone through this program, they would be trumpeting it for all to hear.) Thousands of innocents subjected to lengthy and extreme searches every time they fly, prevented from flying, or arrested.
EPIC FOIA Notes #11: No-Bid Contracts Go to Vendors with Close Ties to Election Advisory Group
From a security perspective, this seems like a really bad idea.
This is fascinating:
Among the personal papers bequeathed to the nation by former Prime Minister David Lange is a numbered copy of a top secret report from the organisation that runs the 'spy domes' at Waihopai and Tangimoana. It provides an unprecedented insight into how espionage was conducted 20 years ago.
If you have a moment, take this survey.
This research project seeks to understand how secrecy and openness can be balanced in the analysis and alerting of security vulnerabilities to protect critical national infrastructures. To answer this question, this thesis will investigate:
This looks interesting.
It's Friday, so why not something a little silly?
This is a good start:
i'm reading about how to survive a robot uprising. i'm not gonna give away all the secrets, but i'll share a few...
Surely we can do better. Any other suggestions?
EDITED TO ADD (1/30): Okay, it was Tuesday.
Super Cipher P2P Messenger uses "unbreakable Infinity bit Triple Layer Socket Encryption for completely secure communication."
Wow. That sure sounds secure.
EDITED TO ADD (2/15): More humor from their website:
Combining today's most advanced encryption techniques, and expanding on them. The maximum encryption cipher size is Infinity! Which means each bit of your file or message is encrypted uniquely, with no repetition. You define a short key in the program, this key is used in an algorithm to generate the Random Infinity bit Triple Cipher. Every time you send a message or file, even if it is exactly the same, the Triple Cipher completely changes; hence then name 'Random'. Using this method a hackers chances of decoding your messages or file is one to infinity. In fact, I challenge anyone in the world to try and break a single encrypted message; because it can't be done. Brute Force and pattern searching will never work. The Encryption method Super Cipher P2P Messenger uses is unbreakable.
Interesting article on how the French utilize domestic spying as a counterterrorism tool:
In the French system, an investigating judge is the equivalent of an empowered U.S. prosecutor. The judge is in charge of a secret probe, through which he or she can file charges, order wiretaps, and issue warrants and subpoenas. The conclusions of the judge are then transmitted to the prosecutor's office, which decides whether to send the case to trial. The antiterrorist magistrates have even broader powers than their peers. For instance, they can request the assistance of the police and intelligence services, order the preventive detention of suspects for six days without charge, and justify keeping someone behind bars for several years pending an investigation. In addition, they have an international mandate when a French national is involved in a terrorist act, be it as a perpetrator or as a victim. As a result, France today has a pool of specialized judges and investigators adept at dismantling and prosecuting terrorist networks.
This is a great use of massively parallel computing:
The 700 campus computers are part of an international grid called PrimeNet, consisting of 70,000 networked computers in virtually every time zone of the world. PrimeNet organizes the parallel number crunching to create a virtual supercomputer running 24x7 at 18 trillion calculations per second, or 'teraflops.' This greatly accelerates the search. This prime, found in just 10 months, would have taken 4,500 years on a single PC.
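A quick sanity check of the quoted figures (rough arithmetic on the article's own numbers):

```python
# How much faster was the distributed search than a single PC,
# according to the article's figures?
single_pc_years = 4500   # estimated time on one PC
actual_months = 10       # time the grid actually took

speedup = single_pc_years * 12 / actual_months
print(speedup)  # 5400.0

# Note the effective speedup (5,400x) is well below the 70,000 machines
# on PrimeNet: the nodes contribute spare cycles only, and finding this
# one prime was a fraction of the grid's total search work.
```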
This article talks about a not-a-passport ID card that U.S. citizens could use to go back and forth between the U.S. and Canada or Mexico. Pretty basic stuff, but this paragraph jumped out:
Officials said the card would be about the size of a credit card, carry a picture of the holder and cost about $50, about half the price of a passport. It will be equipped with radio frequency identification, allowing it to be read from several yards away at border crossings.
"Several yards away"? What about inches?
Way back in 1974, Paul Karger and Roger Schell discovered a devastating attack against computer systems. Ken Thompson described it in his classic 1984 speech, "Reflections on Trusting Trust." Basically, an attacker changes a compiler binary to produce malicious versions of some programs, INCLUDING ITSELF. Once this is done, the attack perpetuates, essentially undetectably. Thompson demonstrated the attack in a devastating way: he subverted a compiler of an experimental victim, allowing Thompson to log in as root without using a password. The victim never noticed the attack, even when they disassembled the binaries -- the compiler rigged the disassembler, too.
This attack has long been part of the lore of computer security, and everyone knows that there's no defense. And that makes this paper by David A. Wheeler so interesting. It's "Countering Trusting Trust through Diverse Double-Compiling," and here's the abstract:
An Air Force evaluation of Multics, and Ken Thompson's famous Turing award lecture "Reflections on Trusting Trust," showed that compilers can be subverted to insert malicious Trojan horses into critical software, including themselves. If this attack goes undetected, even complete analysis of a system's source code will not find the malicious code that is running, and methods for detecting this particular attack are not widely known. This paper describes a practical technique, termed diverse double-compiling (DDC), that detects this attack and some unintended compiler defects as well. Simply recompile the purported source code twice: once with a second (trusted) compiler, and again using the result of the first compilation. If the result is bit-for-bit identical with the untrusted binary, then the source code accurately represents the binary. This technique has been mentioned informally, but its issues and ramifications have not been identified or discussed in a peer-reviewed work, nor has a public demonstration been made. This paper describes the technique, justifies it, describes how to overcome practical challenges, and demonstrates it.
To see how this works, look at the attack. In a simple form, the attacker modifies the compiler binary so that whenever some targeted security code like a password check is compiled, the compiler emits the attacker's backdoor code in the executable.
Now, this would be easy to get around by just recompiling the compiler. Since that will be done from time to time as bugs are fixed or features are added, a more robust form of the attack adds a step: Whenever the compiler is itself compiled, it emits the code to insert malicious code into various programs, including itself.
Assuming the compiler source is updated over time but never completely rewritten, this attack is undetectable.
Wheeler explains how to defeat this more robust attack. Suppose we have two completely independent compilers: A and T. More specifically, we have source code SA of compiler A, and executable code EA and ET. We want to determine if the binary of compiler A -- EA -- contains this trusting trust attack.
Here's Wheeler's trick:
Step 1: Compile SA with EA, yielding new executable X.
Step 2: Compile SA with ET, yielding new executable Y.
Since X and Y were generated by two different compilers, they should have different binary code but be functionally equivalent. So far, so good. Now:
Step 3: Compile SA with X, yielding new executable V.
Step 4: Compile SA with Y, yielding new executable W.
Since X and Y are functionally equivalent, V and W should be bit-for-bit equivalent.
And that's how to detect the attack. If EA is infected with the robust form of the attack, then X and Y will be functionally different. And if X and Y are functionally different, then V and W will be bitwise different. So all you have to do is to run a binary compare between V and W; if they're different, then EA is infected.
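To see why the bit comparison works, here's a toy simulation of the four steps in Python. The `Binary` model and all the names are mine, not Wheeler's: a "binary" records which compiler's code generator produced it (its bit pattern), which source it implements (its behavior), and whether the backdoor rode along.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Binary:
    codegen: str     # code-generation style of the compiler that emitted it (the "bits")
    implements: str  # the source it implements (its behavior)
    infected: bool   # does it carry the self-propagating backdoor?

def compile_with(compiler: Binary, source: str) -> Binary:
    # The backdoor's trigger: whenever compiler A's own source is recompiled,
    # an infected compiler silently re-inserts the backdoor into the output.
    propagate = compiler.infected and source == "SA"
    return Binary(codegen=compiler.implements, implements=source, infected=propagate)

def ddc_detects_attack(EA: Binary, ET: Binary) -> bool:
    X = compile_with(EA, "SA")  # step 1: compile SA with the compiler under test
    Y = compile_with(ET, "SA")  # step 2: compile SA with the trusted compiler
    V = compile_with(X, "SA")   # step 3
    W = compile_with(Y, "SA")   # step 4
    # X and Y differ bitwise (different codegen) but, if EA is clean, they are
    # functionally equivalent -- so V and W come out bit-for-bit identical.
    return V != W               # any bitwise difference means EA is infected

clean_EA    = Binary(codegen="A", implements="SA", infected=False)
infected_EA = Binary(codegen="A", implements="SA", infected=True)
ET          = Binary(codegen="T", implements="ST", infected=False)

assert not ddc_detects_attack(clean_EA, ET)  # clean compiler: V == W
assert ddc_detects_attack(infected_EA, ET)   # infected compiler: V != W
```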
Now you might read this and think: "What's the big deal? All I need to test if I have a trusted compiler is...another trusted compiler. Isn't it turtles all the way down?"
Not really. You do have to trust a compiler, but you don't have to know beforehand which one you must trust. If you have the source code for compiler T, you can test it against compiler A. Basically, you still have to have at least one executable compiler you trust. But you don't have to know which one you should start trusting.
And the definition of "trust" is much looser. This countermeasure will only fail if both A and T are infected in exactly the same way. The second compiler can be malicious; it just has to be malicious in some different way: i.e., it can't have the same triggers and payloads as the first. You can greatly increase the odds that the triggers/payloads are not identical by increasing diversity: using a compiler from a different era, on a different platform, without a common heritage, transforming the code, etc.
Also, the only thing compiler T has to do is compile the compiler-under-test. It can be hideously slow, produce code that is hideously slow, or only run on a machine that hasn't been produced in a decade. You could create a compiler specifically for this task. And if you're really worried about "turtles all the way down," you can write compiler T yourself for a computer you built yourself from vacuum tubes that you made yourself. Since compiler T only has to occasionally recompile your "real" compiler, you can impose a lot of restrictions that you would never accept in a typical production-use compiler. And you can periodically check compiler T's integrity using every other compiler out there.
For more detailed information, see Wheeler's website.
Now, this technique only detects when the binary doesn't match the source, so someone still needs to examine the compiler source code. But now you only have to examine the source code (a much easier task), not the binary.
It's interesting: the "trusting trust" attack has actually gotten easier over time, because compilers have gotten increasingly complex, giving attackers more places to hide their attacks. Here's how you can use a simpler compiler -- that you can trust more -- to act as a watchdog on the more sophisticated and more complex compiler.
"The Squid Seller's Call" by Matsuo Basho (1644-1694):
The squid seller's call
mingles with the voice
of the cuckoo.
Translated by Robert Hass.
According to The New Scientist:
THE US Department of Defense has revealed plans to develop a lie detector that can be used without the subject knowing they are being assessed. The Remote Personnel Assessment (RPA) device will also be used to pinpoint fighters hiding in a combat zone, or even to spot signs of stress that might mark someone out as a terrorist or suicide bomber.
"Revealed plans" is a bit of an overstatement. It seems that they're just asking for proposals:
In a call for proposals on a DoD website, contractors are being given until 13 January to suggest ways to develop the RPA, which will use microwave or laser beams reflected off a subject's skin to assess various physiological parameters without the need for wires or skin contacts. The device will train a beam on "moving and non-cooperative subjects", the DoD proposal says, and use the reflected signal to calculate their pulse, respiration rate and changes in electrical conductance, known as the "galvanic skin response". "Active combatants will in general have heart, respiratory and galvanic skin responses that are outside the norm," the website says.
The DoD asks for pie-in-the-sky stuff all the time. For example, they've wanted a synthetic blood substitute for decades. A surreptitious lie detector would be pretty neat.
This seems like a really important development: an anonymous operating system:
Titled Anonym.OS, the system is a type of disc called a "live CD" -- meaning it's a complete solution for using a computer without touching the hard drive. Developers say Anonym.OS is likely the first live CD based on the security-heavy OpenBSD operating system.
Get yours here.
See also this Slashdot thread.
Lots of details I didn't know.
Today is the 20th Anniversary of the oldest computer virus known: the Brain virus.
It was a boot sector virus, and spread via infected floppy disks.
EDITED TO ADD (1/19): F-Secure has some amusing comments.
EDITED TO ADD (1/30): As many people pointed out, Brain is not the first computer virus. It's the first PC virus.
Great story illustrating how criminals adapt to security measures.
The notes were all $5 bills that had been bleached and altered to look like $100 bills, sheriff's investigators said. They passed muster with the pen because it determines only whether the paper used to manufacture the currency is legitimate, Bandy said.
As a security measure, the merchants use a chemical pen that determines if the bills are counterfeit. But that's not exactly what the pen does. The pen only verifies that the paper is legitimate. The criminals successfully exploited this security hole.
From the Scientific American essay "Murdercide: Science unravels the myth of suicide bombers":
Another method [of reducing terrorism], says Princeton University economist Alan B. Krueger, is to increase the civil liberties of the countries that breed terrorist groups. In an analysis of State Department data on terrorism, Krueger discovered that "countries like Saudi Arabia and Bahrain, which have spawned relatively many terrorists, are economically well off yet lacking in civil liberties. Poor countries with a tradition of protecting civil liberties are unlikely to spawn suicide terrorists. Evidently, the freedom to assemble and protest peacefully without interference from the government goes a long way to providing an alternative to terrorism." Let freedom ring.
This seems obvious to me.
Found on John Quarterman's blog.
All of that extra-legal NSA eavesdropping resulted in a whole lot of dead ends.
In the anxious months after the Sept. 11 attacks, the National Security Agency began sending a steady stream of telephone numbers, e-mail addresses and names to the F.B.I. in search of terrorists. The stream soon became a flood, requiring hundreds of agents to check out thousands of tips a month.
Surely this can't be a surprise to anyone? And as I've been arguing for years, programs like this divert resources from real investigations.
President Bush has characterized the eavesdropping program as a "vital tool" against terrorism; Vice President Dick Cheney has said it has saved "thousands of lives."
A lot of this article reads like a turf war between the NSA and the FBI, but the "inside baseball" aspects are interesting.
EDITED TO ADD (1/18): Jennifer Granick has written on the topic.
The U.S. government's Department of Homeland Security plans to spend $1.24 million over three years to fund an ambitious software auditing project aimed at beefing up the security and reliability of several widely deployed open-source products.
I think this is a great use of public funds. One of the limitations of open-source development is that it's hard to fund tools like Coverity. And this kind of thing improves security for a lot of different organizations against a wide variety of threats. And it increases competition with Microsoft, which will force them to improve their OS as well. Everybody wins.
In addition to Linux, Apache, MySQL and Sendmail, the project will also pore over the code bases for FreeBSD, Mozilla, PostgreSQL and the GTK (GIMP Tool Kit) library.
And from ZDNet:
The list of open-source projects that Stanford and Coverity plan to check for security bugs includes Apache, BIND, Ethereal, KDE, Linux, Firefox, FreeBSD, OpenBSD, OpenSSL and MySQL, Coverity said.
Today is Ben Franklin's 300th birthday. Among many other discoveries and inventions, Franklin worked out a way of protecting buildings from lightning strikes, by providing a conducting path to ground -- outside a building -- from one or more pointed rods high atop the structure. People tried this, and it worked. Franklin became a celebrity, not just among "electricians," but among the general public.
An article in this month's issue of Physics Today has a great 1769 quote by Franklin about lightning rods, and the reality vs. the feeling of security:
Those who calculate chances may perhaps find that not one death (or the destruction of one house) in a hundred thousand happens from that cause, and that therefore it is scarce worth while to be at any expense to guard against it. But in all countries there are particular situations of buildings more exposed than others to such accidents, and there are minds so strongly impressed with the apprehension of them, as to be very unhappy every time a little thunder is within their hearing; it may therefore be well to render this little piece of new knowledge as general and well understood as possible, since to make us safe is not all its advantage, it is some to make us easy. And as the stroke it secures us from might have chanced perhaps but once in our lives, while it may relieve us a hundred times from those painful apprehensions, the latter may possibly on the whole contribute more to the happiness of mankind than the former.
One problem with cameras is that you can't trust the watchers not to misuse them:
Two council CCTV camera operators have been jailed for spying on a naked woman in her own home.
Also, The Register reported on this.
Reuters is reporting that Customs and Border Protection is opening international mail coming into the U.S. without warrant.
Sadly, this is legal.
Congress passed a trade act in 2002, 107 H.R. 3009, that expanded the Customs Service's ability to open international mail. Here's the beginning of Section 344:
(1) In general.--For purposes of ensuring compliance with the Customs laws of the United States and other laws enforced by the Customs Service, including the provisions of law described in paragraph (2), a Customs officer may, subject to the provisions of this section, stop and search at the border, without a search warrant, mail of domestic origin transmitted for export by the United States Postal Service and foreign mail transiting the United States that is being imported or exported by the United States Postal Service.
If I remember correctly, the ACLU was able to temper the amendment, and this language is better than what the government originally wanted.
Domestic First Class mail is still private; the police need a warrant to open it. But there is a lower standard for Media Mail and the like, and a lower standard for "mail covers": the practice of collecting address information from the outside of the envelope.
It's from last September, but it's the biggest giant squid news in years -- a live giant squid caught on camera:
In their efforts to photograph the huge cephalopod, Tsunemi Kubodera and Kyoichi Mori have been using a camera and depth recorder attached to a long-line, which they lower into the sea from their research vessel.
See also this article from Nature.
According to the Associated Press:
State motor vehicle officials nationwide who will have to carry out the Real ID Act say its authors grossly underestimated its logistical, technological and financial demands.
I've already written about REAL ID, including the obscene costs:
REAL ID is expensive. It's an unfunded mandate: the federal government is forcing the states to spend their own money to comply with the act. I've seen estimates that the cost to the states of complying with REAL ID will be $120 million. That's $120 million that can't be spent on actual security.
According to the AP, I was way off:
Pennsylvania alone estimated a hit of up to $85 million. Washington state projected at least $46 million annually in the first several years.
Remember, security is a trade-off. REAL ID is a bad idea primarily because the security gained is not worth the enormous expense.
See also the ACLU's site on REAL ID.
In Beyond Fear, I wrote about the difficulty of verifying credentials. Here's a real story about that very problem:
When Frank Coco pulled over a 24-year-old carpenter for driving erratically on Interstate 55, Coco was furious. Coco was driving his white Chevy Caprice with flashing lights and had to race in front of the young man and slam on his brakes to force him to stop.
There's no obvious way to solve this. This is some of what I wrote in Beyond Fear:
Authentication systems suffer when they are rarely used and when people aren't trained to use them.
And that's precisely where Kelly makes his mistake. The problem isn't anonymity; it's accountability. If someone isn't accountable, then knowing his name doesn't help. If you have someone who is completely anonymous, yet just as completely accountable, then -- heck, just call him Fred.
Please read the whole thing before you comment.
From The Times:
Residents of a trendy London neighbourhood are to become the first in Britain to receive "Asbo TV" -- television beamed live to their homes from CCTV cameras on the surrounding streets.
Someone knows what the deal is here:
"The CCTV element is part curiosity, like a 21st-century version of Big Brother, and partly about security," said Atul Hatwell, of the Shoreditch Digital Bridge project.
Certainly this kind of system can be abused, but my guess is that worrying about this is kind of silly:
Andrew Duff, a Conservative councillor, raised concerns about the system being adopted by burglars to check unoccupied properties. "It could be used by dishonest people as well," he said.
My guess is that this sort of system will reduce the crime rate, as criminals move to neighborhoods without these sorts of systems. But once everyone has this sort of system, criminals will adapt and the crime rate will return to its original rate.
Meanwhile, everybody loses more privacy.
The government is already thinking about security checks for space tourists.
According to the BBC:
It has recommended security checks similar to those for airline passengers.
Here's the FAA draft.
Last Thursday, President Bush signed into law a prohibition on posting annoying Web messages or sending annoying e-mail messages without disclosing your true identity.
What does this mean for the comment section of this blog? Or any blog? Or Usenet?
More importantly, what does it mean for our society when obviously stupid laws like this get passed, and we have to rely on the police being nice enough to not enforce them?
EDITED TO ADD (1/9): Some commenters to BoingBoing clarify the legal issues. This is from an anonymous attorney:
The anonymous harassment provision is the old telephone-annoyance statute that has been on the books for decades. It was updated in the widely (and in many respects deservedly) ridiculed Communications Decency Act to include new technologies, and the cases make clear its applicability to Internet communications. See, e.g., ACLU v. Reno, 929 F. Supp. 824, 829 n.5 (E.D. Pa. 1996) (text here), aff'd, 521 U.S. 824 (1997). Unlike the indecency provisions of the CDA, this scope update was not invalidated in the courts and remains fully effective.
Interested in who your spouse is talking to? Your boss? A celebrity? A politician?
The Chicago Police Department is warning officers their cell phone records are available to anyone -- for a price. Dozens of online services are selling lists of cell phone calls, raising security concerns among law enforcement and privacy experts....
EDITED TO ADD (1/9): More information on BoingBoing.
EDITED TO ADD (1/9): Also see this on EPIC West.
EDITED TO ADD (1/14): Daniel Solove has some good commentary.
A squid that cares for its young:
But a team of ocean scientists exploring the inky depths of the Monterey Canyon off California has discovered that at least one squid species cares for its young with loving attention, the mother cradling the eggs in her arms for months, waving her tentacles to bathe the eggs in fresh seawater. The scientists suspect that other species are doting parents, too, and that misperceptions about squid behavior have arisen because the deep is so poorly explored.
Be careful what you write in your journal:
An airline passenger with the words "suicide bomber" written in his journal was arrested when his plane arrived in San Jose, California, on Wednesday, but the words appeared to refer to music and he was later released, officials said.
I'm not sure I want "Suicide Bombers" displayed on my iPod. I certainly wouldn't want to be in a band with that name, flying around the country with crates of gear marked "Suicide Bombers." That would be asking for trouble.
On the other hand, it's pretty sad what is enough to get you arrested these days:
"A male was observed by his fellow passengers as having a journal and handwritten on the journal were the words 'suicide bomber,'" FBI spokeswoman LaRae Quy said.
My guess is that it wouldn't matter how he held his backpack; once the jittery passenger saw the words everything else was interpreted suspiciously.
Here's an impressive piece of common sense:
Among the 15 bills Governor Jim Doyle signed into law on Wednesday is one that will require the software of touch-screen voting machines used in elections to be open-source.
Promachoteuthis sloani is a new type of squid, discovered in 2006 in the Mid-Atlantic Ridge.
He's against it:
More anonymity is good: that's a dangerous idea.
I don't even know where to begin. Anonymity is essential for free and fair elections. It's essential for democracy and, I think, liberty. It's essential to privacy in a large society, and so it is essential to protect the rights of the minority against the tyranny of the majority...and to protect individual self-respect.
Kelly makes the very valid point that reputation makes society work. But that doesn't mean that 1) reputation can't be anonymous, or 2) anonymity isn't also essential for society to work.
I'm writing an essay on this for Wired News. Comments and arguments, pro or con, are appreciated.
Now, imagine the false alarms and abuses that are possible if you have lots more data, and lots more computers to slice and dice it.
Of course, there are applications where this sort of data mining makes a whole lot of sense. But finding terrorists isn't one of them. It's a needle-in-a-haystack problem, and piling on more hay doesn't help matters much.
This is an interesting demonstration project: a hand-held device that disables passive RFID tags.
There are several ways to deactivate RFID-Tags. One that might be offered by industry is the RFID-deactivator, which will send the RFID-Tag to sleep. A problem with this method is that it is not permanent; the RFID-Tag can be reactivated (probably without your knowledge). Several ways of permanently deactivating RFID-Tags are known, e.g. cutting the antenna off the actual microchip, or overloading and literally frying the RFID-Tag by placing it in a common microwave-oven for even very short periods of time. Unfortunately both methods aren't suitable for the destruction of RFID-Tags in clothes: cutting off the antenna would require damaging the piece of cloth, while frying the chips is likely to cause a short but potent flame, which would damage most textiles or even set them on fire.
An obvious application would be to disable the RFID chip on your passport, but this kind of thing will probably be more popular with professional shoplifters.
According to the report, the Treasury Department says that cyber crime has now outgrown illegal drug sales in annual proceeds, netting an estimated $105 billion in 2004.
And its Top Ten Issues to Watch in 2006:
More information on each item behind the link. I don't think the lists are in any order.