Schneier on Security
A blog covering security and security technology.
December 2006 Archives
To the reader out there that got me a PowerSquid: thank you.
(And here's a home-made version.)
A review of Kim:
Kipling packed a great deal of information and concept into his stories, and in "Kim" we find The Great Game: espionage and spying. Within the first twenty pages we have authentication by something you have, denial of service, impersonation, stealth, masquerade, role-based authorization (with ad hoc authentication by something you know), eavesdropping, and trust based on data integrity. Later on we get contingency planning against theft and cryptography with key changes.
This is a big deal. AACS (Advanced Access Content System), the copy protection used in both Blu-ray and HD DVD, might have been cracked -- but it's still a rumor.
If it's true, what will be interesting is the system's in-the-field recovery system. Will it work?
Hypothetical fallout could be something like this: if PowerDVD is the source of the keys, an AACS initiative will be launched to revoke the player's keys to render it inoperable and in need of an update. There is some confusion regarding this process, however. It is not the case that you can protect a cracked player by hiding it offline (the idea being that the player will never "update" with new code that way). Instead, the player's existing keys will be revoked at the disc level, meaning that new pressings of discs won't play on the cracked player. In this way, hiding a player from updates will not result in having a cracked player that will work throughout the years. It could mean that all bets are off for discs that are currently playable on the cracked player, however (provided it is not updated). Again, this is all hypothetical at this time.
Copy protection is inherently futile. The best it can be is a neverending arms race, which is why Big Media is increasingly relying on legal and social barriers.
EDITED TO ADD (12/30): An update.
EDITED TO ADD (1/3): More info from the author of the tool.
This is interesting: A Wal-Mart store in Mitchell, South Dakota receives a bomb threat. The store managers decide not to evacuate while the police search for the bomb. Presumably, they decided that the loss of revenue from an evacuation was not worth the additional security it would provide:
During the nearly two-hour search Wal-Mart officials opted not to evacuated [sic] the busy discount store even though police recomended [sic] they do so. Wal-Mart officials said the call was a hoax and not a threat.
I think this is a good sign. It shows that people are thinking rationally about security trade-offs, and not thoughtlessly being terrorized.
Remember, though: security trade-offs are based on agenda. From the perspective of the Wal-Mart managers, the store's revenues are the most important; most of the risks of the bomb threat are externalities.
Of course, the store employees have a different agenda -- there is no upside to staying open, and only a downside due to the additional risk -- and they didn't like the decision:
The incident has family members of Wal-Mart employees criticizing store officials for failing to take police’s recommendation to evacuate.
Everyone knows that writing your password on your monitor is bad security. Is it really so hard to realize that attaching your SecurID token to your computer is just as bad?
The Communications Director for Montana's Congressman Denny Rehberg solicited "hackers" to break into the computer system at Texas Christian University and change his grades (so they would look better when he eventually ran for office, I presume). The hackers posted the email exchange instead. Very funny:
First, let's be clear. You are soliciting me to break the law and hack into a computer across state lines. That is a federal offense and multiple felonies. Obviously I can't trust anyone and everyone that mails such a request, you might be an FBI agent, right?
Automobile tires are now being outfitted with RFID transmitters:
Schrader Bridgeport is the market leader in direct Tire Pressure Monitoring Systems. Direct TPMS use pressure sensors inside each tire to transmit data to a dashboard display alerting drivers to tire pressure problems.
I'll bet anything you can track cars with them, just as you can track some joggers by their sneakers.
As I said before, the people who are designing these systems are putting "zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they're evil -- just because it's easier to ignore the externality than to worry about it."
Peter Gutmann's "A Cost Analysis of Windows Vista Content Protection" is fascinating reading:
It contains stuff like:
Denial-of-Service via Driver Revocation
Read the whole thing.
And here's commentary on the paper.
Might even be good.
If you've traveled abroad recently, you've been investigated. You've been assigned a score indicating what kind of terrorist threat you pose. That score is used by the government to determine the treatment you receive when you return to the U.S. and for other purposes as well.
Curious about your score? You can't see it. Interested in what information was used? You can't know that. Want to clear your name if you've been wrongly categorized? You can't challenge it. Want to know what kind of rules the computer is using to judge you? That's secret, too. So is when and how the score will be used.
U.S. customs agencies have been quietly operating this system for several years. Called Automated Targeting System, it assigns a "risk assessment" score to people entering or leaving the country, or engaging in import or export activity. This score, and the information used to derive it, can be shared with federal, state, local and even foreign governments. It can be used if you apply for a government job, grant, license, contract or other benefit. It can be shared with nongovernmental organizations and individuals in the course of an investigation. In some circumstances private contractors can get it, even those outside the country. And it will be saved for 40 years.
Little is known about this program. Its bare outlines were disclosed in the Federal Register in October. We do know that the score is partially based on details of your flight record--where you're from, how you bought your ticket, where you're sitting, any special meal requests--or on motor vehicle records, as well as on information from crime, watch-list and other databases.
The idea of feeding a limited set of characteristics into a computer, which then somehow divines a person's terrorist leanings, is farcical. Uncovering terrorist plots requires intelligence and investigation, not large-scale processing of everyone.
Additionally, any system like this will generate so many false alarms as to be completely unusable. In 2005 Customs & Border Protection processed 431 million people. Assuming an unrealistic model that identifies terrorists (and innocents) with 99.9% accuracy, that's still 431,000 false alarms annually.
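The base-rate arithmetic behind that claim is simple enough to check directly. A minimal sketch, assuming the 99.9% accuracy figure applies independently to each of the 431 million travelers (essentially all of whom are innocent):

```python
# Back-of-the-envelope false-alarm arithmetic for a hypothetical screening
# system with 99.9% accuracy -- an unrealistically generous assumption.
travelers = 431_000_000       # people processed by Customs & Border Protection in 2005
false_positive_rate = 0.001   # 1 - 0.999: fraction of innocents flagged in error

# Even a 0.1% error rate applied to everyone yields an unmanageable pile
# of false alarms, because nearly every traveler is innocent.
false_alarms = travelers * false_positive_rate
print(f"{false_alarms:,.0f} false alarms per year")  # prints "431,000 false alarms per year"
```

This is the classic base-rate problem: when the condition you're screening for is vanishingly rare, even a tiny false-positive rate swamps the true positives.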
The number of false alarms will be much higher than that. The no-fly list is filled with inaccuracies; we've all read about innocent people named David Nelson who can't fly without hours-long harassment. Airline data, too, are riddled with errors.
The odds of this program's being implemented securely, with adequate privacy protections, are not good. Last year I participated in a government working group to assess the security and privacy of a similar program developed by the Transportation Security Administration, called Secure Flight. After five years and $100 million spent, the program still can't achieve the simple task of matching airline passengers against terrorist watch lists.
In 2002 we learned about yet another program, called Total Information Awareness, for which the government would collect information on every American and assign him or her a terrorist risk score. Congress found the idea so abhorrent that it halted funding for the program. Two years ago, and again this year, Secure Flight was also banned by Congress until it could pass a series of tests for accuracy and privacy protection.
In fact, the Automated Targeting System is arguably illegal, as well (a point several congressmen made recently); all recent Department of Homeland Security appropriations bills specifically prohibit the department from using profiling systems against persons not on a watch list.
There is something un-American about a government program that uses secret criteria to collect dossiers on innocent people and shares that information with various agencies, all without any oversight. It's the sort of thing you'd expect from the former Soviet Union or East Germany or China. And it doesn't make us any safer from terrorism.
Here's an odd division of labor: a corporate data consultant argues for more openness, while a journalist favors more secrecy.
It's only odd if you don't understand security.
Two men have been issued Virginia drivers' licenses even though they were wearing outlandish disguises when they had their pictures taken at the Department of Motor Vehicles:
Will Carsola and Dave Stewart posted Internet videos of their pranks, which included scenes of Carsola spray-painting his face and neck bright red and Stewart painting the top of his head black and sticking a row of fake buckteeth in his mouth in an Asian caricature. They each enter the DMV office and return with real licenses with photos of their new likenesses.
The Virginia DMV is now demanding that the two come back and get real pictures taken.
I never thought I would say this, but I agree with everything Michelle Malkin says on this issue:
These guys have done the Virginia DMV--and the nation-- a big favor. Many of us have tried to argue how much of a joke these agencies and our homeland security remain after 9/11--particularly the issuance of driver's licenses (it was the Virginia DMV that issued state photo ID to several 9/11 hijackers who were aided by illegal aliens).
I honestly don't know if she realizes that REAL ID won't solve this kind of problem, though. Nor will it solve the problem of people getting legitimate IDs in the names of people whose identity they stole, or real IDs in fake names by bribing DMV employees. (Several of the 9/11 hijackers did this, in Virginia.)
The TSA website is a fascinating place to spend some time wandering around. They have rules for handling monkeys:
TSOs have been trained to not touch the monkey during the screening process.
And snow globes are prohibited in carry-on luggage:
Snow globes regardless of size or amount of liquid inside, even with documentation, are prohibited in your carry-on. Please ship these items or pack them in your checked baggage.
Ho ho ho, everyone.
Microsoft has a new anti-phishing service in Internet Explorer 7 that will turn the address bar green and display the website owner's identity when surfers visit on-line merchants previously approved as legitimate. So far, so good. But the service is only available to corporations: not to sole proprietorships, partnerships, or individuals.
Of course, if a merchant's bar doesn't turn green it doesn't mean that they're bad. It'll be white, which indicates "no information." There are also yellow and red indications, corresponding to "suspicious" and "known fraudulent site." But small businesses are worried that customers will be afraid to buy from non-green sites.
That's possible, but it's more likely that users will learn that the marker isn't reliable and start to ignore it.
Any white-list system like this has two sources of error. False positives, where phishers get the marker. And false negatives, where legitimate honest merchants don't. Any system like this has to effectively deal with both.
EDITED TO ADD (12/21): Research paper: "Phinding Phish: An Evaluation of Anti-Phishing Toolbars," by L. Cranor, S. Egelman, J. Hong, and Y. Zhang.
The stories keep getting better. Here's someone who climbs a fence at the Raleigh-Durham Airport, boards a Delta plane, and hangs out for a bunch of hours.
Best line of the article:
"It blows my mind that you can't get 3.5 ounces of toothpaste on a plane," he said, "yet somebody can sneak on a plane and take a nap."
Exactly. We're spending millions enhancing passenger screening -- new backscatter X-ray machines, confiscating liquids -- and we ignore the other, less secure, paths onto airplanes. It's idiotic, that's what it is.
Don't do this.
It's getting mainstream attention; here's an article from the BBC.
Good article on airport security and the TSA. Matt Blaze and I got some really good quotes.
BTW, regularly people chastise me for complaining about airline security but not offering any solutions. I generally send those people to the last two paragraphs of this article.
In the information age, surveillance isn't just for the police. Marketers want to watch you, too: what you do, where you go, what you buy. Integrated Media Measurement, Inc. wants to know what you watch and what you listen to -- wherever you are.
They do this by turning traditional ratings collection on its head. Instead of a Nielsen-like system, which monitors individual televisions in an effort to figure out who's watching, IMMI measures individual people and tries to figure out what they're watching (or listening to). They do this through specially designed cell phones that automatically eavesdrop on what's going on in the room they're in:
The IMMI phone randomly samples 10 seconds of room audio every 30 seconds. These samples are reduced to digital signatures, which are uploaded continuously to the IMMI servers.
These cell phones are given away to test subjects, who get free service in exchange for giving up all their privacy.
I'm sure the company will claim not to actually eavesdrop on in-room conversations, or cell phone conversations. And just how different are these special phones, anyway? Can the software be installed on off-the-shelf phones? Can it be done without the owner's knowledge or consent? The potential for abuse here is enormous.
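IMMI's actual signature algorithm isn't public, but the data flow described in the excerpt -- sample a few seconds of room audio, reduce it to a compact digest, upload only the digest -- can be sketched. Note the hedge in the comments: a real system would use a noise-robust acoustic fingerprint, not a cryptographic hash; this only illustrates the pipeline, and the sample data is a stand-in.

```python
import hashlib

def audio_signature(samples: bytes) -> str:
    """Reduce a raw audio sample to a short digest for server-side matching.

    IMMI's real algorithm is not public; an actual matcher would need a
    noise-tolerant acoustic fingerprint, since a cryptographic hash changes
    completely with any background noise. This only shows the shape of the
    pipeline: ~10 seconds of room audio in, a tiny signature out.
    """
    return hashlib.sha256(samples).hexdigest()[:16]

# Every 30 seconds the phone captures ~10 s of audio and uploads only
# the signature -- not the raw recording.
chunk = b"\x00\x01" * 80_000  # hypothetical stand-in for 10 s of captured audio
print(audio_signature(chunk))
```

Of course, "we only upload signatures" is exactly the kind of privacy claim that depends entirely on what the software actually does -- which is the point of the questions above.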
Remember, the threats to privacy in the information age are not solely from government; they're from private industry as well. And the real threat is the alliance between the two.
Scary story of someone who was told by his bank that he's no longer welcome as a customer, because the bank's computer noticed a deposit that wasn't "normal."
After two written complaints and a phone call to customer services, a member of the "Team" finally contacted me. She enquired about a single international deposit into my account, which I then explained to be my study grant for the coming year. Upon this explanation I was told that the bank would not close my account, and I was given a vague explanation of them not expecting students to get large deposits. I found this strange, since it had not been a problem in previous years, and even stranger since my deposit had cleared into my account two days after the letter was sent. In terms of recent "suspicious" transactions, this left only two recent international deposits: one from my parents overseas and one from my savings, neither of which could be classified as large. I'm not an expert on complex behavioural analysis networks and fraud detection within banking systems, but would expect that study grants and family support are not unexpected for students? Moreover, rather than this being an isolated incident, it would seem that HSBC's "account review" affected a number of people within our student community, some of whom might choose not to question the decision and may be left without bank accounts. This should raise questions about the effectiveness of their fraud detection system, or possibly a flawed behaviour model for a specific demographic.
Expect more of this kind of thing as computers continue to decide who is normal and who is not.
No, really. Makes a fine Christmas gift.
Gary McGraw interviewed me for his Silver Bullet Security Podcast.
An old trick, but a good story:
Everyone thought the doors were incredibly cool. Oh, and they were. Upon entering a secure area (that is, anywhere except the lobby), one simply waved his RFID-enabled access card across the sensor and the doors slid open almost instantly. When leaving an area, motion detectors automatically opened up the doors. The only thing that was missing was the cool "whoosh" noise and an access panel that could be shot with a phaser to permanently seal or, depending on the plot, automatically open the door. Despite that flaw, the doors just felt secure.
It seems to be the season for cybercrime hype. First, we have this article from CNN, which seems to have no actual news:
Computer hackers will open a new front in the multi-billion pound "cyberwar" in 2007, targeting mobile phones, instant messaging and community Web sites such as MySpace, security experts predict.
And next, this article, which claims that criminal organizations are paying student members to get IT degrees:
The most successful cyber crime gangs were based on partnerships between those with the criminal skills and contacts and those with the technical ability, said Mr Day.
Criminals targeting children -- that's sure to peg anyone's hype-meter.
To be sure, I don't want to minimize the threat of cybercrime. Nor do I want to minimize the threat of organized cybercrime. There are more and more criminals prowling the net, and more and more cybercrime has gone up the food chain -- to large organized crime syndicates. Cybercrime is big business, and it's getting bigger.
But I'm not sure if stories like these help or hurt.
How good are the passwords people are choosing to protect their computers and online accounts?
It's a hard question to answer because data is scarce. But recently, a colleague sent me some spoils from a MySpace phishing attack: 34,000 actual user names and passwords.
The attack was pretty basic. The attackers created a fake MySpace login page, and collected login information when users thought they were accessing their own account on the site. The data was forwarded to various compromised web servers, where the attackers would harvest it later.
MySpace estimates that more than 100,000 people fell for the attack before it was shut down. The data I have is from two different collection points, and was cleaned of the small percentage of people who realized they were responding to a phishing attack. I analyzed the data, and this is what I learned.
Password Length: While 65 percent of passwords contain eight characters or less, 17 percent are made up of six characters or less. The average password is eight characters long.
Specifically, the length distribution looks like this:
Yes, there's a 32-character password: "1ancheste23nite41ancheste23nite4." Other long passwords are "fool2thinkfool2thinkol2think" and "dokitty17darling7g7darling7."
Character Mix: While 81 percent of passwords are alphanumeric, 28 percent are just lowercase letters plus a single final digit -- and two-thirds of those have the single digit 1. Only 3.8 percent of passwords are a single dictionary word, and another 12 percent are a single dictionary word plus a final digit -- once again, two-thirds of the time that digit is 1.
Only 0.34 percent of users have the user name portion of their e-mail address as their password.
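The kind of classification behind these statistics is straightforward to reproduce. A minimal sketch, run on a short made-up list rather than the actual leaked data:

```python
import re

# Hypothetical stand-ins for leaked passwords; the real dataset had 34,000.
passwords = ["password1", "monkey", "blink182", "Soccer", "jordan23"]

# Lowercase letters plus a single final digit -- the pattern that covered
# 28 percent of the real sample.
lowercase_plus_digit = [p for p in passwords if re.fullmatch(r"[a-z]+\d", p)]

# Alphanumeric passwords (letters and/or digits only, no symbols).
alphanumeric = [p for p in passwords if p.isalnum()]

avg_length = sum(len(p) for p in passwords) / len(passwords)

print(lowercase_plus_digit)                            # ['password1']
print(len(alphanumeric), "of", len(passwords), "are alphanumeric")
print(f"average length: {avg_length:.1f}")
```

On the real data you'd also check each password against a dictionary word list, which is how the 3.8 percent single-dictionary-word figure was obtained.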
Common Passwords: The top 20 passwords are (in order): password1, abc123, myspace1, password, blink182, qwerty1, fuckyou, 123abc, baseball1, football1, 123456, soccer, monkey1, liverpool1, princess1, jordan23, slipknot1, superman1, iloveyou1 and monkey. (Different analysis here.)
The most common password, "password1," was used in 0.22 percent of all accounts. The frequency drops off pretty fast after that: "abc123" and "myspace1" were only used in 0.11 percent of all accounts, "soccer" in 0.04 percent and "monkey" in 0.02 percent.
For those who don't know, Blink 182 is a band. Presumably lots of people use the band's name because it has numbers in its name, and therefore it seems like a good password. The band Slipknot doesn't have any numbers in its name, which explains the 1. The password "jordan23" refers to basketball player Michael Jordan and his number. And, of course, "myspace" and "myspace1" are easy-to-remember passwords for a MySpace account. I don't know what the deal is with monkeys.
We used to quip that "password" is the most common password. Now it's "password1." Who said users haven't learned anything about security?
But seriously, passwords are getting better. I'm impressed that less than 4 percent were dictionary words and that the great majority were at least alphanumeric. Writing in 1989, Daniel Klein was able to crack (.gz) 24 percent of his sample passwords with a small dictionary of just 63,000 words, and found that the average password was 6.4 characters long.
And in 1992 Gene Spafford cracked (.pdf) 20 percent of passwords with his dictionary, and found an average password length of 6.8 characters. (Both studied Unix passwords, with a maximum length at the time of 8 characters.) And they both reported a much greater percentage of all lowercase, and only upper- and lowercase, passwords than emerged in the MySpace data. The concept of choosing good passwords is getting through, at least a little.
On the other hand, the MySpace demographic is pretty young. Another password study (.pdf) in November looked at 200 corporate employee passwords: 20 percent letters only, 78 percent alphanumeric, 2.1 percent with non-alphanumeric characters, and a 7.8-character average length. Better than 15 years ago, but not as good as MySpace users. Kids really are the future.
None of this changes the reality that passwords have outlived their usefulness as a serious security device. Over the years, password crackers have been getting faster and faster. Current commercial products can test tens -- even hundreds -- of millions of passwords per second. At the same time, there's a maximum complexity to the passwords average people are willing to memorize (.pdf). Those lines crossed years ago, and typical real-world passwords are now software-guessable. AccessData's Password Recovery Toolkit -- at 200,000 guesses per second -- would have been able to crack 23 percent of the MySpace passwords in 30 minutes, 55 percent in 8 hours.
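The raw arithmetic behind that estimate is easy to verify; how many of those guesses actually hit depends on the cracker's dictionary and guess ordering, which is where the 23 and 55 percent figures come from.

```python
# At 200,000 guesses per second (the PRTK rate cited above), how many
# candidate passwords can be tried in each time window?
rate = 200_000  # guesses per second

for label, seconds in [("30 minutes", 30 * 60), ("8 hours", 8 * 3600)]:
    print(f"{label}: {rate * seconds:,} guesses")
# 30 minutes: 360,000,000 guesses
# 8 hours: 5,760,000,000 guesses
```

A few hundred million intelligently ordered guesses is more than enough to exhaust common words, names, and the word-plus-digit patterns that dominate the MySpace sample.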
Of course, this analysis assumes that the attacker can get his hands on the encrypted password file and work on it offline, at his leisure; i.e., that the same password was used to encrypt an e-mail, file or hard drive. Passwords can still work if you can prevent offline password-guessing attacks, and watch for online guessing. They're also fine in low-value security situations, or if you choose really complicated passwords and use something like Password Safe to store them. But otherwise, security by password alone is pretty risky.
This essay originally appeared on Wired.com.
Definitely worth reading:
Though data mining has many valuable uses, it is not well suited to the terrorist discovery problem. It would be unfortunate if data mining for terrorism discovery had currency within national security, law enforcement, and technology circles because pursuing this use of data mining would waste taxpayer dollars, needlessly infringe on privacy and civil liberties, and misdirect the valuable time and energy of the men and women in the national security community.
Hackers have gained access to a database containing personal information on 800,000 current and former UCLA students.
This is barely worth writing about: yet another database attack exposing personal information. My guess is that everyone in the U.S. has been the victim of at least one of these already. But there was a particular section of the article that caught my eye:
Jim Davis, UCLA's associate vice chancellor for information technology, described the attack as sophisticated, saying it used a program designed to exploit a flaw in a single software application among the many hundreds used throughout the Westwood campus.
It worries me that the associate vice chancellor for information technology doesn't understand that all attacks work like that.
Researchers at the University of Washington have demonstrated a surveillance system that automatically tracks people through the Nike+iPod Sport Kit. Basically, the kit contains a transmitter that you stick in your sneakers and a receiver you attach to your iPod. This allows you to track things like time, distance, pace, and calories burned. Pretty clever.
However, it turns out that the transmitter in your sneaker can be read up to 60 feet away. And because it broadcasts a unique ID, you can be tracked by it. In the demonstration, the researchers built a surveillance device (at a cost of about $250) and interfaced their surveillance system with Google Maps. Details are in the paper. Very scary.
This is a great demonstration for anyone who is skeptical that RFID chips can be used to track people. It's a good example because the chips have no personal identifying information, yet can still be used to track people. As long as the chips have unique IDs, those IDs can be used for surveillance.
To me, the real significance of this work is how easy it was. The people who designed the Nike/iPod system put zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they're evil -- just because it's easier to ignore the externality than to worry about it.
I wrote an essay on spam for the Forbes.com website.
EDITED TO ADD (12/12): Another essay.
I've already written about the DHS's database of top terrorist targets and how dumb it is. Important sites are not on the list, and unimportant ones are. The reason is pork, of course; states get security money based on this list, so every state wants to make sure they have enough sites on it. And over the past five years, states with Republican congressmen got more money than states without.
Here's another article on this general topic, centering around an obscure quantity: the square root of terrorist intent:
The Department of Homeland Security is the home of many mysteries. There is, of course, the color-coded system for gauging the threat of an attack. And there is the department database of national assets to protect against a terrorist threat, which includes Old MacDonald's Petting Zoo in Woodville, Ala., and the Apple and Pork Festival in Clinton, Ill.
What I said last time still applies:
We're never going to get security right if we continue to make it a parody of itself.
Absolutely fascinating paper: "A Platform for RFID Security and Privacy Administration." The basic idea is that you carry a personalized device that jams the signals from all the RFID tags on your person until you authorize otherwise.
As Cory Doctorow points out, this is potentially a way to reap the benefits of RFID without paying the cost:
Up until now, the standard answer to privacy concerns with RFIDs is to just kill them -- put your new US Passport in a microwave for a few minutes to nuke the chip. But with an RFID firewall, it might be possible to reap the benefits of RFID without the cost.
General info here. They've even built a prototype.
This is a clever hack against gift cards:
Seems they take the cards off the racks in stores and copy down the serial numbers. Later on, they check to see if the card is activated, and if the answer is yes, they go on a shopping spree from the store's website.
What's the security problem? A serial number on the cards that's visible even though the card is not activated. This could be mitigated by hiding the serial number behind a scratch-off coating, or opaque packaging.
Banks are spending millions preventing outsiders from stealing their customers' identities, but there is a growing insider threat:
Widespread outsourcing of data management and other services has exposed some weaknesses and made it harder to prevent identity theft by insiders.
Last week I wrote about the security problems of having a secret stored in a device given to your attacker, and how they are vulnerable to class breaks. I singled out DRM systems as being particularly vulnerable to this kind of security problem.
This week we have an example: The DRM in TiVoToGo has been cracked:
An open source command-line utility that converts TiVoToGo movies into an MPEG file and strips the DRM is now available online. Released under a BSD license, the utility—called TiVo File Decoder—builds on the extensive reverse engineering efforts of the TiVo hacking community. The goal of the project is to bring TiVo media viewing capabilities to unsupported platforms like OS X and the open source Linux operating system. TiVoToGo support is currently only available on Windows.
EDITED TO ADD (12/8): I have been told that TiVoToGo has not been hacked: "The decryption engine has been reverse engineered in cross-platform code - replicating what TiVo already provides customers on the Windows platform (in the form of TiVo Desktop software). Each customer's unique Media Access Key (MAK) is still needed as a *key* to decrypt content from their particular TiVo unit. I can't decrypt shows from your TiVo, and you can't decrypt shows from mine. Until someone figures out how to produce or bypass the required MAK, it hasn't been cracked."
And here's a guide to installing TiVoToGo on your Mac.
EDITED TO ADD (12/17): Log of several hackers working on the problem. Interesting.
I'll be the first to admit it: I know next to nothing about MySpace or Facebook. I do know that they're social networking sites, and that -- at least to some extent -- your reputation is based on who your "friends" are and what they say about you.
Which means that this follows, like day follows night. "Fake Your Space" is a site where you can hire fake friends to leave their pictures and personalized comments on your page. Now you can pretend that you're more popular than you actually are:
FakeYourSpace is an exciting new service that enables normal everyday people like me and you to have Hot friends on popular social networking sites such as MySpace and FaceBook. Not only will you be able to see these Gorgeous friends on your friends list, but FakeYourSpace enables you to create customized messages and comments for our Models to leave you on your comment wall. FakeYourSpace makes it easy for any regular person to make it seem like they have a Model for a friend. It doesn't stop there however. Maybe you want to appear as if you have a Model for a lover. FakeYourSpace can make this happen!
What's next? Services that verify friends on your friends' MySpace pages? Services that block friend verification services? Where will this all end up?
This is interesting. Ted Kaczynski wrote in code:
In a small journal written in code, he documented his thoughts about the crimes he was committing. That code was so difficult, a source says the CIA couldn't crack it -- until someone found the key itself among other documents, and then translated it.
Look at the photo. It was a manual, pencil-and-paper cipher. Does anyone know the details of the algorithm?
I've written about backscatter X-ray technology before. It's great for finding hidden weapons on a person, but it's also great for seeing naked images of them. The TSA is piloting this technology in Phoenix, and they're deliberately blurring the images to protect privacy.
Note that the system is being made better by making the resulting images less detailed. Excellent.
Interesting story of a British journalist buying 20 different fake EU passports. She bought a genuine Czech passport with a fake name and her real picture, a fake Latvian passport, and a stolen Estonian passport.
Despite stolen passports being registered in a central Interpol database, her Estonian passport went undetected.
Note that harder-to-forge RFID passports would only help in one instance; it's certainly not the most important problem to solve.
Also, I am somewhat suspicious of this story. I don't know about the UK laws, but in the US this would be a major crime -- and I don't think being a reporter would be an adequate defense.
I give a talk called "The Future of Privacy," where I talk about current and future technological developments that erode our privacy. One of the things I talk about is auditory eavesdropping, and I hypothesize that a cell phone microphone could be turned on surreptitiously and remotely.
I never had any actual evidence one way or the other, but the technique has surfaced in an organized crime prosecution:
The surveillance technique came to light in an opinion published this week by U.S. District Judge Lewis Kaplan. He ruled that the "roving bug" was legal because federal wiretapping law is broad enough to permit eavesdropping even of conversations that take place near a suspect's cell phone.
Seems that the technique is to download eavesdropping software into the phone:
The U.S. Commerce Department's security office warns that "a cellular telephone can be turned into a microphone and transmitter for the purpose of listening to conversations in the vicinity of the phone." An article in the Financial Times last year said mobile providers can "remotely install a piece of software on to any handset, without the owner's knowledge, which will activate the microphone even when its owner is not making a call."
EDITED TO ADD (12/12): Another article.
It's been a while since I've seen one of these sorts of news stories:
A Romanian man has been indicted on charges of hacking into more than 150 U.S. government computers, causing disruptions that cost NASA, the Energy Department and the Navy nearly $1.5 million.
Remember pretexting? It's the cute name given to...well...fraud. It's when you call someone and pretend to be someone else, in order to get information. Or when you go online and pretend to be someone else, in order to get something. There's no question in my mind that it's fraud and should be illegal, but legally it seems to be a gray area.
California is considering a bill that would make this kind of thing illegal, and allow victims to sue for damages.
Who could be opposed to this? The MPAA, that's who:
The bill won approval in three committees and sailed through the state Senate with a 30-0 vote. Then, according to Lenny Goldberg, a lobbyist for the Privacy Rights Clearinghouse, the measure encountered unexpected, last-minute resistance from the Motion Picture Association of America.
These people are looking more and more like a criminal organization every day.
EDITED TO ADD (12/11): Congress has outlawed pretexting. The law doesn't go as far as some of the state laws -- which it pre-empts -- but it's still a good thing.
Beautiful time-lapse photos of a squid, Loligo pealei, seizing its prey.
...such a splendidly baroque little carnivore.
From the Associated Press:
Without notifying the public, federal agents for the past four years have assigned millions of international travelers, including Americans, computer-generated scores rating the risk they pose of being terrorists or criminals.
Like all these systems, we are all judged in secret, by a computer algorithm, with no way to see or even challenge our score. Kafka would be proud.
"If this catches one potential terrorist, this is a success," Ahern said.
That's just too idiotic a statement to even rebut.
EDITED TO ADD (12/3): More commentary.
There's new software that can predict who is likely to become a murderer.
Using probation department cases entered into the system between 2002 and 2004, Berk and his colleagues performed a two-year follow-up study -- enough time, they theorized, for a person to reoffend if he was going to. They tracked each individual, with particular attention to the people who went on to kill. That created the model. What remains at this stage is to find a way to marry the software to the probation department's information technology system.
Pretty scary stuff, as it gets into the realm of thoughtcrime.
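The article doesn't describe Berk's actual model, but tools like this generally reduce to an actuarial score: a statistical function from case-file features to a predicted probability of reoffending. Here's a minimal sketch of that shape, assuming a logistic model; the feature names and weights are invented for illustration, not taken from the study.

```python
# Hedged sketch of an actuarial risk score -- NOT Berk's actual model.
# A trained model would supply the weights; these are made up.
import math

WEIGHTS = {
    "prior_offenses": 0.8,        # hypothetical: more priors, higher risk
    "age_at_first_arrest": -0.05, # hypothetical: later onset, lower risk
    "on_probation": 0.6,          # hypothetical indicator feature
}
BIAS = -2.0

def risk_score(features: dict[str, float]) -> float:
    """Logistic model: returns a probability in (0, 1) of reoffending."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

p = risk_score({"prior_offenses": 3, "age_at_first_arrest": 16, "on_probation": 1})
```

The scary part isn't the math, which is ordinary regression; it's that a number like `p` gets attached to a person who hasn't done anything yet.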