Friday Squid Blogging: PowerSquid
To the reader out there that got me a PowerSquid: thank you.
(And here’s a home-made version.)
A review of Kim:
Kipling packed a great deal of information and concept into his stories, and in “Kim” we find The Great Game: espionage and spying. Within the first twenty pages we have authentication by something you have, denial of service, impersonation, stealth, masquerade, role- based authorization (with ad hoc authentication by something you know), eavesdropping, and trust based on data integrity. Later on we get contingency planning against theft and cryptography with key changes.
The book is out of copyright, so you can read it online.
This is a big deal. AACS (Advanced Access Content System), the copy protection system used in both Blu-ray and HD DVD, might have been cracked—but it’s still a rumor.
If it’s true, what will be interesting is the system’s in-the-field recovery system. Will it work?
Hypothetical fallout could be something like this: if PowerDVD is the source of the keys, an AACS initiative will be launched to revoke the player’s keys to render it inoperable and in need of an update. There is some confusion regarding this process, however. It is not the case that you can protect a cracked player by hiding it offline (the idea being that the player will never “update” with new code that way). Instead, the player’s existing keys will be revoked at the disc level, meaning that new pressings of discs won’t play on the cracked player. In this way, hiding a player from updates will not result in having a cracked player that will work throughout the years. It could mean that all bets are off for discs that are currently playable on the cracked player, however (provided it is not updated). Again, this is all hypothetical at this time.
Copy protection is inherently futile. The best it can be is a neverending arms race, which is why Big Media is increasingly relying on legal and social barriers.
EDITED TO ADD (12/30): An update.
EDITED TO ADD (1/3): More info from the author of the tool.
EDITED TO ADD (1/12): Excellent multi-part analysis here.
EDITED TO ADD (1/16): Part five of the above series of essays. And keys for different movies are starting to appear.
This is interesting: A Wal-Mart store in Mitchell, South Dakota receives a bomb threat. The store managers decide not to evacuate while the police search for the bomb. Presumably, they decided that the loss of revenue from an evacuation was not worth the additional security it would provide:
During the nearly two-hour search Wal-Mart officials opted not to evacuate the busy discount store even though police recomended [sic] they do so. Wal-Mart officials said the call was a hoax and not a threat.
I think this is a good sign. It shows that people are thinking rationally about security trade-offs, and not thoughtlessly being terrorized.
Remember, though: security trade-offs are based on agenda. From the perspective of the Wal-Mart managers, the store’s revenues are the most important; most of the risks of the bomb threat are externalities.
Of course, the store employees have a different agenda—there is no upside to staying open, and only a downside due to the additional risk—and they didn’t like the decision:
The incident has family members of Wal-Mart employees criticizing store officials for failing to take police’s recommendation to evacuate.
Voorhees has worked at the Mitchell discount chain since Wal-Mart Supercenter opened in 2001. Her daughter, Charlotte Goode, 36, said Voorhees called her Sunday, crying and upset as she relayed the story.
“It’s right before Christmas. They were swamped with people,” she said. “To me, they endangerd [sic] the community, customers and associates. They put making a buck ahead of public safety.”
Everyone knows that writing your password on your monitor is bad security. Is it really so hard to realize that attaching your SecurID token to your computer is just as bad?
The Communications Director for Montana’s Congressman Denny Rehberg solicited “hackers” to break into the computer system at Texas Christian University and change his grades (so they would look better when he eventually ran for office, I presume). The hackers posted the email exchange instead. Very funny:
First, let’s be clear. You are soliciting me to break the law and hack into a computer across state lines. That is a federal offense and multiple felonies. Obviously I can’t trust anyone and everyone that mails such a request, you might be an FBI agent, right?
So, I need three things to make this happen:
1. A picture of a squirrel or pigeon on your campus. One close-up, one with background that shows buildings, a sign, or something to indicate you are standing on the campus.
2. The information I mentioned so I can find the records once I get into the database.
3. Some idea of what I get for all my trouble.
Automobile tires are now being outfitted with RFID transmitters:
Schrader Bridgeport is the market leader in direct Tire Pressure Monitoring Systems. Direct TPMS use pressure sensors inside each tire to transmit data to a dashboard display alerting drivers to tire pressure problems.
I’ll bet anything you can track cars with them, just as you can track some joggers by their sneakers.
As I said before, the people who are designing these systems are putting “zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.”
Peter Gutmann’s “A Cost Analysis of Windows Vista Content Protection” is fascinating reading:
Windows Vista includes an extensive reworking of core OS elements in order to provide content protection for so-called “premium content”, typically HD data from Blu-Ray and HD-DVD sources. Providing this protection incurs considerable costs in terms of system performance, system stability, technical support overhead, and hardware and software cost. These issues affect not only users of Vista but the entire PC industry, since the effects of the protection measures extend to cover all hardware and software that will ever come into contact with Vista, even if it’s not used directly with Vista (for example hardware in a Macintosh computer or on a Linux server). This document analyses the cost involved in Vista’s content protection, and the collateral damage that this incurs throughout the computer industry.
Executive Executive Summary
The Vista Content Protection specification could very well constitute the longest suicide note in history.
It contains stuff like:
Denial-of-Service via Driver Revocation
Once a weakness is found in a particular driver or device, that driver will have its signature revoked by Microsoft, which means that it will cease to function (details on this are a bit vague here, presumably some minimum functionality like generic 640×480 VGA support will still be available in order for the system to boot). This means that a report of a compromise of a particular driver or device will cause all support for that device worldwide to be turned off until a fix can be found. Again, details are sketchy, but if it’s a device problem then presumably the device turns into a paperweight once it’s revoked. If it’s an older device for which the vendor isn’t interested in rewriting their drivers (and in the fast-moving hardware market most devices enter “legacy” status within a year or two of their replacement models becoming available), all devices of that type worldwide become permanently unusable.
Read the whole thing.
And here’s commentary on the paper.
Japanese researchers have captured a giant squid on video. Great pictures, too.
If you’ve traveled abroad recently, you’ve been investigated. You’ve been assigned a score indicating what kind of terrorist threat you pose. That score is used by the government to determine the treatment you receive when you return to the U.S. and for other purposes as well.
Curious about your score? You can’t see it. Interested in what information was used? You can’t know that. Want to clear your name if you’ve been wrongly categorized? You can’t challenge it. Want to know what kind of rules the computer is using to judge you? That’s secret, too. So is when and how the score will be used.
U.S. customs agencies have been quietly operating this system for several years. Called Automated Targeting System, it assigns a “risk assessment” score to people entering or leaving the country, or engaging in import or export activity. This score, and the information used to derive it, can be shared with federal, state, local and even foreign governments. It can be used if you apply for a government job, grant, license, contract or other benefit. It can be shared with nongovernmental organizations and individuals in the course of an investigation. In some circumstances private contractors can get it, even those outside the country. And it will be saved for 40 years.
Little is known about this program. Its bare outlines were disclosed in the Federal Register in October. We do know that the score is partially based on details of your flight record—where you’re from, how you bought your ticket, where you’re sitting, any special meal requests—or on motor vehicle records, as well as on information from crime, watch-list and other databases.
Civil liberties groups have called the program Kafkaesque. But I have an even bigger problem with it. It’s a waste of money.
The idea of feeding a limited set of characteristics into a computer, which then somehow divines a person’s terrorist leanings, is farcical. Uncovering terrorist plots requires intelligence and investigation, not large-scale processing of everyone.
Additionally, any system like this will generate so many false alarms as to be completely unusable. In 2005 Customs & Border Protection processed 431 million people. Assuming an unrealistic model that identifies terrorists (and innocents) with 99.9% accuracy, that’s still 431,000 false alarms annually.
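The base-rate arithmetic behind that estimate is worth making explicit. A minimal sketch, using the essay's own numbers (the 99.9% accuracy figure is the hypothetical, not a measured rate):

```python
# Base-rate arithmetic for the screening example above.
# Assumptions: 431 million screenings per year (the 2005 CBP figure),
# and a hypothetical classifier that is wrong 1 time in 1,000.
screenings_per_year = 431_000_000
false_alarm_rate = 0.001  # 99.9% "accuracy" still mislabels 1 in 1,000

false_alarms = screenings_per_year * false_alarm_rate
print(f"{false_alarms:,.0f} false alarms per year")  # 431,000
```

With essentially zero actual terrorists in that population, virtually every one of those alarms is a false one, which is the point of the paragraph above.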
The number of false alarms will be much higher than that. The no-fly list is filled with inaccuracies; we’ve all read about innocent people named David Nelson who can’t fly without hours-long harassment. Airline data, too, are riddled with errors.
The odds of this program’s being implemented securely, with adequate privacy protections, are not good. Last year I participated in a government working group to assess the security and privacy of a similar program developed by the Transportation Security Administration, called Secure Flight. After five years and $100 million spent, the program still can’t achieve the simple task of matching airline passengers against terrorist watch lists.
In 2002 we learned about yet another program, called Total Information Awareness, for which the government would collect information on every American and assign him or her a terrorist risk score. Congress found the idea so abhorrent that it halted funding for the program. Two years ago, and again this year, Secure Flight was also banned by Congress until it could pass a series of tests for accuracy and privacy protection.
In fact, the Automated Targeting System is arguably illegal, as well (a point several congressmen made recently); all recent Department of Homeland Security appropriations bills specifically prohibit the department from using profiling systems against persons not on a watch list.
There is something un-American about a government program that uses secret criteria to collect dossiers on innocent people and shares that information with various agencies, all without any oversight. It’s the sort of thing you’d expect from the former Soviet Union or East Germany or China. And it doesn’t make us any safer from terrorism.
This essay, without the links, was published in Forbes. They also published a rebuttal by William Baldwin, although it doesn’t seem to rebut any of the actual points.
Here’s an odd division of labor: a corporate data consultant argues for more openness, while a journalist favors more secrecy.
It’s only odd if you don’t understand security.
Two men have been issued Virginia drivers’ licenses even though they were wearing outlandish disguises when they had their pictures taken at the Department of Motor Vehicles:
Will Carsola and Dave Stewart posted Internet videos of their pranks, which included scenes of Carsola spray-painting his face and neck bright red and Stewart painting the top of his head black and sticking a row of fake buckteeth in his mouth in an Asian caricature. They each enter the DMV office and return with real licenses with photos of their new likenesses.
In another video, a shaved-headed Carsola comes out of the DMV with a photo of his eyes crossed, and another friend obtains a license after spray-painting on a thick, black beard and monobrow.
The Virginia DMV is now demanding that the two come back and get real pictures taken.
I never thought I would say this, but I agree with everything Michelle Malkin says on this issue:
These guys have done the Virginia DMV—and the nation—a big favor. Many of us have tried to argue how much of a joke these agencies and our homeland security remain after 9/11—particularly the issuance of driver’s licenses (it was the Virginia DMV that issued state photo ID to several 9/11 hijackers who were aided by illegal aliens).
But few dissertations and policy analyses drive the message home more effectively than these two damning videos.
I honestly don’t know if she realizes that REAL ID won’t solve this kind of problem, though. Nor will it solve the problem of people getting legitimate IDs in the names of people whose identity they stole, or real IDs in fake names by bribing DMV employees. (Several of the 9/11 hijackers did this, in Virginia.)
The TSA website is a fascinating place to spend some time wandering around. They have rules for handling monkeys:
TSOs have been trained to not touch the monkey during the screening process.
And snow globes are prohibited in carry-on luggage:
Snow globes regardless of size or amount of liquid inside, even with documentation, are prohibited in your carry-on. Please ship these items or pack them in your checked baggage.
Ho ho ho, everyone.
Microsoft has a new anti-phishing service in Internet Explorer 7 that will turn the address bar green and display the website owner’s identity when surfers visit on-line merchants previously approved as legitimate. So far, so good. But the service is only available to corporations: not to sole proprietorships, partnerships, or individuals.
Of course, if a merchant’s bar doesn’t turn green it doesn’t mean that they’re bad. It’ll be white, which indicates “no information.” There are also yellow and red indications, corresponding to “suspicious” and “known fraudulent site.” But small businesses are worried that customers will be afraid to buy from non-green sites.
That’s possible, but it’s more likely that users will learn that the marker isn’t reliable and start to ignore it.
Any white-list system like this has two sources of error: false positives, where phishers get the marker, and false negatives, where legitimate merchants don’t. Any such system has to deal effectively with both.
EDITED TO ADD (12/21): Research paper: “Phinding Phish: An Evaluation of Anti-Phishing Toolbars,” by L. Cranor, S. Egelman, J. Hong, and Y. Zhang.
The stories keep getting better. Here’s someone who climbs a fence at the Raleigh-Durham Airport, boards a Delta plane, and hangs out for a bunch of hours.
Best line of the article:
“It blows my mind that you can’t get 3.5 ounces of toothpaste on a plane,” he said, “yet somebody can sneak on a plane and take a nap.”
Exactly. We’re spending millions enhancing passenger screening—new backscatter X-ray machines, confiscating liquids—and we ignore the other, less secure, paths onto airplanes. It’s idiotic, that’s what it is.
It’s getting mainstream attention; here’s an article from the BBC.
Good article on airport security and the TSA. Matt Blaze and I got some really good quotes.
BTW, people regularly chastise me for complaining about airline security without offering any solutions. I generally send those people to the last two paragraphs of this article.
In the information age, surveillance isn’t just for the police. Marketers want to watch you, too: what you do, where you go, what you buy. Integrated Media Measurement, Inc. wants to know what you watch and what you listen to—wherever you are.
They do this by turning traditional ratings collection on its head. Instead of a Nielsen-like system, which monitors individual televisions in an effort to figure out who’s watching, IMMI measures individual people and tries to figure out what they’re watching (or listening to). They do this through specially designed cell phones that automatically eavesdrop on what’s going on in the room they’re in:
The IMMI phone randomly samples 10 seconds of room audio every 30 seconds. These samples are reduced to digital signatures, which are uploaded continuously to the IMMI servers.
IMMI also tracks all local media outlets actively broadcasting in any given designated media area (DMA). To identify media, IMMI compares the uploaded audio signatures computed by the phones with audio signatures computed on the IMMI servers monitoring TV and radio broadcasts. IMMI also maintains client-provided content files, such as commercials, promos, movies, and songs.
By matching the signatures, IMMI couples media broadcasts with the individuals who are exposed to them. The process takes just a few seconds.
Panel Members may sometimes delay watching or listening to a program by using satellite radio, DVRs, VCRs, or TiVo. IMMI captures these viewings with a “look-back” feature that recognizes when a Panel Member is exposed to a program outside of its normal broadcast hour, and then goes back in time (roughly two weeks) to identify it.
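IMMI has not published its matching algorithm, so as a rough illustration of the signature-lookup idea the quote describes, here is a toy sketch. Every name and detail below is my own invention, not their system; in particular, a real system would use a noise-robust acoustic fingerprint, not a cryptographic hash, which fails on even tiny audio differences:

```python
import hashlib

def signature(audio_bytes):
    """Reduce a 10-second audio sample to a compact digital signature.
    (Toy stand-in: a truncated hash. Real fingerprints tolerate noise.)"""
    return hashlib.sha256(audio_bytes).hexdigest()[:16]

# Server side: signatures computed from monitored broadcasts in each
# media area, keyed back to the program they came from.
broadcast_index = {
    signature(b"...audio from channel 5, 8:01:30pm..."): "Channel 5 news",
}

def identify(phone_sample):
    """Match an uploaded phone sample against monitored broadcasts."""
    return broadcast_index.get(signature(phone_sample))

print(identify(b"...audio from channel 5, 8:01:30pm..."))  # Channel 5 news
print(identify(b"...unmonitored room conversation..."))    # None
```

Note what the `None` case implies: samples that don't match any broadcast are still uploaded and could, in principle, be retained, which is exactly the concern raised below.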
These cell phones are given away to test subjects, who get free service in exchange for giving up all their privacy.
I’m sure the company will claim not to actually eavesdrop on in-room conversations, or cell phone conversations. And just how different are these special phones, anyway? Can the software be installed on off-the-shelf phones? Can it be done without the owner’s knowledge or consent? The potential for abuse here is enormous.
Remember, the threats to privacy in the information age are not solely from government; they’re from private industry as well. And the real threat is the alliance between the two.
Scary story of someone who was told by his bank that he’s no longer welcome as a customer, because the bank’s computer noticed a deposit that wasn’t “normal.”
After two written complaints and a phone call to customer services, a member of the “Team” finally contacted me. She enquired about a single international deposit into my account, which I then explained to be my study grant for the coming year. Upon this explanation I was told that the bank would not close my account, and I was given a vague explanation of them not expecting students to get large deposits. I found this strange, since it had not been a problem in previous years, and even stranger since my deposit had cleared into my account two days after the letter was sent. In terms of recent “suspicious” transactions, this left only two recent international deposits: one from my parents overseas and one from my savings, neither of which could be classified as large. I’m not an expert on complex behavioural analysis networks and fraud detection within banking systems, but would expect that study grants and family support are not unexpected for students? Moreover, rather than this being an isolated incident, it would seem that HSBC’s “account review” affected a number of people within our student community, some of whom might choose not to question the decision and may be left without bank accounts. This should raise questions about the effectiveness of their fraud detection system, or possibly a flawed behaviour model for a specific demographic.
Expect more of this kind of thing as computers continue to decide who is normal and who is not.
No, really. Makes a fine Christmas gift.
Gary McGraw interviewed me for his Silver Bullet Security Podcast.
An old trick, but a good story:
Everyone thought the doors were incredibly cool. Oh, and they were. Upon entering a secure area (that is, anywhere except the lobby), one simply waved his RFID-enabled access card across the sensor and the doors slid open almost instantly. When leaving an area, motion detectors automatically opened up the doors. The only thing that was missing was the cool “whoosh” noise and an access panel that could be shot with a phaser to permanently seal or, depending on the plot, automatically open the door. Despite that flaw, the doors just felt secure.
That is, until one of G.R.G.’s colleagues had an idea. He grabbed one of those bank-branded folding yardsticks from the freebie table and headed on over to one of the security doors. He slipped the yardstick right through where the sliding doors met and the motion detector promptly noticed the yardstick and opened the door. He had unfettered access to the entire building thanks to a free folding yardstick.
It seems to be the season for cybercrime hype. First, we have this article from CNN, which seems to have no actual news:
Computer hackers will open a new front in the multi-billion pound “cyberwar” in 2007, targeting mobile phones, instant messaging and community Web sites such as MySpace, security experts predict.
As people grow wise to email scams, criminal gangs will find new ways to commit online fraud, sell fake goods or steal corporate secrets.
And next, this article, which claims that criminal organizations are paying student members to get IT degrees:
The most successful cyber crime gangs were based on partnerships between those with the criminal skills and contacts and those with the technical ability, said Mr Day.
“Traditional criminals have the ability to move funds and use all of the background they have,” he said, “but they don’t have the technical expertise.”
As the number of criminal gangs looking to move into cyber crime expanded, it got harder to recruit skilled hackers, said Mr Day. This has led criminals to target university students all around the world.
“Some students are being sponsored through their IT degree,” said Mr Day. Once qualified, the graduates go to work for the criminal gangs.
The aura of rebellion the name conjured up helped criminals ensnare children as young as 14, suggested the study.
By trawling websites, bulletin boards and chat rooms that offer hacking tools, cracks or passwords for pirated software, criminal recruiters gather information about potential targets.
Once identified, young hackers are drawn in by being rewarded for carrying out low-level tasks such as using a network of hijacked home computers, a botnet, to send out spam.
The low risk of being caught and the relatively high rewards on offer helped the criminal gangs to paint an attractive picture of a cyber criminal’s life, said Mr Day.
As youngsters are drawn in the stakes are raised and they are told to undertake increasingly risky jobs.
Criminals targeting children—that’s sure to peg anyone’s hype-meter.
To be sure, I don’t want to minimize the threat of cybercrime. Nor do I want to minimize the threat of organized cybercrime. There are more and more criminals prowling the net, and more and more cybercrime is moving up the food chain, to large organized crime syndicates. Cybercrime is big business, and it’s getting bigger.
But I’m not sure if stories like these help or hurt.
How good are the passwords people are choosing to protect their computers and online accounts?
It’s a hard question to answer because data is scarce. But recently, a colleague sent me some spoils from a MySpace phishing attack: 34,000 actual user names and passwords.
The attack was pretty basic. The attackers created a fake MySpace login page, and collected login information when users thought they were accessing their own account on the site. The data was forwarded to various compromised web servers, where the attackers would harvest it later.
MySpace estimates that more than 100,000 people fell for the attack before it was shut down. The data I have is from two different collection points, and was cleaned of the small percentage of people who realized they were responding to a phishing attack. I analyzed the data, and this is what I learned.
Password Length: While 65 percent of passwords contain eight characters or less, 17 percent are made up of six characters or less. The average password is eight characters long.
Specifically, the length distribution looks like this:
Yes, there’s a 32-character password: “1ancheste23nite41ancheste23nite4.” Other long passwords are “fool2thinkfool2thinkol2think” and “dokitty17darling7g7darling7.”
Character Mix: While 81 percent of passwords are alphanumeric, 28 percent are just lowercase letters plus a single final digit—and two-thirds of those have the single digit 1. Only 3.8 percent of passwords are a single dictionary word, and another 12 percent are a single dictionary word plus a final digit—once again, two-thirds of the time that digit is 1.
numbers only: 1.3 percent
letters only: 9.6 percent
Only 0.34 percent of users have the user name portion of their e-mail address as their password.
Common Passwords: The top 20 passwords are (in order): password1, abc123, myspace1, password, blink182, qwerty1, fuckyou, 123abc, baseball1, football1, 123456, soccer, monkey1, liverpool1, princess1, jordan23, slipknot1, superman1, iloveyou1 and monkey. (Different analysis here.)
The most common password, “password1,” was used in 0.22 percent of all accounts. The frequency drops off pretty fast after that: “abc123” and “myspace1” were only used in 0.11 percent of all accounts, “soccer” in 0.04 percent and “monkey” in 0.02 percent.
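The tallying behind statistics like these is straightforward. A minimal sketch run on a tiny made-up sample (the actual 34,000-entry data set is not public, so the passwords and counts here are illustrative only):

```python
# Classify a small, invented sample the way the statistics above were
# computed: average length, "lowercase word + trailing 1", alphanumeric.
passwords = ["password1", "abc123", "monkey", "soccer", "jordan23",
             "baseball1", "Str0ng!pass", "qwerty1"]

avg_len = sum(len(p) for p in passwords) / len(passwords)
word_plus_1 = sum(1 for p in passwords
                  if p[:-1].isalpha() and p[:-1].islower() and p[-1] == "1")
alnum = sum(1 for p in passwords if p.isalnum())

print(f"average length: {avg_len:.2f}")
print(f"lowercase word + trailing '1': {word_plus_1} of {len(passwords)}")
print(f"alphanumeric: {alnum} of {len(passwords)}")
```

Even in this toy sample, the lowercase-word-plus-1 pattern dominates, which matches the two-thirds figure reported above.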
For those who don’t know, Blink 182 is a band. Presumably lots of people use the band’s name because it has numbers in its name, and therefore it seems like a good password. The band Slipknot doesn’t have any numbers in its name, which explains the 1. The password “jordan23” refers to basketball player Michael Jordan and his number. And, of course, “myspace” and “myspace1” are easy-to-remember passwords for a MySpace account. I don’t know what the deal is with monkeys.
We used to quip that “password” is the most common password. Now it’s “password1.” Who said users haven’t learned anything about security?
But seriously, passwords are getting better. I’m impressed that less than 4 percent were dictionary words and that the great majority were at least alphanumeric. Writing in 1989, Daniel Klein was able to crack (.gz) 24 percent of his sample passwords with a small dictionary of just 63,000 words, and found that the average password was 6.4 characters long.
And in 1992 Gene Spafford cracked (.pdf) 20 percent of passwords with his dictionary, and found an average password length of 6.8 characters. (Both studied Unix passwords, with a maximum length at the time of 8 characters.) And they both reported a much greater percentage of all lowercase, and only upper- and lowercase, passwords than emerged in the MySpace data. The concept of choosing good passwords is getting through, at least a little.
On the other hand, the MySpace demographic is pretty young. Another password study (.pdf) in November looked at 200 corporate employee passwords: 20 percent letters only, 78 percent alphanumeric, 2.1 percent with non-alphanumeric characters, and a 7.8-character average length. Better than 15 years ago, but not as good as MySpace users. Kids really are the future.
None of this changes the reality that passwords have outlived their usefulness as a serious security device. Over the years, password crackers have been getting faster and faster. Current commercial products can test tens—even hundreds—of millions of passwords per second. At the same time, there’s a maximum complexity to the passwords average people are willing to memorize (.pdf). Those lines crossed years ago, and typical real-world passwords are now software-guessable. AccessData’s Password Recovery Toolkit—at 200,000 guesses per second—would have been able to crack 23 percent of the MySpace passwords in 30 minutes, 55 percent in 8 hours.
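To see concretely why those lines crossed, multiply the quoted guessing rate by the time budget. At 200,000 guesses per second, the entire keyspace of 6-character all-lowercase passwords falls in under half an hour:

```python
# Guessing budget at the PRTK rate quoted above.
rate = 200_000  # guesses per second

half_hour_budget = rate * 30 * 60       # guesses available in 30 minutes
eight_hour_budget = rate * 8 * 3600     # guesses available in 8 hours
lowercase_6 = 26 ** 6                   # every 6-char lowercase password

print(f"{half_hour_budget:,}")   # 360,000,000
print(f"{eight_hour_budget:,}")  # 5,760,000,000
print(f"{lowercase_6:,}")        # 308,915,776 -- fits inside 30 minutes
```

Modern commercial crackers are far faster still, and they try dictionary words and common patterns first, which is why the 23-percent-in-30-minutes figure is plausible even against an 8-character average.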
Of course, this analysis assumes that the attacker can get his hands on the encrypted password file and work on it offline, at his leisure; i.e., that the same password was used to encrypt an e-mail, file or hard drive. Passwords can still work if you can prevent offline password-guessing attacks, and watch for online guessing. They’re also fine in low-value security situations, or if you choose really complicated passwords and use something like Password Safe to store them. But otherwise, security by password alone is pretty risky.
This essay originally appeared on Wired.com.
Definitely worth reading:
Though data mining has many valuable uses, it is not well suited to the terrorist discovery problem. It would be unfortunate if data mining for terrorism discovery had currency within national security, law enforcement, and technology circles because pursuing this use of data mining would waste taxpayer dollars, needlessly infringe on privacy and civil liberties, and misdirect the valuable time and energy of the men and women in the national security community.
Hackers have gained access to a database containing personal information on 800,000 current and former UCLA students.
This is barely worth writing about: yet another database attack exposing personal information. My guess is that everyone in the U.S. has been the victim of at least one of these already. But there was a particular section of the article that caught my eye:
Jim Davis, UCLA’s associate vice chancellor for information technology, described the attack as sophisticated, saying it used a program designed to exploit a flaw in a single software application among the many hundreds used throughout the Westwood campus.
“An attacker found one small vulnerability and was able to exploit it, and then cover their tracks,” Davis said.
It worries me that the associate vice chancellor for information technology doesn’t understand that all attacks work like that.
Researchers at the University of Washington have demonstrated a surveillance system that automatically tracks people through the Nike+iPod Sport Kit. Basically, the kit contains a transmitter that you stick in your sneakers and a receiver you attach to your iPod. This allows you to track things like time, distance, pace, and calories burned. Pretty clever.
However, it turns out that the transmitter in your sneaker can be read up to 60 feet away. And because it broadcasts a unique ID, you can be tracked by it. In the demonstration, the researchers built a surveillance device (at a cost of about $250) and interfaced their surveillance system with Google Maps. Details are in the paper. Very scary.
This is a great demonstration for anyone who is skeptical that RFID chips can be used to track people. It’s a good example because the chips have no personal identifying information, yet can still be used to track people. As long as the chips have unique IDs, those IDs can be used for surveillance.
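The tracking mechanism is simple enough to sketch in a few lines. The names and sightings below are invented for illustration; the point is that the ID carries no personal data, yet correlating where and when the same ID is heard builds a movement profile:

```python
from collections import defaultdict

# Each receiver logs (time, place) for every tag ID it hears.
sightings = defaultdict(list)

def observe(tag_id, timestamp, location):
    """Record that a receiver at `location` heard `tag_id`."""
    sightings[tag_id].append((timestamp, location))

# One "anonymous" sneaker tag, heard by three receivers during a day:
observe("3f9a01", "08:02", "gym entrance")
observe("3f9a01", "12:31", "cafe on 5th")
observe("3f9a01", "18:45", "apartment lobby")

print(sightings["3f9a01"])  # a full daily pattern from a nameless ID
```

Linking the ID to a person only has to happen once—see the tag near someone's home or desk—and every past and future sighting is retroactively identified.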
To me, the real significance of this work is how easy it was. The people who designed the Nike/iPod system put zero thought into security and privacy issues. Unless we enact some sort of broad law requiring companies to add security into these sorts of systems, companies will continue to produce devices that erode our privacy through new technologies. Not on purpose, not because they’re evil—just because it’s easier to ignore the externality than to worry about it.
I wrote an essay on spam for the Forbes.com website.
There’s little in it I haven’t said here and here.
EDITED TO ADD (12/12): Another essay.
I’ve already written about the DHS’s database of top terrorist targets and how dumb it is. Important sites are not on the list, and unimportant ones are. The reason is pork, of course; states get security money based on this list, so every state wants to make sure they have enough sites on it. And over the past five years, states with Republican congressmen got more money than states without.
Here’s another article on this general topic, centering around an obscure quantity: the square root of terrorist intent:
The Department of Homeland Security is the home of many mysteries. There is, of course, the color-coded system for gauging the threat of an attack. And there is the department database of national assets to protect against a terrorist threat, which includes Old MacDonald’s Petting Zoo in Woodville, Ala., and the Apple and Pork Festival in Clinton, Ill.
And now Jim O’Brien, the director of the Office of Emergency Management and Homeland Security in Clark County, Nev., has discovered another hard-to-fathom DHS notion: a mathematical value purporting to represent the square root of terrorist intent. The figure appears deep in the mind-numbingly complex risk-assessment formulas that the department used in 2006 to decide the likelihood that a place is or will become a terrorist target—an all-important estimate outside the Beltway, because greater slices of the federal anti-terrorism pie go to the locations with the highest scores. Overall, the department awarded $711 million in high-risk urban counterterrorism grants last year.
As O’Brien reviewed the risk-assessment formulas—a series of calculations that runs into the billions—he found himself unable to account for several factors, the terrorist-intent notion principal among them. “I have a Ph.D. I think I understand formulas,” he says. “Take the square root of terrorist intent? Now, give me a break.” The whole notion, O’Brien says, is a contradiction in terms: “How can you quantify what somebody is thinking?”
Other designations for variables in the formula are almost befuddling, O’Brien says, such as the “attractiveness factor,” which seeks to establish how terrorists might prefer one sort of target over another, and the “chatter factor,” which tries to gauge the intent of potential terror plotters based on communication intercepts.
“One man’s garbage is another man’s treasure,” he says. “So I don’t know how you measure attractiveness.” The chatter factor, meanwhile, leaves O’Brien entirely in the dark: “I’m not sure what that means.”
What I said last time still applies:
We’re never going to get security right if we continue to make it a parody of itself.
Absolutely fascinating paper: “A Platform for RFID Security and Privacy Administration.” The basic idea is that you carry a personalized device that jams the signals from all the RFID tags on your person until you authorize otherwise.
This paper presents the design, implementation, and evaluation of the RFID Guardian, the first-ever unified platform for RFID security and privacy administration. The RFID Guardian resembles an “RFID firewall”, enabling individuals to monitor and control access to their RFID tags by combining a standard-issue RFID reader with unique RFID tag emulation capabilities. Our system provides a platform for coordinated usage of RFID security mechanisms, offering fine-grained control over RFID-based auditing, key management, access control, and authentication capabilities. We have prototyped the RFID Guardian using off-the-shelf components, and our experience has shown that active mobile devices are a valuable tool for managing the security of RFID tags in a variety of applications, including protecting low-cost tags that are unable to regulate their own usage.
As Cory Doctorow points out, this is potentially a way to reap the benefits of RFID without paying the cost:
Up until now, the standard answer to privacy concerns with RFIDs is to just kill them—put your new US Passport in a microwave for a few minutes to nuke the chip. But with an RFID firewall, it might be possible to reap the benefits of RFID without the cost.
General info here. They’ve even built a prototype.
This is a clever hack against gift cards:
Seems they take the cards off the racks in stores and copy down the serial numbers. Later on, they check to see if the card is activated, and if the answer is yes, they go on a shopping spree from the store’s website.
What’s the security problem? A serial number on the cards that’s visible even though the card is not activated. This could be mitigated by hiding the serial number behind a scratch-off coating, or opaque packaging.
Banks are spending millions preventing outsiders from stealing their customers’ identities, but there is a growing insider threat:
Widespread outsourcing of data management and other services has exposed some weaknesses and made it harder to prevent identity theft by insiders.
“There are lots of weak links,” said Oveissi Field. “Back-up tapes are being sent to offsite storage sites or being mailed and getting into the wrong hands or are lost through carelessness.”
In what many regard as the biggest wake-up call in recent memory for financial institutions, thieves disguised as cleaning staff last year nearly stole the equivalent of more than $400 million from the London branch of Sumitomo Mitsui.
Last week I wrote about the security problems of storing a secret in a device you give to your attacker, and how such systems are vulnerable to class breaks. I singled out DRM systems as being particularly vulnerable to this kind of security problem.
This week we have an example: The DRM in TiVoToGo has been cracked:
An open source command-line utility that converts TiVoToGo movies into an MPEG file and strips the DRM is now available online. Released under a BSD license, the utility—called TiVo File Decoder—builds on the extensive reverse engineering efforts of the TiVo hacking community. The goal of the project is to bring TiVo media viewing capabilities to unsupported platforms like OS X and the open source Linux operating system. TiVoToGo support is currently only available on Windows.
EDITED TO ADD (12/8): I have been told that TiVoToGo has not been hacked: “The decryption engine has been reverse engineered in cross-platform code – replicating what TiVo already provides customers on the Windows platform (in the form of TiVo Desktop software). Each customer’s unique Media Access Key (MAK) is still needed as a *key* to decrypt content from their particular TiVo unit. I can’t decrypt shows from your TiVo, and you can’t decrypt shows from mine. Until someone figures out how to produce or bypass the required MAK, it hasn’t been cracked.”
And here’s a guide to installing TiVoToGo on your Mac.
EDITED TO ADD (12/17): Log of several hackers working on the problem. Interesting.
I’ll be the first to admit it: I know next to nothing about MySpace or Facebook. I do know that they’re social networking sites, and that—at least to some extent—your reputation is based on who your “friends” are and what they say about you.
Which means that this follows, like day follows night. “Fake Your Space” is a site where you can hire fake friends to leave their pictures and personalized comments on your page. Now you can pretend that you’re more popular than you actually are:
FakeYourSpace is an exciting new service that enables normal everyday people like me and you to have Hot friends on popular social networking sites such as MySpace and FaceBook. Not only will you be able to see these Gorgeous friends on your friends list, but FakeYourSpace enables you to create customized messages and comments for our Models to leave you on your comment wall. FakeYourSpace makes it easy for any regular person to make it seem like they have a Model for a friend. It doesn’t stop there however. Maybe you want to appear as if you have a Model for a lover. FakeYourSpace can make this happen!
What’s next? Services that verify friends on your friends’ MySpace pages? Services that block friend verification services? Where will this all end up?
This is interesting. Ted Kaczynski wrote in code:
In a small journal written in code, he documented his thoughts about the crimes he was committing. That code was so difficult, a source says the CIA couldn’t crack it—until someone found the key itself among other documents, and then translated it.
Look at the photo. It was a manual, pencil-and-paper cipher. Does anyone know the details of the algorithm?
I’ve written about backscatter X-ray technology before. It’s great for finding hidden weapons on a person, but it’s also great for seeing naked images of them. The TSA is piloting this technology in Phoenix, and they’re deliberately blurring the images to protect privacy.
Note that the system is being made better by making the resulting images less detailed. Excellent.
Interesting story of a British journalist buying 20 different fake EU passports. She bought a genuine Czech passport with a fake name and her real picture, a fake Latvian passport, and a stolen Estonian passport.
Despite information on stolen passports being registered to a central Interpol database, her Estonian passport goes undetected.
Note that harder-to-forge RFID passports would only help in one instance; it’s certainly not the most important problem to solve.
Also, I am somewhat suspicious of this story. I don’t know about UK law, but in the US this would be a major crime—and I don’t think being a reporter would be an adequate defense.
I give a talk called “The Future of Privacy,” where I talk about current and future technological developments that erode our privacy. One of the things I talk about is auditory eavesdropping, and I hypothesize that a cell phone microphone could be turned on surreptitiously and remotely.
I never had any actual evidence one way or the other, but the technique has surfaced in an organized crime prosecution:
The surveillance technique came to light in an opinion published this week by U.S. District Judge Lewis Kaplan. He ruled that the “roving bug” was legal because federal wiretapping law is broad enough to permit eavesdropping even of conversations that take place near a suspect’s cell phone.
Kaplan’s opinion said that the eavesdropping technique “functioned whether the phone was powered on or off.” Some handsets can’t be fully powered down without removing the battery; for instance, some Nokia models will wake up when turned off if an alarm is set.
Seems that the technique is to download eavesdropping software into the phone:
The U.S. Commerce Department’s security office warns that “a cellular telephone can be turned into a microphone and transmitter for the purpose of listening to conversations in the vicinity of the phone.” An article in the Financial Times last year said mobile providers can “remotely install a piece of software on to any handset, without the owner’s knowledge, which will activate the microphone even when its owner is not making a call.”
Nextel and Samsung handsets and the Motorola Razr are especially vulnerable to software downloads that activate their microphones, said James Atkinson, a counter-surveillance consultant who has worked closely with government agencies. “They can be remotely accessed and made to transmit room audio all the time,” he said. “You can do that without having physical access to the phone.”
Details of how the Nextel bugs worked are sketchy. Court documents, including an affidavit (p1) and (p2) prepared by Assistant U.S. Attorney Jonathan Kolodner in September 2003, refer to them as a “listening device placed in the cellular telephone.” That phrase could refer to software or hardware.
One private investigator interviewed by CNET News.com, Skipp Porteous of Sherlock Investigations in New York, said he believed the FBI planted a physical bug somewhere in the Nextel handset and did not remotely activate the microphone.
“They had to have physical possession of the phone to do it,” Porteous said. “There are several ways that they could have gotten physical possession. Then they monitored the bug from fairly near by.”
But other experts thought microphone activation is the more likely scenario, mostly because the battery in a tiny bug would not have lasted a year and because court documents say the bug works anywhere “within the United States”—in other words, outside the range of a nearby FBI agent armed with a radio receiver.
In addition, a paranoid Mafioso likely would be suspicious of any ploy to get him to hand over a cell phone so a bug could be planted. And Kolodner’s affidavit seeking a court order lists Ardito’s phone number, his 15-digit International Mobile Subscriber Identifier, and lists Nextel Communications as the service provider, all of which would be unnecessary if a physical bug were being planted.
A BBC article from 2004 reported that intelligence agencies routinely employ the remote-activation method. “A mobile sitting on the desk of a politician or businessman can act as a powerful, undetectable bug,” the article said, “enabling them to be activated at a later date to pick up sounds even when the receiver is down.”
For its part, Nextel said through spokesman Travis Sowders: “We’re not aware of this investigation, and we weren’t asked to participate.”
EDITED TO ADD (12/12): Another article.
It’s been a while since I’ve seen one of these sorts of news stories:
A Romanian man has been indicted on charges of hacking into more than 150 U.S. government computers, causing disruptions that cost NASA, the Energy Department and the Navy nearly $1.5 million.
The federal indictment charged Victor Faur, 26, of Arad, Romania, with nine counts of computer intrusion and one count of conspiracy. He faces up to 54 years in prison if convicted of all counts, said Thom Mrozek, spokesman for the U.S. Attorney’s office, on Thursday.
Faur was being prosecuted by authorities in Romania on separate computer hacking charges, Mrozek said, and will be brought to Los Angeles upon resolution of that case. It was not known whether Faur had retained a lawyer in the United States.
Remember pretexting? It’s the cute name given to…well…fraud. It’s when you call someone and pretend to be someone else, in order to get information. Or when you go online and pretend to be someone else, in order to get something. There’s no question in my mind that it’s fraud and should be illegal, but legally it seems to be a gray area.
California is considering a bill that would make this kind of thing illegal, and allow victims to sue for damages.
Who could be opposed to this? The MPAA, that’s who:
The bill won approval in three committees and sailed through the state Senate with a 30-0 vote. Then, according to Lenny Goldberg, a lobbyist for the Privacy Rights Clearinghouse, the measure encountered unexpected, last-minute resistance from the Motion Picture Association of America.
“The MPAA has a tremendous amount of clout and they told legislators, ‘We need to pose as someone other than who we are to stop illegal downloading,'” Goldberg said.
These people are looking more and more like a criminal organization every day.
EDITED TO ADD (12/11): Congress has outlawed pretexting. The law doesn’t go as far as some of the state laws—which it pre-empts—but it’s still a good thing.
Beautiful time lapse photos of a squid, Loligo pealei, seizing its prey.
…such a splendidly baroque little carnivore.
From the Associated Press:
Without notifying the public, federal agents for the past four years have assigned millions of international travelers, including Americans, computer-generated scores rating the risk they pose of being terrorists or criminals.
The travelers are not allowed to see or directly challenge these risk assessments, which the government intends to keep on file for 40 years.
The scores are assigned to people entering and leaving the United States after computers assess their travel records, including where they are from, how they paid for tickets, their motor vehicle records, past one-way travel, seating preference and what kind of meal they ordered.
The program’s existence was quietly disclosed earlier in November when the government put an announcement detailing the Automated Targeting System, or ATS, for the first time in the Federal Register, a fine-print compendium of federal rules. Privacy and civil liberties lawyers, congressional aides and even law enforcement officers said they thought this system had been applied only to cargo.
As with all these systems, we are judged in secret, by a computer algorithm, with no way to see or even challenge our scores. Kafka would be proud.
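The actual ATS formula is undisclosed, but the general shape of such systems is a weighted checklist over travel records. As a purely illustrative sketch—every feature name and weight below is invented, and nothing here reflects the real, secret formula—it shows how arbitrary an unchallengeable score can be:

```python
# Invented weights over invented features: a stand-in for an opaque
# "risk score" assembled from travel records. The real ATS formula
# is secret; this only illustrates the weighted-checklist structure.
WEIGHTS = {
    "paid_cash": 2.0,
    "one_way_ticket": 1.5,
    "no_meal_preference": 0.5,
}

def risk_score(traveler: dict) -> float:
    """Sum the weights of every feature flagged true for this traveler."""
    return sum(w for feature, w in WEIGHTS.items() if traveler.get(feature))

score = risk_score({"paid_cash": True, "one_way_ticket": True})
print(score)
```

The traveler has no way to learn which features fired, what the weights are, or whether any of it is predictive—which is precisely the problem.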
“If this catches one potential terrorist, this is a success,” Ahern said.
That’s just too idiotic a statement to even rebut.
EDITED TO ADD (12/3): More commentary.
There’s new software that can predict who is likely to become a murderer.
Using probation department cases entered into the system between 2002 and 2004, Berk and his colleagues performed a two-year follow-up study—enough time, they theorized, for a person to reoffend if he was going to. They tracked each individual, with particular attention to the people who went on to kill. That created the model. What remains at this stage is to find a way to marry the software to the probation department’s information technology system.
When caseworkers begin applying the model next year they will input data about their individual cases – what Berk calls “dropping ‘Joe’ down the model”—to come up with scores that will allow the caseworkers to assign the most intense supervision to the riskiest cases.
Even a crime as serious as aggravated assault—pistol whipping, for example—”might not mean that much” if the first-time offender is 30, but it is an “alarming indicator” in a first-time offender who is 18, Berk said.
The model was built using adult probation data stripped of personal identifying information for confidentiality. Berk thinks it could be an even more powerful diagnostic tool if he could have access to similarly anonymous juvenile records.
The central public policy question in all of this is a resource allocation problem. With not enough resources to go around, overloaded case workers have to cull their cases to find the ones in most urgent need of attention—the so-called true positives, as epidemiologists say.
But before that can begin in earnest, the public has to decide how many false positives it can afford in order to head off future killers, and how many false negatives (seemingly nonviolent people who nevertheless go on to kill) it is willing to risk to narrow the false positive pool.
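The false-positive tradeoff has concrete base-rate arithmetic behind it. Here's an illustrative calculation with invented numbers (none come from Berk's study): when the predicted event is rare, even an accurate-sounding classifier flags far more innocent people than future offenders.

```python
# Invented numbers, for illustration only.
caseload = 10_000          # probationers screened
base_rate = 0.002          # fraction who would actually go on to kill
sensitivity = 0.90         # fraction of true future offenders flagged
false_positive_rate = 0.05 # fraction of everyone else wrongly flagged

actual = caseload * base_rate                          # 20 future offenders
true_pos = actual * sensitivity                        # 18 correctly flagged
false_pos = (caseload - actual) * false_positive_rate  # 499 wrongly flagged

# Most of the flagged pool is false positives: 18 out of 517.
print(f"flagged: {true_pos + false_pos:.0f}, "
      f"of whom {true_pos:.0f} are true positives")
```

Those 499 false positives each get "the most intense supervision"—that's the resource-allocation and civil-liberties cost the public is being asked to accept.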
Pretty scary stuff, as it gets into the realm of thoughtcrime.