Also, I'm going to try something new. Let's use this weekly squid post to talk about the security stories in the news that I didn't cover. I'll be doing this every Friday, so please save any stories you want to post about for squid threads.
Blog: July 2011 Archives
Security researcher Charlie Miller, widely known for his work on Mac OS X and Apple's iOS, has discovered an interesting method that enables him to completely disable the batteries on Apple laptops, making them permanently unusable, and perform a number of other unintended actions. The method, which involves accessing and sending instructions to the chip housed on smart batteries, could also be used for more malicious purposes down the road.
What he found is that the batteries are shipped from the factory in a state called "sealed mode" and that there's a four-byte password that's required to change that. By analyzing a couple of updates that Apple had sent to fix problems in the batteries in the past, Miller found that password and was able to put the battery into "unsealed mode."
From there, he could make a few small changes to the firmware, but not what he really wanted. So he poked around a bit more and found that a second password was required to move the battery into full access mode, which gave him the ability to make any changes he wished. That password is a default set at the factory and it's not changed on laptops before they're shipped. Once he had that, Miller found he could do a lot of interesting things with the battery.
"That lets you access it at the same level as the factory can," he said. "You can read all the firmware, make changes to the code, do whatever you want. And those code changes will survive a reinstall of the OS, so you could imagine writing malware that could hide on the chip on the battery. You'd need a vulnerability in the OS or something that the battery could then attack, though."
As components get smarter, they also get more vulnerable.
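The sealed → unsealed → full-access progression Miller describes can be sketched as a toy state machine. This is purely illustrative: the passwords, the class interface, and the notion of a `write_firmware` call are all hypothetical stand-ins (real smart batteries take SBS commands over SMBus, and the actual passwords are the ones Miller recovered from Apple's updaters).

```python
# Toy model of the smart-battery access modes described above.
# All passwords and the firmware-write interface are hypothetical;
# real batteries speak SBS commands over SMBus.

SEALED, UNSEALED, FULL_ACCESS = "sealed", "unsealed", "full_access"

class SmartBattery:
    # Illustrative values only -- not the real passwords.
    UNSEAL_KEY = b"\x36\x72\x04\x14"        # four-byte unseal password
    FULL_ACCESS_KEY = b"\xff\xff\xff\xff"   # factory-default full-access password

    def __init__(self):
        self.mode = SEALED
        self.firmware = bytearray(16)

    def unseal(self, key: bytes) -> bool:
        """First password: sealed -> unsealed (limited changes only)."""
        if self.mode == SEALED and key == self.UNSEAL_KEY:
            self.mode = UNSEALED
            return True
        return False

    def enter_full_access(self, key: bytes) -> bool:
        """Second (default, never-changed) password: unsealed -> full access."""
        if self.mode == UNSEALED and key == self.FULL_ACCESS_KEY:
            self.mode = FULL_ACCESS
            return True
        return False

    def write_firmware(self, offset: int, data: bytes) -> bool:
        """Arbitrary firmware writes require full access."""
        if self.mode != FULL_ACCESS:
            return False
        self.firmware[offset:offset + len(data)] = data
        return True
```

The point of the model is the ordering: the second password is useless without the first, but because both are factory defaults, recovering them once (as Miller did, from Apple's battery updaters) unlocks every battery of that model.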
ShareMeNot is a Firefox add-on for preventing tracking from third-party buttons (like the Facebook "Like" button or the Google "+1" button) until the user actually chooses to interact with them. That is, ShareMeNot doesn't disable/remove these buttons completely. Rather, it allows them to render on the page, but prevents the cookies from being sent until the user actually clicks on them, at which point ShareMeNot releases the cookies and the user gets the desired behavior (i.e., they can Like or +1 the page).
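The core idea -- render the button but withhold the identifying cookies until the user clicks -- can be sketched as a simple request filter. This is not ShareMeNot's actual code (the real add-on hooks Firefox's request pipeline); the domain list and function shape here are illustrative assumptions.

```python
# Sketch of ShareMeNot's cookie-withholding idea: third-party
# social-button requests go out without cookies until the user
# clicks the button. Domain list is illustrative.

TRACKED_BUTTON_DOMAINS = {"facebook.com", "plusone.google.com"}

def filter_request(headers: dict, request_domain: str,
                   page_domain: str, user_clicked: bool) -> dict:
    """Return the headers that should actually be sent."""
    is_third_party = not request_domain.endswith(page_domain)
    is_button = any(request_domain.endswith(d) for d in TRACKED_BUTTON_DOMAINS)
    if is_third_party and is_button and not user_clicked:
        # The button still loads and renders; it just can't identify the user.
        return {k: v for k, v in headers.items() if k.lower() != "cookie"}
    return headers
```

The design choice worth noticing is that nothing is blocked: the page looks identical, and the tracking resumes only at the moment the user opts in by clicking.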
Companies would be better off if they all provided meaningful privacy protections for consumers, but privacy is a collective action problem for them: many companies would love to see the ecosystem fixed, but no one wants to put themselves at a competitive disadvantage by imposing unilateral limitations on what they can do with user data.
The solution -- and one endorsed by the essay -- is a comprehensive privacy law. That reduces the incentive to defect.
Matt Blaze analyzes the 2010 U.S. Wiretap Report.
In 2000, government policy finally reversed course, acknowledging that encryption needed to become a critical part of security in modern networks, something that deserved to be encouraged, even if it might occasionally cause some trouble for law enforcement wiretappers. And since that time the transparent use of cryptography by everyday people (and criminals) has, in fact, exploded. Crypto software and algorithms, once categorized for arms control purposes as a "munition" alongside rocket launchers and nuclear triggers, can now be openly discussed, improved and incorporated into products and services without the end user even knowing that it's there. Virtually every cellular telephone call is today encrypted and effectively impervious to unauthorized over-the-air eavesdropping. Web transactions, for everything from commerce to social networking, are now routinely encrypted end-to-end. (A few applications, particularly email and wireline telephony, remain stubbornly unencrypted, but they are increasingly the exception rather than the rule.)
So, with this increasing proliferation of eavesdrop-thwarting encryption built in to our infrastructure, we might expect law enforcement wiretap rooms to have become quiet, lonely places.
But not so fast: the latest wiretap report identifies a total of just six (out of 3194) cases in which encryption was encountered, and that prevented recovery of evidence a grand total of ... (drumroll) ... zero times. Not once. Previous wiretap reports have indicated similarly minuscule numbers.
I second Matt's recommendation of Susan Landau's book: Surveillance or Security: The Risks Posed by New Wiretapping Technologies (MIT Press, 2011). It's an excellent discussion of the security and politics of wiretapping.
Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it's hard to mandate, or even to measure, "security consciousness" from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it's not likely to be effective unless management's heart is in it.
This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year's attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don't repeat that mistake in the future.
The access control provided by a physical lock is based on the assumption that the information content of the corresponding key is private -- that duplication should require either possession of the key or a priori knowledge of how it was cut. However, the ever-increasing capabilities and prevalence of digital imaging technologies present a fundamental challenge to this privacy assumption. Using modest imaging equipment and standard computer vision algorithms, we demonstrate the effectiveness of physical key teleduplication -- extracting a key's complete and precise bitting code at a distance via optical decoding and then cutting precise duplicates. We describe our prototype system, Sneakey, and evaluate its effectiveness, in both laboratory and real-world settings, using the most popular residential key types in the U.S.
The design of common keys actually makes this process easier. There are only ten possible positions for each pin, any single key uses only half of those positions, and the positions of adjacent pins are deliberately set far apart.
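Those design constraints make the effective keyspace much smaller than "ten depths per pin" suggests, and they also help an optical decoder rule out implausible readings. A brute-force count under an illustrative adjacent-cut constraint (the parameters here are made up for the example, not taken from any real key spec) shows the idea:

```python
# Count valid key bittings under an illustrative adjacent-cut rule.
# Real manufacturers publish a Maximum Adjacent Cut Specification (MACS);
# the pin count, depth count, and MACS value below are assumptions.
from itertools import product

def count_bittings(pins: int = 5, depths: int = 10, macs: int = 7) -> int:
    """Count bittings where adjacent cut depths differ by at most `macs`."""
    count = 0
    for cuts in product(range(depths), repeat=pins):
        if all(abs(a - b) <= macs for a, b in zip(cuts, cuts[1:])):
            count += 1
    return count
```

Every constraint like this shrinks the space of keys a decoder must distinguish between, which is exactly why a low-resolution photograph can still recover a complete bitting code.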
EDITED TO ADD (7/26): I seem to have written about this in 2009. Apologies.
No indication about how well it works:
The smartphone-based scanner, named Mobile Offender Recognition and Information System, or MORIS, is made by BI2 Technologies in Plymouth, Massachusetts, and can be deployed by officers out on the beat or back at the station.
An iris scan, which detects unique patterns in a person's eyes, can reduce to seconds the time it takes to identify a suspect in custody. This technique also is significantly more accurate than results from other fingerprinting technology long in use by police, BI2 says.
When attached to an iPhone, MORIS can photograph a person's face and run the image through software that hunts for a match in a BI2-managed database of U.S. criminal records. Each unit costs about $3,000.
Roughly 40 law enforcement units nationwide will soon be using the MORIS, including Arizona's Pinal County Sheriff's Office, as well as officers in Hampton City in Virginia and Calhoun County in Alabama.
Sometimes too much security isn't good.
After observing children on playgrounds in Norway, England and Australia, Dr. Sandseter identified six categories of risky play: exploring heights, experiencing high speed, handling dangerous tools, being near dangerous elements (like water or fire), rough-and-tumble play (like wrestling), and wandering alone away from adult supervision. The most common is climbing heights.
"Climbing equipment needs to be high enough, or else it will be too boring in the long run," Dr. Sandseter said. "Children approach thrills and risks in a progressive manner, and very few children would try to climb to the highest point for the first time they climb. The best thing is to let children encounter these challenges from an early age, and they will then progressively learn to master them through their play over the years."
By gradually exposing themselves to more and more dangers on the playground, children are using the same habituation techniques developed by therapists to help adults conquer phobias, according to Dr. Sandseter and a fellow psychologist, Leif Kennair, of the Norwegian University for Science and Technology.
"Risky play mirrors effective cognitive behavioral therapy of anxiety," they write in the journal Evolutionary Psychology, concluding that this "anti-phobic effect" helps explain the evolution of children's fondness for thrill-seeking. While a youthful zest for exploring heights might not seem adaptive -- why would natural selection favor children who risk death before they have a chance to reproduce? -- the dangers seemed to be outweighed by the benefits of conquering fear and developing a sense of mastery.
A few miles away across the Rio Grande, the FBI determined that Chavez and Gomez were using lookouts to monitor the SENTRI Express Lane at the border. The lookouts identified "targets" -- people with regular commutes who primarily drove Ford vehicles. According to the FBI affidavit, the smugglers would follow their targets and get the vehicle identification number off the car's dashboard. Then a corrupt locksmith with access to Ford's vehicle database would make a duplicate key.
Keys in hand, the gang would put drugs in a car at night in Mexico and then pick up their shipment from the parked vehicle the next morning in Texas, authorities say.
This attack works because 1) there's a database of keys available to lots of people, and 2) both the SENTRI system and the victims are predictable.
Freakonomics asks: "Why has there been such a spike in hacking recently? Or is it merely a function of us paying closer attention and of institutions being more open about reporting security breaches?"
They posted five answers, including mine:
The apparent recent hacking epidemic is more a function of news reporting than an actual epidemic. Like shark attacks or school violence, natural fluctuations in data become press epidemics, as more reporters write about more events, and more people read about them. Just because the average person reads more articles about more events doesn’t mean that there are more events—just more articles.
Hacking for fun—like LulzSec—has been around for decades. It’s where hacking started, before criminals discovered the Internet in the 1990s. Criminal hacking for profit—like the Citibank hack—has been around for over a decade. International espionage existed for millennia before the Internet, and has never taken a holiday.
The past several months have brought us a string of newsworthy hacking incidents. First there was the hacking group Anonymous, and its hacktivism attacks as a response to the pressure to interdict contributions to Julian Assange's legal defense fund and the torture of Bradley Manning. Then there was the probably espionage-related attack against RSA, Inc. and its authentication token—made more newsworthy because of the bungling of the disclosure by the company—and the subsequent attack against Lockheed Martin. And finally, there were the very public attacks against Sony, which became the company to attack simply because everyone else was attacking it, and the public hacktivism by LulzSec.
None of this is new. None of this is unprecedented. To a security professional, most of it isn’t even interesting. And while national intelligence organizations and some criminal groups are organized, hacker groups like Anonymous and LulzSec are much more informal. Despite the impression we get from movies, there is no organization. There’s no membership, there are no dues, there is no initiation. It’s just a bunch of guys. You too can join Anonymous—just hack something, and claim you’re a member. That’s probably what the members of Anonymous arrested in Turkey were: 32 people who just decided to use that name.
It’s not that things are getting worse; it’s that things were always this bad. To a lot of security professionals, the value of some of these groups is to graphically illustrate what we’ve been saying for years: organizations need to beef up their security against a wide variety of threats. But the recent news epidemic also illustrates how safe the Internet is. Because news articles are the only contact most of us have had with any of these attacks.
This is interesting:
As we work to protect our users and their information, we sometimes discover unusual patterns of activity. Recently, we found some unusual search traffic while performing routine maintenance on one of our data centers. After collaborating with security engineers at several companies that were sending this modified traffic, we determined that the computers exhibiting this behavior were infected with a particular strain of malicious software, or “malware.” As a result of this discovery, today some people will see a prominent notification at the top of their Google web search results....
There's a lot that Google sees as a result of its unique and prominent position in the Internet. Some of it is going to be stuff they never considered. And while they use a lot of it to make money, it's good of them to give this one back to the Internet users.
The police arrested sixteen suspected members of the Anonymous hacker group.
Whatever you may think of their politics, the group committed crimes and their members should be arrested and prosecuted. I just hope we don't get a media flurry about how they were some sort of cyber super criminals. Near as I can tell, they were just garden-variety hackers who were lucky and caught a media wave.
EDITED TO ADD (7/19): I understand that the particular people arrested are innocent until proven guilty -- hence my use of the word "suspected" in the first sentence -- but there doesn't seem any question that members of the group claimed credit for criminal cyber attacks. I suppose I could have said "the group allegedly committed crimes," but that seemed overly cautious.
And yes, I agree that calling them a "group" is probably giving them more organizational credit than they have.
EDITED TO ADD (7/25): Last December, Richard Stallman wrote about the Anonymous group and their actions as a form of protest.
EDITED TO ADD (8/12): Department of Justice press release on the arrests.
This is really clever:
Many anticensorship systems work by making an encrypted connection (called a “tunnel”) from the user's computer to a trusted proxy server located outside the censor's network. This server relays requests to censored websites and returns the responses to the user over the encrypted tunnel. This approach leads to a cat-and-mouse game, where the censor attempts to discover and block the proxy servers. Users need to learn the address and login information for a proxy server somehow, and it's very difficult to broadcast this information to a large number of users without the censor also learning it.
Telex turns this approach on its head to create what is essentially a proxy server without an IP address. In fact, users don't need to know any secrets to connect. The user installs a Telex client app (perhaps by downloading it from an intermittently available website or by making a copy from a friend). When the user wants to visit a blacklisted site, the client establishes an encrypted HTTPS connection to a non-blacklisted web server outside the censor’s network, which could be a normal site that the user regularly visits. Since the connection looks normal, the censor allows it, but this connection is only a decoy.
The client secretly marks the connection as a Telex request by inserting a cryptographic tag into the headers. We construct this tag using a mechanism called public-key steganography. This means anyone can tag a connection using only publicly available information, but only the Telex service (using a private key) can recognize that a connection has been tagged.
As the connection travels over the Internet en route to the non-blacklisted site, it passes through routers at various ISPs in the core of the network. We envision that some of these ISPs would deploy equipment we call Telex stations. These devices hold a private key that lets them recognize tagged connections from Telex clients and decrypt these HTTPS connections. The stations then divert the connections to anticensorship services, such as proxy servers or Tor entry points, which clients can use to access blocked sites. This creates an encrypted tunnel between the Telex user and Telex station at the ISP, redirecting connections to any site on the Internet.
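The public-key steganography at the heart of the tagging step can be demonstrated with a toy Diffie-Hellman construction: anyone can mint a tag from the station's public key, but only the station's private key can recognize one. This is a sketch of the concept only; the group parameters below are deliberately toy-sized for illustration, and the real Telex design uses elliptic-curve cryptography with a carefully specified tag format.

```python
# Toy public-key steganography tag, Telex-style: tagging needs only the
# station's public key; recognition needs the private key.
# The prime and generator are for illustration, NOT for real use.
import hashlib
import secrets

P = 2**127 - 1   # a Mersenne prime; far too weak for production
G = 5

def make_station_keypair():
    k = secrets.randbelow(P - 2) + 1   # station private key
    return k, pow(G, k, P)             # (private, public)

def tag_connection(station_pub: int):
    """Client side: embed (g^r, H(pub^r)) in the handshake headers."""
    r = secrets.randbelow(P - 2) + 1
    ephemeral = pow(G, r, P)
    tag = hashlib.sha256(str(pow(station_pub, r, P)).encode()).digest()
    return ephemeral, tag

def recognizes(station_priv: int, ephemeral: int, tag: bytes) -> bool:
    """Station side: recompute g^(rk) from the ephemeral value and compare."""
    expected = hashlib.sha256(
        str(pow(ephemeral, station_priv, P)).encode()).digest()
    return secrets.compare_digest(expected, tag)
```

To the censor, the ephemeral value and tag are indistinguishable from the random nonces already present in an HTTPS handshake, which is what lets the decoy connection pass inspection.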
EDITED TO ADD (8/1): Another article.
EDITED TO ADD (8/13): Another article.
Ross Anderson discusses the technical and policy details.
EDITED TO ADD (7/18): Yet again, my preoccupation with my book is making it harder for me to write timely and lengthy blog posts. So I thank Ross for writing about this issue, so I don't have to.
You can now get a Master of Science in Strategic Studies in Weapons of Mass Destruction. Well, maybe you can't:
"It's not going to be open enrollment (or) traditional students," Giever said. "You worry about whether you might be teaching the wrong person this stuff."
At first, the FBI will select students from within its ranks, though Giever wants to open it to other law enforcement agencies. Rather than traditional tuition, agencies will contract with the school, paying about $300,000 a year for groups of 15 to 20 full-time students, according to documents submitted to the board of governors of the State System of Higher Education.
Thank you for all your comments and suggestions regarding my next book title. It will be:
Liars and Outliers:
How Security Holds Society Together
We're still deciding on a cover, but it won't be any of the five from the above link. Vaguely ominous crowd scenes are not what I want.
This creates far more security risks than it solves:
The city council in Cedar Falls, Iowa has absolutely crossed the line. They voted 6-1 in favor of expanding the use of lock boxes on commercial property. Property owners would be forced to place the keys to their businesses in boxes outside their doors so that firefighters, in that one-in-a-million chance, would have easy access to get inside.
We in the computer security world have been here before, over ten years ago.
After analyzing reams of publicly available data on casualties from Iraq, Afghanistan, Pakistan and decades of terrorist attacks, the scientists conclude that "insurgents pretty much seemed to be following a progress curve—or a learning curve—that's very common in the manufacturing literature," says physicist Neil Johnson of the University of Miami in Florida and lead author of the study.
The whole article is interesting, but here's just one bit:
The favoured quick-fix money-making exercise of the average Irish organised crime gang had, for decades, been bank robberies. But a massive investment by banks in branch security has made the traditional armed hold-up raids increasingly difficult.
The presence of CCTV cameras in most banks means any raider would need to be masked to avoid being identified. But security measures at the entrances to many branches, where customers are admitted by staff operating a buzzer, mean masked men now cannot even get through the door.
By the middle of the last decade, cash-in-transit vans delivering money to ATMs were identified by gangs as the weak link in the banks’ operations. This gave rise to a huge number of armed hold-ups on the vans.
However, in recent years the cash-in-transit companies have followed the example of the banks and invested heavily in security technology. Most vans carrying money are now heavily protected by timing devices on safes in the back of the vans, with staff having access to only limited amounts of cash at specific times to facilitate their deliveries.
These security measures have led to a steady decline in robberies on such vans in the past five years.
But having turned from bank robberies to armed hold-ups on cash vans, organised crime gangs have once again changed tack and are now engaging in robberies with hostage-taking.
Known as “tiger raids”, the robberies involve an organised crime gang kidnapping a family member or loved one of a person who has access to cash because of their work in a bank or post office.
Family members are normally taken away at gunpoint, threatened with being shot, and/or held until the bank or post-office worker goes to their work place, takes a ransom sum and leaves it for the gang at a prearranged drop-off point.
The Garda has worked closely with the main banks in agreeing protocols for such incidents. The main element of that agreement is that banks will not let money leave a branch, no matter how serious the hostage situation, until gardaí have been notified. A reaction operation can then be put in place to try and catch the gang as they collect the ransom.
These protocols have been relatively successful and seem to be deterring tiger raids targeting bank workers.
However, gangs are now increasingly targeting post offices in the belief that security protocols and equipment such as safes are not as robust as in the banking sector.
Most of the tiger raids now occurring are targeting post-office staff, usually in rural areas.
The latest raid occurred just last week, when more than €100,000 was taken from a post office in Newcastle West, Co Limerick, when the post mistress’s adult son was kidnapped at gunpoint and released unharmed when the ransom was paid.
Al Qaeda played all out, spent all its assets in a few years. In my dumb-ass 2005 article, I called the Al Qaeda method "real war" and the IRA's slow-perc campaign "nerf war." That was ignorance talking, boyish war-loving ignorance. I wanted more action, that was all. I saw what an easy target the London transport system made for a few amateur Al Qaeda recruits and just thought that since the IRA had several long-term sleeper teams in place in London, they could have wreaked a million times more havoc. Which was true, they could've. But could've and should've are different things, and a guerrilla group that goes all-out, does everything it can, is doomed.
That's amazing; I've never heard of anything like that. It shows how far they'd come by that stage, away from the simple Al Qaeda maximum-blood crap I bought into in that earlier article. In contemporary urban guerrilla warfare, at least in Western Europe, killing civvies is counterproductive. What you want to do, what the IRA had mastered by the 1990s, was messing with the incredibly fragile and expensive networks that keep a huge city going. Interrupt them and you cost the enemy billions of dollars, and they don't even have any gory corpses to shake in your faces. Fucking brilliant, and I was too dumb to see it!
It's hard for an American to get your head around any of this, but the point, and it's very "counter-intuitive" as they say, is that Al Qaeda did everything wrong, spending all their assets and going for maximum kill, and the IRA, the poster-boy for long, slow, crock-pot guerrilla warfare, did it exactly right.
Last week, I got a bunch of press calls about Olajide Oluwaseun Noibi, who flew from New York to Los Angeles using an expired ticket in someone else's name and a university ID. They all wanted to know what this says about airport security.
It says that airport security isn't perfect, and that people make mistakes. But it's not something that anyone should worry about. It's not like Noibi figured out a new hole in the airport security system, one that he was able to exploit repeatedly. He got lucky. He got real lucky. It's not something a terrorist can build a plot around.
I'm even less concerned because I've never thought the photo ID check had any value. Noibi was screened, just like any other passenger. Even the TSA blog makes this point:
In this case, TSA did not properly authenticate the passenger's documentation. That said, it's important to note that this individual received the same thorough physical screening as other passengers, including being screened by advanced imaging technology (body scanner).
Seems like the TSA is regularly downplaying the value of the photo ID check. This is from a Q&A about Secure Flight, their new system to match passengers with watch lists:
Q: This particular "layer" isn't terribly effective. If this "layer" of security can be circumvented by anyone with a printer and a word processor, this doesn't seem to be a terribly useful "layer" ... especially looking at the amount of money being expended on this particular "layer". It might be that this money could be more effectively spent on other "layers".
A: TSA uses layers of security to ensure the security of the traveling public and the Nation's transportation system. Secure Flight's watchlist name matching constitutes only one security layer of the many in place to protect aviation. Others include intelligence gathering and analysis, airport checkpoints, random canine team searches at airports, federal air marshals, federal flight deck officers and more security measures both visible and invisible to the public.
Each one of these layers alone is capable of stopping a terrorist attack. In combination their security value is multiplied, creating a much stronger, formidable system. A terrorist who has to overcome multiple security layers in order to carry out an attack is more likely to be pre-empted, deterred, or to fail during the attempt.
Yes, the answer says that they need to spend millions to ensure that terrorists with a viable plot also need a computer, but you can tell that their heart wasn't in the answer. "Checkpoints! Dogs! Air marshals! Ignore the stupid photo ID requirement."
Noibi is an embarrassment for the TSA and for the airline Virgin America, both of which were supposed to catch this kind of thing. But I'm not worried about the security risk, and neither is the TSA.
There's a new version:
The latest TDL-4 version of the rootkit, which is used as a persistent backdoor to install other types of malware, infected 4.52 million machines in the first three months of this year, according to a detailed technical analysis published Wednesday by antivirus firm Kaspersky Lab. Almost a third of the compromised machines were located in the United States. With successful attacks on US-based PCs fetching premium fees, those behind the infections likely earned $250,000 on that demographic alone.
TDL-4 is endowed with an array of improvements over TDL-3 and previous versions of the rootkit, which is also known as Alureon or just TDL. As previously reported, it is now able to infect 64-bit versions of Windows by bypassing the OS's kernel mode code signing policy, which was designed to allow drivers to be installed only when they have been digitally signed by a trusted source. Its ability to create ad-hoc DHCP servers on networks also gives the latest version new propagation powers.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Security.