November 2012 Archives
I've been thinking a lot about how information technology, and the Internet in particular, is becoming a tool for oppressive governments. As Evgeny Morozov describes in his great book The Net Delusion: The Dark Side of Internet Freedom, repressive regimes all over the world are using the Internet to more efficiently implement surveillance, censorship, and propaganda. And they're getting really good at it.
For a lot of us who imagined that the Internet would spark an inevitable wave of Internet freedom, this has come as a bit of a surprise. But it turns out that information technology is not just a tool for freedom-fighting rebels under oppressive governments, it's also a tool for those oppressive governments. Basically, IT magnifies power; the more power you have, the more IT magnifies it.
I think we got this wrong -- anyone remember John Perry Barlow's 1996 manifesto? -- because, like most technologies, IT technologies are first used by the more agile individuals and groups outside the formal power structures. In the same way criminals can make use of a technological innovation faster than the police can, dissidents in countries all over the world were able to make use of Internet technologies faster than governments could. Unfortunately, and inevitably, governments have caught up.
This is the "security gap" I talk about in the closing chapters of Liars and Outliers.
I thought about all these things as I read this article on how the Syrian government hacked into the computers of dissidents:
The cyberwar in Syria began with a feint. On Feb. 8, 2011, just as the Arab Spring was reaching a crescendo, the government in Damascus suddenly reversed a long-standing ban on websites such as Facebook, Twitter, YouTube, and the Arabic version of Wikipedia. It was an odd move for a regime known for heavy-handed censorship; before the uprising, police regularly arrested bloggers and raided Internet cafes. And it came at an odd time. Less than a month earlier demonstrators in Tunisia, organizing themselves using social networking services, forced their president to flee the country after 23 years in office. Protesters in Egypt used the same tools to stage protests that ultimately led to the end of Hosni Mubarak's 30-year rule. The outgoing regimes in both countries deployed riot police and thugs and tried desperately to block the websites and accounts affiliated with the revolutionaries. For a time, Egypt turned off the Internet altogether.
Syria, however, seemed to be taking the opposite tack. Just as protesters were casting about for the means with which to organize and broadcast their messages, the government appeared to be handing them the keys.
The first documented attack in the Syrian cyberwar took place in early May 2011, some two months after the start of the uprising. It was a clumsy one. Users who tried to access Facebook in Syria were presented with a fake security certificate that triggered a warning on most browsers. People who ignored it and logged in would be giving up their user name and password, and with them, their private messages and contacts.
I dislike this being called a "cyberwar," but that's my only complaint with the article.
There are no easy solutions here, especially because technologies that defend against one of those three things -- surveillance, censorship, and propaganda -- often make one of the others easier. But this is an important problem to solve if we want the Internet to be a vehicle of freedom and not control.
EDITED TO ADD (12/13): This is a good 90-minute talk about how governments have tried to block Tor.
Cash traps and card traps are the new thing:
[Card traps] involve devices that fit over the card acceptance slot and include a razor-edged spring trap that prevents the customer’s card from being ejected from the ATM when the transaction is completed.
"Spring traps are still being widely used," EAST wrote in its most recent European Fraud Update. "Once the card has been inserted, these prevent the card being returned to the customer and also stop the ATM from retracting it. According to reports from one country, despite warning messages that appear on the ATM screen or are displayed on the ATM fascia, customers are still not reporting when their cards are captured, leading to substantial losses from ATM or point-of-sale transactions."
More descriptions, and photos of the devices, in the article.
Remember that Ohio was not the deciding state in the election. Nor was it Florida or Virginia. It was Colorado. So even if there were this magic election-stealing software running in Ohio, it wouldn't have made any difference.
For my part, I'd like a little -- you know -- evidence.
I've been reading lots of articles discussing how little e-mail and Internet privacy we actually have in the U.S. This is a good one to start with:
The FBI obliged -- apparently obtaining subpoenas for Internet Protocol logs, which allowed them to connect the sender’s anonymous Google Mail account to others accessed from the same computers, accounts that belonged to Petraeus biographer Paula Broadwell. The bureau could then subpoena guest records from hotels, tracking the WiFi networks, and confirm that they matched Broadwell’s travel history. None of this would have required judicial approval -- let alone a Fourth Amendment search warrant based on probable cause.
While we don't know the investigators’ other methods, the FBI has an impressive arsenal of tools to track Broadwell’s digital footprints -- all without a warrant. On a mere showing of "relevance," they can obtain a court order for cell phone location records, providing a detailed history of her movements, as well as all people she called. Little wonder that law enforcement requests to cell providers have exploded -- with a staggering 1.3 million demands for user data just last year, according to major carriers.
An order under this same weak standard could reveal all her e-mail correspondents and Web surfing activity. With the rapid decline of data storage costs, an ever larger treasure trove is routinely retained for ever longer time periods by phone and Internet companies.
Had the FBI chosen to pursue this investigation as a counterintelligence inquiry rather than a cyberstalking case, much of that data could have been obtained without even a subpoena. National Security Letters, secret tools for obtaining sensitive financial and telecommunications records, require only the say-so of an FBI field office chief.
While the details of this investigation that have leaked thus far provide us all a fascinating glimpse into the usually sensitive methods used by FBI agents, this should also serve as a warning, by demonstrating the extent to which the government can pierce the veil of communications anonymity without ever having to obtain a search warrant or other court order from a neutral judge.
The guest lists from hotels, IP login records, as well as the creative request to email providers for "information about other accounts that have logged in from this IP address" are all forms of data that the government can obtain with a subpoena. There is no independent review, no check against abuse, and further, the target of the subpoena will often never learn that the government obtained data (unless charges are filed, or, as in this particular case, government officials eagerly leak details of the investigation to the press). Unfortunately, our existing surveillance laws really only protect the "what" being communicated; the government's powers to determine "who" communicated remain largely unchecked.
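That "other accounts that have logged in from this IP address" request is, mechanically, just a join over retained login logs. A toy sketch of the correlation (every account name and address below is invented for illustration):

```python
from collections import defaultdict

# Toy login log: (account, source_ip) pairs, as a provider might retain them.
# All names and addresses are invented for illustration.
login_log = [
    ("anon_tipster", "203.0.113.10"),
    ("anon_tipster", "198.51.100.7"),
    ("jane_doe",     "203.0.113.10"),   # same network as the anonymous account
    ("jane_doe",     "192.0.2.44"),
    ("unrelated",    "198.18.0.1"),
]

def accounts_sharing_ips(log, target):
    """Return the other accounts seen logging in from any IP the target used."""
    by_ip = defaultdict(set)
    for account, ip in log:
        by_ip[ip].add(account)
    target_ips = {ip for account, ip in log if account == target}
    linked = set()
    for ip in target_ips:
        linked |= by_ip[ip]
    linked.discard(target)
    return linked

print(accounts_sharing_ips(login_log, "anon_tipster"))  # {'jane_doe'}
```

One shared hotel Wi-Fi login is enough to link an "anonymous" account to a real one, which is why this kind of request is so powerful despite requiring only a subpoena.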
This is good, too.
The EFF tries to explain the relevant laws. Summary: they're confusing, and they don't protect us very much.
My favorite quote is from the New York Times:
Marc Rotenberg, executive director of the Electronic Privacy Information Center in Washington, said the chain of unexpected disclosures was not unusual in computer-centric cases.
"It's a particular problem with cyberinvestigations -- they rapidly become open-ended because there’s such a huge quantity of information available and it’s so easily searchable," he said, adding, "If the C.I.A. director can get caught, it’s pretty much open season on everyone else."
And a day later:
"If the director of central intelligence isn't able to successfully keep his emails private, what chance do I have?" said Kurt Opsahl, a senior staff attorney at the Electronic Frontier Foundation, a digital-liberties advocacy group.
In more words:
But there's another, more important lesson to be gleaned from this tale of a biographer run amok. Broadwell's debacle confirms something that some privacy experts have been warning about for years: Government surveillance of ordinary citizens is now cheaper and easier than ever before. Without needing to go before a judge, the government can gather vast amounts of information about us with minimal expenditure of manpower. We used to be able to count on a certain amount of privacy protection simply because invading our privacy was hard work. That is no longer the case. Our always-on, Internet-connected, cellphone-enabled lives are an open door to Big Brother.
Remember that this problem is bigger than Petraeus. The FBI goes after electronic records all the time:
In Google’s semi-annual transparency report released Tuesday, the company stated that it received 20,938 requests from governments around the world for its users’ private data in the first six months of 2012. Nearly 8,000 of those requests came from the U.S. government, and 7,172 of them were fulfilled to some degree, an increase of 26% from the prior six months, according to Google’s stats.
So what's the answer? Would they have been safe if they'd used Tor or a regular old VPN? Silent Circle? Something else? This article attempts to give advice; this is the article's most important caveat:
DON'T MESS UP It is hard to pull off one of these steps, let alone all of them all the time. It takes just one mistake -- forgetting to use Tor, leaving your encryption keys where someone can find them, connecting to an airport Wi-Fi just once -- to ruin you.
"Robust tools for privacy and anonymity exist, but they are not integrated in a way that makes them easy to use," Mr. Blaze warned. "We've all made the mistake of accidentally hitting 'Reply All.' Well, if you're trying to hide your e-mails or account or I.P. address, there are a thousand other mistakes you can make."
In the end, Mr. Kaminsky noted, if the F.B.I. is after your e-mails, it will find a way to read them. In that case, any attempt to stand in its way may just lull you into a false sense of security.
Some people think that if something is difficult to do, "it has security benefits, but that’s all fake -- everything is logged," said Mr. Kaminsky. "The reality is if you don't want something to show up on the front page of The New York Times, then don't say it."
The real answer is to rein in the FBI, of course:
If we don't take steps to rein in the burgeoning surveillance state now, there’s no guarantee we'll even be aware of the ways in which control is exercised through this information architecture. We will all remain exposed but the extent of our exposure, and the potential damage done to democracy, is likely to remain invisible.
"Hopefully this [case] will be a wake-up call for Congress that the Stored Communications Act is old and busted," Mr. Fakhoury says.
I don't see any chance of that happening anytime soon.
EDITED TO ADD (12/12): E-mail security might not have mattered.
I noticed this in an article about how increased security and a general risk aversion is harming US diplomatic missions:
Barbara Bodine, who was the U.S. ambassador to Yemen during the Qaeda bombing of the U.S.S. Cole in 2000, told me she believes that much of the security American diplomats are forced to travel with is counterproductive. "There's this idea that if we just throw more security guys at the problem, it will go away," she said. "These huge convoys they force you to travel in, with a bristling personal security detail, give you the illusion of security, not real security. They just draw a lot of attention and make you a target. It's better to fly under the radar."
It's a good article overall.
Research into one VM stealing crypto keys from another VM running on the same hardware.
ABSTRACT: This paper details the construction of an access-driven side-channel attack by which a malicious virtual machine (VM) extracts fine-grained information from a victim VM running on the same physical computer. This attack is the first such attack demonstrated on a symmetric multiprocessing system virtualized using a modern VMM (Xen). Such systems are very common today, ranging from desktops that use virtualization to sandbox application or OS compromises, to clouds that co-locate the workloads of mutually distrustful customers. Constructing such a side-channel requires overcoming challenges including core migration, numerous sources of channel noise, and the difficulty of preempting the victim with sufficient frequency to extract fine-grained information from it. This paper addresses these challenges and demonstrates the attack in a lab setting by extracting an ElGamal decryption key from a victim using the most recent version of the libgcrypt cryptographic library.
This is idiotic:
Public Intelligence recently posted a Powerpoint presentation from the NYC fire department (FDNY) discussing the unique safety issues mobile food trucks present. Along with some actual concerns (many food trucks use propane and/or gasoline-powered generators to cook; some *gasp* aren't properly licensed food vendors), the presenter decided to toss in some DHS speculation on yet another way terrorists might be killing us in the near future.
The rest of the article explains why the DHS believes we should be terrified of food trucks. And then it says:
The DHS' unfocused "terrorvision" continues to see a threat in every situation and the department seems to be busying itself crafting a response to every conceivable "threat." The problem with this "method" is that it turns any slight variation of "everyday activity" into something suspicious. The number of "terrorist implications" grows exponentially while the number of solutions remains the same. This Powerpoint is another example of good, old-fashioned fear mongering, utilizing public servants to spread the message.
Someone needs to do something; the DHS is out of control.
I noticed this amongst the details of the Petraeus scandal:
Petraeus and Broadwell apparently used a trick, known to terrorists and teenagers alike, to conceal their email traffic, one of the law enforcement officials said.
Rather than transmitting emails to the other's inbox, they composed at least some messages and instead of transmitting them, left them in a draft folder or in an electronic "dropbox," the official said. Then the other person could log onto the same account and read the draft emails there. This avoids creating an email trail that is easier to trace.
I remember that the 9/11 terrorists did this.
At least, that's the story:
The locks at the Tower of London, home to the Crown Jewels, had to be changed after a burglar broke in and stole keys.
The intruder scaled gates and took the keys from a sentry post.
Guards spotted him but couldn't give chase as they are not allowed to leave their posts.
But the story has been removed from the Mirror's website. This is the only other link I have. Anyone have any idea if this story is true or not?
EDITED TO ADD (11/14): According to this BBC article, keys for a restaurant, conference rooms, and an internal lock to the drawbridges were on the stolen key set, but the Crown Jewels were never at risk.
Dan Boneh of Stanford University is offering a free online cryptography course. The course runs for six weeks, and has five to seven hours of coursework per week. It just started last week.
EDITED TO ADD (11/14): A second part of the course will be starting on 21 January 2013.
Mother fairy wrens teach their chicks passwords while they're still in their eggs to tell them from cuckoo impostors:
She kept 15 nests under constant audio surveillance, and discovered that fairy-wrens call to their unhatched chicks, using a two-second trill with 19 separate elements to it. They call once every four minutes while sitting on their eggs, starting on the 9th day of incubation and carrying on for a week until the eggs hatch.
When Colombelli-Negrel recorded the chicks after they hatched, she heard that their begging call included a single unique note lifted from mum's incubation call. This note varies a lot between different fairy-wren broods. It's their version of a surname, a signature of identity that unites a family. The females even teach these calls to their partners, by using them in their own begging calls when the males return to the nest with food.
These signature calls aren't innate. The chicks' calls more precisely matched those of their mother if she sang more frequently while she was incubating. And when Colombelli-Negrel swapped some eggs between different clutches, she found that the chicks made signature calls that matched those of their foster parents rather than those of their biological ones. It's something they learn while still in their eggs.
It's worth noting that this is primarily of use to the chicks' parents, so they know not to expend time and energy on the impostor cuckoo chick. Cuckoo chicks, as part of their evolutionary adaptation, kick the real chicks out of the nest, so they're lost in any case. It's the fact that the signal allows the parents to identify impostors and start a new brood that's of evolutionary advantage.
This article makes the important argument that encryption -- where the user and not the cloud provider holds the keys -- is critical to protect cloud data. The problem is, it upsets cloud providers' business models:
In part it is because encryption with customer controlled keys is inconsistent with portions of their business model. This architecture limits a cloud provider's ability to data mine or otherwise exploit the users' data. If a provider does not have access to the keys, they lose access to the data for their own use. While a cloud provider may agree to keep the data confidential (i.e., they won't show it to anyone else) that promise does not prevent their own use of the data to improve search results or deliver ads. Of course, this kind of access to the data has huge value to some cloud providers and they believe that data access in exchange for providing below-cost cloud services is a fair trade.
Also, providing onsite encryption at rest options might require some providers to significantly modify their existing software systems, which could require a substantial capital investment.
That second reason is actually very important, too. A lot of cloud providers don't just store client data, they do things with that data. If the user encrypts the data, it's an opaque blob to the cloud provider -- and a lot of cloud services would be impossible.
Burger King introduces a black burger with ketchup that includes squid ink. Only in Japan, of course.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
From the Department of Homeland Security, a handy list of 19 suspicious behaviors that could indicate that a hotel guest is actually a terrorist.
I myself have done several of these.
More generally, this is another example of why all the "see something say something" campaigns fail: "If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security."
Interesting research from RAND:
Abstract: How do terrorist groups end? The evidence since 1968 indicates that terrorist groups rarely cease to exist as a result of winning or losing a military campaign. Rather, most groups end because of operations carried out by local police or intelligence agencies or because they join the political process. This suggests that the United States should pursue a counterterrorism strategy against al Qa'ida that emphasizes policing and intelligence gathering rather than a "war on terrorism" approach that relies heavily on military force.
Good essay, making the point that cyberattack and counterattack aren't very useful -- actual cyberdefense is what's wanted.
Creating a cyber-rock is cheap. Buying a cyber-rock is even cheaper since zero-day attacks exist on the open market for sale to the highest bidder. In fact, if the bad guy is willing to invest time rather than dollars and become an insider, cyber-rocks may in fact be free of charge, but that is a topic for another time.
Given these price tags, it is safe to assume that some nations have already developed a collection of cyber-rocks, and that many other nations will develop a handful of specialized cyber-rocks (e.g., as an extension of many-year-old regional conflicts). If we follow the advice of Hayden and Chabinsky, we may even distribute cyber-rocks to private corporations.
Obviously, active defense is folly if all it means is unleashing the cyber-rocks from inside of our glass houses since everyone can or will have cyber-rocks. Even worse, unlike very high explosives, or nuclear materials, or other easily trackable munitions (part of whose deterrence value lies in others knowing about them), no one will ever know just how many or what kind of cyber-rocks a particular group actually has.
Now that we have established that cyber-offense is relatively easy and can be accomplished on the cheap, we can see why reliance on offense alone is inadvisable. What are we going to do to stop cyberwar from starting in the first place? The good news is that war has both defensive and offensive aspects, and understanding this fundamental dynamic is central to understanding cyberwar and deterrence.
The kind of defense I advocate (called "passive defense" or "protection" above) involves security engineering -- building security in as we create our systems, knowing full well that they will be attacked in the future. One of the problems to overcome is that exploits are sexy and engineering is, well, not so sexy.
Here's a great concept: a micromort:
Shopping for coffee you would not ask for 0.00025 tons (unless you were naturally irritating), you would ask for 250 grams. In the same way, talking about a 1/125,000 or 0.000008 risk of death associated with a hang-gliding flight is rather awkward. With that in mind. Howard coined the term "microprobability" (μp) to refer to an event with a chance of 1 in 1 million and a 1 in 1 million chance of death he calls a "micromort" (μmt). We can now describe the risk of hang-gliding as 8 micromorts and you would have to drive around 3,000km in a car before accumulating a risk of 8 μmt, which helps compare these two remote risks.
There's a related term, microlife, for things that reduce your lifespan. A microlife is 30 minutes off your life expectancy. So smoking two cigarettes has a cost of one microlife.
Many popular applications, HTTP(S) and WebSocket transport libraries, and SOAP and REST Web-services middleware use SSL/TLS libraries incorrectly, breaking or disabling certificate validation. Their SSL and TLS connections are not authenticated, thus they -- and any software using them -- are completely insecure against a man-in-the-middle attacker.
Great research, and -- yes -- the vulnerability should be fixed, but it doesn't feel like a crisis issue.
This is the sort of thing I wrote about in my latest book.
The Prisoners Dilemma as outlined above can be seen in action in two variants within regulatory activities, and offers a clear insight into why those involved in regulation act as they do. The first relationship is that between the various people and organisations being regulated banks, nuclear power stations, council departments, police agencies, journalists, etc, and the clear lessons from history are that even for those organisations that are theoretically in competition with each other, it is beneficial to both/all sides in the long run to use mutual cooperation in order to maximise their personal benefit. Whether it was Virgin and British Airways forming an illegal cartel to fix the price of fuel surcharges (a benefit to themselves which was paid for in increased prices for passengers); football shirt retailers (and Manchester United) being fined £16m for fixing the price of replica football shirts, or Barclays (and undoubtedly other banks) working together to fix the LIBOR rate, the reason why they do it is simple and unanswerable -- it is in their benefit to do so.
However, when it comes down to the relationship between the regulators and those being regulated, then a completely different strategic dynamic comes into play. The ability of the regulated organisation to maximise personal benefit is then based on the ability to predict what the other side will do in response to the two options cooperate (play nicely) or betray (screw the customer). Given that in almost all cases the regulatory body has less funds, personnel, resources and expertise than the organisation it is regulating, then it becomes clear that there is little to be gained in the long run by cooperating / playing nicely, and much to be gained by ignoring the regulator and developing a strategy that focuses purely on maximising its own personal benefit. This is not an issue of 'right' or 'wrong,' but purely, in its own terms at least (maximisation of profit, increased market share, annual bonuses, career prospects), of whether it is 'effective' or 'ineffective.'
Is anyone out there interested in buying a pile of copies of my Liars and Outliers for a giveaway and book signing at the RSA Conference? I can guarantee enormous crowds at your booth for as long as there are books to give away. This could also work for an after-hours event.
Please let me know. I can get you a great bulk order price with my publisher.
These are often called SCADA vulnerabilities, although it isn't SCADA that's involved here. They're against programmable logic controllers (PLCs): the same industrial controllers that Stuxnet attacked.
EDITED TO ADD (11/13): More info.
I'd sure like to know more about this:
Government code-breakers are working on deciphering a message that has remained a secret for 70 years.
It was found on the remains of a carrier pigeon that was discovered in a chimney, in Surrey, having been there for decades.
It is thought the contents of the note, once decoded, could provide fresh information from World War II.
It was a British pigeon, presumed to have died while heading back to Bletchley Park.
ETA (11/6): And another.
I look forward to seeing the decryption.
I've written about it before, but not half as well as this story:
"That search was absolutely useless." I said. "And just shows how much of all of this is security theatre. You guys are just feeling up passengers for no good effect, which means that you get all the downsides of a search -- such as annoyed travellers who feel like they have had their privacy violated -- without any of the benefits. I could have hidden half a dozen items on my person that you wouldn't have had a snowball's chance in a supernova of finding. That's what I meant."
"Sir, are you hiding something?" he said, and as he did, I saw three other security guys coming our way. Oh dear.
"Of course not." I said. "But if I had wanted to, I could have."
"Why do you have such a problem with being searched?" another security guy said, presumably the first guy's supervisor.
"Look, I have absolutely no problem with being searched. But if you're going to do it, do it properly -- the plane is no safer at all after this gentleman half-heartedly stroked me for a couple of seconds" I said.
"How do you mean?" the supervisor asked.
"He was stroking me as if he was trying to get me to sleep with him, not as if he was trying to find anything on me." I said. "I've been searched many, many times, and in this case, I could have hidden things in my socks, taped to my thigh, taped to the small of my back, the insides of my upper arms, under my testicles or anywhere on my buttocks."
"Why have you been searched so many times?" the supervisor asked sharply.
"I'm a police officer. I help train other police officers. When we search someone, we assume that the person who searches us may have a knife or something else they can use to harm us, so we search properly. And yes, this means that you have to take a firm grip of somebody's groin, yes, this means that you search even the parts that are less comfortable to have searched, and yes, this means that you're probably going to incur a couple of sexual harassment accusations along the way." I nodded at the security guard who had searched me. "This fellow here did by far the most useless search I have ever been subjected to, and if I wanted to, I could have smuggled half a dozen knives onto the flight. I don't have a problem with being searched at all -- in fact, if you guys think it's necessary, I'd be the first to admit that I look a little bit suspicious before I've had my first cup of coffee in the morning -- but if you're going to stroke me gently in front of hundreds of people, you'd better buy me a fucking drink first, is all I am saying."
The security supervisor was standing there, frozen at my rant.
Really nice profile in the New York Times. It includes a discussion of the Clean Slate program:
Run by Dr. Howard Shrobe, an M.I.T. computer scientist who is now a Darpa program manager, the effort began with a premise: If the computer industry got a do-over, what should it do differently?
The program includes two separate but related efforts: Crash, for Clean-Slate Design of Resilient Adaptive Secure Hosts; and MRC, for Mission-Oriented Resilient Clouds. The idea is to reconsider computing entirely, from the silicon wafers on which circuits are etched to the application programs run by users, as well as services that are placing more private and personal data in remote data centers.
Clean Slate is financing research to explore how to design computer systems that are less vulnerable to computer intruders and recover more readily once securityis breached.
Photo of Bruce Schneier by Per Ervland.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc..