Schneier on Security
A blog covering security and security technology.
December 2012 Archives
After the Instagram debacle, where it changed its terms of service to give itself greater rights over user photos and reversed itself after a user backlash, it's worth thinking about the security threat stemming from terms of service in general.
As cloud computing becomes the norm, as Internet security becomes more feudal, these terms of service agreements define what our service providers can do, both with the data we post and with the information they gather about how we use their service. The agreements are very one-sided -- most of the time, we're not even paying customers of these providers -- and can change without warning. And, of course, none of us ever read them.
With respect to Public User Content, you hereby do and shall grant to Prezi (and its successors, assigns, and third party service providers) a worldwide, non-exclusive, perpetual, irrevocable, royalty-free, fully paid, sublicensable, and transferable license to use, reproduce, modify, create derivative works from, distribute, publicly display, publicly perform, and otherwise exploit the content on and in connection with the manufacture, sale, promotion, marketing and distribution of products sold on, or in association with, the Service, or for purposes of providing you with the Service and promoting the same, in any medium and by any means currently existing or yet to be devised.
I don't mean to pick on Prezi; it's just an example. How many other of these Trojan horses are hiding in commonly used cloud provider agreements: both from providers that companies decide to use as a matter of policy, and providers that company employees use in violation of policy, for reasons of convenience?
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
From "The Insider's TSA Dictionary":
Bruce Schneiered: (V, ints) When a passenger uses logic in order to confound and perplex an officer into submission. Ex: "A TSA officer took my Swiss army knife, but let my scissors go. I then asked him wouldn't it be more dangerous if I were to make my scissors into two blades, or to go into the bathroom on the secure side and sharpen my grandmother's walking stick with one of the scissor blades into a terror spear. Then after I pointed out that all of our bodies contain a lot more than 3.4 ounces of liquids, the TSA guy got all pissed and asked me if I wanted to fly today. I totally Schneirered [sic] his ass."
Snitching has become so commonplace that in the past five years at least 48,895 federal convicts -- one of every eight -- had their prison sentences reduced in exchange for helping government investigators, a USA TODAY examination of hundreds of thousands of court cases found. The deals can chop a decade or more off of their sentences.
There are all sorts of complexities and unintended consequences in this system. This is just a small part of it:
The risks are obvious. If the government rewards paid-for information, wealthy defendants could potentially buy early freedom. Because such a system further muddies the question of how informants -- already widely viewed as untrustworthy -- know what they claim to know, "individual cases can be undermined and the system itself is compromised," U.S. Justice Department lawyers said in a 2010 court filing.
Plea bargaining is illegal in many countries precisely because of the perverse incentives it sets up. I talk about this more in Liars and Outliers.
Elcomsoft Forensic Disk Decryptor acquires the necessary decryption keys by analyzing memory dumps and/or hibernation files obtained from the target PC. You'll thus need to get a memory dump from a running PC (locked or unlocked) with encrypted volumes mounted, via a standard forensic product or via a FireWire attack. Alternatively, decryption keys can also be derived from hibernation files if a target PC is turned off.
This isn't new. I wrote about AccessData doing the same thing in 2007:
Even so, none of this might actually matter. AccessData sells another program, Forensic Toolkit, that, among other things, scans a hard drive for every printable character string. It looks in documents, in the Registry, in e-mail, in swap files, in deleted space on the hard drive ... everywhere. And it creates a dictionary from that, and feeds it into PRTK.
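The string-harvesting step can be sketched in a few lines. This is a minimal illustration, not AccessData's actual code: `harvest_strings` is a hypothetical name, and a real tool walks the Registry, swap, and unallocated space rather than a single byte buffer.

```python
import string

# Characters we treat as "printable" for dictionary purposes.
PRINTABLE = set(string.ascii_letters + string.digits + string.punctuation)

def harvest_strings(data: bytes, min_len: int = 6) -> set[str]:
    """Collect every run of at least min_len printable characters --
    candidate passphrases for a dictionary attack against a
    password-based encryption key."""
    words = set()
    run = []
    for b in data:
        ch = chr(b)
        if ch in PRINTABLE:
            run.append(ch)
        else:
            if len(run) >= min_len:
                words.add("".join(run))
            run = []
    if len(run) >= min_len:  # don't drop a run that ends at EOF
        words.add("".join(run))
    return words
```

The point of the attack is that people reuse passphrases, so any string that ever touched the disk -- in a document, an e-mail, or swap -- is a far better guess than brute force.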
It's getting harder and harder to maintain good file security.
In Liars and Outliers, I talk a lot about the more social forms of security. One of them is reputational. This post is about that squishy sociological security measure: public shaming as a way to punish bigotry (and, by extension, to reduce the incidence of bigotry).
It's a pretty rambling post, first listing some of the public shaming sites, then trying to figure out whether they're a good idea or not, and finally coming to the conclusion that shaming doesn't do very much good and -- in many cases -- unjustly rewards the shamer.
I disagree with a lot of this. I do agree with:
I do think that shame has a role in the way we control our social norms. Shame is a powerful tool, and it's something that we use to keep our own actions in check all the time. The source of that shame varies immensely. Maybe we are shamed before God, or our parents, or our boss.
But I disagree with the author's insistence that "shame, ultimately, has to come from ourselves. We cannot be forced to feel shame." While technically it's true, operationally it's not. Shame comes from others' reactions to our actions. Yes, we feel it inside -- but it originates from our lifelong inculcation into the norms of our social group. And throughout the history of our species, social groups have used shame to effectively punish those who violate social norms. No one wants a bad reputation.
It's also true that we all have defenses against shame. One of them is to have an alternate social group for whom the shameful behavior is not shameful at all. Another is to simply not care what the group thinks. But none of this makes shame a less valuable tool of societal pressure.
Like all forms of security that society uses to control its members, shame is both useful and valuable. And I'm sure it is effective against bigotry. It might not be obvious how to deploy it effectively in the international and sometimes anonymous world of the Internet, but that's another discussion entirely.
Finally, Cryptography Engineering is available as an ebook. Even better, it's today's deal of the day at O'Reilly: $27.50 (50% off) and no copy protection. (The discount won't show until you add the book to your cart.)
Industrial control system comes with a backdoor:
Although the system was password protected in general, the backdoor through the IP address apparently required no password and allowed direct access to the control system. "[Th]e published backdoor URL provided the same level of access to the company's control system as the password-protected administrator login," said the memo.
The security of this backdoor is secrecy. Of course, that never lasts:
Hackers broke into the industrial control system of a New Jersey air conditioning company earlier this year, using a backdoor vulnerability in the system, according to an FBI memo made public this week.
Cyclosa spiders create decoys to fool predators.
Interesting firsthand phishing story:
A few nights ago, I got a Twitter direct message (DM) from a friend saying that someone was saying nasty things about me, with a link. The link was a shortened (t.co) link, so it was hard to see exactly what it pointed to. I followed the link on my cell phone, and got to a website that certainly looked legit, and I was foolish enough to login. Pwnd. A few minutes later, my Twitter account was spewing tweetspam about the latest pseudo-scientific weight loss fad.
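One habit that would have caught this: after expanding the shortened link, check the destination host before typing a password. Even that check is easy to get wrong -- a naive substring match passes `twitter.com.evil.example`. Here is a minimal sketch of a stricter check (the function name and the idea of pre-checking are mine, not from the post; expanding the t.co redirect is a separate step):

```python
from urllib.parse import urlsplit

def is_expected_site(url: str, expected_domain: str) -> bool:
    """Return True only if the URL's host is expected_domain itself
    or a true subdomain of it. A lookalike host such as
    'twitter.com.evil.example' must not pass."""
    host = (urlsplit(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)
```

The `endswith("." + expected)` form is the important part: it anchors the match at a dot boundary, so neither `eviltwitter.com` nor `twitter.com.evil.example` slips through.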
The small San Francisco film and video company is celebrating its 17th anniversary.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Schools go into lockdown over a thermometer, a car backfiring, a bank robbery a few blocks away, a student alone in a gym, a neighbor on the street, and some vague unfounded rumors. And one high-school kid was arrested for drawing pictures of guns. Everywhere else, post-traumatic stupidity syndrome. (It's not a new phrase -- Google shows hits back to 2001 -- but it's new to me. It reminds me of this.) I think of it as: "Something must be done. This is something. Therefore, we must do it."
I'm not going to write about the Newtown school massacre. I wrote this earlier this year after the Aurora shooting, which was a rewrite of this about the 2007 Virginia Tech shootings. I feel as if I'm endlessly repeating myself. This essay, also from 2007, on the anti-terrorism "War on the Unexpected," is also relevant. Just remember, we're the safest we've been in 40 years.
Chris Cardinal discovered someone running such a scam on Amazon using his account: the scammer contacted Amazon pretending to be Chris, supplying his billing address (this is often easy to guess by digging into things like public phone books, credit reports, or domain registration records). Then the scammer secured the order numbers of items Chris recently bought on Amazon. In a separate transaction, the scammer reported that the items were never delivered and requested replacement items to be sent to a remailer/freight forwarder in Portland.
If you've used Amazon.com at all, you'll notice something very quickly: they require your password. For pretty much anything. Want to change an address? Password. Add a billing method? Password. Check your order history? Password. Amazon is essentially very secure as a web property. But as you can see from my chat transcript above, the CSR team falls like dominoes with just a few simple data points and a little bit of authoritative prying.
EDITED TO ADD (1/14): Comments from the original author of the piece.
The "Great Firewall of China" is now able to detect and block encryption:
A number of companies providing "virtual private network" (VPN) services to users in China say the new system is able to "learn, discover and block" the encrypted communications methods used by a number of different VPN systems.
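The article doesn't say how the firewall fingerprints VPN traffic. One heuristic often discussed for this kind of deep packet inspection -- offered here purely as an illustration, not as a description of the actual system -- is payload entropy: well-encrypted bytes look nearly uniform, while plaintext protocols do not.

```python
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte.
    Uniformly random (i.e. well-encrypted) data approaches 8.0;
    text and most plaintext protocols sit well below that."""
    counts = Counter(payload)
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(payload: bytes, threshold: float = 7.5) -> bool:
    """Crude DPI-style guess: flag sufficiently large, high-entropy
    payloads. The 7.5 threshold is an arbitrary illustration."""
    return len(payload) >= 256 and byte_entropy(payload) >= threshold
```

A classifier this crude also flags compressed files, which is part of why real systems reportedly combine such statistics with protocol handshake fingerprints and active probing.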
This is an interesting blog post:
Buried inside a recent United Nations Office on Drugs and Crime report titled Use of Internet for Terrorist Purposes one can carve out details and examples of law enforcement electronic surveillance techniques that are normally kept secret.
There's more at the above link. Here's the final report.
There's a new exploit against Samsung Galaxy phones that allows a rogue app access to all memory. A hacker could copy all of your data, erase all of your data, and basically brick your phone. I haven't found an official Samsung response, but there is a quick fix.
A Canadian claims that the message is based on a WWII codebook. A spokesman from GCHQ remains dubious, but says they'll be happy to look at the proposed solution.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Against Security: How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger, by Harvey Molotch, Princeton University Press, 278 pages, $35.
Security is both a feeling and a reality, and the two are different things. People can feel secure when they’re actually not, and they can be secure even when they believe otherwise.
This discord explains much of what passes for our national discourse on security policy. Security measures often are nothing more than security theater, making people feel safer without actually increasing their protection.
A lot of psychological research has tried to make sense out of security, fear, risk, and safety. But however fascinating the academic literature is, it often misses the broader social dynamics. New York University’s Harvey Molotch helpfully brings a sociologist’s perspective to the subject in his new book Against Security.
Molotch delves deeply into a few examples and uses them to derive general principles. He starts Against Security with a mundane topic: the security of public restrooms. It’s a setting he knows better than most, having authored Toilet: The Public Restroom and the Politics of Sharing (New York University Press) in 2010. It turns out the toilet is not a bad place to begin a discussion of the sociology of security.
People fear various things in public restrooms: crime, disease, embarrassment. Different cultures either ignore those fears or address them in culture-specific ways. Many public lavatories, for example, have no-touch flushing mechanisms, no-touch sinks, no-touch towel dispensers, and even no-touch doors, while some Japanese commodes play prerecorded sounds of water running, to better disguise the embarrassing tinkle.
Restrooms have also been places where, historically and in some locations, people could do drugs or engage in gay sex. Sen. Larry Craig (R-Idaho) was arrested in 2007 for soliciting sex in the bathroom at the Minneapolis-St. Paul International Airport, suggesting that such behavior is not a thing of the past. To combat these risks, the managers of some bathrooms—men’s rooms in American bus stations, in particular—have taken to removing the doors from the toilet stalls, forcing everyone to defecate in public to ensure that no one does anything untoward (or unsafe) behind closed doors.
Subsequent chapters discuss security in subways, at airports, and on airplanes; at Ground Zero in lower Manhattan; and after Hurricane Katrina in New Orleans. Each of these chapters is an interesting sociological discussion of both the feeling and reality of security, and all of them make for fascinating reading. Molotch has clearly done his homework, conducting interviews on the ground, asking questions designed to elicit surprising information.
Molotch demonstrates how complex and interdependent the factors that comprise security are. Sometimes we implement security measures against one threat, only to magnify another. He points out that more people have died in car crashes since 9/11 because they were afraid to fly—or because they didn’t want to deal with airport security—than died during the terrorist attacks. Or to take a more prosaic example, special “high-entry” subway turnstiles make it much harder for people to sneak in for a free ride but also make platform evacuations much slower in the case of an emergency.
The common thread in Against Security is that effective security comes less from the top down and more from the bottom up. Molotch’s subtitle telegraphs this conclusion: “How We Go Wrong at Airports, Subways, and Other Sites of Ambiguous Danger.” It’s the word ambiguous that’s important here. When we don’t know what sort of threats we want to defend against, it makes sense to give the people closest to whatever is happening the authority and the flexibility to do what is necessary. In many of Molotch’s anecdotes and examples, the authority figure—a subway train driver, a policeman—has to break existing rules to provide the security needed in a particular situation. Many security failures are exacerbated by a reflexive adherence to regulations.
Molotch is absolutely right to home in on this kind of individual initiative and resilience as a critical source of true security. Current U.S. security policy is overly focused on specific threats. We defend individual buildings and monuments. We defend airplanes against certain terrorist tactics: shoe bombs, liquid bombs, underwear bombs. These measures have limited value because the number of potential terrorist tactics and targets is much greater than the ones we have recently observed. Does it really make sense to spend a gazillion dollars just to force terrorists to switch tactics? Or drive to a different target? In the face of modern society’s ambiguous dangers, it is flexibility that makes security effective.
We get much more bang for our security dollar by not trying to guess what terrorists are going to do next. Investigation, intelligence, and emergency response are where we should be spending our money. That doesn’t mean mass surveillance of everyone or the entrapment of incompetent terrorist wannabes; it means tracking down leads—the sort of thing that caught the 2006 U.K. liquid bombers. They chose their tactic specifically to evade established airport security at the time, but they were arrested in their London apartments well before they got to the airport on the strength of other kinds of intelligence.
In his review of Against Security in Times Higher Education, aviation security expert Omar Malik takes issue with the book’s seeming trivialization of the airplane threat and Molotch’s failure to discuss terrorist tactics. “Nor does he touch on the multitude of objects and materials that can be turned into weapons,” Malik laments. But this is precisely the point. Our fears of terrorism are wildly out of proportion to the actual threat, and an analysis of various movie-plot threats does nothing to make us safer.
In addition to urging people to be more reasonable about potential threats, Molotch makes a strong case for optimism and kindness. Treating every air traveler as a potential terrorist and every Hurricane Katrina refugee as a potential looter is dehumanizing. Molotch argues that we do better as a society when we trust and respect people more. Yes, the occasional bad thing will happen, but 1) it happens less often, and is less damaging, than you probably think, and 2) individuals naturally organize to defend each other. This is what happened during the evacuation of the Twin Towers and in the aftermath of Katrina before official security took over. Those in charge often do a worse job than the common people on the ground.
While that message will please skeptics of authority, Molotch sees a role for government as well. In fact, many of his lessons are primarily aimed at government agencies, to help them design and implement more effective security systems. His final chapter is invaluable on that score, discussing how we should focus on nurturing the good in most people—by giving them the ability and freedom to self-organize in the event of a security disaster, for example—rather than focusing solely on the evil of the very few. It is a hopeful yet realistic message for an irrationally anxious time. Whether those government agencies will listen is another question entirely.
This review was originally published at reason.com.
How Internet censorship works in North Korea.
There's a rise in QR codes that point to fraudulent sites. One of the warning signs seems to be a sticker with the code, rather than a code embedded in an advertising poster.
This brings up another question: does anyone actually use these things?
Interesting development in forensic analysis:
Comparing the unique pattern of the frequencies on an audio recording with a database that has been logging these changes for 24 hours a day, 365 days a year provides a digital watermark: a date and time stamp on the recording.
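This is electrical network frequency (ENF) analysis: mains hum bleeds into recordings, and its tiny moment-to-moment drift acts as a timestamp when matched against a grid-frequency log. As a toy illustration -- function names are mine, and a real system tracks the frequency over time rather than taking a single estimate -- the Goertzel algorithm can measure which candidate hum frequency dominates a clip:

```python
import math

def goertzel_power(samples, sample_rate, freq):
    """Signal power at one target frequency (Goertzel algorithm)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / sample_rate)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def dominant_hum(samples, sample_rate, candidates=(49.9, 50.0, 50.1)):
    """Pick the candidate mains frequency with the most energy.
    An ENF system repeats this per time window and matches the
    resulting drift curve against the grid-frequency database."""
    return max(candidates, key=lambda f: goertzel_power(samples, sample_rate, f))
```

With a ten-second window, candidates 0.1 Hz apart are easily separated, which is the kind of resolution the drift-matching relies on.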
The EFF has been prying data out of the government and analyzing it.
This book is available as a free pdf download:
The National Cyber Security Framework Manual provides detailed background information and in-depth theoretical frameworks to help the reader understand the various facets of National Cyber Security, according to different levels of public policy formulation. The four levels of government -- political, strategic, operational and tactical/technical -- each have their own perspectives on National Cyber Security, and each is addressed in individual sections within the Manual. Additionally, the Manual gives examples of relevant institutions in National Cyber Security, from top-level policy coordination bodies down to cyber crisis management structures and similar institutions.
It's by the NATO Cooperative Cyber Defense Center of Excellence in Tallinn. A paper copy will be published in January.
Excellent article: "How to Shut Down Internets."
First, he describes what just happened in Syria. Then:
Egypt turned off the internet by using the Border Gateway Protocol trick, and also by switching off DNS. This has a similar effect to throwing bleach over a map. The location of every street and house in the country is blotted out. All the Egyptian ISPs were, and probably still are, government licensees. It took nothing but a short series of phone calls to effect the shutdown.
Cory Doctorow asks: "Why would a basket-case dictator even allow his citizenry to access the Internet in the first place?" and "Why not shut down the Internet the instant trouble breaks out?" The reason is that the Internet is a valuable tool for social control. Dictators can use the Internet for surveillance and propaganda as well as censorship, and they only resort to extreme censorship when the value of that outweighs the value of doing all three in some sort of totalitarian balance.
Yet another way two-factor authentication has been bypassed:
I have no idea if this is real. If I had to guess, I would say no.
Four squids on the cover of this week's Economist represent the four massive (and intrusive) data-driven Internet giants: Google, Facebook, Apple, and Amazon.
Interestingly, these are the same four companies I've been listing as the new corporate threat to the Internet.
The first of three pillars propping up this outside threat are big data collectors, which in addition to Apple and Google, Schneier identified as Amazon and Facebook. (Notice Microsoft didn't make the cut.) The goal of their data collection is for marketers to be able to make snap decisions about the product tastes, credit worthiness, and employment suitability of millions of people. Often, this information is fed into systems maintained by governments.
Notice that Microsoft didn't make the Economist's cut either.
I gave that talk at the RSA Conference in February of this year. The link in the article is from another conference the week before, where I test-drove the talk.
Not the sort of pairing I normally think of, but:
Robin Ince and Brian Cox are joined on stage by comedian Dave Gorman, author and Enigma Machine owner Simon Singh and Bletchley Park enthusiast Dr Sue Black as they discuss secret science, code-breaking and the extraordinary achievements of the team working at Bletchley during WW II.
Another historical cipher, this one from the 1600s, has been cracked:
Senior math major Lucas Mason-Brown, who has done the majority of the decoding, said his first instinct was to develop a statistical tool. The 21-year-old from Belmont, Mass., used frequency analysis, which looks at the frequency of letters or groups of letters in a text, but initially didn't get far.
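The cipher in question was more elaborate than a simple shift, but the statistical idea behind frequency analysis shows up clearly in the easiest case: recovering a Caesar shift by comparing each candidate plaintext's letter distribution to English with a chi-squared score. This sketch is illustrative only, not Mason-Brown's method:

```python
from collections import Counter

# Approximate relative frequencies (%) of letters in English text.
ENGLISH_FREQ = {
    'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
    's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
    'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
    'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
    'q': 0.1, 'z': 0.07,
}

def shift(text: str, k: int) -> str:
    """Caesar-shift every letter by k, leaving other characters alone."""
    return "".join(
        chr((ord(c) - 97 + k) % 26 + 97) if c.isalpha() else c
        for c in text.lower()
    )

def crack_caesar(ciphertext: str) -> str:
    """Try all 26 shifts; keep the candidate whose letter distribution
    is closest (lowest chi-squared) to English."""
    def score(text):
        letters = [c for c in text if c.isalpha()]
        counts = Counter(letters)
        n = len(letters) or 1
        return sum(
            (counts.get(ch, 0) - n * f / 100) ** 2 / (n * f / 100)
            for ch, f in ENGLISH_FREQ.items()
        )
    return min((shift(ciphertext, k) for k in range(26)), key=score)
```

The same counting trick scales up: against polyalphabetic or homophonic ciphers you score groups of letters instead of single ones, which is why Mason-Brown's "statistical tool" was the natural first instinct.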
It’s a feudal world out there.
Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.
These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them -- or to a particular one we don't like. Or we can spread our allegiance around. But either way, it's becoming increasingly difficult to not pledge allegiance to at least one of them.
Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.
Of course, I'm romanticizing here; European history was never this simple, and the description is based on stories of that time, but that's the general model.
And it's this model that's starting to permeate computer security today.
I Pledge Allegiance to the United States of Convenience
Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.
This model is breaking, largely due to two developments: new Internet-connected devices -- smartphones, tablets, e-book readers -- whose security is controlled by their vendors rather than their owners, and cloud services, where our data lives and is processed on someone else's servers.
Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.
We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we've lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone apps store.
In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won't be exposed to hackers, criminals, and malware. We trust that governments won't be allowed to illegally spy on us.
Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don't know what sort of security methods they're using, or how they're configured. We mostly can't install our own security products on iPhones or Android phones; we certainly can't install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates -- iPhone, for example -- but we rarely know what they're about or whether they'll break anything else. (On the Kindle, we don't even have that freedom.)
The Good, the Bad, and the Ugly
I'm not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.
Feudalism is good for the individual, for small startups, and for medium-sized businesses that can't afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.
For large organizations, however, it's more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They've been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don't allow vassals to audit them, even if those vassals are themselves large and powerful.
Yet feudal security isn't without its risks.
Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.
Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off -- again, like serfs -- to rival lords...or turn us in to the authorities.
Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn't do to each other).
Today's internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.
This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.
Like everything else in security, it's a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.
Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent -- or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems -- it's time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.