March 15, 2013
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1303.html>. These same essays and news items appear in the "Schneier on Security" blog at <http://www.schneier.com/blog>, along with a lively comment section. An RSS feed is available.
For technology that was supposed to ignore borders, bring the world closer together, and sidestep the influence of national governments, the Internet is fostering an awful lot of nationalism right now. We've started to see increased concern about the country of origin of IT products and services; U.S. companies are worried about hardware from China; European companies are worried about cloud services in the U.S.; no one is sure whether to trust hardware and software from Israel; Russia and China might each be building their own operating systems out of concern about using foreign ones.
I see this as an effect of all the cyberwar saber-rattling that's going on right now. The major nations of the world are in the early years of a cyberwar arms race, and we're all being hurt by the collateral damage.
A commentator on Al Jazeera makes a similar point.
Our nationalist worries have recently been fueled by a media frenzy surrounding attacks from China. These attacks aren't new -- cyber-security experts have been writing about them for at least a decade, and the popular media reported about similar attacks in 2009 and again in 2010 -- and the current allegations aren't even very different from what came before. This isn't to say that the Chinese attacks aren't serious. The country's espionage campaign is sophisticated and ongoing. And because they're in the news, people are understandably worried about them.
But it's not just China. International espionage works in both directions, and I'm sure we are giving just as good as we're getting. China is certainly worried about the U.S. Cyber Command's recent announcement that it was expanding from 900 people to almost 5,000, and the NSA's massive new data center in Utah. The U.S. even admits that it can spy on non-U.S. citizens freely.
The fact is that governments and militaries have discovered the Internet; everyone is spying on everyone else, and countries are ratcheting up offensive actions against other countries.
At the same time, many nations are demanding more control over the Internet within their own borders. They reserve the right to spy and censor, and to limit the ability of others to do the same. This idea is now being called the "cyber sovereignty movement," and gained traction at the International Telecommunications Union meeting last December in Dubai. One analyst called that meeting the "Internet Yalta," where the Internet split between liberal-democratic and authoritarian countries. I don't think he's exaggerating.
Not that this is new, either. Remember 2010, when the governments of the UAE, Saudi Arabia, and India demanded that RIM give them the ability to spy on BlackBerry PDAs within their borders? Or last year, when Syria used the Internet to surveil its dissidents? Information technology is a surprisingly powerful tool for oppression: not just surveillance, but censorship and propaganda as well. And countries are getting better at using that tool.
But remember: none of this is cyberwar. It's all espionage, something that's been going on between countries ever since countries were invented. What moves public opinion is less the facts and more the rhetoric, and the rhetoric of war is what we're hearing.
The result of all this saber-rattling is a severe loss of trust, not just amongst nation-states but between people and nation-states. We know we're nothing more than pawns in this game, and we figure we'll be better off sticking with our own country.
Unfortunately, both the reality and the rhetoric play right into the hands of the military and corporate interests that are behind the cyberwar arms race in the first place. There is an enormous amount of power at stake here: not only power within governments and militaries, but power and profit amongst the corporations that supply the tools and infrastructure for cyber-attack and cyber-defense. The more we believe we are "at war" and believe the jingoistic rhetoric, the more willing we are to give up our privacy, freedoms, and control over how the Internet is run.
Arms races are fueled by two things: ignorance and fear. We don't know the capabilities of the other side, and we fear that they are more capable than we are. So we spend more, just in case. The other side, of course, does the same. That spending will result in more cyber weapons for attack and more cyber-surveillance for defense. It will result in more government control over the protocols of the Internet, and less free-market innovation over the same. At its worst, we might be about to enter an information-age Cold War: one with more than two "superpowers." Aside from this being a bad future for the Internet, this is inherently destabilizing. It's just too easy for this amount of antagonistic power and advanced weaponry to get used: for a mistaken attribution to be reacted to with a counterattack, for a misunderstanding to become a cause for offensive action, or for a minor skirmish to escalate into a full-fledged cyberwar.
Nationalism is rife on the Internet, and it's getting worse. We need to damp down the rhetoric and -- more importantly -- stop believing the propaganda from those who profit from this Internet nationalism. Those who are beating the drums of cyberwar don't have the best interests of society, or the Internet, at heart.
This essay previously appeared at "Technology Review."
Fears of hardware from China:
The cyberwar arms race:
Al Jazeera essay:
U.S. Cyber Command's hiring spree:
NSA's new data center:
International Telecommunications Union meeting:
The 2010 RIM incident:
The Internet as a tool for oppression:
Tesla Motors gave one of its electric cars to John Broder, a very outspoken electric-car skeptic from the "New York Times," for a test drive. After a negative review, Tesla revealed that it logged a dizzying amount of data from that test drive. The company then matched the reporter's claims against its logs and published a rebuttal. Broder rebutted the rebuttal, and others have tried to figure out who is lying and who is not.
What's interesting to me is the sheer amount of data Tesla Motors automatically collected about the test drive. From the rebuttal: "After a negative experience several years ago with Top Gear, a popular automotive show, where they pretended that our car ran out of energy and had to be pushed back to the garage, we always carefully data log media drives."
Read the article to see what they logged: power consumption, speed, ambient temperature, control settings, location, and so on.
The stakes are high here. Broder and the "New York Times" are concerned about their journalistic integrity, which affects their brand. And Tesla Motors wants to sell cars.
The implication is that Tesla Motors only does this for media test drives, but it gives you an idea of the sort of things that will be collected once automobile black boxes become the norm. We're used to airplane black boxes, which collect only a small amount of data from the minutes just before an incident. But that was back when data was expensive. Now that it's cheap, expect black boxes to collect everything all the time. And once it's collected, it'll be used: by auto manufacturers, by insurance companies, by car rental companies, by marketers. The list will be long.
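A minimal sketch, in Python, of continuous black-box logging makes the scale concrete. Every field name and value here is an invented illustration -- not Tesla's, or any manufacturer's, actual logging schema.

```python
import csv
import time
from dataclasses import dataclass, asdict

# Illustrative telemetry record; the fields mirror the kinds of data
# mentioned above (power, speed, temperature, settings, location),
# but the names are assumptions for this sketch only.
@dataclass
class TelemetrySample:
    timestamp: float        # seconds since the epoch
    speed_mph: float
    power_kw: float         # instantaneous power draw
    ambient_temp_c: float
    climate_setting_c: float
    latitude: float
    longitude: float

def log_samples(samples, path):
    """Append samples to a CSV log -- once storage is cheap,
    'collect everything, all the time' becomes the default."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(samples[0]).keys()))
        writer.writeheader()
        for s in samples:
            writer.writerow(asdict(s))

# One sample per second, every second of every drive, adds up fast.
samples = [TelemetrySample(time.time() + i, 54.0, 18.5, -1.0, 22.0,
                           40.73, -74.17) for i in range(5)]
log_samples(samples, "drive_log.csv")
```

At one record per second, a single hour-long drive yields 3,600 such rows -- trivial to store, and trivial to mine later.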
But as we're learning from this particular back-and-forth between Broder and Tesla Motors, even intense electronic surveillance of the actions of a person in an enclosed space did not succeed in providing an unambiguous record of what happened. To know that, the car company would have had to have someone in the car with the journalist.
This will increasingly be a problem as we are judged by our data. And in most cases, neither side will spend this sort of effort trying to figure out what really happened.
Recently, Elon Musk and the "New York Times" took to Twitter and the Internet to argue the data -- and their grievances -- over a failed road test and car review. Meanwhile, an Applebee's server is the subject of a Change.org petition to get her job back after she posted a pastor's no-tip receipt comment online. And when he wasn't paid quickly enough, a web developer for the gym chain Fitness SF rewrote the company's webpage to air his complaint.
All of these "cases" are seeking their judgments in the court of public opinion. The court of public opinion has a full docket; even brick-and-mortar establishments aren't immune.
More and more individuals -- and companies -- are augmenting, even bypassing entirely, traditional legal process hoping to get a more favorable hearing in public.
Every day we have to interact with thousands of strangers, from people we pass on the street to people who touch our food to people we enter short-term business relationships with. Even though most of us don't have the ability to protect our interests with physical force, we can all be confident when dealing with these strangers because -- at least in part -- we trust that the legal system will intervene on our behalf in case of a problem. Sometimes that problem involves people who break the rules of society, and the criminal courts deal with them; when the problem is a disagreement between two parties, the civil courts do. Courts are an ancient system of justice, and modern society cannot function without them.
What matters in this system are the facts and the laws. Courts are intended to be impartial and fair in doling out their justice, and societies flourish based on the extent to which we approach this ideal. When courts are unfair -- when judges can be bribed, when the powerful are treated better, when more expensive lawyers produce more favorable outcomes -- society is harmed. We become more fearful and less able to trust each other. We are less willing to enter into agreement with strangers, and we spend more effort protecting our own because we don't believe the system is there to back us up.
The court of public opinion is an alternate system of justice. It's very different from the traditional court system: This court is based on reputation, revenge, public shaming, and the whims of the crowd. Having a good story is more important than having the law on your side. Being a sympathetic underdog is more important than being fair. Facts matter, but there are no standards of accuracy. The speed of the Internet exacerbates this; a good story spreads faster than a bunch of facts.
This court delivers *reputational* justice. Arguments are measured in relation to reputation. If one party makes a claim against another that seems plausible, based on both of their reputations, then that claim is likely to be received favorably. If someone makes a claim that clashes with the reputations of the parties, then it's likely to be disbelieved. Reputation is, of course, a commodity, and loss of reputation is the penalty this court imposes. In that respect, it less often recompenses the injured party and more often exacts revenge or retribution. And while those losses may be brutal, the effects are usually short-lived.
The court of public opinion has significant limitations. It works better for revenge and justice than for dispute resolution. It can punish a company for unfairly firing one of its employees or lying in an automobile test drive, but it's less effective at unraveling a complicated patent litigation or navigating a bankruptcy proceeding.
In many ways, this is a return to a medieval notion of "fama," or reputation. In other ways, it's like mob justice: sometimes benign and beneficial, sometimes terrible (think French Revolution). Trial by public opinion isn't new; remember Rodney King and O.J. Simpson?
Mass media has enabled this system for centuries. But the Internet, and social media in particular, has changed how it's being used.
Now it's being used more deliberately, more often, by more and more powerful entities as a redress mechanism. Perhaps it's perceived to be more efficient, or perhaps one of the parties feels it can get a more favorable hearing in this new court; either way, it's being used instead of lawsuits. Instead of being a sideshow to actual legal proceedings, it is turning into an alternate system of dispute resolution and justice.
Part of this trend is because the Internet makes taking a case in front of the court of public opinion so much easier. It used to be that the injured party had to convince a traditional media outlet to make his case public; now he can take his case directly to the people. And while it's still a surprise when some cases go viral while others languish in obscurity, it's simply more effective to present your case on Facebook or Twitter.
Another reason is that the traditional court system is increasingly viewed as unfair. Today, money *can* buy justice: not by directly bribing judges, but by hiring better lawyers and forcing the other side to spend more money than they are able to. We know that the courts treat the rich and the poor differently, that corporations can get away with crimes individuals cannot, and that the powerful can lobby to get the specific laws and regulations they want -- irrespective of any notions of fairness.
Smart companies have already prepared for battles in the court of public opinion. They've hired policy experts. They've hired firms to monitor Facebook, Twitter, and other Internet venues where these battles originate. They have response strategies and communications plans in place. They've recognized that while this court is very different from the traditional legal system, money and power do count, and that there are ways to tip the outcomes in their favor: For example, fake grassroots movements can be just as effective on the Internet as they can in the offline world.
It's time we recognize the court of public opinion for what it is -- an alternative crowd-enabled system of justice. We need to start discussing its merits and flaws; we need to understand when it results in justice, and how it can be manipulated by the powerful. We also need to have a frank conversation about the failings of the traditional justice system, and why people are motivated to take their grievances to the public. Despite 24-hour PR firms and incident-response plans, this is a court where corporations and governments are at an inherent disadvantage. And because the weak will continue to run ahead of the powerful, those in power will prefer to use the more traditional mechanisms of government: police, courts, and laws.
Social-media researcher danah boyd had it right when she wrote in "Wired": "In a networked society, who among us gets to decide where the moral boundaries lie? This isn't an easy question and it's at the root of how we, as a society, conceptualize justice." It's not an easy question, but it's the key question. The moral and ethical issues surrounding the court of public opinion are the real ones, and ones that society will have to tackle in the decades to come.
This essay originally appeared on Wired.com.
Rawls on justice:
Stories vs. facts:
The concept of fama:
Daniel Solove on reputation:
Fake grassroots movements:
danah boyd on the issue:
After the "New York Times" broke the story of what seemed to be a state-sponsored hack from China against the newspaper, the "Register" had stories of two similar attacks: one from Burma and another from China.
Hacking citation counts using Google Scholar.
There's a nice example of traffic analysis in the book "No Name," by Wilkie Collins (1862). The attacker, Captain Wragge, needs to know whether a letter has been placed in the mail. He knows who it will have been addressed to if it has been mailed, and with that information, is able to convince the postmaster to tell him that it has, in fact, been mailed.
How international soccer matches are fixed.
Interesting research on age biases in perceptions of trust: "Older adults are disproportionately vulnerable to fraud, and federal agencies have speculated that excessive trust explains their greater vulnerability." I think this result reflects the fact that older people discount the future more than young ones, and therefore are more willing to gamble on a good outcome. It makes sense biologically; they have less future ahead of them. We see the same thing in pregnancy; older mothers have a higher threshold for spontaneous abortion of a risky embryo than younger mothers.
I would have liked to participate in this hearing: Committee on Homeland Security, Subcommittee on Oversight and Management Efficiency: "Assessing DHS 10 Years Later: How Wisely is DHS Spending Taxpayer Dollars?" February 15, 2013.
Someone has analyzed the security mistakes in the Battle of Hoth, from the movie "The Empire Strikes Back".
Good summary of how complex systems fail. It's not directly about security, but it's all fundamentally about security. Any real-world security system is inherently complex. I wrote about this long ago in "Beyond Fear."
I hadn't heard of this one before. In New Zealand, people viewing adult websites -- it's unclear whether these are honeypot sites, or malware that notices the site being viewed -- get a pop-up message claiming it's from the NZ Police and demanding payment of an instant fine for viewing illegal pornography. http://www.offsettingbehaviour.blogspot.co.nz/2013/...
Marcus Ranum has an interesting screed on "booth babes" in the RSA Conference exhibition hall.
Al Qaeda document on avoiding drone strikes.
"The NSA's Ragtime Surveillance Program and the Need for Leaks." I wrote this for my blog:
Three brazen robberies are in the news. I don't have anywhere near enough data to call this a trend, but the similarities are striking. In all cases, the robbers barreled straight through security, relying on surprise and speed. In all cases, security based on response wasn't fast enough to do any good. And in all cases, there's surveillance video that -- at least so far -- isn't very useful. It's important to remember that, even in our high-tech Internet world, sometimes smash-and-grab still works.
Interesting discussion on browser security from "Communications of the ACM." http://queue.acm.org/detail.cfm?id=2399757
Good article on "Stingrays," which the FBI uses to monitor cell phone data. Basically, they trick the phone into joining a fake network. And, since cell phones inherently trust the network -- as opposed to computers, which inherently do not trust the Internet -- it's easy to track people and collect data. There are lots of questions about whether or not it is illegal for the FBI to do this without a warrant. We know that the FBI has been doing this for almost twenty years, and that they know they're on shaky legal ground.
Google Docs is being used for phishing. Oxford University felt that it had to block the service because Google isn't responding to takedown requests quickly enough. Think about this in light of my essay on feudal security. Oxford University has to trust that Google will act in its best interest, and has no other option if it doesn't.
The second edition of Ross Anderson's fantastic book, "Security Engineering," is now free online. Required reading for any security engineer.
I worry that comments about the value of software security made at the RSA Conference last week will be taken out of context. John Viega did not say that software security wasn't important. He said: "For large software companies or major corporations such as banks or health care firms with large custom software bases, investing in software security can prove to be valuable and provide a measurable return on investment, but that's probably not the case for smaller enterprises, said John Viega, executive vice president of products, strategy and services at SilverSky and an authority on software security. Viega, who formerly worked on product security at McAfee and as a consultant at Cigital, said that when he was at McAfee he could not find a return on investment for software security." I agree with that. For small companies, it's not worth worrying much about software security. But for large software companies, it's vital.
A dead drop from the 1870s: a hat.
Interesting essay: "The Logic of Surveillance"
It turns out that you can buy a position for your book on best-seller lists.
Here's some security theater from the Wells Fargo website. Click on the "Establishing secure connection" link at the top of this page. It's a Wells Fargo page that displays a progress bar with a bunch of security phrases -- "Establishing Secure Connection," "Sending credentials," "Building Secure Environment," and so on -- and closes after a few seconds. It's complete security theater; it doesn't actually do anything but make account holders feel better.
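To underline how little such a page does, here is a Python sketch of the same stagecraft -- a stand-in for whatever client-side script the real page uses. The phrases come from the description above; everything else is invented for illustration.

```python
import time

# The reassuring phrases the page cycles through -- pure stagecraft.
PHASES = [
    "Establishing Secure Connection",
    "Sending credentials",
    "Building Secure Environment",
]

def security_theater(delay=0.1):
    """Display reassuring messages on a timer, then finish.
    Note that nothing here touches TLS, credentials, or any
    'secure environment'; whatever security work actually happens
    is done elsewhere, regardless of this display."""
    shown = []
    for phrase in PHASES:
        shown.append(phrase)
        print(f"{phrase}...")
        time.sleep(delay)  # simulate the animated progress bar
    return shown

security_theater()
```

The entire "security" of the feature is a list of strings and a timer; removing it would change nothing about the safety of the connection.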
Interesting law paper: "The Implausibility of Secrecy," by Mark Fenster.
Symantec has found evidence of Stuxnet variants from way back in 2005. That's much older than the 2009 creation date we originally thought it had. What's impressive is how advanced the cyberattack capabilities of the U.S. and/or Israel were back then.
xkcd on PGP.
Wow, is this a crazy media frenzy. We should know better. These attacks happen all the time, and just because the media is reporting about them with greater frequency doesn't mean that they're happening with greater frequency.
Hype aside, the Mandiant report on the hackers is very good, especially the part where the Chinese hackers outed themselves through poor opsec: they logged into Facebook from their work computers.
But this is not cyberwar. This is not war of any kind. This is espionage, and the difference is important. Calling it war just feeds our fears and fuels the cyberwar arms race.
In a private e-mail, Gary McGraw made an important point about attribution that matters a lot in this debate.
Because espionage unfolds over months or years in real time, we can triangulate the origin of an exfiltration attack with some certainty. During the fog of a real cyber war attack, which is more likely to happen in milliseconds, the kind of forensic work that Mandiant did would not be possible. (In fact, we might well be "Gandalfed" and pin the attack on the wrong enemy.)
Sadly, policymakers seem to think we have completely solved the attribution problem. We have not. This article published in "Computerworld" does an adequate job of stating my position.
Those of us who work on security engineering and software security can help educate policymakers and others so that we don't end up pursuing the folly of active defense.
This media frenzy is going to be used by the U.S. military to grab more power in cyberspace. They're already ramping up the U.S. Cyber Command. President Obama is issuing vague executive orders that will result in we-don't-know-what. I don't see any good coming of this.
The Mandiant report:
Critical commentary on the Mandiant report:
Poor opsec on the part of the Chinese hackers:
U.S. Cyber Command ramps up:
Obama's vague executive order:
It's a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation comes new possibilities but also new concerns.
For one, the NYPD is testing a new type of security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained to the "Daily News" back in January, if something is obstructing the flow of that radiation -- a weapon, for example -- the device will highlight that object.
Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential racial profiling. Organizations like the American Civil Liberties Union are all over those, even though their opposition probably won't make a difference. We're scared of both terrorism and crime, even as the risks decrease; and when we're scared, we're willing to give up all sorts of freedoms to assuage our fears. Often, the courts go along.
A more pressing question is the effectiveness of technologies that are supposed to make us safer. These include the NYPD's Domain Awareness System, developed by Microsoft, which aims to integrate massive quantities of data to alert cops when a crime may be taking place. Other innovations are surely in the pipeline, all promising to make the city safer. But are we being sold a bill of goods?
For example, press reports make the gun-detection machine look good. We see images from the camera that pretty clearly show a gun outlined under someone's clothing. From that, we can imagine how this technology can spot gun-toting criminals as they enter government buildings or terrorize neighborhoods. Given the right inputs, we naturally construct these stories in our heads. The technology seems like a good idea, we conclude.
The reality is that we reach these conclusions much in the same way we decide that, say, drinking Mountain Dew makes you look cool. These are, after all, the products of for-profit companies, pushed by vendors looking to make sales. As such, they're marketed no less aggressively than soda pop and deodorant. Those images of criminals with concealed weapons were carefully created both to demonstrate maximum effectiveness and to push our fear buttons. These companies deliberately craft stories of their effectiveness, both through advertising and through placement in television shows and movies, where police are often shown using high-powered tools to catch high-value targets with minimum complication.
The truth is that many of these technologies are nowhere near as reliable as claimed. They end up costing us gazillions of dollars and open the door for significant abuse. Of course, the vendors hope that by the time we realize this, they're too embedded in our security culture to be removed.
The current poster child for this sort of morass is the airport full-body scanner. Rushed into airports after the underwear bomber Umar Farouk Abdulmutallab nearly blew up a Northwest Airlines flight in 2009, they made us feel better, even though they don't work very well and, ironically, wouldn't have caught Abdulmutallab with his underwear bomb. Both the Transportation Security Administration and vendors repeatedly lied about their effectiveness, whether they stored images, and how safe they were. In January, finally, backscatter X-ray scanners were removed from airports because the company that made them couldn't sufficiently blur the images so they didn't show travelers naked. Now, only millimeter-wave full-body scanners remain.
Another example is closed-circuit television (CCTV) cameras. These have been marketed as a technological solution to both crime and understaffed police and security organizations. London, for example, is rife with them, and New York has plenty of its own. To many, it seems apparent that they make us safer, despite cries of Big Brother. The problem is that in study after study, researchers have concluded that they don't.
Counterterrorist data mining and fusion centers: nowhere near as useful as those selling the technologies claimed. It's the same with DNA testing and fingerprint technologies: both are far less accurate than most people believe. Even torture has been oversold as a security system -- this time by a government instead of a company -- despite decades of evidence that it doesn't work and makes us all less safe.
It's not that these technologies are totally useless. It's that they're expensive, and none of them is a panacea. Maybe there's a use for a terahertz radar, and maybe the benefits of the technology are worth the costs. But we should not forget that there's a profit motive at work, too.
An edited version of this essay, without links, appeared in the "New York Daily News."
Domain Awareness System:
A similar system:
This isn't phishing; it's not even spear phishing. It's laser-guided precision phishing:
One of the leaked diplomatic cables referred to one attack via email on US officials who were on a trip in Copenhagen to debate issues surrounding climate change.
"The message had the subject line 'China and Climate Change' and was spoofed to appear as if it were from a legitimate international economics columnist at the National Journal."
The cable continued: "In addition, the body of the email contained comments designed to appeal to the recipients as it was specifically aligned with their job function."
One example that demonstrates the group's approach is that of Coca-Cola, which was later revealed in media reports to have been the victim of a hack.
And not just any hack, it was a hack which industry experts said may have derailed an acquisition effort to the tune of $2.4bn (£1.5bn).
The US giant was looking into taking over China Huiyuan Juice Group, China's largest soft drinks company -- but a hack, believed to be by the Comment Group, left Coca-Cola exposed.
How was it done? Bloomberg reported that one executive -- deputy president of Coca-Cola's Pacific Group, Paul Etchells -- opened an email he thought was from the company's chief executive.
In it was a link that, when clicked, downloaded malware onto Mr Etchells' machine. Once inside, hackers were able to snoop about the company's activity for over a month.
Also, a new technique:
"It is known as waterholing," he explained. "Which basically involves trying to second guess where the employees of the business might actually go on the web.
"If you can compromise a website they're likely to go to, hide some malware on there, then when someone goes to that site, that malware will then install on that person's system."
These sites could be anything from the website of an employee's child's school -- or even a page showing league tables for the corporate five-a-side football team.
I wrote this over a decade ago: "Only amateurs attack machines; professionals target people." And the professionals are getting better and better.
This is the problem. Against a sufficiently skilled, funded, and motivated adversary, no network is secure. Period. Attack is *much* easier than defense, and the reason we've been doing so well for so long is that most attackers are content to attack the most insecure networks and leave the rest alone.
It's a matter of motive. To a criminal, all files of credit card numbers are equally good, so your security depends in part on how much better or worse you are than those around you. If the attacker wants you specifically -- as in the examples above -- relative security is irrelevant. What matters is whether or not your security is better than the attacker's skill. And so often it's not.
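The difference between opportunistic and targeted attackers can be sketched as a toy model in Python. The security scores and skill levels below are invented purely for illustration.

```python
# Toy model of opportunistic vs. targeted attacks.
# Higher number = stronger defenses; all values are invented.
networks = {"you": 7, "neighbor_a": 4, "neighbor_b": 5}

def opportunistic_victim(networks):
    """A criminal wants any file of credit card numbers, so the
    weakest network gets hit -- relative security protects you."""
    return min(networks, key=networks.get)

def targeted_attack_succeeds(networks, target, attacker_skill):
    """A targeted attacker cares only about you; what matters is
    your defenses versus their skill, not your neighbors'."""
    return attacker_skill > networks[target]

print(opportunistic_victim(networks))            # the weakest link loses
print(targeted_attack_succeeds(networks, "you", 9))  # skilled adversary wins anyway
```

In the opportunistic case, being merely better than your neighbors keeps you safe; in the targeted case, only an absolute skill-versus-defense comparison matters -- which is the point of the paragraph above.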
I am reminded of this great quote from former NSA Information Assurance Director Brian Snow: "Your cyber systems continue to function and serve you not due to the expertise of your security staff but solely due to the sufferance of your opponents."
That line was quoted in an essay on thinkst.com, which is well worth reading. It says much of what I've been saying, but it's nice to read someone else say it.
One of the often unspoken truths of security is that large areas of it are currently unsolved problems. We don't know how to write large applications securely yet. We don't know how to secure entire organizations with reasonable, cost-effective measures yet. The honest answer to almost any security question is: "It's complicated!" But there is no shortage of gung-ho salesmen in expensive suits peddling their security wares, and no shortage of clients willing to throw money at the problem (because doing something must be better than doing nothing, right?)
Wrong. Peddling hard in the wrong direction doesn't help just because you want it to.
For a long time, anti-virus vendors sold the idea that using their tools would keep users safe. Some pointed out that anti-virus software could be described as "necessary but not sufficient" at best, and horribly ineffective snake oil at worst, but AV vendors have big PR budgets and customers need to feel like they are doing something. Examining the AV industry is a good proxy for the security industry in general. Good arguments can be made for the industry, and indulging it certainly seems safer than not, but the truth is that none of the solutions on offer from the AV industry give us any hope against a determined targeted attack. While the AV companies all gave talks around the world dissecting recent publicly discovered attacks like Stuxnet or Flame, most glossed over the simple fact that none of them discovered the virus until after it had done its work. Finally, after many repeated public spankings, this truth is beginning to emerge, and even die-hards like the charismatic chief research officer of anti-virus firm F-Secure (Mikko Hypponen) have to concede their tools' lack of utility. In a recent post he wrote: "What this means is that all of us had missed detecting this malware for two years, or more. That's a spectacular failure for our company, and for the antivirus industry in general... This story does not end with Flame. It's highly likely there are other similar attacks already underway that we haven't detected yet. Put simply, attacks like these work... Flame was a failure for the anti-virus industry. We really should have been able to do better. But we didn't. We were out of our league, in our own game."
Laser-guided precision phishing:
My old essay:
Essay with the Brian Snow quote:
I was a guest on "Inventing the Future," for an episode on surveillance technology. The video is here.
The "Montréal Review" asked me to write an essay about my latest book. Not much that regular readers haven't seen before.
Me at the RSA Conference:
Last month I was on "Virtually Speaking."
As the College of Cardinals prepares to elect a new pope, security people like me wonder about the process. How does it work, and just how hard would it be to hack the vote?
The rules for papal elections are steeped in tradition. John Paul II last codified them in 1996, and Benedict XVI left the rules largely untouched. The "Universi Dominici Gregis on the Vacancy of the Apostolic See and the Election of the Roman Pontiff" is surprisingly detailed.
Every cardinal younger than 80 is eligible to vote. We expect 117 to be voting. The election takes place in the Sistine Chapel, directed by the church chamberlain. The ballot is entirely paper-based, and all ballot counting is done by hand. Votes are secret, but everything else is open.
First, there's the "pre-scrutiny" phase.
"At least two or three" paper ballots are given to each cardinal, presumably so that a cardinal has extras in case he makes a mistake. Then nine election officials are randomly selected from the cardinals: three "scrutineers" who count the votes; three "revisers" who verify the results of the scrutineers; and three "infirmarii" who collect the votes from those too sick to be in the chapel. Different sets of officials are chosen randomly for each ballot.
Each cardinal, including the nine officials, writes his selection for pope on a rectangular ballot paper "as far as possible in handwriting that cannot be identified as his." He then folds the paper lengthwise and holds it aloft for everyone to see.
When everyone has written his vote, the "scrutiny" phase of the election begins. The cardinals proceed to the altar one by one. On the altar is a large chalice with a paten -- the shallow metal plate used to hold communion wafers during Mass -- resting on top of it. Each cardinal places his folded ballot on the paten. Then he picks up the paten and slides his ballot into the chalice.
If a cardinal cannot walk to the altar, one of the scrutineers -- in full view of everyone -- does this for him.
If any cardinals are too sick to be in the chapel, the scrutineers give the infirmarii a locked empty box with a slot, and the three infirmarii together collect those votes. If a cardinal is too sick to write, he asks one of the infirmarii to do it for him. The box is opened, and the ballots are placed onto the paten and into the chalice, one at a time.
When all the ballots are in the chalice, the first scrutineer shakes it several times to mix them. Then the third scrutineer transfers the ballots, one by one, from one chalice to another, counting them in the process. If the total number of ballots is not correct, the ballots are burned and everyone votes again.
To count the votes, each ballot is opened, and the vote is read by each scrutineer in turn, the third one aloud. Each scrutineer writes the vote on a tally sheet. This is all done in full view of the cardinals.
The total number of votes cast for each person is written on a separate sheet of paper. Ballots with more than one name (overvotes) are void, and I assume the same is true for ballots with no name written on them (undervotes). Illegible or ambiguous ballots are much more likely, and I presume they are discarded as well.
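That validity rule can be sketched directly. This is my own modeling assumption -- treating a ballot as a list of whatever names were written on it -- under which a ballot counts only if it bears exactly one name:

```python
def is_valid_ballot(names):
    """Void undervotes (no name) and overvotes (more than one name);
    the essay presumes illegible ballots are discarded as well."""
    written = [n for n in names if n.strip()]  # ignore blank scrawls
    return len(written) == 1
```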
Then there's the "post-scrutiny" phase. The scrutineers tally the votes and determine whether there's a winner. We're not done yet, though.
The revisers verify the entire process: ballots, tallies, everything. And then the ballots are burned. That's where the smoke comes from: white if a pope has been elected, black if not -- the black smoke is created by adding water or a special chemical to the ballots.
Being elected pope requires a two-thirds plus one vote majority. This is where Pope Benedict made a change. Traditionally a two-thirds majority had been required for election. Pope John Paul II changed the rules so that after roughly 12 days of fruitless votes, a simple majority was enough to elect a pope. Benedict reversed this rule.
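The arithmetic is worth making explicit. A sketch under my own reading of "two-thirds plus one" -- rounding the two-thirds share up when the electorate isn't divisible by three:

```python
import math

def votes_to_win(electors):
    """Votes needed under the "two-thirds plus one" rule."""
    return math.ceil(2 * electors / 3) + 1

# With the 117 electors expected this year, two-thirds is exactly 78,
# so a winner needs 79 votes under this reading.
```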
How hard would this be to hack?
First, the system is entirely manual, making it immune to the sorts of technological attacks that make modern voting systems so risky.
Second, the small group of voters -- all of whom know each other -- makes it impossible for an outsider to affect the voting in any way. The chapel is cleared and locked before voting. No one is going to dress up as a cardinal and sneak into the Sistine Chapel. In short, the voter verification process is about as good as you're ever going to find.
A cardinal can't stuff ballots when he votes. The complicated paten-and-chalice ritual ensures that each cardinal votes once -- his ballot is visible -- and also keeps his hand out of the chalice holding the other votes. Not that they haven't thought about this: The cardinals are in "choir dress" during the voting, which has translucent lace sleeves under a short red cape, making sleight-of-hand tricks much harder. Additionally, the total would be wrong.
The rules anticipate this in another way: "If during the opening of the ballots the scrutineers should discover two ballots folded in such a way that they appear to have been completed by one elector, if these ballots bear the same name, they are counted as one vote; if however they bear two different names, neither vote will be valid; however, in neither of the two cases is the voting session annulled." This surprises me, as it seems more likely to happen by accident and to result in two cardinals' votes not being counted.
Ballots from previous votes are burned, which makes it harder to use one to stuff the ballot box. But there's one wrinkle: "If however a second vote is to take place immediately, the ballots from the first vote will be burned only at the end, together with those from the second vote." I assume that's done so there's only one plume of smoke for the two elections, but it would be more secure to burn each set of ballots before the next round of voting.
The scrutineers are in the best position to modify votes, but it's difficult. The counting is conducted in public, and there are multiple people checking every step. It'd be possible for the first scrutineer, if he were good at sleight of hand, to swap one ballot paper for another before recording it. Or for the third scrutineer to swap ballots during the counting process. Making the ballots large would make these attacks harder. So would controlling the blank ballots better, and only distributing one to each cardinal per vote. Presumably cardinals change their mind more often during the voting process, so distributing extra blank ballots makes sense.
There's so much checking and rechecking that it's just not possible for a scrutineer to misrecord the votes. And since they're chosen randomly for each ballot, the probability of a cabal being selected is extremely low. More interesting would be to try to attack the system of selecting scrutineers, which isn't well-defined in the document. Influencing the selection of scrutineers and revisers seems a necessary first step toward influencing the election.
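To put a rough number on that, here's a back-of-the-envelope calculation. It assumes -- my simplification, since the document doesn't define the selection mechanics -- that the three scrutineers are drawn uniformly at random from the 117 electors:

```python
from math import comb

def cabal_probability(electors=117, cabal_size=3):
    """Probability that one particular cabal of cabal_size cardinals
    lands exactly in the three scrutineer seats for a given ballot."""
    return comb(cabal_size, 3) / comb(electors, 3)

# For a 3-man cabal among 117 electors: 1 in 260,130, or about 0.0004%
# per ballot -- and the draw repeats for every round of voting.
```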
If there's a weak step, it's the counting of the ballots.
There's no real reason to do a precount, and it gives the scrutineer doing the transfer a chance to swap legitimate ballots with others he previously stuffed up his sleeve. Shaking the chalice to randomize the ballots is smart, but putting the ballots in a wire cage and spinning it around would be more secure -- albeit less reverent.
I would also add some kind of white-glove treatment to prevent a scrutineer from hiding a pencil lead or pen tip under his fingernails. Although the requirement to write out the candidate's name in full provides some resistance against this sort of attack.
Probably the biggest risk is complacency. What might seem beautiful in its tradition and ritual during the first ballot could easily become cumbersome and annoying after the twentieth ballot, and there will be a temptation to cut corners to save time. If the Cardinals do that, the election process becomes more vulnerable.
A 1996 change in the process lets the cardinals go back and forth from the chapel to their dorm rooms, instead of being locked in the chapel the whole time, as was done previously. This makes the process slightly less secure but a lot more comfortable.
Of course, one of the infirmarii could do what he wanted when transcribing the vote of an infirm cardinal. There's no way to prevent that. If the infirm cardinal were concerned about that but not privacy, he could ask all three infirmarii to witness the ballot.
There are also enormous social -- religious, actually -- disincentives to hacking the vote. The election takes place in a chapel and at an altar. The cardinals swear an oath as they are casting their ballot -- further discouragement. The chalice and paten are the implements used to celebrate the Eucharist, the holiest act of the Catholic Church. And the scrutineers are explicitly exhorted not to form any sort of cabal or make any plans to sway the election, under pain of excommunication.
The other major security risk in the process is eavesdropping from the outside world. The election is supposed to be a completely closed process, with nothing communicated to the world except a winner. In today's high-tech world, this is very difficult. The rules explicitly state that the chapel is to be checked for recording and transmission devices "with the help of trustworthy individuals of proven technical ability." That was a lot easier in 2005 than it will be in 2013.
What are the lessons here?
First, open systems conducted within a known group make voting fraud much harder. Every step of the election process is observed by everyone, and everyone knows everyone, which makes it harder for someone to get away with anything.
Second, small and simple elections are easier to secure. This kind of process works to elect a pope or a club president, but quickly becomes unwieldy for a large-scale election. The only way manual systems could work for a larger group would be through a pyramid-like mechanism, with small groups reporting their manually obtained results up the chain to more central tabulating authorities.
And third: When an election process is left to develop over the course of a couple of thousand years, you end up with something surprisingly good.
This essay previously appeared on CNN.com, and is an update of an essay I wrote for the previous papal election in 2005.
My previous essay:
John Paul II's rules:
Benedict XVI's rules:
One of the problems with motivating proper security behavior within an organization is that the incentives are all wrong. It doesn't matter how much management tells employees that security is important, employees know when it really isn't -- when getting the job done cheaply and on schedule is much more important.
It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren't serious.
Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That's what the company rewards, and that's what the company actually wants.
"Fire someone who breaks security procedure, quickly and publicly," I suggested to the presenter. "That'll increase security awareness faster than any of your posters or lectures or newsletters." If the risks are real, people will get it.
Similarly, there's supposedly an old Chinese proverb that goes "hang one, warn a thousand." Or to put it another way, we're really good at risk management. And there's John Byng, whose execution gave rise to the Voltaire quote (originally in French): "in this country, it is good to kill an admiral from time to time, in order to encourage the others."
I thought of all this when I read about the new security procedures surrounding the upcoming papal election:
According to the order, which the Vatican made available in English on Monday afternoon, those few who are allowed into the secret vote to act as aides will be required to take an oath of secrecy.
"I will observe absolute and perpetual secrecy with all who are not part of the College of Cardinal electors concerning all matters directly or indirectly related to the ballots cast and their scrutiny for the election of the Supreme Pontiff," the oath reads.
"I declare that I take this oath fully aware that an infraction thereof will make me subject to the penalty of excommunication 'latae sententiae', which is reserved to the Apostolic See," it continues.
Excommunication is like being fired, only it lasts for eternity.
I'm not optimistic about the College of Cardinals being able to maintain absolute secrecy during the election, because electronic devices have become so small, and electronic communications so ubiquitous. Unless someone wins on one of the first ballots -- a 2/3 majority is required to elect the next pope, so if the various factions entrench they could be at it for a while -- there are going to be leaks. Perhaps accidental, perhaps strategic: these cardinals are fallible men, after all.
Me on security incentives:
Me on risk management:
Security procedures ensuring the secrecy of the papal election.
This is interesting:
In the security practice, we have our own version of no-man's land, and that's midsize companies. Wendy Nather refers to these folks as being below the "Security Poverty Line." These folks have a couple hundred to a couple thousand employees. That's big enough to have real data interesting to attackers, but not big enough to have a dedicated security staff and the resources they need to really protect anything. These folks are caught between the baseline and the service box. They default to compliance mandates like PCI-DSS because they don't know any better. And the attackers seem to sneak those passing shots by them on a seemingly regular basis.
Back when I was on the vendor side, I'd joke about how 800 security companies chased 1,000 customers -- meaning most of the effort was focused on the 1,000 largest customers in the world. But I wasn't joking. Every VP of sales talks about how it takes the same amount of work to sell to a Fortune-class enterprise as it does to sell into the midmarket. They aren't wrong, and it leaves a huge gap in the applicable solutions for the midmarket.
To be clear, folks in security no-man's land don't go to the RSA Conference, probably don't read security pubs, or follow the security echo chamber on Twitter. They are too busy fighting fires and trying to keep things operational. And that's fine. But all of the industry gatherings just remind me that the industry's machinery is geared toward the large enterprise, not the unfortunate 5 million other companies in the world that really need the help.
I've seen this trend, and I think it's a result of the increasing sophistication of the IT industry. Today, it's increasingly rare for organizations to have bespoke security, just as it's increasingly rare for them to have bespoke IT. It's only the larger organizations that can afford it. Everyone else is increasingly outsourcing its IT to cloud providers. These providers are taking care of security -- although we can certainly argue about how good a job they're doing -- so that the organizations themselves don't have to. A company whose email consists entirely of Gmail accounts, whose payroll is entirely outsourced to Paychex, whose customer tracking system is entirely on Salesforce.com, and so on -- and who increasingly accesses those systems using specialized devices like iPads and Android tablets -- simply doesn't have any IT infrastructure to secure anymore.
To be sure, I think we're a long way off from this future being a secure one, but it's the one the industry is headed toward. Yes, vendors at the RSA conference are only selling to the largest organizations. And, as I wrote back in 2008, soon they will only be selling to IT outsourcing companies (the term "cloud provider" hadn't been invented yet):
For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure -- power, water, cleaning service, tax preparation -- customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.
The RSA Conference won't die, of course. Security is too important for that. There will still be new technologies, new products and new startups. But it will become inward-facing, slowly turning into an industry conference. It'll be security companies selling to the companies who sell to corporate and home users -- and will no longer be a 17,000-person user conference.
The security no-man's land:
The security poverty line:
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Liars and Outliers," "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2013 by Bruce Schneier.