The Costs of NSA Surveillance
New America Foundation has a new paper on the costs of NSA surveillance: economic costs to US business, costs to US foreign policy, and costs to security.
News article.
Ross Anderson has an important new paper on the economics that drive government-on-population bulk surveillance:
My first big point is that all the three factors which lead to monopoly – network effects, low marginal costs and technical lock-in – are present and growing in the national-intelligence nexus itself. The Snowden papers show that neutrals like Sweden and India are heavily involved in information sharing with the NSA, even though they have tried for years to pretend otherwise. A non-aligned country such as India used to be happy to buy warplanes from Russia; nowadays it still does, but it shares intelligence with the NSA rather than the FSB. If you have a choice of joining a big spy network like America’s or a small one like Russia’s then it’s like choosing whether to write software for the PC or the Mac back in the 1990s. It may be partly an ideological choice, but the economics can often be stronger than the ideology.
Second, modern warfare, like the software industry, has seen the bulk of its costs turn from variable costs into fixed costs. In medieval times, warfare was almost entirely a matter of manpower, and society was organised appropriately; as well as rent or produce, tenants owed their feudal lord forty days’ service in peacetime, and sixty days during a war. Barons held their land from the king in return for an oath of fealty, and a duty to provide a certain size of force on demand; priests and scholars paid a tax in lieu of service, so that a mercenary could be hired in their place. But advancing technology brought steady industrialisation. When the UK and the USA attacked Germany in 1944, we did not send millions of men to Europe, as in the first world war, but a combat force of a couple of hundred thousand troops – though with thousands of tanks and backed by larger numbers of men in support roles in tens of thousands of aircraft and ships. Nowadays the transition from labour to capital has gone still further: to kill a foreign leader, we could get a drone to fire a missile that costs $30,000. But that’s backed by colossal investment – the firms whose data are tapped by PRISM have a combined market capitalisation of over $1 trillion.
Third is the technical lock-in, which operates at a number of levels. First, there are lock-in effects in the underlying industries, where (for example) Cisco dominates the router market: those countries that have tried to build US-free information infrastructures (China) or even just government information infrastructures (Russia, Germany) find it’s expensive. China went to the trouble of sponsoring an indigenous vendor, Huawei, but it’s unclear how much separation that buys them because of the common code shared by router vendors: a vulnerability discovered in one firm’s products may affect another. Thus the UK government lets BT buy Huawei routers for all but its network’s most sensitive parts (the backbone and the lawful-intercept functions). Second, technical lock-in affects the equipment used by the intelligence agencies themselves, and is in fact promoted by the agencies via ETSI standards for functions such as lawful intercept.
Just as these three factors led to the IBM network dominating the mainframe age, the Intel/Microsoft network dominating the PC age, and Facebook dominating the social networking scene, so they push strongly towards global surveillance becoming a single connected ecosystem.
These are important considerations when trying to design national policies around surveillance.
Ross’s blog post.
Bessemer Venture Partners partner David Cowan has an interesting article on the opportunities for cloud security companies.
Richard Stiennon, an industry analyst, has a similar article.
And Zscaler comments on a 451 Research report on the cloud security business.
Lavabit, the more-secure e-mail service that Edward Snowden—among others—used, has abruptly shut down. From the message on their homepage:
I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit. After significant soul searching, I have decided to suspend operations. I wish that I could legally share with you the events that led to my decision. I cannot….
This experience has taught me one very important lesson: without congressional action or a strong judicial precedent, I would strongly recommend against anyone trusting their private data to a company with physical ties to the United States.
In case something happens to the homepage, the full message is recorded here.
More about the public/private surveillance partnership. And another news article.
Also yesterday, Silent Circle shut down its email service:
We see the writing on the wall, and we have decided that it is best for us to shut down Silent Mail now. We have not received subpoenas, warrants, security letters, or anything else by any government, and this is why we are acting now.
This illustrates the difference between a business owned by a person, and a public corporation owned by shareholders. Ladar Levison can decide to shutter Lavabit—a move that will personally cost him money—because he believes it’s the right thing to do. I applaud that decision, but it’s one he’s only able to make because he doesn’t have to answer to public shareholders. Could you imagine what would happen if Mark Zuckerberg or Larry Page decided to shut down Facebook or Google rather than answer National Security Letters? They couldn’t. They would be fired.
When the small companies can no longer operate, it’s another step in the consolidation of the surveillance society.
Companies allow US intelligence to exploit vulnerabilities before they patch them:
Microsoft Corp. (MSFT), the world’s largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. That information can be used to protect government computers and to access the computers of terrorists or military foes.
Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials. Microsoft doesn’t ask and can’t be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential.
No word on whether these companies would delay a patch if asked nicely—or if there’s any way the government can require them to. Anyone feel safer because of this?
Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.
If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.
Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.
Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.
Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, ChromeBooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.
The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.
There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.
There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.
To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.
Feudal security also has its risks. Vendors can, and do, make security mistakes affecting hundreds of thousands of people. Vendors can lock people into relationships, making it hard for them to take their data and leave. Vendors can act arbitrarily, against our interests; Facebook regularly does this when it changes people’s defaults, implements new features, or modifies its privacy policy. Many vendors give our data to the government without notice, consent, or a warrant; almost all sell it for profit. This isn’t surprising, really; companies should be expected to act in their own self-interest and not in their users’ best interest.
The feudal relationship is inherently based on power. In Medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.
It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, tilling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.
So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, none; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.
On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.
In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.
We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.
This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.
EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.
Any essay on hiring hackers quickly gets bogged down in definitions. What is a hacker, and how is he different from a cracker? I have my own definitions, but I’d rather define the issue more specifically: Would you hire someone convicted of a computer crime to fill a position of trust in your computer network? Or, more generally, would you hire someone convicted of a crime for a job related to that crime?
The answer, of course, is “it depends.” It depends on the specifics of the crime. It depends on the ethics involved. It depends on the recidivism rate of the type of criminal. It depends a whole lot on the individual.
Would you hire a convicted pedophile to work at a day care center? Would you hire Bernie Madoff to manage your investment fund? The answer is almost certainly no to those two—but you might hire a convicted bank robber to consult on bank security. You might hire someone who was convicted of false advertising to write ad copy for your next marketing campaign. And you might hire someone who ran a chop shop to fix your car. It depends on the person and the crime.
It can get even murkier. Would you hire a CIA-trained assassin to be a bodyguard? Would you put a general who led a successful attack in charge of defense? What if they were both convicted of crimes in whatever country they were operating in? There are different legal and ethical issues, to be sure, but in both cases the people learned a certain set of skills regarding offense that could be transferable to defense.
Which brings us back to computers. Hacking is primarily a mindset: a way of thinking about security. Its primary focus is on attacking systems, but it’s invaluable to the defense of those systems as well. Because computer systems are so complex, defending them often requires people who can think like attackers.
Admittedly, there’s a difference between thinking like an attacker and acting like a criminal, and between researching vulnerabilities in fielded systems and exploiting those vulnerabilities for personal gain. But there is a huge variability in computer crime convictions, and—at least in the early days—many hacking convictions were unjust and unfair. And there’s also a difference between someone’s behavior as a teenager and his behavior later in life. Additionally, there might very well be a difference between someone’s behavior before and after a hacking conviction. It all depends on the person.
An employer’s goal should be to hire moral and ethical people with the skill set required to do the job. And while a hacking conviction is certainly a mark against a person, it isn’t always grounds for complete non-consideration.
“We don’t hire hackers” and “we don’t hire felons” are coarse generalizations, in the same way that “we only hire people with this or that security certification” is. They work—you’re less likely to hire the wrong person if you follow them—but they’re both coarse and flawed. Just as all potential employees with certifications aren’t automatically good hires, all potential employees with hacking convictions aren’t automatically bad hires. Sure, it’s easier to hire people based on things you can learn from checkboxes, but you won’t get the best employees that way. It’s far better to look at the individual, and put those checkboxes into context. But we don’t always have time to do that.
Last winter, a Minneapolis attorney who works to get felons a fair shake after they’ve served their time told of a sign he saw: “Snow shovelers wanted. Felons need not apply.” It’s not good for society if felons who have served their time can’t even get jobs shoveling snow.
This essay previously appeared in Information Security as the first half of a point-counterpoint with Marcus Ranum. Marcus’s half is here.
LIGATT Security certainly hopes to scare people.
The editor of the Freakonomics blog asked me to write about this topic. The idea was that they would get several opinions, and publish them all. They spiked the story, but I already wrote my piece. So here it is.
In deciding what to do with Gray Powell, the Apple employee who accidentally left a secret prototype 4G iPhone in a California bar, Apple needs to figure out how much of the problem is due to an employee not following the rules, and how much of the problem is due to unclear, unrealistic, or just plain bad rules.
If Powell sneaked the phone out of the Apple building in a flagrant violation of the rules—maybe he wanted to show it to a friend—he should be disciplined, perhaps even fired. Some military installations have rules like that. If someone wants to take something classified out of a top secret military compound, he might have to secrete it on his person and deliberately sneak it past a guard who searches briefcases and purses. He might be committing a crime by doing so, by the way. Apple isn’t the military, of course, but if their corporate security policy is that strict, it may very well have rules like that. And the only way to ensure rules are followed is by enforcing them, and that means severe disciplinary action against those who bypass the rules.
Even if Powell had authorization to take the phone out of Apple’s labs—presumably someone has to test drive the new toys sooner or later—the corporate rules might have required him to pay attention to it at all times. We’ve all heard of military attachés who carry briefcases chained to their wrists. It’s an extreme example, but demonstrates how a security policy can allow for objects to move around town—or around the world—without getting lost. Apple almost certainly doesn’t have a policy as rigid as that, but its policy might explicitly prohibit Powell from taking that phone into a bar, putting it down on a counter, and participating in a beer tasting. Again, if Apple’s rules and Powell’s violation were both that clear, Apple should enforce them.
On the other hand, if Apple doesn’t have clear-cut rules, if Powell wasn’t prohibited from taking the phone out of his office, if engineers routinely ignore or bypass security rules and—as long as nothing bad happens—no one complains, then Apple needs to understand that the system is more to blame than the individual. Most corporate security policies have this sort of problem. Security is important, but it’s quickly jettisoned when there’s an important job to be done. A common example is passwords: people aren’t supposed to share them, unless it’s really important and they have to. Another example is guest accounts. And doors that are supposed to remain locked but rarely are. People routinely bypass security policies if they get in the way, and if no one complains, those policies are effectively meaningless.
Apple’s unfortunately public security breach has given the company an opportunity to examine its policies and figure out how much of the problem is Powell and how much of it is the system he’s a part of. Apple needs to fix its security problem, but only after it figures out where the problem is.
Information technology is increasingly everywhere, and it’s the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping. The same networking technologies are used in every country. The same digital infrastructure underpins the small and the large, the important and the trivial, the local and the global; the same vendors, the same standards, the same protocols, the same applications.
With all of this sameness, you’d think these technologies would be designed to the highest security standard, but they’re not. They’re designed to the lowest or, at best, somewhere in the middle. They’re designed sloppily, in an ad hoc manner, with efficiency in mind. Security is a requirement, more or less, but it’s a secondary priority. It’s far less important than functionality, and security is what gets compromised when schedules get tight.
Should the government—ours, someone else’s?—stop outsourcing code development? That’s the wrong question to ask. Code isn’t magically more secure when it’s written by someone who receives a government paycheck than when it’s written by someone who receives a corporate paycheck. It’s not magically less secure when it’s written by someone who speaks a foreign language, or is paid by the hour instead of by salary. Writing all your code in-house isn’t even a viable option anymore; we’re all stuck with software written by who-knows-whom in who-knows-which-country. And we need to figure out how to get security from that.
The traditional solution has been defense in depth: layering one mediocre security measure on top of another mediocre security measure. So we have the security embedded in our operating system and applications software, the security embedded in our networking protocols, and our additional security products such as antivirus and firewalls. We hope that whatever security flaws—either found and exploited, or deliberately inserted—there are in one layer are counteracted by the security in another layer, and that when they’re not, we can patch our systems quickly enough to avoid serious long-term damage. That is a lousy solution when you think about it, but we’ve been more-or-less managing with it so far.
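As a purely illustrative sketch (not from the essay), here is what that layering looks like in miniature: a hypothetical request pipeline in which each layer is an independent, imperfect check, and a request gets through only if every layer passes it. The blocklist, size limit, and signature list are made-up stand-ins for a network firewall, application-level validation, and an antivirus scan.

```python
# Minimal sketch of defense in depth, assuming a hypothetical incoming-request
# pipeline. Every name and rule here is invented for illustration.

from dataclasses import dataclass


@dataclass
class Request:
    source_ip: str
    payload: str


def network_firewall(req: Request) -> bool:
    """Layer 1: coarse network filtering against a toy blocklist."""
    blocked_prefixes = ("203.0.113.",)  # documentation address range, as an example
    return not req.source_ip.startswith(blocked_prefixes)


def input_validation(req: Request) -> bool:
    """Layer 2: application-level sanity checks on the payload."""
    return len(req.payload) < 4096 and req.payload.isprintable()


def malware_scan(req: Request) -> bool:
    """Layer 3: crude signature scan standing in for an antivirus product."""
    signatures = ("<script>", "drop table")
    lowered = req.payload.lower()
    return not any(sig in lowered for sig in signatures)


LAYERS = (network_firewall, input_validation, malware_scan)


def accept(req: Request) -> bool:
    # No single layer is trusted to be perfect; a flaw in one layer only
    # matters if the request also slips past all the others.
    return all(layer(req) for layer in LAYERS)


if __name__ == "__main__":
    print(accept(Request("198.51.100.7", "hello world")))            # True
    print(accept(Request("198.51.100.7", "x'; DROP TABLE users")))   # False
```

The code makes the same point the paragraph makes: each layer is mediocre on its own, and the hope is that their failures don’t line up.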
Bringing all software—and hardware, I suppose—development in-house under some misconception that proximity equals security is not a better solution. What we need is to improve the software development process, so we can have some assurance that our software is secure—regardless of what coder, employed by what company, and living in what country, writes it. The key word here is “assurance.”
Assurance is less about developing new security techniques than about using the ones we already have. It’s all the things described in books on secure coding practices. It’s what Microsoft is trying to do with its Security Development Lifecycle. It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it fields a piece of avionics software. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems. But most of the time, we don’t care; commercial software, as insecure as it is, is good enough for most purposes.
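To make “the things described in books on secure coding practices” concrete, here is one small, hedged example of the kind of discipline those books and programs teach: using parameterized queries instead of building SQL by string concatenation. The users table, column names, and helper functions are hypothetical, chosen only to illustrate the practice.

```python
# Minimal sketch of one well-known secure-coding practice: parameterized
# queries. The schema and data are invented for illustration.

import sqlite3


def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Anti-pattern: concatenating untrusted input into SQL invites injection.
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()


def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver keeps data separate from the SQL code.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.execute("INSERT INTO users (username) VALUES ('alice')")

    malicious = "x' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row -- the flaw
    print(find_user_safe(conn, malicious))    # returns [] -- no such user
```

Assurance, in the essay’s sense, is the process and documentation that gives you confidence practices like this were actually followed, not the practice itself.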
Assurance is expensive, in terms of money and time, for both the process and the documentation. But the NSA needs assurance for critical military systems and Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be more common in government IT contracts.
The software used to run our critical infrastructure—government, corporate, everything—isn’t very secure, and there’s no hope of fixing it anytime soon. Assurance is really our only option to improve this, but it’s expensive and the market doesn’t care. Government has to step in and spend the money where its requirements demand it, and then we’ll all benefit when we buy the same software.
This essay first appeared in Information Security, as the second part of a point-counterpoint with Marcus Ranum. You can read Marcus’s essay there as well.