Entries Tagged "Apple"


iPhone Fingerprint Authentication

When Apple bought AuthenTec for its biometrics technology—reported as one of its most expensive purchases—there was a lot of speculation about how the company would incorporate biometrics in its product line. Many speculate that the new Apple iPhone to be announced tomorrow will come with a fingerprint authentication system, and there are several ways it could work, such as swiping your finger over a slit-sized reader to have the phone recognize you.

Apple would be smart to add biometric technology to the iPhone. Fingerprint authentication is a good balance between convenience and security for a mobile device.

Biometric systems are seductive, but the reality isn’t that simple. They have complicated security properties. For example, they are not keys. Your fingerprint isn’t a secret; you leave it everywhere you touch.

And fingerprint readers have a long history of vulnerabilities as well. Some are better than others. The simplest ones just check the ridges of a finger; some of those can be fooled with a good photocopy. Others check for pores as well. The better ones verify pulse, or finger temperature. Fooling them with rubber fingers is harder, but often possible. A Japanese researcher had good luck doing this over a decade ago with the gelatin mixture that’s used to make Gummi bears.

The best system I’ve ever seen was at the entry gates of a secure government facility. Maybe you could have fooled it with a fake finger, but a Marine guard with a big gun was making sure you didn’t get the opportunity to try. Disney World uses a similar system at its park gates—but without the Marine guards.

A biometric system that authenticates you and you alone is easier to design than a biometric system that is supposed to identify unknown people. That is, the question “Is this the finger belonging to the owner of this iPhone?” is a much easier question for the system to answer than “Whose finger is this?”
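The asymmetry between those two questions can be sketched in a few lines of code. This is a toy, not how any real matcher works: fingerprint templates are faked as single numbers, and the names, scoring function, and threshold are all invented for illustration.

```python
# Toy contrast between 1:1 verification and 1:N identification.
# Real matchers compare minutiae templates; here a "template" is just
# a number, so the structural difference is easy to see.

def match_score(template_a: float, template_b: float) -> float:
    """Stand-in for a real matcher: higher means more similar."""
    return 1.0 / (1.0 + abs(template_a - template_b))

def verify(probe: float, enrolled: float, threshold: float = 0.9) -> bool:
    """1:1 -- 'Is this the finger of this phone's owner?'
    One comparison against one enrolled template."""
    return match_score(probe, enrolled) >= threshold

def identify(probe: float, database: dict, threshold: float = 0.9):
    """1:N -- 'Whose finger is this?' Must search every enrolled
    template, and the odds of a chance collision grow with N."""
    best_name, best_score = None, 0.0
    for name, template in database.items():
        score = match_score(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

The verification case compares against a single stored template, which is why an iPhone-style unlock is the easier engineering problem; identification degrades as the database grows.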

There are two ways an authentication system can fail. It can mistakenly allow an unauthorized person access, or it can mistakenly deny access to an authorized person. In any consumer system, the second failure is far worse than the first. Yes, it can be problematic if an iPhone fingerprint system occasionally allows someone else access to your phone. But it’s much worse if you can’t reliably access your own phone—you’d junk the system after a week.
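The trade-off can be made concrete with a toy calculation; the scores below are invented, not real matcher output. Tightening the threshold drives false accepts down and false rejects up, and a consumer device will tune against the false rejects.

```python
# Sketch of the false-accept / false-reject trade-off with made-up
# similarity scores (1.0 = perfect match).

genuine = [0.95, 0.90, 0.70, 0.88, 0.93]   # owner's finger, varied conditions
impostor = [0.20, 0.35, 0.75, 0.10, 0.30]  # other people's fingers

def rates(threshold):
    """Return (false-accept rate, false-reject rate) at this threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# A strict threshold rejects the one poor genuine reading (0.70);
# a lenient one lets the one high impostor score (0.75) through.
print(rates(0.80))  # strict:  (0.0, 0.2)
print(rates(0.60))  # lenient: (0.2, 0.0)
```

A consumer device would pick something closer to the lenient setting: the occasional false accept is annoying, but locking the owner out is fatal to the product.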

If it’s true that Apple’s new iPhone will have biometric security, the designers have presumably erred on the side of ensuring that the user can always get in. Failures will be more common in cold weather, when your fingers are shriveled from the shower, and so on. But there will certainly still be the traditional PIN system to fall back on.

So…can biometric authentication be hacked?

Almost certainly. I’m sure that someone with a good enough copy of your fingerprint and some rudimentary materials engineering capability—or maybe just a good enough printer—can authenticate his way into your iPhone. But, honestly, if some bad guy has your iPhone and your fingerprint, you’ve probably got bigger problems to worry about.

The final problem with biometric systems is the database. If the system is centralized, there will be a large database of biometric information that’s vulnerable to hacking. A system by Apple will almost certainly be local—you authenticate yourself to the phone, not to any network—so there’s no requirement for a centralized fingerprint database.

Apple’s move is likely to bring fingerprint readers into the mainstream. But not all applications are equal. It’s fine if your fingers unlock your phone. It’s a different matter entirely if your fingerprint is used to authenticate your iCloud account. The centralized database required for that application would create an enormous security risk.

This essay previously appeared on Wired.com.

EDITED TO ADD: The new iPhone does have a fingerprint reader.

Posted on September 11, 2013 at 6:43 AM

Restoring Trust in Government and the Internet

In July 2012, responding to allegations that the video-chat service Skype—owned by Microsoft—was changing its protocols to make it possible for the government to eavesdrop on users, Corporate Vice President Mark Gillett took to the company’s blog to deny it.

Turns out that wasn’t quite true.

Or at least he—or the company’s lawyers—carefully crafted a statement that could be defended as true while completely deceiving the reader. You see, Skype wasn’t changing its protocols to make it possible for the government to eavesdrop on users, because the government was already able to eavesdrop on users.

At a Senate hearing in March, Director of National Intelligence James Clapper assured the committee that his agency didn’t collect data on hundreds of millions of Americans. He was lying, too. He later defended his lie by inventing a new definition of the word “collect,” an excuse that didn’t even pass the laugh test.

As Edward Snowden’s documents reveal more about the NSA’s activities, it’s becoming clear that we can’t trust anything anyone official says about these programs.

Google and Facebook insist that the NSA has no “direct access” to their servers. Of course not; the smart way for the NSA to get all the data is through sniffers.

Apple says it’s never heard of PRISM. Of course not; that’s the internal name of the NSA database. Companies are publishing reports purporting to show how few requests for customer-data access they’ve received, a meaningless number when a single Verizon request can cover all of their customers. The Guardian reported that Microsoft secretly worked with the NSA to subvert the security of Outlook, something it carefully denies. Even President Obama’s justifications and denials are phrased with the intent that the listener will take his words very literally and not wonder what they really mean.

NSA Director Gen. Keith Alexander has claimed that the NSA’s massive surveillance and data mining programs have helped stop more than 50 terrorist plots, 10 inside the U.S. Do you believe him? I think it depends on your definition of “helped.” We’re not told whether these programs were instrumental in foiling the plots or whether they just happened to be of minor help because the data was there. It also depends on your definition of “terrorist plots.” An examination of plots that the FBI claims to have foiled since 9/11 reveals that would-be terrorists have commonly been delusional, and most have been egged on by FBI undercover agents or informants.

Left alone, few were likely to have accomplished much of anything.

Both government agencies and corporations have cloaked themselves in so much secrecy that it’s impossible to verify anything they say; revelation after revelation demonstrates that they’ve been lying to us regularly and tell the truth only when there’s no alternative.

There’s much more to come. Right now, the press has published only a tiny percentage of the documents Snowden took with him. And Snowden’s files are only a tiny percentage of the number of secrets our government is keeping, awaiting the next whistle-blower.

Ronald Reagan once said “trust but verify.” That works only if we can verify. In a world where everyone lies to us all the time, we have no choice but to trust blindly, and we have no reason to believe that anyone is worthy of blind trust. It’s no wonder that most people are ignoring the story; it’s just too much cognitive dissonance to try to cope with it.

This sort of thing can destroy our country. Trust is essential in our society. And if we can’t trust either our government or the corporations that have intimate access into so much of our lives, society suffers. Study after study demonstrates the value of living in a high-trust society and the costs of living in a low-trust one.

Rebuilding trust is not easy, as anyone who has betrayed or been betrayed by a friend or lover knows, but the path involves transparency, oversight and accountability. Transparency first involves coming clean. Not a little bit at a time, not only when you have to, but complete disclosure about everything. Then it involves continuing disclosure. No more secret rulings by secret courts about secret laws. No more secret programs whose costs and benefits remain hidden.

Oversight involves meaningful constraints on the NSA, the FBI and others. This will be a combination of things: a court system that acts as a third-party advocate for the rule of law rather than a rubber-stamp organization, a legislature that understands what these organizations are doing and regularly debates requests for increased power, and vibrant public-sector watchdog groups that analyze and debate the government’s actions.

Accountability means that those who break the law, lie to Congress or deceive the American people are held accountable. The NSA has gone rogue, and while it’s probably not possible to prosecute people for what they did under the enormous veil of secrecy it currently enjoys, we need to make it clear that this behavior will not be tolerated in the future. Accountability also means voting, which means voters need to know what our leaders are doing in our name.

This is the only way we can restore trust. A market economy doesn’t work unless consumers can make intelligent buying decisions based on accurate product information. That’s why we have agencies like the FDA, truth-in-packaging laws and prohibitions against false advertising.

In the same way, democracy can’t work unless voters know what the government is doing in their name. That’s why we have open-government laws. Secret courts making secret rulings on secret laws, and companies flagrantly lying to consumers about the insecurity of their products and services, undermine the very foundations of our society.

Since the Snowden documents became public, I have been receiving e-mails from people seeking advice on whom to trust. As a security and privacy expert, I’m expected to know which companies protect their users’ privacy and which encryption programs the NSA can’t break. The truth is, I have no idea. No one outside the classified government world does. I tell people that they have no choice but to decide whom they trust and to then trust them as a matter of faith. It’s a lousy answer, but until our government starts down the path of regaining our trust, it’s the only thing we can do.

This essay originally appeared on CNN.com.

EDITED TO ADD (8/7): Two more links describing how the US government lies about NSA surveillance.

Posted on August 7, 2013 at 6:29 AM

How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM

Details of NSA Data Requests from US Corporations

Facebook (here), Apple (here), and Yahoo (here) have all released details of US government requests for data. They each say that they’ve turned over user data for about 10,000 people, although the time frames are different. The exact number isn’t important; what’s important is that it’s much lower than the millions implied by the PRISM document.

Now the big question: do we believe them? If we don’t, what would it take before we did believe them?

Posted on June 18, 2013 at 4:00 PM

More on Feudal Security

Facebook regularly abuses the privacy of its users. Google has stopped supporting its popular RSS reader. Apple prohibits all iPhone apps that are political or sexual. Microsoft might be cooperating with some governments to spy on Skype calls, but we don’t know which ones. Both Twitter and LinkedIn have recently suffered security breaches that affected the data of hundreds of thousands of their users.

If you’ve started to think of yourself as a hapless peasant in a Game of Thrones power struggle, you’re more right than you may realize. These are not traditional companies, and we are not traditional customers. These are feudal lords, and we are their vassals, peasants, and serfs.

Power has shifted in IT, in favor of both cloud-service providers and closed-platform vendors. This power shift affects many things, and it profoundly affects security.

Traditionally, computer security was the user’s responsibility. Users purchased their own antivirus software and firewalls, and any breaches were blamed on their inattentiveness. It’s kind of a crazy business model. Normally we expect the products and services we buy to be safe and secure, but in IT we tolerated lousy products and supported an enormous aftermarket for security.

Now that the IT industry has matured, we expect more security “out of the box.” This has become possible largely because of two technology trends: cloud computing and vendor-controlled platforms. The first means that most of our data resides on other networks: Google Docs, Salesforce.com, Facebook, Gmail. The second means that our new Internet devices are both closed and controlled by the vendors, giving us limited configuration control: iPhones, Chromebooks, Kindles, BlackBerry PDAs. Meanwhile, our relationship with IT has changed. We used to use our computers to do things. We now use our vendor-controlled computing devices to go places. All of these places are owned by someone.

The new security model is that someone else takes care of it—without telling us any of the details. I have no control over the security of my Gmail or my photos on Flickr. I can’t demand greater security for my presentations on Prezi or my task list on Trello, no matter how confidential they are. I can’t audit any of these cloud services. I can’t delete cookies on my iPad or ensure that files are securely erased. Updates on my Kindle happen automatically, without my knowledge or consent. I have so little visibility into the security of Facebook that I have no idea what operating system they’re using.

There are a lot of good reasons why we’re all flocking to these cloud services and vendor-controlled platforms. The benefits are enormous, from cost to convenience to reliability to security itself. But it is inherently a feudal relationship. We cede control of our data and computing platforms to these companies and trust that they will treat us well and protect us from harm. And if we pledge complete allegiance to them—if we let them control our email and calendar and address book and photos and everything—we get even more benefits. We become their vassals; or, on a bad day, their serfs.

There are a lot of feudal lords out there. Google and Apple are the obvious ones, but Microsoft is trying to control both user data and the end-user platform as well. Facebook is another lord, controlling much of the socializing we do on the Internet. Other feudal lords are smaller and more specialized—Amazon, Yahoo, Verizon, and so on—but the model is the same.

To be sure, feudal security has its advantages. These companies are much better at security than the average user. Automatic backup has saved a lot of data after hardware failures, user mistakes, and malware infections. Automatic updates have increased security dramatically. This is also true for small organizations; they are more secure than they would be if they tried to do it themselves. For large corporations with dedicated IT security departments, the benefits are less clear. Sure, even large companies outsource critical functions like tax preparation and cleaning services, but large companies have specific requirements for security, data retention, audit, and so on—and that’s just not possible with most of these feudal lords.

Feudal security also has its risks. Vendors can, and do, make security mistakes affecting hundreds of thousands of people. Vendors can lock people into relationships, making it hard for them to take their data and leave. Vendors can act arbitrarily, against our interests; Facebook regularly does this when it changes people’s defaults, implements new features, or modifies its privacy policy. Many vendors give our data to the government without notice, consent, or a warrant; almost all sell it for profit. This isn’t surprising, really; companies should be expected to act in their own self-interest and not in their users’ best interest.

The feudal relationship is inherently based on power. In Medieval Europe, people would pledge their allegiance to a feudal lord in exchange for that lord’s protection. This arrangement changed as the lords realized that they had all the power and could do whatever they wanted. Vassals were used and abused; peasants were tied to their land and became serfs.

It’s the Internet lords’ popularity and ubiquity that enable them to profit; laws and government relationships make it easier for them to hold onto power. These lords are vying with each other for profits and power. By spending time on their sites and giving them our personal information—whether through search queries, e-mails, status updates, likes, or simply our behavioral characteristics—we are providing the raw material for that struggle. In this way we are like serfs, tilling the land for our feudal lords. If you don’t believe me, try to take your data with you when you leave Facebook. And when war breaks out among the giants, we become collateral damage.

So how do we survive? Increasingly, we have little alternative but to trust someone, so we need to decide who we trust—and who we don’t—and then act accordingly. This isn’t easy; our feudal lords go out of their way not to be transparent about their actions, their security, or much of anything. Use whatever power you have—as individuals, none; as large corporations, more—to negotiate with your lords. And, finally, don’t be extreme in any way: politically, socially, culturally. Yes, you can be shut down without recourse, but it’s usually those on the edges that are affected. Not much solace, I agree, but it’s something.

On the policy side, we have an action plan. In the short term, we need to keep circumvention—the ability to modify our hardware, software, and data files—legal and preserve net neutrality. Both of these things limit how much the lords can take advantage of us, and they increase the possibility that the market will force them to be more benevolent. The last thing we want is the government—that’s us—spending resources to enforce one particular business model over another and stifling competition.

In the longer term, we all need to work to reduce the power imbalance. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today’s Internet feudalism is both ad hoc and one-sided. We have no choice but to trust the lords, but we receive very few assurances in return. The lords have a lot of rights, but few responsibilities or limits. We need to balance this relationship, and government intervention is the only way we’re going to get it. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people.

We need a similar process to rein in our Internet lords, and it’s not something that market forces are likely to provide. The very definition of power is changing, and the issues are far bigger than the Internet and our relationships with our IT providers.

This essay originally appeared on the Harvard Business Review website. It is an update of this earlier essay on the same topic. “Feudal security” is a metaphor I have been using a lot recently; I wrote this essay without rereading my previous essay.

EDITED TO ADD (6/13): There is another way the feudal metaphor applies to the Internet. There is no commons; every part of the Internet is owned by someone. This article explores that aspect of the metaphor.

Posted on June 13, 2013 at 11:34 AM

Bluetooth-Controlled Door Lock

Here is a new lock that you can control via Bluetooth and an iPhone app.

That’s pretty cool, and I can imagine all sorts of reasons to get one of those. But I’m sure there are all sorts of unforeseen security vulnerabilities in this system. And even worse, a single vulnerability can affect all the locks. Remember that vulnerability found last year in hotel electronic locks?

Anyone care to guess how long before some researcher finds a way to hack this one? And how well the maker anticipated the need to update the firmware to fix the vulnerability once someone finds it?

I’m not saying that you shouldn’t use this lock, only that you should understand that new technology brings new security risks, and electronic technology brings new kinds of security risks. Security is a trade-off, and the trade-off is particularly stark in this case.

Posted on May 16, 2013 at 8:45 AM

Apple’s iMessage Encryption Seems to Be Pretty Good

The U.S. Drug Enforcement Administration has complained (in a classified report, not publicly) that Apple’s iMessage end-to-end encryption scheme can’t be broken. On the one hand, I’m not surprised; end-to-end encryption of a messaging system is a fairly easy cryptographic problem, and it should be unbreakable. On the other hand, it’s nice to have some confirmation that Apple is looking out for the users’ best interests and not the governments’.

Still, it’s impossible for us to know if iMessage encryption is actually secure. It’s certainly possible that Apple messed up somewhere, and since we have no idea how their encryption actually works, we can’t verify its functionality. It would be really nice if Apple would release the specifications of iMessage security.

EDITED TO ADD (4/8): There’s more to this story:

The DEA memo simply observes that, because iMessages are encrypted and sent via the Internet through Apple’s servers, a conventional wiretap installed at the cellular carrier’s facility isn’t going to catch those iMessages along with conventional text messages. Which shouldn’t exactly be surprising: A search of your postal mail isn’t going to capture your phone calls either; they’re just different communications channels. But the CNET article strongly implies that this means encrypted iMessages cannot be accessed by law enforcement at all. That is almost certainly false.

The question is whether iMessage uses true end-to-end encryption, or whether Apple has copies of the keys.
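The distinction matters because with true end-to-end key agreement, the server relaying the messages never sees anything that lets it derive the key. A toy Diffie-Hellman exchange illustrates the property; this is not iMessage’s actual protocol, which Apple has not published, and the parameters here are far too small and unauthenticated for real use.

```python
# Toy Diffie-Hellman key agreement (illustration only: tiny parameters,
# and an unauthenticated exchange like this is open to man-in-the-middle
# attacks).
import secrets

P = 2**127 - 1  # a Mersenne prime; a real system would use a 2048-bit+ group
G = 3

def keypair():
    priv = secrets.randbelow(P - 3) + 2  # secret exponent
    return priv, pow(G, priv, P)         # (private, public)

a_priv, a_pub = keypair()  # one endpoint
b_priv, b_pub = keypair()  # the other endpoint

# The relay forwards only a_pub and b_pub; from those alone it cannot
# feasibly compute the shared key. Each endpoint derives it locally:
k_a = pow(b_pub, a_priv, P)
k_b = pow(a_pub, b_priv, P)
assert k_a == k_b  # same key at both ends, never sent over the wire
```

If, on the other hand, the service also holds copies of the endpoints’ private keys (key escrow), the math above still works for the users, but the relay can derive the same key, which is exactly the question being asked about Apple.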

Another article.

Posted on April 5, 2013 at 1:05 PM

Using the iWatch for Authentication

Usability engineer Bruce Tognazzini talks about how an iWatch—which seems to be either a mythical Apple product or one actually in development—can make authentication easier.

Passcodes. The watch can and should, for most of us, eliminate passcodes altogether on iPhones, and Macs and, if Apple’s smart, PCs: As long as my watch is in range, let me in! That, to me, would be the single-most compelling feature a smartwatch could offer: If the watch did nothing but release me from having to enter my passcode/password 10 to 20 times a day, I would buy it. If the watch would just free me from having to enter pass codes, I would buy it even if it couldn’t tell the right time! I would happily strap it to my opposite wrist! This one is a must. Yes, Apple is working on adding fingerprint reading for iDevices, and that’s just wonderful, but it will still take time and trouble for the device to get an accurate read from the user. I want in now! Instantly! Let me in, let me in, let me in!

Apple must ensure, however, that, if you remove the watch, you must reestablish authenticity. (Reauthorizing would be an excellent place for biometrics.) Otherwise, we’ll have a spate of violent “watchjackings” replacing the non-violent iPhone-grabs going on today.

Posted on February 14, 2013 at 11:42 AM

Squids on the Economist Cover

Four squids on the cover of this week’s Economist represent the four massive (and intrusive) data-driven Internet giants: Google, Facebook, Apple, and Amazon.

Interestingly, these are the same four companies I’ve been listing as the new corporate threat to the Internet.

The first of three pillars propping up this outside threat are big data collectors, which in addition to Apple and Google, Schneier identified as Amazon and Facebook. (Notice Microsoft didn’t make the cut.) The goal of their data collection is for marketers to be able to make snap decisions about the product tastes, credit worthiness, and employment suitability of millions of people. Often, this information is fed into systems maintained by governments.

Notice that Microsoft didn’t make the Economist’s cut either.

I gave that talk at the RSA Conference in February of this year. The link in the article is from another conference the week before, where I test-drove the talk.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Posted on December 7, 2012 at 4:04 PM

Feudal Security

It’s a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them—or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.

Of course, I’m romanticizing here; European history was never this simple, and the description is based on stories of that time, but that’s the general model.

And it’s this model that’s starting to permeate computer security today.

I Pledge Allegiance to the United States of Convenience

Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.

This model is breaking, largely due to two developments:

  1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do—like the iPhone and Kindle; and
  2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we’ve lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates—iPhone, for example—but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

Yet feudal security isn’t without its risks.

Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.

Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off—again, like serfs—to rival lords…or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn’t do to each other).

Today’s internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it’s a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent—or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems—it’s time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Posted on December 3, 2012 at 7:24 AM

