Entries Tagged "iPhone"

Using the iWatch for Authentication

Usability engineer Bruce Tognazzini talks about how an iWatch—which seems to be either a mythical Apple product or one actually in development—can make authentication easier.

Passcodes. The watch can and should, for most of us, eliminate passcodes altogether on iPhones, Macs and, if Apple’s smart, PCs: As long as my watch is in range, let me in! That, to me, would be the single most compelling feature a smartwatch could offer: If the watch did nothing but release me from having to enter my passcode/password 10 to 20 times a day, I would buy it. If the watch would just free me from having to enter passcodes, I would buy it even if it couldn’t tell the right time! I would happily strap it to my opposite wrist! This one is a must. Yes, Apple is working on adding fingerprint reading for iDevices, and that’s just wonderful, but it will still take time and trouble for the device to get an accurate read from the user. I want in now! Instantly! Let me in, let me in, let me in!

Apple must ensure, however, that, if you remove the watch, you must reestablish authenticity. (Reauthorizing would be an excellent place for biometrics.) Otherwise, we’ll have a spate of violent “watchjackings” replacing the non-violent iPhone-grabs going on today.
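The removal rule is the heart of the design, and it is simple enough to sketch. Here is a minimal illustration in Python—entirely hypothetical, since no such Apple API exists—of the policy described above: proximity alone unlocks, but taking the watch off invalidates it until the wearer re-authenticates.

    class WatchAuth:
        # Hypothetical model of watch-based unlocking with a removal rule.

        def __init__(self):
            self.session_valid = False  # True only after on-wrist re-auth

        def biometric_reauth(self, fingerprint_ok):
            # Putting the watch back on requires proving you are the owner.
            self.session_valid = bool(fingerprint_ok)

        def watch_removed(self):
            # A removed watch must stop unlocking anything, or violent
            # "watchjackings" would replace today's iPhone-grabs.
            self.session_valid = False

        def may_unlock(self, in_range):
            # Unlock only when the watch is nearby AND still trusted.
            return in_range and self.session_valid

    auth = WatchAuth()
    auth.biometric_reauth(fingerprint_ok=True)   # strap on, verify owner
    assert auth.may_unlock(in_range=True)        # phone nearby: let me in
    auth.watch_removed()
    assert not auth.may_unlock(in_range=True)    # a stolen watch is useless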

Posted on February 14, 2013 at 11:42 AM

Feudal Security

It’s a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them—or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.

Of course, I’m romanticizing here; European history was never this simple, and the description is based on stories of that time, but that’s the general model.

And it’s this model that’s starting to permeate computer security today.

I Pledge Allegiance to the United States of Convenience

Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.

This model is breaking, largely due to two developments:

  1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do—like the iPhone and Kindle; and
  2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we’ve lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates—iPhone, for example—but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

Yet feudal security isn’t without its risks.

Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.

Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off—again, like serfs—to rival lords…or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn’t do to each other).

Today’s Internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it’s a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent—or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems—it’s time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Posted on December 3, 2012 at 7:24 AM

Apple Turns on iPhone Tracking in iOS6

This is important:

Previously, Apple had all but disabled tracking of iPhone users by advertisers when it stopped app developers from utilizing Apple mobile device data via UDID, the unique, permanent, non-deletable serial number that previously identified every Apple device.

For the last few months, iPhone users have enjoyed an unusual environment in which advertisers have been largely unable to track and target them in any meaningful way.

In iOS 6, however, tracking is most definitely back on, and it’s more effective than ever, multiple mobile advertising executives familiar with IFA tell us. (Note that Apple doesn’t mention IFA in its iOS 6 launch page).

EDITED TO ADD (10/15): Apple has provided a way to opt out of the targeted ads and also to disable the location information being sent.

Posted on October 15, 2012 at 1:21 PM

Scary Android Malware Story

This story sounds pretty scary:

Developed by Robert Templeman at the Naval Surface Warfare Center in Indiana and a few buddies from Indiana University, PlaceRaider hijacks your phone’s camera and takes a series of secret photographs, recording the time, and the phone’s orientation and location with each shot. Using that information, it can reliably build a 3D model of your home or office, and let cyber-intruders comb it for personal information like passwords on sticky notes, bank statements lying out on the coffee table, or anything else you might have lying around that could wind up the target of a raid at a later date.

It’s just a demo, of course, but it’s easy to imagine what this could mean in the hands of criminals.

Yes, I get that this is bad. But it seems to be a mashup of two things. One, the increasing technical capability to stitch together a series of photographs into a three-dimensional model. And two, an Android bug that allows someone to remotely and surreptitiously take pictures and then upload them. The first thing isn’t a problem, and it isn’t going away. The second is bad, irrespective of what else is going on.

EDITED TO ADD (10/1): I mistakenly wrote this up as an iPhone story. It’s about the Android phone. Apologies.

Posted on October 1, 2012 at 6:52 AM

The NSA and the Risk of Off-the-Shelf Devices

Interesting article on how the NSA is approaching risk in the era of cool consumer devices. There’s a discussion of the president’s network-disabled iPad, and the classified cell phone that flopped because it took so long to develop and was so clunky. Turns out that everyone wants to use iPhones.

Levine concluded, “Using commercial devices to process classified phone calls, using commercial tablets to talk over wifi—that’s a major game-changer for NSA to put classified information over wifi networks, but that’s what we’re going to do.” One way that would be done, he said, was by buying capability from cell carriers that have networks of cell towers in much the way small cell providers and companies like OnStar do.

Interestingly, Levine described an agency that is being forced to adopt a more realistic and practical attitude toward risk. “It used to be that the NSA squeezed all risk out of everything,” he said. Even lower levels of sensitivity were covered by Top Secret-level crypto. “We don’t do that now—it’s levels of risk. We say we can give you this, but can ensure only this level of risk.” Partly this came about, he suggested, because the military has an inherent understanding that nothing is without risk, and is used to seeing things in terms of tradeoffs: “With the military, everything is a risk decision. If this is the communications capability I need, I’ll have to take that risk.”

Posted on September 20, 2012 at 6:02 AM

Database of 12 Million Apple UDIDs Leaked

In this story, we learn that hackers got their hands on a database of 12 million Apple Unique Device Identifiers (UDIDs) by hacking an FBI laptop.

My question isn’t about the hack, but about the data. Why does an FBI agent have user identification information about 12 million iPhone users on his laptop? And how did the FBI get their hands on this data in the first place?

For its part, the FBI denies everything:

In a statement released Tuesday afternoon, the FBI said, “The FBI is aware of published reports alleging that an FBI laptop was compromised and private data regarding Apple UDIDs was exposed. At this time there is no evidence indicating that an FBI laptop was compromised or that the FBI either sought or obtained this data.”

Apple also denies giving the database to the FBI.

Okay, so where did the database come from? And are there really 12 million, or only one million?

EDITED TO ADD (9/12): A company called BlueToad is the source of the leak.

If you’ve been hacked, you’re not going to be informed:

DeHart said his firm would not be contacting individual consumers to notify them that their information had been compromised, instead leaving it up to individual publishers to contact readers as they see fit.

Posted on September 6, 2012 at 6:48 AM

Is iPhone Security Really this Good?

Simson Garfinkel writes that the iPhone has such good security that the police can’t use it for forensics anymore:

Technologies the company has adopted protect Apple customers’ content so well that in many situations it’s impossible for law enforcement to perform forensic examinations of devices seized from criminals. Most significant is the increasing use of encryption, which is beginning to cause problems for law enforcement agencies when they encounter systems with encrypted drives.

“I can tell you from the Department of Justice perspective, if that drive is encrypted, you’re done,” Ovie Carroll, director of the cyber-crime lab at the Computer Crime and Intellectual Property Section in the Department of Justice, said during his keynote address at the DFRWS computer forensics conference in Washington, D.C., last Monday. “When conducting criminal investigations, if you pull the power on a drive that is whole-disk encrypted you have lost any chance of recovering that data.”

Yes, I believe that full-disk encryption—whether Apple’s FileVault or Microsoft’s BitLocker (I don’t know what the iOS system is called)—is good; but its security is only as good as the user is at choosing a good password.

The iPhone always supported a PIN lock, but the PIN wasn’t a deterrent to a serious attacker until the iPhone 3GS. Because those early phones didn’t use their hardware to perform encryption, a skilled investigator could hack into the phone, dump its flash memory, and directly access the phone’s address book, e-mail messages, and other information. But now, with Apple’s more sophisticated approach to encryption, investigators who want to examine data on a phone have to try every possible PIN. Examiners perform these so-called brute-force attacks with special software, because the iPhone can be programmed to wipe itself if the wrong PIN is provided more than 10 times in a row. This software must be run on the iPhone itself, limiting the guessing speed to 80 milliseconds per PIN. Trying all four-digit PINs therefore requires no more than 800 seconds, a little more than 13 minutes. However, if the user chooses a six-digit PIN, the maximum time required would be 22 hours; a nine-digit PIN would require 2.5 years, and a 10-digit PIN would take 25 years. That’s good enough for most corporate secrets—and probably good enough for most criminals as well.
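Garfinkel’s numbers are easy to check. Here is a back-of-the-envelope sketch in Python, assuming only the 80-milliseconds-per-guess rate quoted above:

    RATE = 0.080  # seconds per PIN attempt, per the quoted article

    for length in (4, 6, 9, 10):
        worst = 10 ** length * RATE           # seconds to try every PIN
        years = worst / (365 * 24 * 3600)
        if worst < 3600:
            print(f"{length}-digit PIN: {worst / 60:.0f} minutes")
        elif years < 1:
            print(f"{length}-digit PIN: {worst / 3600:.0f} hours")
        else:
            print(f"{length}-digit PIN: {years:.1f} years")

The asymmetry is the whole point: each extra digit costs the user one keystroke and costs the attacker a factor of ten.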

Leaving aside the user practice questions—my guess is that very few users, even those with something to hide, use a ten-digit PIN—could this possibly be true? In the introduction to Applied Cryptography, almost 20 years ago, I wrote: “There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files.”

Since then, I’ve learned two things: 1) there are a lot of gradients to kid sister cryptography, and 2) major government cryptography is very hard to get right. It’s not the cryptography; it’s everything around the cryptography. I said as much in the preface to Secrets and Lies in 2000:

Cryptography is a branch of mathematics. And like all mathematics, it involves numbers, equations, and logic. Security, palpable security that you or I might find useful in our lives, involves people: things people know, relationships between people, people and how they relate to machines. Digital security involves computers: complex, unstable, buggy computers.

Mathematics is perfect; reality is subjective. Mathematics is defined; computers are ornery. Mathematics is logical; people are erratic, capricious, and barely comprehensible.

If, in fact, we’ve finally achieved something resembling this level of security for our computers and handheld computing devices, this is something to celebrate.

But I’m skeptical.

Another article.

Slashdot has a thread on the article.

EDITED TO ADD: More analysis. And Elcomsoft can crack iPhones.

Posted on August 21, 2012 at 1:42 PM

Law Enforcement Forensics Tools Against Smart Phones

Turns out the password can be easily bypassed:

XRY works by first jailbreaking the handset. According to Micro Systemation, no ‘backdoors’ created by Apple are used; instead, it makes use of security flaws in the operating system the same way that regular jailbreakers do.

Once the iPhone has been jailbroken, the tool then goes on to ‘brute-force’ the passcode, trying every possible four-digit combination until the correct password has been found. Given the limited number of possible combinations for a four-digit passcode—10,000, ranging from 0000 to 9999—this doesn’t take long.

Once the handset has been jailbroken and the passcode guessed, all the data on the handset, including call logs, messages, contacts, GPS data and even keystrokes, can be accessed and examined.

One of the morals is to use an eight-digit passcode: that’s 100,000,000 combinations instead of 10,000, multiplying the worst-case search time by a factor of 10,000.

EDITED TO ADD (4/13): This has been debunked. The 1Password blog has a fairly lengthy post discussing the details of the XRY tool.

Posted on April 3, 2012 at 2:01 PM

Liars and Outliers: Interview on The Browser

I was asked to talk about five books related to privacy.

You’re best known as a security expert but our theme today is "trust". How would you describe the connection between the two?

Security exists to facilitate trust. Trust is the goal, and security is how we enable it. Think of it this way: As members of modern society, we need to trust all sorts of people, institutions and systems. We have to trust that they’ll treat us honestly, won’t take advantage of us and so on – in short, that they’ll behave in a trustworthy manner. Security is how we induce trustworthiness, and by extension enable trust.

An example might make this clearer. For commerce to work smoothly, merchants and customers need to trust each other. Customers need to trust that merchants won’t misrepresent the goods they’re selling. Merchants need to trust that customers won’t steal stuff without paying. Each needs to trust that the other won’t cheat somehow. Security is how we make that work, billions of times a day. We do that through obvious measures like alarm systems that prevent theft and anti-counterfeiting measures in currency that prevent fraud, but I mean a lot of other things as well. Consumer protection laws prevent merchants from cheating. Other laws prevent burglaries. Less formal measures like reputational considerations help keep merchants, and customers in less anonymous communities, from cheating. And our inherent moral compass keeps most of us honest most of the time.

In my new book Liars and Outliers, I call these societal pressures. None of them are perfect, but all of them – working together – are what keeps society functioning. Of course there is, and always will be, the occasional merchant or customer who cheats. But as long as they’re rare enough, society thrives.

How has the nature of trust changed in the information age?

These notions of trust and trustworthiness are as old as our species. Many of the specific societal pressures that induce trust are as old as civilisation. Morals and reputational considerations are certainly that old, as are laws. Technical security measures have changed with technology, as well as details around reputational and legal systems, but by and large they’re basically the same.

What has changed in modern society is scale. Today we need to trust more people than ever before, further away – whether politically, ethnically or socially – than ever before. We need to trust larger corporations, more diverse institutions and more complicated systems. We need to trust via computer networks. This all makes trust, and inducing trust, harder. At the same time, the scaling of technology means that the bad guys can do more damage than ever before. That also makes trust harder. Navigating all of this is one of the most fundamental challenges of our society in this new century.

Given the dangers out there, should we trust anyone? Isn’t "trust no one" the first rule of security?

It might be the first rule of security, but it’s the worst rule of society. I don’t think I could even total up all the people, institutions and systems I trusted today. I trusted that the gas company would continue to provide the fuel I needed to heat my house, and that the water coming out of my tap was safe to drink. I trusted that the fresh and packaged food in my refrigerator was safe to eat – and that certainly involved trusting people in several countries. I trusted a variety of websites on the Internet. I trusted my automobile manufacturer, as well as all the other drivers on the road.

I am flying to Boston right now, so that requires trusting several major corporations, hundreds of strangers – either working for those corporations, sitting on my plane or just standing around in the airport – and a variety of government agencies. I even had to trust the TSA [US Transportation Security Administration], even though I know it’s doing a lousy job – and so on. And it’s not even 9:30am yet! The number of people each of us trusts every day is astounding. And we trust them so completely that we often don’t even think about it.

We don’t walk into a restaurant and think: "The food vendors might have sold the restaurant tainted food, the cook might poison it, the waiter might clone my credit card, other diners might steal my wallet, the building constructor might have weakened the roof, and terrorists might bomb the place." We just sit down and eat. And the restaurant trusts that we won’t steal anyone else’s wallet or leave a bomb under our chair, and will pay when we’re done. Without trust, society collapses. And without societal pressures, there’s no trust. The devil is in the details, of course, and that’s what my book is about.

As an individual, what security threats scare you the most?

My primary concerns are threats from the powerful. I’m not worried about criminals, even organised crime. Or terrorists, even organised terrorists. Those groups have always existed, always will, and they’ll always operate on the fringes of society. Societal pressures have done a good job of keeping them that way. It’s much more dangerous when those in power use that power to subvert trust. Specifically, I am thinking of governments and corporations.

Let me give you a few examples. The global financial crisis was not a result of criminals, it was perpetrated by legitimate financial institutions pursuing their own self-interest. The major threats against our privacy are not from criminals, they’re from corporations trying to more accurately target advertising. The most significant threat to the freedom of the Internet is from large entertainment companies, in their misguided attempt to stop piracy. And the cyberwar rhetoric is likely to cause more damage to the Internet than criminals could ever dream of.

What scares me the most is that today, in our hyper-connected, hyper-computed, high-tech world, we will get societal pressures wrong to catastrophic effect.

The Penguin and the Leviathan

By Yochai Benkler

Let’s get stuck into the books you’ve chosen on this theme on trust. Beginning with Yochai Benkler’s The Penguin and the Leviathan.

This could be considered a companion book to my own. I write from the perspective of security – how society induces cooperation. Benkler takes the opposite perspective – how does this cooperation work and what is its value? More specifically, what is its value in the 21st century information-age economy? He challenges the pervasive economic view that people are inherently selfish creatures, and shows that actually we are naturally cooperative. More importantly, he discusses the enormous value of cooperation in society, and the new ways it can be harnessed over the Internet.

I think this view is important. Our culture is pervaded with the idea that individualism is paramount – Thomas Hobbes’s notion that we are all autonomous individuals who willingly give up some of our freedom to the government in exchange for safety. It’s complete nonsense. Humans have never lived as individuals. We have always lived in communities, and we have always succeeded or failed as cooperative groups. The fact that people who separate themselves and live alone – think of Henry David Thoreau at Walden Pond – are so remarkable shows how rare they are.

Benkler understands this, and wants us to accept the cooperative nature of ourselves and our societies. He also gives the same advice for the future that I do – that we need to build social mechanisms that encourage cooperation over control. That is, we need to facilitate trust in society.

What’s next on your list?

The Folly of Fools, by the biologist Robert Trivers. Trivers has studied self-deception in humans, and asks how it evolved to be so pervasive. Humans are masters at self-deception. We regularly deceive ourselves in a variety of different circumstances. But why? How is it possible for self-deception – perceiving reality to be different than it really is – to have survival value? Why is it that genetic tendencies for self-deception are likely to propagate to the next generation?

Trivers’s book-long answer is fascinating. Basically, deception can have enormous evolutionary benefits. In many circumstances, especially those involving social situations, individuals who are good at deception are better able to survive and reproduce. And self-deception makes us better at deception. For example, there is value in my being able to deceive you into thinking I am stronger than I really am. You’re less likely to pick a fight with me, I’m more likely to win a dominance struggle without fighting, and so on. I am better able to bluff you if I actually believe I am stronger than I really am. So we deceive ourselves in order to be better able to deceive others.

The psychology of deception is fundamental to my own writing on trust. It’s much easier for me to cheat you if you don’t believe I am cheating you.

The Murderer Next Door

By David M Buss

Third up, The Murderer Next Door by David M Buss.

There have been a number of books about the violent nature of humans, particularly men. I chose The Murderer Next Door both because it is well-written and because it is relatively new, published in 2005. David M Buss is a psychologist, and he writes well about the natural murderousness of our species. There’s a lot of data to support natural human murderousness, and not just murder rates in modern societies. Anthropological evidence indicates that between 15% and 25% of prehistoric males died in warfare.

This murderousness resulted in an evolutionary pressure to be clever. Here’s Buss writing about it:

"As the motivations to murder evolved in our minds, a set of counterinclinations also developed. Killing is a risky business. It can be dangerous and inflict horrible costs on the victim. Because it’s so bad to be dead, evolution has fashioned ruthless defences to prevent being killed, including killing the killer. Potential victims are therefore quite dangerous themselves. In the evolutionary arms race, homicide victims have played a critical and unappreciated role – they pave the way for the evolution of anti-homicide defences."

Those defences involved trust and societal pressures to induce trust.

The Better Angels of Our Nature

By Steven Pinker

Your fourth book is by psychologist, science writer and previous FiveBooks interviewee Steven Pinker.

The Better Angels of Our Nature is Steven Pinker’s explanation as to why, despite the selection pressures for murderousness in our evolutionary past, violence has declined in so many cultures around the world. It’s a fantastic book, and I recommend that everyone read it. From my perspective, I could sum up his argument very simply: Societal pressures have worked.

Of course it’s more complicated than that, and Pinker does an excellent job of leading the reader through his analysis and conclusions. First, he spends six chapters documenting the fact that violence has in fact declined. In the next two chapters, he does his best to figure out exactly what has caused the "better angels of our nature" to prevail over our more natural demons. His answers are complicated, and expand greatly on the interplay among the various societal pressures which I talk about myself. It’s not things like bigger jails and more secure locks that are making society safer. It’s things like the invention of printing and the resultant rise of literacy, the empowerment of women and the rise of universal moral and ethical principles.

Braintrust

By Patricia S Churchland

What is your final selection?

Braintrust, by the neuroscientist Patricia Churchland. This book is about the neuroscience of morality. It’s brand new – published in 2011 – which is good because this is a brand new field of science, and new discoveries are happening all the time. Morality is the most basic of societal pressures, and Churchland explains how it works.

This book tries to understand the neuroscience behind trust and trustworthiness. In her own words:

“The hypothesis on offer is that what we humans call ethics or morality is a four-dimensional scheme for social behavior that is shaped by interlocking brain processes: (1) caring (rooted in attachment to kin and kith and care for their well-being), (2) recognition of others’ psychological states (rooted in the benefits of predicting the behavior of others), (3) problem-solving in a social context (e.g., how we should distribute scarce goods, settle land disputes; how we should punish the miscreants) and (4) learning social practices (by positive and negative reinforcement, by imitation, by trial and error, by various kinds of conditioning, and by analogy).”

Those are our innate human societal pressures. They are the security systems that keep us mostly trustworthy most of the time – enough for most of us to be trusting enough for society to survive.

Are we safer for all the security theatre of airport checks?

Of course not. There are two parts to the question. One: Are we doing the right thing? That is, does it make sense for America to focus its anti-terrorism security efforts on airports and airplanes? And two: Are we doing things right? In other words, are the anti-terrorism measures at airports doing the job and preventing terrorism? I say the answer to both of those questions is no. Focusing on airports, and specific terrorist tactics like shoes and liquids, is a poor use of our money because it’s easy for terrorists to switch targets and tactics. And the current TSA security measures don’t keep us safe because it’s too easy to bypass them.

There are two basic kinds of terrorists – random idiots and professionals. Pretty much any airport security, even the pre-9/11 measures, will protect us against random idiots. They will get caught. And pretty much nothing will protect us against professionals. They’ve researched our security and know the weaknesses. By the time the plot gets to the airport, it’s too late. Much more effective is for the US to spend its money on intelligence, investigation and emergency response. But this is a shorter answer than your readers deserve, and I suggest they read more of my writings on the topic.

How does the rise of cloud computing affect personal risk?

Like everything else, cloud computing is all about trust. Trust isn’t new in computing. I have to trust my computer’s manufacturer. I have to trust my operating system and software. I have to trust my Internet connection and everything associated with that. I have to trust all sorts of data I receive from other sources.

So on the one hand, cloud computing just adds another level of trust. But it’s an important level of trust. For most of us, it reduces our risk. If I have my email on Google, my photos on Flickr, my friends on Facebook and my professional contacts on LinkedIn, then I don’t have to worry much about losing my data. If my computer crashes I’ll still have all my email, photos and contacts. This is the way the iPhone works with iCloud – if I lose my phone, I can get a new one and all my data magically reappears.

On the other hand, I have to trust my cloud providers. I have to trust that Facebook won’t misuse the personal information it knows about me. I have to trust that my data won’t get shipped off to a server in a foreign country with lax privacy laws, and that the companies who have my data will not hand it over to the police without a court order. I’m not able to implement my own security around my data; I have to take what the cloud provider offers. And I must trust that’s good enough, often without knowing anything about it.

Finally, how many Bruce Schneier Facts are true?

Seven.

This Q&A originally appeared on TheBrowser.com

Posted on February 27, 2012 at 12:30 PM

Improving the Security of Four-Digit PINs on Cell Phones

The author of this article notices that it’s often easy to guess a cell phone PIN because of smudge marks on the screen. Those smudge marks reveal which digits make up the PIN; if all four digits are different, an attacker knows the PIN is one of only 24 possible permutations of those digits.

Then he points out that if your PIN has only three different digits—1231, for example—the PIN can be one of 36 different possibilities.
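Both counts are easy to verify by enumeration. A short sketch in Python, illustrative only:

    from itertools import product

    def candidate_pins(smudged_digits, length=4):
        # PINs that use every smudged digit at least once and no others.
        digits = set(smudged_digits)
        return [p for p in product(digits, repeat=length)
                if set(p) == digits]

    print(len(candidate_pins("1234")))  # 24: four distinct smudges
    print(len(candidate_pins("123")))   # 36: three distinct smudges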

So it’s more secure, although not much more secure.

Posted on January 6, 2012 at 6:30 AM
