Entries Tagged "iPhone"

Liars and Outliers: Interview on The Browser

I was asked to talk about five books related to privacy.

You’re best known as a security expert but our theme today is "trust". How would you describe the connection between the two?

Security exists to facilitate trust. Trust is the goal, and security is how we enable it. Think of it this way: As members of modern society, we need to trust all sorts of people, institutions and systems. We have to trust that they’ll treat us honestly, won’t take advantage of us and so on – in short, that they’ll behave in a trustworthy manner. Security is how we induce trustworthiness, and by extension enable trust.

An example might make this clearer. For commerce to work smoothly, merchants and customers need to trust each other. Customers need to trust that merchants won’t misrepresent the goods they’re selling. Merchants need to trust that customers won’t steal stuff without paying. Each needs to trust that the other won’t cheat somehow. Security is how we make that work, billions of times a day. We do that through obvious measures like alarm systems that prevent theft and anti-counterfeiting measures in currency that prevent fraud, but I mean a lot of other things as well. Consumer protection laws prevent merchants from cheating. Other laws prevent burglaries. Less formal measures like reputational considerations help keep merchants, and customers in less anonymous communities, from cheating. And our inherent moral compass keeps most of us honest most of the time.

In my new book Liars and Outliers, I call these societal pressures. None of them are perfect, but all of them – working together – are what keeps society functioning. Of course there is, and always will be, the occasional merchant or customer who cheats. But as long as they’re rare enough, society thrives.

How has the nature of trust changed in the information age?

These notions of trust and trustworthiness are as old as our species. Many of the specific societal pressures that induce trust are as old as civilisation. Morals and reputational considerations are certainly that old, as are laws. Technical security measures have changed with technology, as have the details of reputational and legal systems, but by and large they’re basically the same.

What has changed in modern society is scale. Today we need to trust more people than ever before, further away – whether politically, ethnically or socially – than ever before. We need to trust larger corporations, more diverse institutions and more complicated systems. We need to trust via computer networks. This all makes trust, and inducing trust, harder. At the same time, the scaling of technology means that the bad guys can do more damage than ever before. That also makes trust harder. Navigating all of this is one of the most fundamental challenges of our society in this new century.

Given the dangers out there, should we trust anyone? Isn’t "trust no one" the first rule of security?

It might be the first rule of security, but it’s the worst rule of society. I don’t think I could even total up all the people, institutions and systems I trusted today. I trusted that the gas company would continue to provide the fuel I needed to heat my house, and that the water coming out of my tap was safe to drink. I trusted that the fresh and packaged food in my refrigerator was safe to eat – and that certainly involved trusting people in several countries. I trusted a variety of websites on the Internet. I trusted my automobile manufacturer, as well as all the other drivers on the road.

I am flying to Boston right now, so that requires trusting several major corporations, hundreds of strangers – either working for those corporations, sitting on my plane or just standing around in the airport – and a variety of government agencies. I even had to trust the TSA [US Transportation Security Administration], even though I know it’s doing a lousy job – and so on. And it’s not even 9:30am yet! The number of people each of us trusts every day is astounding. And we trust them so completely that we often don’t even think about it.

We don’t walk into a restaurant and think: "The food vendors might have sold the restaurant tainted food, the cook might poison it, the waiter might clone my credit card, other diners might steal my wallet, the building constructor might have weakened the roof, and terrorists might bomb the place." We just sit down and eat. And the restaurant trusts that we won’t steal anyone else’s wallet or leave a bomb under our chair, and will pay when we’re done. Without trust, society collapses. And without societal pressures, there’s no trust. The devil is in the details, of course, and that’s what my book is about.

As an individual, what security threats scare you the most?

My primary concerns are threats from the powerful. I’m not worried about criminals, even organised crime. Or terrorists, even organised terrorists. Those groups have always existed, always will, and they’ll always operate on the fringes of society. Societal pressures have done a good job of keeping them that way. It’s much more dangerous when those in power use that power to subvert trust. Specifically, I am thinking of governments and corporations.

Let me give you a few examples. The global financial crisis was not a result of criminals, it was perpetrated by legitimate financial institutions pursuing their own self-interest. The major threats against our privacy are not from criminals, they’re from corporations trying to more accurately target advertising. The most significant threat to the freedom of the Internet is from large entertainment companies, in their misguided attempt to stop piracy. And the cyberwar rhetoric is likely to cause more damage to the Internet than criminals could ever dream of.

What scares me the most is that today, in our hyper-connected, hyper-computed, high-tech world, we will get societal pressures wrong to catastrophic effect.

The Penguin and the Leviathan

By Yochai Benkler

Let’s get stuck into the books you’ve chosen on this theme of trust. Beginning with Yochai Benkler’s The Penguin and the Leviathan.

This could be considered a companion book to my own. I write from the perspective of security – how society induces cooperation. Benkler takes the opposite perspective – how does this cooperation work and what is its value? More specifically, what is its value in the 21st century information-age economy? He challenges the pervasive economic view that people are inherently selfish creatures, and shows that actually we are naturally cooperative. More importantly, he discusses the enormous value of cooperation in society, and the new ways it can be harnessed over the Internet.

I think this view is important. Our culture is pervaded with the idea that individualism is paramount – Thomas Hobbes’s notion that we are all autonomous individuals who willingly give up some of our freedom to the government in exchange for safety. It’s complete nonsense. Humans have never lived as individuals. We have always lived in communities, and we have always succeeded or failed as cooperative groups. The fact that people who separate themselves and live alone – think of Henry David Thoreau in Walden – seem so remarkable shows just how rare that is.

Benkler understands this, and wants us to accept the cooperative nature of ourselves and our societies. He also gives the same advice for the future that I do – that we need to build social mechanisms that encourage cooperation over control. That is, we need to facilitate trust in society.

What’s next on your list?

The Folly of Fools, by the biologist Robert Trivers. Trivers has studied self-deception in humans, and asks how it evolved to be so pervasive. Humans are masters at self-deception. We regularly deceive ourselves in a variety of different circumstances. But why? How is it possible for self-deception – perceiving reality to be different than it really is – to have survival value? Why is it that genetic tendencies for self-deception are likely to propagate to the next generation?

Trivers’s book-long answer is fascinating. Basically, deception can have enormous evolutionary benefits. In many circumstances, especially those involving social situations, individuals who are good at deception are better able to survive and reproduce. And self-deception makes us better at deception. For example, there is value in my being able to deceive you into thinking I am stronger than I really am. You’re less likely to pick a fight with me, I’m more likely to win a dominance struggle without fighting, and so on. I am better able to bluff you if I actually believe I am stronger than I really am. So we deceive ourselves in order to be better able to deceive others.

The psychology of deception is fundamental to my own writing on trust. It’s much easier for me to cheat you if you don’t believe I am cheating you.

The Murderer Next Door

By David M Buss

Third up, The Murderer Next Door by David M Buss.

There have been a number of books about the violent nature of humans, particularly men. I chose The Murderer Next Door both because it is well-written and because it is relatively new, published in 2005. David M Buss is a psychologist, and he writes well about the natural murderousness of our species. There’s a lot of data to support natural human murderousness, and not just murder rates in modern societies. Anthropological evidence indicates that between 15% and 25% of prehistoric males died in warfare.

This murderousness resulted in an evolutionary pressure to be clever. Here’s Buss writing about it:

"As the motivations to murder evolved in our minds, a set of counterinclinations also developed. Killing is a risky business. It can be dangerous and inflict horrible costs on the victim. Because it’s so bad to be dead, evolution has fashioned ruthless defences to prevent being killed, including killing the killer. Potential victims are therefore quite dangerous themselves. In the evolutionary arms race, homicide victims have played a critical and unappreciated role – they pave the way for the evolution of anti-homicide defences."

Those defences involved trust and societal pressures to induce trust.

The Better Angels of Our Nature

By Steven Pinker

Your fourth book is by psychologist, science writer and previous FiveBooks interviewee Steven Pinker.

The Better Angels of Our Nature is Steven Pinker’s explanation as to why, despite the selection pressures for murderousness in our evolutionary past, violence has declined in so many cultures around the world. It’s a fantastic book, and I recommend that everyone read it. From my perspective, I could sum up his argument very simply: Societal pressures have worked.

Of course it’s more complicated than that, and Pinker does an excellent job of leading the reader through his analysis and conclusions. First, he spends six chapters documenting that violence has indeed declined. In the next two chapters, he does his best to figure out exactly what has caused the "better angels of our nature" to prevail over our more natural demons. His answers are complicated, and expand greatly on the interplay among the various societal pressures that I talk about myself. It’s not things like bigger jails and more secure locks that are making society safer. It’s things like the invention of printing and the resultant rise of literacy, the empowerment of women and the rise of universal moral and ethical principles.

Braintrust

By Patricia S Churchland

What is your final selection?

Braintrust, by the neuroscientist Patricia Churchland. This book is about the neuroscience of morality. It’s brand new – published in 2011 – which is good because this is a brand new field of science, and new discoveries are happening all the time. Morality is the most basic of societal pressures, and Churchland explains how it works.

This book tries to understand the neuroscience behind trust and trustworthiness. In her own words:

"The hypothesis on offer is that what we humans call ethics or morality is a four dimensional scheme for social behavior that is shaped by interlocking brain processes: (1) caring (rooted in attachment to kin and kith and care for their well-being), (2) recognition of other’s psychological states (rooted in the benefits of predicting the behavior of others) (3) problem-solving in a social context (e.g., how we should distribute scarce goods, settle land disputes; how we should punish the miscreants) and (4) learning social practices (by positive and negative reinforcement, by imitation, by trial and error, by various kinds of conditioning, and by analogy)."

Those are our innate human societal pressures. They are the security systems that keep us mostly trustworthy most of the time – enough for most of us to be trusting enough for society to survive.

Are we safer for all the security theatre of airport checks?

Of course not. There are two parts to the question. One: Are we doing the right thing? That is, does it make sense for America to focus its anti-terrorism security efforts on airports and airplanes? And two: Are we doing things right? In other words, are the anti-terrorism measures at airports doing the job and preventing terrorism? I say the answer to both of those questions is no. Focusing on airports, and specific terrorist tactics like shoes and liquids, is a poor use of our money because it’s easy for terrorists to switch targets and tactics. And the current TSA security measures don’t keep us safe because it’s too easy to bypass them.

There are two basic kinds of terrorists – random idiots and professionals. Pretty much any airport security, even the pre-9/11 measures, will protect us against random idiots. They will get caught. And pretty much nothing will protect us against professionals. They’ve researched our security and know the weaknesses. By the time the plot gets to the airport, it’s too late. Much more effective is for the US to spend its money on intelligence, investigation and emergency response. But this is a shorter answer than your readers deserve, and I suggest they read more of my writings on the topic.

How does the rise of cloud computing affect personal risk?

Like everything else, cloud computing is all about trust. Trust isn’t new in computing. I have to trust my computer’s manufacturer. I have to trust my operating system and software. I have to trust my Internet connection and everything associated with that. I have to trust all sorts of data I receive from other sources.

So on the one hand, cloud computing just adds another level of trust. But it’s an important level of trust. For most of us, it reduces our risk. If I have my email on Google, my photos on Flickr, my friends on Facebook and my professional contacts on LinkedIn, then I don’t have to worry much about losing my data. If my computer crashes I’ll still have all my email, photos and contacts. This is the way the iPhone works with iCloud – if I lose my phone, I can get a new one and all my data magically reappears.

On the other hand, I have to trust my cloud providers. I have to trust that Facebook won’t misuse the personal information it knows about me. I have to trust that my data won’t get shipped off to a server in a foreign country with lax privacy laws, and that the companies who have my data will not hand it over to the police without a court order. I’m not able to implement my own security around my data; I have to take what the cloud provider offers. And I must trust that’s good enough, often without knowing anything about it.

Finally, how many Bruce Schneier Facts are true?

Seven.

This Q&A originally appeared on TheBrowser.com

Posted on February 27, 2012 at 12:30 PM

Improving the Security of Four-Digit PINs on Cell Phones

The author of this article notices that it’s often easy to guess a cell phone PIN because of smudge marks on the screen. The smudge marks reveal which digits the PIN uses, so if all four digits are different, an attacker knows the PIN is one of only 24 possible orderings of those digits.

Then he points out that if your PIN has only three different digits – 1231, for example – the smudges don’t tell the attacker which digit is repeated, so the PIN can be any one of 36 different possibilities.

So that’s more security, although not much more.
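
To see where those numbers come from, here is a quick sketch (my own, not from the article) that enumerates every four-digit PIN using each smudged digit at least once:

```python
from itertools import product

def candidate_pins(smudged_digits):
    """All 4-digit PINs that use every smudged digit at least once."""
    return [pin for pin in product(smudged_digits, repeat=4)
            if set(pin) == set(smudged_digits)]

print(len(candidate_pins("1234")))  # 24 -- four distinct digits, so 4! orderings
print(len(candidate_pins("123")))   # 36 -- three distinct digits, one of them repeated
```

Running the same function on two distinct digits gives only 14 candidates, so repeating digits helps only up to a point.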

Posted on January 6, 2012 at 6:30 AM

Android Malware

The Android platform is where the malware action is:

What happens when anyone can develop and publish an application to the Android Market? A 472% increase in Android malware samples since July 2011. These days, it seems all you need is a developer account, that is relatively easy to anonymize, pay $25 and you can post your applications.

[…]

In addition to an increase in the volume, the attackers continue to become more sophisticated in the malware they write. For instance, in the early spring, we began seeing Android malware that was capable of leveraging one of several platform vulnerabilities that allowed malware to gain root access on the device, in the background, and then install additional packages to the device to extend the functionality of the malware. Today, just about every piece of malware that is released contains this capability, simply because the vulnerabilities remain prevalent in nearly 90% of Android devices being carried around today.

I believe that smart phones are going to become the primary platform of attack for cybercriminals in the coming years. As the phones become more integrated into people’s lives—smart phone banking, electronic wallets—they’re simply going to become the most valuable device for criminals to go after. And I don’t believe the iPhone will be more secure because of Apple’s rigid policies for the app store.

EDITED TO ADD (11/26): This article is a good debunking of the data I quoted above. And also this:

“A virus of the traditional kind is possible, but not probable. The barriers to spreading such a program from phone to phone are large and difficult enough to traverse when you have legitimate access to the phone, but this isn’t Independence Day, a virus that might work on one device won’t magically spread to the other.”

DiBona is right. While some malware and viruses have tried to make use of Bluetooth and Wi-Fi radios to hop from device to device, it simply doesn’t happen the way security companies want you to think it does.

Of course he’s right. Malware on portable devices isn’t going to look or act the same way as malware on traditional computers. It isn’t going to spread from phone to phone. I’m more worried about Trojans, whether in legitimate or illegitimate apps, malware embedded in webpages, fake updates, and so on. A lot of this will involve social engineering the user, but I don’t see that as much of a problem.

But I do see mobile devices as the new target of choice. And I worry much more about privacy violations. Your phone knows your location. Your phone knows who you talk to and—with a recorder—what you say. And when your phone becomes your digital wallet, your phone is going to know a lot more intimate things about you. All of this will be useful to both criminals and marketers, and we’re going to see all sorts of illegal and quasi-legal ways both of those groups will go after that information.

And securing those devices is going to be hard, because we don’t have the same low-level access to these devices we have with computers.

Anti-virus companies are using FUD to sell their products, but there are real risks here. And the time to start figuring out how to solve them is now.

Posted on November 25, 2011 at 6:06 AM

Friday Squid Blogging: SQUIDS Game

It’s coming to the iPhone and iPad, then to other platforms:

In SQUIDS, players will command a small army of stretchy, springy sea creatures to protect an idyllic underwater kingdom from a sinister emerging threat. An infectious black ooze is spreading through the lush seascape, turning ordinary crustaceans into menacing monsters. Now a plucky team of Squids – each with unique personalities, skills, and ability-boosting attire – must defend their homeland and overturn the evil forces that jeopardize their aquatic utopia.

More:

…which they describe as Angry Birds meets Worms, with RPG elements. “For the universe, Audrey and I share a passion for cephalopods of all sorts, and that was a perfect match with the controls I had in mind,” Thoa said.

As before, use the comments to this post to write about and discuss security stories that don’t have their own post.

Posted on September 2, 2011 at 4:44 PM

Developments in Facial Recognition

Eventually, it will work. You’ll be able to wear a camera that will automatically recognize someone walking towards you, and an earpiece that will relay who that person is and maybe something about him. None of the technologies required to make this work are hard; it’s just a matter of getting the error rate down low enough for it to be a useful system. And there have been a number of recent research results and news stories that illustrate what this new world might look like.

The police want this sort of system. I already blogged about MORIS, an iris-scanning technology that several police forces in the U.S. are using. The next step is the face-scanning glasses that the Brazilian police claim they will be wearing at the 2014 World Cup.

A small camera fitted to the glasses can capture 400 facial images per second and send them to a central computer database storing up to 13 million faces.

The system can compare biometric data at 46,000 points on a face and will immediately signal any matches to known criminals or people wanted by police.

In the future, this sort of thing won’t be limited to the police. Facebook has recently embarked on a major photo tagging project, and already has the largest collection of identified photographs in the world outside of a government. Researchers at Carnegie Mellon University have combined the public part of that database with a camera and face-recognition software to identify students on campus. (The paper fully describing their work is under review and not online yet, but slides describing the results can be found here.)

Of course, there are false positives—as there are with any system like this. That’s not a big deal if the application is a billboard with face-recognition serving different ads depending on the gender and age—and eventually the identity—of the person looking at it, but is more problematic if the application is a legal one.
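
To put the false-positive problem in rough numbers, here is a back-of-the-envelope sketch (my own; the false-alarm probability is an assumed figure, not one reported for any of these systems) using the capture rate quoted above:

```python
# Rough false-alarm estimate. The capture rate comes from the description quoted
# above; the per-capture false-alarm probability is purely an assumption.
captures_per_second = 400        # facial images captured per second (quoted above)
false_alarm_probability = 1e-5   # assumed: 1 in 100,000 captures falsely matches someone

captures_per_hour = captures_per_second * 3600
false_alarms_per_hour = captures_per_hour * false_alarm_probability
print(f"{false_alarms_per_hour:.0f} false alarms per hour")  # about 14
```

Whether a dozen wrong matches an hour is tolerable depends entirely on what happens to the person who gets flagged – which is exactly the difference between the billboard and the legal application.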

In Boston, someone erroneously had his driver’s license revoked:

It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.

And apparently, he has company. Last year, the facial recognition system picked out more than 1,000 cases that resulted in State Police investigations, officials say. And some of those people are guilty of nothing more than looking like someone else. Not all go through the long process that Gass says he endured, but each must visit the Registry with proof of their identity.

[…]

At least 34 states are using such systems. They help authorities verify a person’s claimed identity and track down people who have multiple licenses under different aliases, such as underage people wanting to buy alcohol, people with previous license suspensions, and people with criminal records trying to evade the law.

The problem is less with the system, and more with the guilty-until-proven-innocent way in which the system is used.

Kaprielian said the Registry gives drivers enough time to respond to the suspension letters and that it is the individual’s “burden” to clear up any confusion. She added that protecting the public far outweighs any inconvenience Gass or anyone else might experience.

“A driver’s license is not a matter of civil rights. It’s not a right. It’s a privilege,” she said. “Yes, it is an inconvenience [to have to clear your name], but lots of people have their identities stolen, and that’s an inconvenience, too.”

IEEE Spectrum and The Economist have published similar articles.

EDITED TO ADD (8/3): Here’s a system embedded in a pair of glasses that automatically analyzes and relays micro-facial expressions. The goal is to help autistic people who have trouble reading emotions, but you could easily imagine this sort of thing becoming common. And what happens when we start relying on these computerized systems and ignoring our own intuition?

EDITED TO ADD: CV Dazzle is camouflage from face detection.

Posted on August 2, 2011 at 1:33 PM

iPhone Iris Scanning Technology

No indication about how well it works:

The smartphone-based scanner, named Mobile Offender Recognition and Information System, or MORIS, is made by BI2 Technologies in Plymouth, Massachusetts, and can be deployed by officers out on the beat or back at the station.

An iris scan, which detects unique patterns in a person’s eyes, can reduce to seconds the time it takes to identify a suspect in custody. This technique also is significantly more accurate than results from other fingerprinting technology long in use by police, BI2 says.

When attached to an iPhone, MORIS can photograph a person’s face and run the image through software that hunts for a match in a BI2-managed database of U.S. criminal records. Each unit costs about $3,000.

[…]

Roughly 40 law enforcement units nationwide will soon be using the MORIS, including Arizona’s Pinal County Sheriff’s Office, as well as officers in Hampton City in Virginia and Calhoun County in Alabama.

Posted on July 26, 2011 at 6:51 AM

Protecting Private Information on Smart Phones

AppFence is a technology—with a working prototype—that protects personal information on smart phones. It does this by either substituting innocuous information in place of sensitive information or blocking attempts by the application to send the sensitive information over the network.
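
Conceptually, the two strategies look something like the following sketch – my own illustration of the idea, not AppFence’s actual code, with made-up names and values:

```python
# Hypothetical sketch of the two strategies described above:
# (1) shadowing: hand the application an innocuous stand-in for a sensitive value;
# (2) blocking: refuse network sends that would carry the real value off the device.

REAL_VALUES = {
    "device_id": "4901542032664039",   # made-up identifier
    "location":  "44.98,-93.27",       # made-up coordinates
    "contacts":  "alice,bob",
}

SHADOW_VALUES = {
    "device_id": "0000000000000000",   # innocuous stand-in
    "location":  "0.0,0.0",
    "contacts":  "",                   # empty address book
}

def read_sensitive(key, policy):
    """Return the real value, or the shadow value if the user's policy says 'shadow'."""
    return SHADOW_VALUES[key] if policy.get(key) == "shadow" else REAL_VALUES[key]

def send_over_network(payload, policy):
    """Refuse to transmit a payload containing any value the policy marks 'block'."""
    for key, rule in policy.items():
        if rule == "block" and REAL_VALUES[key] in payload:
            raise PermissionError(f"blocked: payload would leak {key}")
    print("sent:", payload)

# The user shadows the device ID and blocks anything containing the real location.
policy = {"device_id": "shadow", "location": "block"}
print(read_sensitive("device_id", policy))   # the application sees only the fake identifier

try:
    send_over_network("track?loc=" + REAL_VALUES["location"], policy)
except PermissionError as err:
    print(err)                               # blocked: payload would leak location
```

In a real system this interposition has to live inside the platform itself, which is why the prototype requires modifying the phone’s software, as noted below.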

The significance of systems like AppFence is that they have the potential to change the balance of power in privacy between mobile application developers and users. Today, application developers get to choose what information an application will have access to, and the user faces a take-it-or-leave-it proposition: users must either grant all the permissions requested by the application developer or abandon installation. Take-it-or-leave-it offers may make it easier for applications to obtain access to information that users don’t want applications to have. Many applications take advantage of this to gain access to users’ device identifiers and location for behavioral tracking and advertising. Systems like AppFence could make it harder for applications to access these types of information without more explicit consent and cooperation from users.

The problem is that the mobile OS providers might not like AppFence. Google probably doesn’t care, but Apple is one of the biggest consumers of iPhone personal information. Right now, the prototype only works on Android, because it requires flashing the phone. In theory, the technology can be made to work on any mobile OS, but good luck getting Apple to agree to it.

Posted on June 24, 2011 at 6:37 AM
