Entries Tagged "usability"


Hacking Consumer Devices

Last weekend, a Texas couple apparently discovered that the electronic baby monitor in their children’s bedroom had been hacked. According to a local TV station, the couple said they heard an unfamiliar voice coming from the room, went to investigate and found that someone had taken control of the camera monitor remotely and was shouting profanity-laden abuse. The child’s father unplugged the monitor.

What does this mean for the rest of us? How secure are consumer electronic systems, now that they’re all attached to the Internet?

The answer is not very, and it’s been this bad for many years. Security vulnerabilities have been found in all types of webcams, cameras of all sorts, implanted medical devices, cars, and even smart toilets — not to mention yachts, ATMs, industrial control systems, and military drones.

All of these things have long been hackable. Those of us who work in security are often amazed that most people don’t know about it.

Why are they hackable? Because security is very hard to get right. It takes expertise, and it takes time. Most companies don’t care because most customers buying security systems and smart appliances don’t know enough to care. Why should a baby monitor manufacturer spend all sorts of money making sure its security is good when the average customer won’t even notice?

Even worse, that consumer will look at two competing baby monitors — a more expensive one with better security, and a cheaper one with minimal security — and buy the cheaper one. Without the expertise to make an informed buying decision, cheaper wins.

A lot of hacks happen because the users don’t configure or install their devices properly, but that’s really the fault of the manufacturer. These are supposed to be consumer devices, not specialized equipment for security experts only.

This sort of thing is true in other aspects of society, and we have a variety of mechanisms to deal with it. Government regulation is one of them. For example, few of us can differentiate real pharmaceuticals from snake oil, so the FDA regulates what can be sold and what sorts of claims vendors can make. Independent product testing is another. You and I might not be able to tell a well-made car from a poorly-made one at a glance, but we can both read the reports from a variety of testing agencies.

Computer security has resisted these mechanisms, both because the industry changes so quickly and because this sort of testing is hard and expensive. But the effect is that we’re all being sold a lot of insecure consumer products with embedded computers. And as these computers get connected to the Internet, the problems will get worse.

The moral here isn’t that your baby monitor could be hacked. The moral is that pretty much every “smart” everything can be hacked, and because consumers don’t care, the market won’t fix the problem.

This essay previously appeared on CNN.com. I wrote it in about half an hour, on request, and I’m not really happy with it. I should have talked more about the economics of good security, as well as the economics of hacking. The point is that we don’t have to worry just about the hackers smart enough to figure out these vulnerabilities; we also have to worry about the dumb hackers who simply use the software tools written and distributed by the smart ones. Ah well, next time.

Posted on August 23, 2013 at 6:00 AM

How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system would be wrong; it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance that someone who tests positive is actually a sociopath is only about 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive, but so will 10% of the 960 non-sociopaths.) You have to postulate a test with an amazing 99% accuracy — only a 1% false positive rate — even to have an 80% chance that someone who tests positive is actually a sociopath.
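
To make the arithmetic concrete, here is a minimal Python sketch of that Bayes’-theorem calculation, using the hypothetical 4% base rate and 90%/99% accuracy figures from the paragraph above:

    # A quick check of the arithmetic above. The 4% base rate and the
    # 90%/99% accuracy numbers are the hypothetical figures from the text.

    def p_sociopath_given_positive(base_rate, accuracy):
        """P(sociopath | positive test), with 'accuracy' used as both the
        true-positive rate and the true-negative rate, as in the example."""
        true_positives = base_rate * accuracy
        false_positives = (1 - base_rate) * (1 - accuracy)
        return true_positives / (true_positives + false_positives)

    print(p_sociopath_given_positive(0.04, 0.90))  # ~0.27
    print(p_sociopath_given_positive(0.04, 0.99))  # ~0.80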

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM

Trust in IT

Ignore the sensationalist headline. This article is a good summary of the need for trust in IT, and provides some ideas for how to enable more of it.

Virtually everything we work with on a day-to-day basis is built by someone else. Avoiding insanity requires trusting those who designed, developed and manufactured the instruments of our daily existence.

All these other industries we rely on have evolved codes of conduct, regulations, and ultimately laws to ensure minimum quality, reliability and trust. In this light, I find the modern technosphere’s complete disdain for obtaining and retaining trust baffling, arrogant and at times enraging.

Posted on June 11, 2013 at 6:21 AM

A Really Good Article on How Easy it Is to Crack Passwords

Ars Technica gave three experts a 16,000-entry file of hashed passwords and asked them to crack as many as they could. The winner got 90% of them, the loser 62% — in a few hours.

The list of “plains,” as many crackers refer to deciphered hashes, contains the usual list of commonly used passcodes that are found in virtually every breach involving consumer websites. “123456,” “1234567,” and “password” are there, as is “letmein,” “Destiny21,” and “pizzapizza.” Passwords of this ilk are hopelessly weak. Despite the additional tweaking, “p@$$word,” “123456789j,” “letmein1!,” and “LETMEin3” are equally awful….

As big as the word lists that all three crackers in this article wielded — close to 1 billion strong in the case of Gosney and Steube — none of them contained “Coneyisland9/,” “momof3g8kids,” or the more than 10,000 other plains that were revealed with just a few hours of effort. So how did they do it? The short answer boils down to two variables: the website’s unfortunate and irresponsible use of MD5 and the use of non-randomized passwords by the account holders.

The article goes on to explain how dictionary attacks work, how well they do, and the sorts of passwords they find.

Steube was able to crack “momof3g8kids” because he had “momof3g” in his 111-million-word dictionary and “8kids” in a smaller one.

“The combinator attack got it! It’s cool,” he said. Then referring to the oft-cited xkcd comic, he added: “This is an answer to the batteryhorsestaple thing.”
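
For readers unfamiliar with the technique, here is a minimal Python sketch of what a combinator attack against unsalted MD5 hashes looks like; the tiny word lists and the target hash are made-up stand-ins, not the dictionaries Steube actually used:

    import hashlib
    from itertools import product

    # Hypothetical stand-ins for the large and small dictionaries described above.
    big_dict = ["iloveyou", "letmein", "momof3g"]
    small_dict = ["123", "2013", "8kids"]

    # An unsalted MD5 hash of the kind the breached site stored.
    target = hashlib.md5(b"momof3g8kids").hexdigest()

    # The combinator attack: hash every concatenation of one word from each list.
    for left, right in product(big_dict, small_dict):
        candidate = left + right
        if hashlib.md5(candidate.encode()).hexdigest() == target:
            print("cracked:", candidate)
            break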

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”

Great reading, but nothing theoretically new. Ars Technica wrote about this last year, and Joe Bonneau wrote an excellent commentary.

Password cracking can be evaluated on two nearly independent axes: power (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and efficiency (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models).

I wrote about this same thing back in 2007. The news in 2013, such as it is, is that this kind of thing is getting easier faster than people think. Pretty much anything that can be remembered can be cracked.

If you need to memorize a password, I still stand by the Schneier scheme from 2008:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence — something personal.

Until this very moment, these passwords were still secure:

  • WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
  • Wow…doestcst::amazon.cccooommm = Wow, does that couch smell terrible.
  • Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
  • uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.

You get the idea: combine a personally memorable sentence with some personally memorable tricks for modifying that sentence into a password, and you end up with a lengthy password that’s still easy to remember.
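
Purely as an illustration, here is a rough sketch of that kind of transformation in Python; the specific tricks (which letters to keep, which substitutions to make) are arbitrary choices for this example, and they are exactly the parts you should invent for yourself and keep private:

    # Illustrative only: one arbitrary set of "personal tricks" for turning a
    # sentence into a password, in the spirit of
    # "This little piggy went to market" -> "tlpWENT2m". Invent your own rules.

    def sentence_to_password(sentence):
        words = sentence.rstrip(".!?").lower().split()
        parts = []
        for i, word in enumerate(words):
            if i == 3:                 # arbitrary trick: shout the fourth word
                parts.append(word.upper())
            elif word == "to":         # arbitrary trick: "to" becomes "2"
                parts.append("2")
            else:
                parts.append(word[0])  # otherwise keep only the first letter
        return "".join(parts)

    print(sentence_to_password("This little piggy went to market"))  # tlpWENT2m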

Better, though, is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to store them. (If anyone wants to port it to the Mac, iPhone, iPad, or Android, please contact me.) This article does a good job of explaining the same thing. David Pogue likes Dashlane, but doesn’t know if it’s secure.

In related news, Password Safe is a candidate for July’s project-of-the-month on SourceForge. Please vote for it.

EDITED TO ADD (6/7): As a commenter noted, none of this is useful advice if the site puts artificial limits on your password.

EDITED TO ADD (6/14): Various ports of Password Safe. I know nothing about them, nor can I vouch for their security.

Analysis of the xkcd scheme.

Posted on June 7, 2013 at 6:41 AM

The Problems with Managing Privacy by Asking and Giving Consent

New paper by Daniel Solove in the Harvard Law Review: “Privacy Self-Management and the Consent Dilemma”:

Privacy self-management takes refuge in consent. It attempts to be neutral about substance — whether certain forms of collecting, using, or disclosing personal data are good or bad — and instead focuses on whether people consent to various privacy practices. Consent legitimizes nearly any form of collection, use, or disclosure of personal data. Although privacy self-management is certainly a laudable and necessary component of any regulatory regime, I contend that it is being tasked with doing work beyond its capabilities. Privacy self-management does not provide people with meaningful control over their data. First, empirical and social science research demonstrates that there are severe cognitive problems that undermine privacy self-management. These cognitive problems impair individuals’ ability to make informed, rational choices about the costs and benefits of consenting to the collection, use, and disclosure of their personal data.

Second, and more troubling, even well-informed and rational individuals cannot appropriately self-manage their privacy due to several structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses, further limiting the effectiveness of the privacy self-management framework.

Posted on June 3, 2013 at 6:15 AM

Security Flaws in Encrypted Police Radios

“Why (Special Agent) Johnny (Still) Can’t Encrypt: A Security Analysis of the APCO Project 25 Two-Way Radio System,” by Sandy Clark, Travis Goodspeed, Perry Metzger, Zachary Wasserman, Kevin Xu, and Matt Blaze.

Abstract: APCO Project 25 (“P25”) is a suite of wireless communications protocols used in the US and elsewhere for public safety two-way (voice) radio systems. The protocols include security options in which voice and data traffic can be cryptographically protected from eavesdropping. This paper analyzes the security of P25 systems against both passive and active adversaries. We found a number of protocol, implementation, and user interface weaknesses that routinely leak information to a passive eavesdropper or that permit highly efficient and difficult-to-detect active attacks. We introduce new selective subframe jamming attacks against P25, in which an active attacker with very modest resources can prevent specific kinds of traffic (such as encrypted messages) from being received, while emitting only a small fraction of the aggregate power of the legitimate transmitter. We also found that even the passive attacks represent a serious practical threat. In a study we conducted over a two-year period in several US metropolitan areas, we found that a significant fraction of the “encrypted” P25 tactical radio traffic sent by federal law enforcement surveillance operatives is actually sent in the clear, in spite of their users’ belief that they are encrypted, and often reveals such sensitive data as the names of informants in criminal investigations.

I’ve heard Matt talk about this project several times. It’s great work, and a fascinating insight into the usability problems of encryption in the real world.

News article.

Posted on August 11, 2011 at 6:19 AM

Alerting Users that Applications are Using Cameras, Microphones, Etc.

Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.

Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant access with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.

We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.
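
As a toy model (hypothetical names and API, not the authors’ implementation), the core access-control idea could look something like this: the platform routes sensor data through the widget, and data reaches the application only while the widget is on screen disclosing the collection and the user has not revoked access:

    # Toy model of the idea, with hypothetical names; not the authors' code.
    class SensorAccessWidget:
        def __init__(self, sensor_name):
            self.sensor_name = sensor_name
            self.visible = True     # widget rendered inside the app's display
            self.allowed = True     # user can toggle this from the widget

        def toggle(self):
            """User clicks the widget to grant or deny the application access."""
            self.allowed = not self.allowed

        def deliver(self, frame, app_callback):
            """Platform-side gate: data flows only while disclosure is visible
            and the user has left access enabled."""
            if self.visible and self.allowed:
                self.animate(frame)          # perpetual disclosure of collection
                app_callback(frame)

        def animate(self, frame):
            print(f"[{self.sensor_name}] showing a live preview of captured data")

    # The platform routes camera frames through the widget, never directly to the app.
    widget = SensorAccessWidget("camera")
    widget.deliver(b"frame-1", lambda f: print("app got", f))
    widget.toggle()                          # user denies access from the widget
    widget.deliver(b"frame-2", lambda f: print("app got", f))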

Apple seems to be taking some steps in this direction with the location sensor disclosure in iPhone OS 4.0.

Posted on May 24, 2010 at 7:32 AM

Applications Disclosing Required Authority

This is an interesting piece of research evaluating different user interface designs by which applications disclose to users what sort of authority they need to install themselves. Given all the recent concerns about third-party access to user data on social networking sites (particularly Facebook), this is particularly timely research.

We have provided evidence of a growing trend among application platforms to disclose, via application installation consent dialogs, the resources and actions that applications will be authorized to perform if installed. To improve the design of these disclosures, we have taken an important first step of testing key design elements. We hope these findings will assist future researchers in creating experiences that leave users feeling better informed and more confident in their installation decisions.

Within the admittedly constrained context of our laboratory study, disclosure design had surprisingly little effect on participants’ ability to absorb and search information. However, the great majority of participants preferred designs that used images or icons to represent resources. This great majority of participants also disliked designs that used paragraphs, the central design element of Facebook’s disclosures, and outlines, the central design element of Android’s disclosures.

Posted on May 21, 2010 at 1:17 PM
