Entries Tagged "usability"


Alerting Users that Applications are Using Cameras, Microphones, Etc.

Interesting research: “What You See is What They Get: Protecting users from unwanted use of microphones, cameras, and other sensors,” by Jon Howell and Stuart Schechter.

Abstract: Sensors such as cameras and microphones collect privacy-sensitive data streams without the user’s explicit action. Conventional sensor access policies either hassle users to grant applications access to sensors or grant with no approval at all. Once access is granted, an application may collect sensor data even after the application’s interface suggests that the sensor is no longer being accessed.

We introduce the sensor-access widget, a graphical user interface element that resides within an application’s display. The widget provides an animated representation of the personal data being collected by its corresponding sensor, calling attention to the application’s attempt to collect the data. The widget indicates whether the sensor data is currently allowed to flow to the application. The widget also acts as a control point through which the user can configure the sensor and grant or deny the application access. By building perpetual disclosure of sensor data collection into the platform, sensor-access widgets enable new access-control policies that relax the tension between the user’s privacy needs and applications’ ease of access.

Apple seems to be taking some steps in this direction with the location sensor disclosure in iPhone OS 4.0.
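
To make the idea concrete, here is a minimal sketch of a widget that sits between a sensor and an application. This is my own illustration, not the paper's implementation: the widget always displays what the application is trying to collect, and only forwards the data when the user has toggled the flow on.

```python
# Minimal sketch of a sensor-access widget; illustrative only, not the
# authors' implementation. The platform owns the widget, and the application
# can obtain sensor data only by going through it.

class SensorAccessWidget:
    def __init__(self, sensor_name, read_sensor):
        self.sensor_name = sensor_name   # e.g. "camera" or "microphone"
        self.read_sensor = read_sensor   # platform routine that samples the sensor
        self.flow_allowed = False        # user-controlled grant/deny switch

    def toggle(self):
        """Called when the user clicks the widget to grant or revoke access."""
        self.flow_allowed = not self.flow_allowed

    def render(self, sample):
        """Animate the data the application is trying to collect, unconditionally."""
        state = "ON" if self.flow_allowed else "BLOCKED"
        print(f"[{self.sensor_name} widget] sample {sample!r} (flow {state})")

    def read(self):
        """The only path by which the application can obtain sensor data."""
        sample = self.read_sensor()
        self.render(sample)                     # disclosure always happens
        return sample if self.flow_allowed else None

# An application polling the microphone through the widget:
mic = SensorAccessWidget("microphone", read_sensor=lambda: "audio frame")
print(mic.read())   # None: the user has not granted access yet
mic.toggle()        # the user clicks the widget to allow the flow
print(mic.read())   # now the frame is delivered, and the widget shows it
```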

Posted on May 24, 2010 at 7:32 AM

Applications Disclosing Required Authority

This is an interesting piece of research evaluating different user interface designs by which applications disclose to users what authority they will be granted if installed. Given all the recent concerns about third-party access to user data on social networking sites (Facebook especially), this research is particularly timely.

We have provided evidence of a growing trend among application platforms to disclose, via application installation consent dialogs, the resources and actions that applications will be authorized to perform if installed. To improve the design of these disclosures, we have taken an important first step of testing key design elements. We hope these findings will assist future researchers in creating experiences that leave users feeling better informed and more confident in their installation decisions.

Within the admittedly constrained context of our laboratory study, disclosure design had surprisingly little effect on participants’ ability to absorb and search information. However, the great majority of participants preferred designs that used images or icons to represent resources. This great majority of participants also disliked designs that used paragraphs, the central design element of Facebook’s disclosures, and outlines, the central design element of Android’s disclosures.
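
For concreteness, here is a small, purely hypothetical sketch of the two disclosure styles being compared: the same set of requested resources rendered as an icon-and-label list versus a single prose paragraph. The resource names are made up for the example.

```python
# Hypothetical example: one set of requested resources, two disclosure styles.
requested = [
    ("photo-icon",  "your photos"),
    ("people-icon", "your friend list"),
    ("mail-icon",   "your email address"),
]

# Style 1: icon-plus-label list (the style most participants preferred).
for icon, resource in requested:
    print(f"[{icon}] May access {resource}")

# Style 2: a single prose paragraph (the style participants disliked).
print("This application may access " + ", ".join(r for _, r in requested) + ".")
```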

Posted on May 21, 2010 at 1:17 PM

Security for Implantable Medical Devices

Interesting study: “Patients, Pacemakers, and Implantable Defibrillators: Human Values and Security for Wireless Implantable Medical Devices,” Tamara Denning, Alan Borning, Batya Friedman, Brian T. Gill, Tadayoshi Kohno, and William H. Maisel.

Abstract: Implantable medical devices (IMDs) improve patients’ quality of life and help sustain their lives. In this study, we explore patient views and values regarding their devices to inform the design of computer security for wireless IMDs. We interviewed 13 individuals with implanted cardiac devices. Key questions concerned the evaluation of 8 mockups of IMD security systems. Our results suggest that some systems that are technically viable are nonetheless undesirable to patients. Patients called out a number of values that affected their attitudes towards the systems, including perceived security, safety, freedom from unwanted cultural and historical associations, and self-image. In our analysis, we extend the Value Sensitive Design value dams and flows technique in order to suggest multiple, complementary systems; in our discussion, we highlight some of the usability, regulatory, and economic complexities that arise from offering multiple options. We conclude by offering design guidelines for future security systems for IMDs.

Posted on April 15, 2010 at 1:55 PM

Crypto Implementation Failure

Look at this new AES-encrypted USB memory stick. You enter the key directly into the stick via the keypad, thereby bypassing any eavesdropping software on the computer.

The problem is that in order to get full 256-bit entropy in the key, you need to enter 77 decimal digits using the keypad. I can’t imagine anyone doing that; they’ll enter an eight- or ten-digit key and call it done. (Likely, the password encrypts a random key that encrypts the actual data: not that it matters.) And even if you wanted to, is it reasonable to expect someone to enter 77 digits without making an error?
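
The arithmetic behind the 77-digit figure: a random decimal digit carries log2(10), about 3.32 bits of entropy, so 77 digits gets you essentially the full 256 bits while an eight- or ten-digit PIN gets you almost nothing. A quick back-of-the-envelope check:

```python
import math

BITS_PER_DIGIT = math.log2(10)   # about 3.32 bits per random decimal digit

for digits in (8, 10, 77):
    bits = digits * BITS_PER_DIGIT
    print(f"{digits:>2} random digits = {bits:5.1f} bits of entropy")
# 8 digits ~ 26.6 bits, 10 digits ~ 33.2 bits, 77 digits ~ 255.8 bits
```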

Nice idea, complete implementation failure.

EDITED TO ADD (3/4): According to the manual, the drive locks for two minutes after five unsuccessful attempts. This delay is enough to make brute-force attacks infeasible, even with only ten-digit keys.
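
Roughly how infeasible? A back-of-the-envelope estimate under those assumptions (ten-digit keys, five guesses, then a two-minute lockout):

```python
# Worst-case estimate: ten-digit keys, five guesses allowed, then a
# two-minute lockout before the next five guesses (typing time ignored).
keyspace          = 10 ** 10
guesses_per_cycle = 5
lockout_minutes   = 2

cycles  = keyspace / guesses_per_cycle
minutes = cycles * lockout_minutes
years   = minutes / (60 * 24 * 365)
print(f"about {years:,.0f} years to try every ten-digit key")   # roughly 7,600 years
```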

So, not nearly as bad as I thought it was. Better would be a much longer delay after 100 or so unsuccessful attempts. Yes, there’s a denial-of-service attack against the thing, but stealing it is an even more effective denial-of-service attack.

Posted on March 4, 2010 at 6:05 AM

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice, we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.

Sounds like me.
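
Herley's core argument is an economic one, and the style of calculation is easy to reproduce. The sketch below uses round numbers of my own choosing (not the paper's figures) for population, wage, and fraud losses, just to show how quickly a minute a day dwarfs plausible direct losses.

```python
# Illustrative cost-benefit sketch in the spirit of Herley's argument.
# Every number here is a round assumption for the example, not the paper's.
users           = 180e6    # assumed online adult population
minutes_per_day = 1        # time spent scrutinizing URLs
hourly_value    = 15.0     # assumed value of a user-hour, in dollars

effort_cost_per_year = users * minutes_per_day * 365 / 60 * hourly_value
assumed_phishing_losses = 60e6   # assumed direct annual losses, in dollars

print(f"effort cost: ${effort_cost_per_year / 1e9:.1f} billion per year")
print(f"ratio to assumed losses: {effort_cost_per_year / assumed_phishing_losses:,.0f}x")
```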

EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Laissez-Faire Access Control

Recently I wrote about the difficulty of making role-based access control work, and how research at Dartmouth showed that it was better to let people take the access they need to do their jobs and then audit the results. This interesting paper, “Laissez-Faire File Sharing,” tries to formalize this sort of access control.

Abstract: When organizations deploy file systems with access control mechanisms that prevent users from reliably sharing files with others, these users will inevitably find alternative means to share. Alas, these alternatives rarely provide the same level of confidentiality, integrity, or auditability provided by the prescribed file systems. Thus, the imposition of restrictive mechanisms and policies by system designers and administrators may actually reduce the system’s security.

We observe that the failure modes of file systems that enforce centrally-imposed access control policies are similar to the failure modes of centrally-planned economies: individuals either learn to circumvent these restrictions as matters of necessity or desert the system entirely, subverting the goals behind the central policy.

We formalize requirements for laissez-faire sharing, which parallel the requirements of free market economies, to better address the file sharing needs of information workers. Because individuals are less likely to feel compelled to circumvent systems that meet these laissez-faire requirements, such systems have the potential to increase both productivity and security.

Think of Wikipedia as the ultimate example of this. Everybody has access to everything, but there are audit mechanisms in place to prevent abuse.
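
Here is a minimal sketch of that “allow everything, audit everything” pattern; it is my own illustration, not the paper's design. Access is granted by default, every access is recorded, and enforcement is built on reviewing the log rather than on an up-front policy.

```python
import datetime

class AuditedFileStore:
    """Optimistic access control: everyone can read, every access is logged."""

    def __init__(self):
        self.files = {}        # path -> contents
        self.audit_log = []    # (timestamp, user, action, path)

    def write(self, user, path, contents):
        self._log(user, "write", path)
        self.files[path] = contents

    def read(self, user, path):
        self._log(user, "read", path)   # access is granted, but recorded
        return self.files[path]

    def _log(self, user, action, path):
        self.audit_log.append((datetime.datetime.now(), user, action, path))

    def accesses_by(self, user):
        """Auditors review behavior after the fact instead of blocking it up front."""
        return [entry for entry in self.audit_log if entry[1] == user]


store = AuditedFileStore()
store.write("alice", "/finance/q3.xls", "...")
store.read("bob", "/finance/q3.xls")     # allowed, but Bob's access is now on record
print(store.accesses_by("bob"))
```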

Posted on November 9, 2009 at 6:59 AM

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure and Ballmer’s blaming of security is a bit of a case of being careful what you wish for. Vista (codenamed “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and Windows 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT, and 2000 were actually the result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box proved less palatable to users.

Security obstacles weren’t the only ills Vista suffered, though. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features made Vista land with a thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; it was the other technical requirements and incompatible applications that doomed the operating system.

There was also the problem of Vista’s endless security warnings. The problem is that they were almost always false alarms, and there were no adverse effects of ignoring them. So users did, which means they ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Unauthentication

In computer security, a lot of effort is spent on the authentication problem. Whether it’s passwords, secure tokens, secret questions, image mnemonics, or something else, engineers are continually coming up with more complicated—and hopefully more secure—ways for you to prove you are who you say you are over the Internet.

This is important stuff, as anyone with an online bank account or remote corporate network knows. But a lot less thought and work have gone into the other end of the problem: how do you tell the system on the other end of the line that you’re no longer there? How do you unauthenticate yourself?

My home computer requires me to log out or turn it off when I want to unauthenticate. This works for me because I know enough to do it, but lots of people just leave their computers on and running when they walk away. As a result, many office computers are left logged in when people go to lunch, or when they go home for the night. This, obviously, is a security vulnerability.

The most common way to combat this is by having the system time out. I could have my computer log me out automatically after a certain period of inactivity—five minutes, for example. Getting it right requires some fine tuning, though. Log the person out too quickly, and he gets annoyed; wait too long before logging him out, and the system could be vulnerable during that time. My corporate e-mail server logs me out after 10 minutes or so, and I regularly get annoyed at my corporate e-mail system.
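
The bookkeeping for an inactivity timeout is trivial; the hard part is picking the number. A minimal sketch, illustrative rather than any particular product's code:

```python
import time

IDLE_TIMEOUT_SECONDS = 5 * 60      # the contentious tuning knob

class Session:
    def __init__(self):
        self.authenticated = True
        self.last_activity = time.monotonic()

    def touch(self):
        """Call on every keystroke, click, or request."""
        self.last_activity = time.monotonic()

    def check(self):
        """Call periodically; unauthenticate if the user has gone quiet."""
        if self.authenticated and time.monotonic() - self.last_activity > IDLE_TIMEOUT_SECONDS:
            self.authenticated = False   # log the user out
        return self.authenticated
```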

Some systems have experimented with a token: a USB authentication token that has to be plugged in for the computer to operate, or an RFID token that logs people out automatically when the token moves more than a certain distance from the computer. Of course, people will be prone to just leave the token plugged in to their computer all the time; but if you attach it to their car keys or the badge they have to wear at all times when walking around the office, the risk is minimized.

That’s expensive, though. A research project used a Bluetooth device, like a cellphone, and measured its proximity to a computer. The system could be programmed to lock the computer if the Bluetooth device moved out of range.
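
A sketch of the proximity idea, with the radio details stubbed out. The paired_device_rssi function below is a hypothetical placeholder for whatever platform-specific Bluetooth query you would actually use:

```python
RSSI_LOCK_THRESHOLD = -70   # dBm; a weaker signal is treated as "the user walked away"

def paired_device_rssi():
    """Stub for a platform-specific query: signal strength of the paired
    phone in dBm, or None if it cannot be seen at all."""
    return None   # replace with a real Bluetooth RSSI lookup

def should_lock():
    rssi = paired_device_rssi()
    return rssi is None or rssi < RSSI_LOCK_THRESHOLD

# A background loop would call should_lock() every few seconds and lock
# the screen (unauthenticate) as soon as it returns True.
```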

Some systems log people out after every transaction. This wouldn’t work for computers, but it can work for ATMs. The machine either spits my card out before it gives me my cash or requires only a quick swipe, making sure I don’t leave the card in the machine. If I want to perform another transaction, I have to reinsert my card and enter my PIN a second time.

There’s a physical analogue that everyone understands: door locks. Does your door lock behind you when you close the door, or does it remain unlocked until you lock it? The first instance is a system that automatically logs you out, and the second requires you to log out manually. Both types of locks are sold and used, and which one you choose depends on both how you use the door and who you expect to try to break in.

Designing systems for usability is hard, especially when security is involved. Almost by definition, making something secure makes it less usable. Choosing an unauthentication method depends a lot on how the system is used as well as the threat model. You have to balance increasing security with pissing the users off, and getting that balance right takes time and testing, and is much more an art than a science.

This essay originally appeared on ThreatPost.

Posted on September 28, 2009 at 1:34 PM
