Entries Tagged "passwords"


Dumb Security Survey Questions

According to a Harris poll, 39% of Americans would give up sex for a year in exchange for perfect computer security:

According to an online survey among over 2,000 U.S. adults conducted by Harris Poll on behalf of Dashlane, the leader in online identity and password management, nearly four in ten Americans (39%) would sacrifice sex for one year if it meant they never had to worry about being hacked, having their identity stolen, or their accounts breached. With a new hack or breach making news almost daily, people are constantly being reminded about the importance of secure passwords, yet some are still not following proper password protocol.

Does anyone think that this hypothetical survey question means anything? What, are they bored at Harris? Oh, I see. This is a paid survey by a computer company looking for some publicity.

Four in 10 people (41%) would rather give up their favorite food for a month than go through the password reset process for all their online accounts.

I guess it’s more fun to ask these questions than to poll the election.

Posted on November 21, 2016 at 6:04 AM

Using Wi-Fi to Detect Hand Motions and Steal Passwords

This is impressive research: “When CSI Meets Public WiFi: Inferring Your Mobile Phone Password via WiFi Signals“:

Abstract: In this study, we present WindTalker, a novel and practical keystroke inference framework that allows an attacker to infer the sensitive keystrokes on a mobile device through WiFi-based side-channel information. WindTalker is motivated from the observation that keystrokes on mobile devices will lead to different hand coverage and the finger motions, which will introduce a unique interference to the multi-path signals and can be reflected by the channel state information (CSI). The adversary can exploit the strong correlation between the CSI fluctuation and the keystrokes to infer the user’s number input. WindTalker presents a novel approach to collect the target’s CSI data by deploying a public WiFi hotspot. Compared with the previous keystroke inference approach, WindTalker neither deploys external devices close to the target device nor compromises the target device. Instead, it utilizes the public WiFi to collect user’s CSI data, which is easy-to-deploy and difficult-to-detect. In addition, it jointly analyzes the traffic and the CSI to launch the keystroke inference only for the sensitive period where password entering occurs. WindTalker can be launched without the requirement of visually seeing the smart phone user’s input process, backside motion, or installing any malware on the tablet. We implemented Windtalker on several mobile phones and performed a detailed case study to evaluate the practicality of the password inference towards Alipay, the largest mobile payment platform in the world. The evaluation results show that the attacker can recover the key with a high successful rate.

That “high successful rate” is 81.7%.
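What makes the attack practical is the joint analysis of traffic and CSI: the malicious hotspot only runs keystroke inference on the CSI samples recorded while the victim's phone is talking to the payment service. Here is a minimal, hypothetical sketch of that windowing step; the host list, timing pad, and data shapes are placeholders, and the paper's actual detection logic is more involved:

```python
# Hypothetical sketch: restrict CSI analysis to the "sensitive period" when the
# victim's traffic is going to a known payment endpoint.
PAYMENT_HOSTS = {"payment.example.com"}  # placeholder; a real attack would match server IP ranges

def sensitive_windows(packets, pad=1.0):
    """packets: iterable of (timestamp_seconds, destination_host).
    Returns merged [start, end] time windows around payment traffic."""
    times = sorted(t for t, dst in packets if dst in PAYMENT_HOSTS)
    windows = []
    for t in times:
        if windows and t - pad <= windows[-1][1]:
            windows[-1][1] = t + pad            # extend the current window
        else:
            windows.append([t - pad, t + pad])  # open a new window
    return windows

def csi_in_windows(csi_samples, windows):
    """Keep only the CSI samples (timestamp, measurement) inside a sensitive window."""
    return [(t, csi) for t, csi in csi_samples
            if any(start <= t <= end for start, end in windows)]
```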

News article.

Posted on November 18, 2016 at 6:40 AM

Security Design: Stop Trying to Fix the User

Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”

Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we’ve thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This “either/or” thinking results in systems that are neither usable nor secure.

Our industry is littered with examples. First: security warnings. Despite researchers’ good intentions, these warnings just inure people to them. I’ve read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don’t see “the certificate has expired; are you sure you want to go to this webpage?” They see, “I’m an annoying message preventing you from reading a webpage. Click here to get rid of me.”

Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the “I forgot my password” link as a way to bypass the system completely—effectively falling back on the security of their e-mail account.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can’t train users not to click on links when you’ve spent the past two decades teaching them that links are there to be clicked.

We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without—as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it—“stress of mind, or knowledge of a long series of rules.”

I’ve been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People—and developers—are finally starting to listen. Many security updates happen automatically so users don’t have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user’s system so they don’t have to worry about embedded malware. And programs can run in sandboxes that don’t compromise the entire computer. We’ve come a long way, but we have a lot further to go.

“Blame the victim” thinking is older than the Internet, of course. But that doesn’t make it right. We owe it to our users to make the Information Age a safe place for everyone—not just those with “security awareness.”

This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.

EDITED TO ADD (10/13): Commentary.

Posted on October 3, 2016 at 6:12 AM

Recovering an iPhone 5c Passcode

Remember the San Bernardino killer’s iPhone, and how the FBI maintained that they couldn’t get the encryption key without Apple providing them with a universal backdoor? Many of us computer-security experts said that they were wrong, and there were several possible techniques they could use. One of them was manually removing the flash chip from the phone, extracting the memory, and then running a brute-force attack without worrying about the phone deleting the key.

The FBI said it was impossible. We all said they were wrong. Now, Sergei Skorobogatov has proved them wrong. Here’s his paper:

Abstract: This paper is a short summary of a real world mirroring attack on the Apple iPhone 5c passcode retry counter under iOS 9. This was achieved by desoldering the NAND Flash chip of a sample phone in order to physically access its connection to the SoC and partially reverse engineering its proprietary bus protocol. The process does not require any expensive and sophisticated equipment. All needed parts are low cost and were obtained from local electronics distributors. By using the described and successful hardware mirroring process it was possible to bypass the limit on passcode retry attempts. This is the first public demonstration of the working prototype and the real hardware mirroring process for iPhone 5c. Although the process can be improved, it is still a successful proof-of-concept project. Knowledge of the possibility of mirroring will definitely help in designing systems with better protection. Also some reliability issues related to the NAND memory allocation in iPhone 5c are revealed. Some future research directions are outlined in this paper and several possible countermeasures are suggested. We show that claims that iPhone 5c NAND mirroring was infeasible were ill-advised.
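The reason mirroring defeats the retry limit is simple arithmetic: if the NAND image (and with it the passcode retry counter) can be rewritten before the limit is reached, guessing can continue indefinitely, and a four-digit passcode space is tiny. A back-of-the-envelope estimate, using illustrative, hypothetical timings rather than the paper's measured figures:

```python
# Rough, hypothetical timing estimate for a NAND-mirroring brute force of a
# 4-digit passcode. The constants below are illustrative, not the paper's measurements.
ATTEMPTS_PER_CYCLE = 6      # guesses made before restoring the mirrored image
SECONDS_PER_GUESS = 5       # passcode entry plus device response
SECONDS_PER_REFLASH = 90    # time to rewrite the NAND image and reboot

def worst_case_hours(passcode_space=10_000):
    cycles = passcode_space / ATTEMPTS_PER_CYCLE
    total_seconds = passcode_space * SECONDS_PER_GUESS + cycles * SECONDS_PER_REFLASH
    return total_seconds / 3600

print(f"Exhausting all 4-digit passcodes: roughly {worst_case_hours():.0f} hours")
```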

Susan Landau explains why this is important:

The moral of the story? It’s not, as the FBI has been requesting, a bill to make it easier to access encrypted communications, as in the proposed revised Burr-Feinstein bill. Such “solutions” would make us less secure, not more so. Instead we need to increase law enforcement’s capabilities to handle encrypted communications and devices. This will also take more funding as well as redirection of efforts. Increased security of our devices and simultaneous increased capabilities of law enforcement are the only sensible approach to a world where securing the bits, whether of health data, financial information, or private emails, has become of paramount importance.

Or: The FBI needs computer-security expertise, not backdoors.

Patrick Ball writes about the dangers of backdoors.

EDITED TO ADD (9/23): Good article from the Economist.

Posted on September 15, 2016 at 8:54 AM

Keystroke Recognition from Wi-Fi Distortion

This is interesting research: “Keystroke Recognition Using WiFi Signals.” Basically, the user’s hand positions as they type distort the Wi-Fi signal in predictable ways.

Abstract: Keystroke privacy is critical for ensuring the security of computer systems and the privacy of human users as what being typed could be passwords or privacy sensitive information. In this paper, we show for the first time that WiFi signals can also be exploited to recognize keystrokes. The intuition is that while typing a certain key, the hands and fingers of a user move in a unique formation and direction and thus generate a unique pattern in the time-series of Channel State Information (CSI) values, which we call CSI-waveform for that key. In this paper, we propose a WiFi signal based keystroke recognition system called WiKey. WiKey consists of two Commercial Off-The-Shelf (COTS) WiFi devices, a sender (such as a router) and a receiver (such as a laptop). The sender continuously emits signals and the receiver continuously receives signals. When a human subject types on a keyboard, WiKey recognizes the typed keys based on how the CSI values at the WiFi signal receiver end change. We implemented the WiKey system using a TP-Link TL-WR1043ND WiFi router and a Lenovo X200 laptop. WiKey achieves more than 97.5% detection rate for detecting the keystroke and 96.4% recognition accuracy for classifying single keys. In real-world experiments, WiKey can recognize keystrokes in a continuously typed sentence with an accuracy of 93.5%.
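For intuition, the classification step can be thought of as nearest-neighbor matching of a recorded CSI waveform against per-key templates. Here is a minimal sketch using dynamic time warping as the distance measure; the paper's actual pipeline (keystroke segmentation, wavelet-based features) is considerably more elaborate, and the function and variable names below are hypothetical:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-time-warping distance between two 1-D CSI waveforms."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_keystroke(sample, templates):
    """templates: {key_label: [training waveforms]}. Returns the nearest key label."""
    best_key, best_dist = None, np.inf
    for key, waveforms in templates.items():
        for w in waveforms:
            d = dtw_distance(sample, w)
            if d < best_dist:
                best_key, best_dist = key, d
    return best_key
```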

News article.

Posted on August 30, 2016 at 6:04 AM

Frequent Password Changes Is a Bad Security Idea

I’ve been saying for years that this is bad security advice and that it encourages poor passwords. Lorrie Cranor, now the FTC’s chief technologist, agrees:

By studying the data, the researchers identified common techniques account holders used when they were required to change passwords. A password like “tarheels#1”, for instance (excluding the quotation marks) frequently became “tArheels#1” after the first change, “taRheels#1” on the second change and so on. Or it might be changed to “tarheels#11” on the first change and “tarheels#111” on the second. Another common technique was to substitute a digit to make it “tarheels#2”, “tarheels#3”, and so on.

“The UNC researchers said if people have to change their passwords every 90 days, they tend to use a pattern and they do what we call a transformation,” Cranor explained. “They take their old passwords, they change it in some small way, and they come up with a new password.”

The researchers used the transformations they uncovered to develop algorithms that were able to predict changes with great accuracy. Then they simulated real-world cracking to see how well they performed. In online attacks, in which attackers try to make as many guesses as possible before the targeted network locks them out, the algorithm cracked 17 percent of the accounts in fewer than five attempts. In offline attacks performed on the recovered hashes using superfast computers, 41 percent of the changed passwords were cracked within three seconds.

That data refers to this study.
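The transformation idea is easy to see in code. Below is a minimal sketch of a candidate generator in the spirit of what the UNC researchers describe; their actual models are trained on real password histories, and the rules here are purely illustrative:

```python
import re

def transform_candidates(old_password):
    """Generate plausible 'new' passwords from an old one using common transformations:
    toggling the case of one letter, incrementing a trailing digit, or appending a digit."""
    candidates = set()
    # Toggle the case of each letter in turn (tarheels#1 -> tArheels#1, taRheels#1, ...)
    for i, ch in enumerate(old_password):
        if ch.isalpha():
            candidates.add(old_password[:i] + ch.swapcase() + old_password[i + 1:])
    # Increment a trailing number (tarheels#1 -> tarheels#2)
    m = re.search(r"(\d+)$", old_password)
    if m:
        candidates.add(old_password[:m.start()] + str(int(m.group(1)) + 1))
    # Repeat or append a digit (tarheels#1 -> tarheels#11)
    if old_password and old_password[-1].isdigit():
        candidates.add(old_password + old_password[-1])
    else:
        candidates.add(old_password + "1")
    return candidates

print(sorted(transform_candidates("tarheels#1")))
```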

My advice for choosing a secure password is here.

Posted on August 5, 2016 at 7:53 AM

Password Sharing Is Now a Crime

In a truly terrible ruling, the US 9th Circuit Court ruled that using someone else’s password with their permission but without the permission of the site owner is a federal crime.

The argument McKeown made is that the employee who shared the password with Nosal “had no authority from Korn/Ferry to provide her password to former employees.”

At issue is language in the CFAA that makes it illegal to access a computer system “without authorization.” McKeown said that “without authorization” is “an unambiguous, non-technical term that, given its plain and ordinary meaning, means accessing a protected computer without permission.” The question that legal scholars, groups such as the Electronic Frontier Foundation, and dissenting judge Stephen Reinhardt ask is an important one: Authorization from who?

Reinhardt argues that Nosal’s use of the database was unauthorized by the firm, but was authorized by the former employee who shared it with him. For you and me, this case means that unless Netflix specifically authorizes you to share your password with your friend, you’re breaking federal law.

The EFF:

While the majority opinion said that the facts of this case “bear little resemblance” to the kind of password sharing that people often do, Judge Reinhardt’s dissent notes that it fails to provide an explanation of why that is. Using an analogy in which a woman uses her husband’s user credentials to access his bank account to pay bills, Judge Reinhardt noted: “So long as the wife knows that the bank does not give her permission to access its servers in any manner, she is in the same position as Nosal and his associates.” As a result, although the majority says otherwise, the court turned anyone who has ever used someone else’s password without the approval of the computer owner into a potential felon.

The Computer Fraud and Abuse Act has been a disaster for many reasons, this being one of them. There will be an appeal of this ruling.

Posted on July 13, 2016 at 11:07 AM

Credential Stealing as an Attack Vector

Traditional computer security concerns itself with vulnerabilities. We employ antivirus software to detect malware that exploits vulnerabilities. We have automatic patching systems to fix vulnerabilities. We debate whether the FBI should be permitted to introduce vulnerabilities in our software so it can get access to systems with a warrant. This is all important, but what’s missing is a recognition that software vulnerabilities aren’t the most common attack vector: credential stealing is.

The most common way hackers of all stripes, from criminals to hacktivists to foreign governments, break into networks is by stealing and using a valid credential. Basically, they steal passwords, set up man-in-the-middle attacks to piggy-back on legitimate logins, or engage in cleverer attacks to masquerade as authorized users. It’s a more effective avenue of attack in many ways: it doesn’t involve finding a zero-day or unpatched vulnerability, there’s less chance of discovery, and it gives the attacker more flexibility in technique.

Rob Joyce, the head of the NSA’s Tailored Access Operations (TAO) group—basically the country’s chief hacker—gave a rare public talk at a conference in January. In essence, he said that zero-day vulnerabilities are overrated, and credential stealing is how he gets into networks: “A lot of people think that nation states are running their operations on zero days, but it’s not that common. For big corporate networks, persistence and focus will get you in without a zero day; there are so many more vectors that are easier, less risky, and more productive.”

This is true for us, and it’s also true for those attacking us. It’s how the Chinese hackers breached the Office of Personnel Management in 2015. The 2014 criminal attack against Target Corporation started when hackers stole the login credentials of the company’s HVAC vendor. Iranian hackers stole US login credentials. And the hacktivist that broke into the cyber-arms manufacturer Hacking Team and published pretty much every proprietary document from that company used stolen credentials.

As Joyce said, stealing a valid credential and using it to access a network is easier, less risky, and ultimately more productive than using an existing vulnerability, even a zero-day.

Our notions of defense need to adapt to this change. First, organizations need to beef up their authentication systems. There are lots of tricks that help here: two-factor authentication, one-time passwords, physical tokens, smartphone-based authentication, and so on. None of these is foolproof, but they all make credential stealing harder.
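As one concrete example of these tricks, here is a minimal RFC 6238 time-based one-time password generator, the mechanism behind most smartphone authenticator apps. This is a sketch only: a real deployment also needs secure secret provisioning and a verification window to tolerate clock skew.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    """Generate an RFC 6238 time-based one-time password from a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # number of elapsed time steps
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example secret; prints the current 6-digit code
```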

Second, organizations need to invest in breach detection and—most importantly—incident response. Credential-stealing attacks tend to bypass traditional IT security software. But attacks are complex and multi-step. Being able to detect them in process, and to respond quickly and effectively enough to kick attackers out and restore security, is essential to resilient network security today.

Vulnerabilities are still critical. Fixing vulnerabilities is still vital for security, and introducing new vulnerabilities into existing systems is still a disaster. But strong authentication and robust incident response are also critical. And an organization that skimps on these will find itself unable to keep its networks secure.

This essay originally appeared on Xconomy.

EDITED TO ADD (5/23): Portuguese translation.

Posted on May 4, 2016 at 6:51 AM

FBI vs. Apple: Who Is Helping the FBI?

On Monday, the FBI asked the court for a two-week delay in a scheduled hearing on the San Bernardino iPhone case, because some “third party” approached it with a way into the phone. It wanted time to test this access method.

Who approached the FBI? We have no idea.

I have avoided speculation because the story makes no sense. Why did this third party wait so long? Why didn’t the FBI go through with the hearing anyway?

Now we have speculation that the third party is the Israeli forensic company Cellebrite. From its website:

Support for Locked iOS Devices Using UFED Physical Analyzer

Using UFED Physical Analyzer, physical and file system extractions, decoding and analysis can be performed on locked iOS devices with a simple or complex passcode. Simple passcodes will be recovered during the physical extraction process and enable access to emails and keychain passwords. If a complex password is set on the device, physical extraction can be performed without access to emails and keychain. However, if the complex password is known, emails and keychain passwords will be available.

My guess is that it’s not them. They have an existing and ongoing relationship with the FBI. If they could crack the phone, they would have done it months ago. This purchase order seems to be coincidental.

In any case, having a company name doesn’t mean that the story makes any more sense, but there it is. We’ll know more in a couple of weeks, although I doubt the FBI will share any more than they absolutely have to.

This development annoys me in every way. This case was never about the particular phone, it was about the precedent and the general issue of security vs. surveillance. This will just come up again another time, and we’ll have to go through this all over again—maybe with a company that isn’t as committed to our privacy as Apple is.

EDITED TO ADD: Watch former NSA Director Michael Hayden defend Apple and iPhone security. I’ve never seen him so impassioned before.

EDITED TO ADD (3/26): Marcy Wheeler has written extensively about the Cellebrite possibility.

Posted on March 24, 2016 at 12:34 PM

