Entries Tagged "Android"


New Secure Smart Phone App

It’s hard not to poke fun at this press release for SafeSlinger, a new cell phone security app from Carnegie Mellon.

“SafeSlinger provides you with the confidence that the person you are communicating with is actually the person they have represented themselves to be,” said Michael W. Farb, a research programmer at Carnegie Mellon CyLab. “The most important feature is that SafeSlinger provides secure messaging and file transfer without trusting the phone company or any device other than my own smartphone.”

Oddly, Farb believes that he can trust his smart phone.

This headline claims that “even [the] NSA can’t crack” it, but it’s unclear where that claim came from.

Still, it’s good to have encrypted chat programs. This one joins Cryptocat, Silent Circle, and my favorite: OTR.

Posted on October 15, 2013 at 12:37 PM

Google Knows Every Wi-Fi Password in the World

This article points out that as people log into Wi-Fi networks from their Android phones and back up those passwords, along with everything else, to Google’s cloud, Google is amassing an enormous database of the world’s Wi-Fi passwords. And while it’s not every Wi-Fi password in the world, it’s almost certainly a large percentage of them.

Leaving aside Google’s intentions regarding this database, it is certainly something that the US government could force Google to turn over with a National Security Letter.

Something else to think about.

Posted on September 20, 2013 at 7:05 AM

A Really Good Article on How Easy it Is to Crack Passwords

Ars Technica gave three experts a 16,000-entry file of hashed passwords and asked them to crack as many as they could. The winner got 90% of them, the loser 62%—in a few hours.

The list of “plains,” as many crackers refer to deciphered hashes, contains the usual list of commonly used passcodes that are found in virtually every breach involving consumer websites. “123456,” “1234567,” and “password” are there, as is “letmein,” “Destiny21,” and “pizzapizza.” Passwords of this ilk are hopelessly weak. Despite the additional tweaking, “p@$$word,” “123456789j,” “letmein1!,” and “LETMEin3” are equally awful….

As big as the word lists that all three crackers in this article wielded—close to 1 billion strong in the case of Gosney and Steube—none of them contained “Coneyisland9/,” “momof3g8kids,” or the more than 10,000 other plains that were revealed with just a few hours of effort. So how did they do it? The short answer boils down to two variables: the website’s unfortunate and irresponsible use of MD5 and the use of non-randomized passwords by the account holders.

The article goes on to explain how dictionary attacks work, how well they do, and the sorts of passwords they find.

Steube was able to crack “momof3g8kids” because he had “momof3g” in his 111 million dict and “8kids” in a smaller dict.

“The combinator attack got it! It’s cool,” he said. Then referring to the oft-cited xkcd comic, he added: “This is an answer to the batteryhorsestaple thing.”
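For readers unfamiliar with the technique, here is a minimal sketch of what a combinator attack against unsalted MD5 hashes looks like. The wordlists and the target hash below are made-up stand-ins; real tools such as hashcat do the same thing at hundreds of millions of guesses per second on GPUs.

    import hashlib
    from itertools import product

    # Made-up stand-ins for the two dictionaries described above.
    left_words = ["momof3g", "iloveyou", "letmein"]
    right_words = ["8kids", "123", "2013"]

    # Unsalted MD5 hashes "leaked" from a hypothetical breached site.
    leaked_hashes = {hashlib.md5(b"momof3g8kids").hexdigest()}

    # A combinator attack simply tries every left word glued to every right word.
    for left, right in product(left_words, right_words):
        candidate = left + right
        if hashlib.md5(candidate.encode()).hexdigest() in leaked_hashes:
            print("cracked:", candidate)

Because MD5 is fast and the site used no salt, the attacker’s only real cost is generating good candidates, which is the “efficiency” axis described in the commentary quoted further down.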

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”

Great reading, but nothing theoretically new. Ars Technica wrote about this last year, and Joe Bonneau wrote an excellent commentary.

Password cracking can be evaluated on two nearly independent axes: power (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and efficiency (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models).

I wrote about this same thing back in 2007. The news in 2013, such as it is, is that this kind of thing is getting easier faster than people think. Pretty much anything that can be remembered can be cracked.

If you need to memorize a password, I still stand by the Schneier scheme from 2008:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence—something personal.

Until this very moment, these passwords were still secure:

  • WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
  • Wow…doestcst::amazon.cccooommm = Wow, does that couch smell terrible.
  • Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
  • uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personal, memorable tricks to modify that sentence into a password, and you end up with a long password.
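To make the “little piggy” example above concrete, here is a toy sketch of that particular transformation. It is only an illustration; the whole point of the scheme is that your sentence and your tricks are personal, not produced by a fixed program.

    # Toy illustration of the sentence-to-password transformation quoted above.
    # The tricks here (lowercase initials, one uppercased word, "to" -> "2") are
    # just the ones visible in the "tlpWENT2m" example, not a recipe to reuse.
    sentence = "This little piggy went to market"
    words = sentence.split()
    password = (
        "".join(w[0] for w in words[:3]).lower()  # "tlp"
        + words[3].upper()                        # "WENT"
        + "2"                                     # "to" becomes "2"
        + words[5][0]                             # "m"
    )
    print(password)  # tlpWENT2m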

Better, though, is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to store them. (If anyone wants to port it to the Mac, iPhone, iPad, or Android, please contact me.) This article does a good job of explaining the same thing. David Pogue likes Dashlane, but doesn’t know if it’s secure.
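For completeness, here is a minimal sketch of generating the kind of random, unmemorable password a manager like Password Safe is meant to store. The length and character set are arbitrary choices for illustration, not a recommendation from Password Safe itself.

    import secrets
    import string

    # Generate a random 20-character password from letters, digits, and symbols.
    # (Length and alphabet are illustrative; trim the symbol set if a site forbids some.)
    alphabet = string.ascii_letters + string.digits + string.punctuation
    password = "".join(secrets.choice(alphabet) for _ in range(20))
    print(password)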

In related news, Password Safe is a candidate for July’s project-of-the-month on SourceForge. Please vote for it.

EDITED TO ADD (6/7): As a commenter noted, none of this is useful advice if the site puts artificial limits on your password.

EDITED TO ADD (6/14): Various ports of Password Safe. I know nothing about them, nor can I vouch for their security.

Analysis of the xkcd scheme.

Posted on June 7, 2013 at 6:41 AM

Remotely Hijacking an Aircraft

There is a lot of buzz on the Internet about a talk at the Hack in the Box conference by Hugo Teso, who claims he can remotely hack into an airplane’s avionics and take control. He even wrote an Android app to do it.

I honestly can’t tell how real this is, and how much of it depends on the particular simulator configuration he tested it on. On the one hand, it can’t possibly be true that an aircraft avionics computer accepts outside commands. On the other hand, we’ve seen plenty of security vulnerabilities that seemed impossible until they turned out to be real. Right now, I’m skeptical.

EDITED TO ADD (4/12): Three good refutations.

Posted on April 12, 2013 at 10:50 AM

All Those Companies that Can't Afford Dedicated Security

This is interesting:

In the security practice, we have our own version of no-man’s land, and that’s midsize companies. Wendy Nather refers to these folks as being below the “Security Poverty Line.” These folks have a couple hundred to a couple thousand employees. That’s big enough to have real data interesting to attackers, but not big enough to have a dedicated security staff and the resources they need to really protect anything. These folks are caught between the baseline and the service box. They default to compliance mandates like PCI-DSS because they don’t know any better. And the attackers seem to sneak those passing shots by them on a seemingly regular basis.

[…]

Back when I was on the vendor side, I’d joke about how 800 security companies chased 1,000 customers—meaning most of the effort was focused on the 1,000 largest customers in the world. But I wasn’t joking. Every VP of sales talks about how it takes the same amount of work to sell to a Fortune-class enterprise as it does to sell into the midmarket. They aren’t wrong, and it leaves a huge gap in the applicable solutions for the midmarket.

[…]

To be clear, folks in security no-man’s land don’t go to the RSA Conference, probably don’t read security pubs, or follow the security echo chamber on Twitter. They are too busy fighting fires and trying to keep things operational. And that’s fine. But all of the industry gatherings just remind me that the industry’s machinery is geared toward the large enterprise, not the unfortunate 5 million other companies in the world that really need the help.

I’ve seen this trend, and I think it’s a result of the increasing sophistication of the IT industry. Today, it’s increasingly rare for organizations to have bespoke security, just as it’s increasingly rare for them to have bespoke IT. It’s only the larger organizations that can afford it. Everyone else is increasingly outsourcing its IT to cloud providers. These providers are taking care of security—although we can certainly argue about how good a job they’re doing—so that the organizations themselves don’t have to. A company whose email consists entirely of Gmail accounts, whose payroll is entirely outsourced to Paychex, whose customer tracking system is entirely on Salesforce.com, and so on—and who increasingly accesses those systems using specialized devices like iPads and Android tablets—simply doesn’t have any IT infrastructure to secure anymore.

To be sure, I think we’re a long way off from this future being a secure one, but it’s the one the industry is headed toward. Yes, vendors at the RSA Conference are only selling to the largest organizations. And, as I wrote back in 2008, soon they will only be selling to IT outsourcing companies (the term “cloud provider” hadn’t been invented yet):

For a while now I have predicted the death of the security industry. Not the death of information security as a vital requirement, of course, but the death of the end-user security industry that gathers at the RSA Conference. When something becomes infrastructure—power, water, cleaning service, tax preparation—customers care less about details and more about results. Technological innovations become something the infrastructure providers pay attention to, and they package it for their customers.

[…]

The RSA Conference won’t die, of course. Security is too important for that. There will still be new technologies, new products and new startups. But it will become inward-facing, slowly turning into an industry conference. It’ll be security companies selling to the companies who sell to corporate and home users—and will no longer be a 17,000-person user conference.

Posted on February 22, 2013 at 6:03 AM

Guessing Smart Phone PINs by Monitoring the Accelerometer

“Practicality of Accelerometer Side Channels on Smartphones,” by Adam J. Aviv, Benjamin Sapp, Matt Blaze, and Jonathan M. Smith.

Abstract: Modern smartphones are equipped with a plethora of sensors that enable a wide range of interactions, but some of these sensors can be employed as a side channel to surreptitiously learn about user input. In this paper, we show that the accelerometer sensor can also be employed as a high-bandwidth side channel; particularly, we demonstrate how to use the accelerometer sensor to learn user tap and gesture-based input as required to unlock smartphones using a PIN/password or Android’s graphical password pattern. Using data collected from a diverse group of 24 users in controlled (while sitting) and uncontrolled (while walking) settings, we develop sample rate independent features for accelerometer readings based on signal processing and polynomial fitting techniques. In controlled settings, our prediction model can on average classify the PIN entered 43% of the time and pattern 73% of the time within 5 attempts when selecting from a test set of 50 PINs and 50 patterns. In uncontrolled settings, while users are walking, our model can still classify 20% of the PINs and 40% of the patterns within 5 attempts. We additionally explore the possibility of constructing an accelerometer-reading-to-input dictionary and find that such dictionaries would be greatly challenged by movement-noise and cross-user training.
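As a loose sketch of the general approach (not the authors’ code), sample-rate-independent features can be built by fitting a fixed-degree polynomial to each accelerometer axis over a normalized time window and handing the coefficients to an off-the-shelf classifier. The degree, the choice of classifier, and the variable names below are all assumptions made for illustration.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def trace_features(t, xyz, degree=5):
        """t: timestamps for one PIN entry; xyz: (N, 3) accelerometer samples."""
        t_norm = (t - t.min()) / (t.max() - t.min())  # normalize time to [0, 1]
        # Fit one polynomial per axis; the coefficients become the feature vector,
        # which makes traces comparable regardless of the device's sample rate.
        coeffs = [np.polyfit(t_norm, xyz[:, axis], degree) for axis in range(3)]
        return np.concatenate(coeffs)

    # X: feature vectors from labelled training traces; y: the PINs that produced them.
    # clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
    # guesses = clf.predict([trace_features(t_new, xyz_new)])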

Article.

Posted on February 15, 2013 at 6:48 AM

Feudal Security

It’s a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them—or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.

Of course, I’m romanticizing here; European history was never this simple, and the description is based on stories of that time, but that’s the general model.

And it’s this model that’s starting to permeate computer security today.

I Pledge Allegiance to the United States of Convenience

Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.

This model is breaking, largely due to two developments:

  1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do—like the iPhone and Kindle; and
  2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we’ve lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates—iPhone, for example—but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

Yet feudal security isn’t without its risks.

Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.

Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off—again, like serfs—to rival lords…or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn’t do to each other).

Today’s Internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it’s a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent—or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems—it’s time we step in, in our role as governments (both national and international), to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Posted on December 3, 2012 at 7:24 AM
