Entries Tagged "Android"


Feudal Security

It’s a feudal world out there.

Some of us have pledged our allegiance to Google: We have Gmail accounts, we use Google Calendar and Google Docs, and we have Android phones. Others have pledged allegiance to Apple: We have Macintosh laptops, iPhones, and iPads; and we let iCloud automatically synchronize and back up everything. Still others of us let Microsoft do it all. Or we buy our music and e-books from Amazon, which keeps records of what we own and allows downloading to a Kindle, computer, or phone. Some of us have pretty much abandoned e-mail altogether … for Facebook.

These vendors are becoming our feudal lords, and we are becoming their vassals. We might refuse to pledge allegiance to all of them—or to a particular one we don’t like. Or we can spread our allegiance around. But either way, it’s becoming increasingly difficult to not pledge allegiance to at least one of them.

Feudalism provides security. Classical medieval feudalism depended on overlapping, complex, hierarchical relationships. There were oaths and obligations: a series of rights and privileges. A critical aspect of this system was protection: vassals would pledge their allegiance to a lord, and in return, that lord would protect them from harm.

Of course, I’m romanticizing here; European history was never this simple, and the description is based on stories of that time, but that’s the general model.

And it’s this model that’s starting to permeate computer security today.

I Pledge Allegiance to the United States of Convenience

Traditional computer security centered around users. Users had to purchase and install anti-virus software and firewalls, ensure their operating system and network were configured properly, update their software, and generally manage their own security.

This model is breaking, largely due to two developments:

  1. New Internet-enabled devices where the vendor maintains more control over the hardware and software than we do—like the iPhone and Kindle; and
  2. Services where the host maintains our data for us—like Flickr and Hotmail.

Now, we users must trust the security of these hardware manufacturers, software vendors, and cloud providers.

We choose to do it because of the convenience, redundancy, automation, and shareability. We like it when we can access our e-mail anywhere, from any computer. We like it when we can restore our contact lists after we’ve lost our phones. We want our calendar entries to automatically appear on all of our devices. These cloud storage sites do a better job of backing up our photos and files than we would manage by ourselves; Apple does a great job keeping malware out of its iPhone app store.

In this new world of computing, we give up a certain amount of control, and in exchange we trust that our lords will both treat us well and protect us from harm. Not only will our software be continually updated with the newest and coolest functionality, but we trust it will happen without our being overtaxed by fees and required upgrades. We trust that our data and devices won’t be exposed to hackers, criminals, and malware. We trust that governments won’t be allowed to illegally spy on us.

Trust is our only option. In this system, we have no control over the security provided by our feudal lords. We don’t know what sort of security methods they’re using, or how they’re configured. We mostly can’t install our own security products on iPhones or Android phones; we certainly can’t install them on Facebook, Gmail, or Twitter. Sometimes we have control over whether or not to accept the automatically flagged updates—iPhone, for example—but we rarely know what they’re about or whether they’ll break anything else. (On the Kindle, we don’t even have that freedom.)

The Good, the Bad, and the Ugly

I’m not saying that feudal security is all bad. For the average user, giving up control is largely a good thing. These software vendors and cloud providers do a lot better job of security than the average computer user would. Automatic cloud backup saves a lot of data; automatic updates prevent a lot of malware. The network security at any of these providers is better than that of most home users.

Feudalism is good for the individual, for small startups, and for medium-sized businesses that can’t afford to hire their own in-house or specialized expertise. Being a vassal has its advantages, after all.

For large organizations, however, it’s more of a mixed bag. These organizations are used to trusting other companies with critical corporate functions: They’ve been outsourcing their payroll, tax preparation, and legal services for decades. But IT regulations often require audits. Our lords don’t allow vassals to audit them, even if those vassals are themselves large and powerful.

Yet feudal security isn’t without its risks.

Our lords can make mistakes with security, as recently happened with Apple, Facebook, and Photobucket. They can act arbitrarily and capriciously, as Amazon did when it cut off a Kindle user for living in the wrong country. They tether us like serfs; just try to take data from one digital lord to another.

Ultimately, they will always act in their own self-interest, as companies do when they mine our data in order to sell more advertising and make more money. These companies own us, so they can sell us off—again, like serfs—to rival lords…or turn us in to the authorities.

Historically, early feudal arrangements were ad hoc, and the more powerful party would often simply renege on his part of the bargain. Eventually, the arrangements were formalized and standardized: both parties had rights and privileges (things they could do) as well as protections (things they couldn’t do to each other).

Today’s internet feudalism, however, is ad hoc and one-sided. We give companies our data and trust them with our security, but we receive very few assurances of protection in return, and those companies have very few restrictions on what they can do.

This needs to change. There should be limitations on what cloud vendors can do with our data; rights, like the requirement that they delete our data when we want them to; and liabilities when vendors mishandle our data.

Like everything else in security, it’s a trade-off. We need to balance that trade-off. In Europe, it was the rise of the centralized state and the rule of law that undermined the ad hoc feudal system; it provided more security and stability for both lords and vassals. But these days, government has largely abdicated its role in cyberspace, and the result is a return to the feudal relationships of yore.

Perhaps instead of hoping that our Internet-era lords will be sufficiently clever and benevolent—or putting our faith in the Robin Hoods who block phone surveillance and circumvent DRM systems—it’s time we, in our role as governments (both national and international), step in to create the regulatory environments that protect us vassals (and the lords as well). Otherwise, we really are just serfs.

A version of this essay was originally published on Wired.com.

Posted on December 3, 2012 at 7:24 AM

Scary Android Malware Story

This story sounds pretty scary:

Developed by Robert Templeman at the Naval Surface Warfare Center in Indiana and a few buddies from Indiana University, PlaceRaider hijacks your phone’s camera and takes a series of secret photographs, recording the time, and the phone’s orientation and location with each shot. Using that information, it can reliably build a 3D model of your home or office, and let cyber-intruders comb it for personal information like passwords on sticky notes, bank statements lying out on the coffee table, or anything else you might have lying around that could wind up the target of a raid at a later date.

It’s just a demo, of course, but it’s easy to imagine what this could mean in the hands of criminals.

Yes, I get that this is bad. But it seems to be a mashup of two things. One, the increasing technical capability to stitch together a series of photographs into a three-dimensional model. And two, an Android bug that allows someone to remotely and surreptitiously take pictures and then upload them. The first thing isn’t a problem, and it isn’t going away. The second is bad, irrespective of what else is going on.

EDITED TO ADD (10/1): I mistakenly wrote this up as an iPhone story. It’s about the Android phone. Apologies.

Posted on October 1, 2012 at 6:52 AM

NSA's Secure Android Spec

The NSA has released its specification for a secure Android.

One of the interesting things it’s requiring is that all data be tunneled through a secure VPN:

Inter-relationship to Other Elements of the Secure VoIP System

The phone must be a commercial device that supports the ability to pass data over a commercial cellular network. Standard voice phone calls, with the exception of emergency 911 calls, shall not be allowed. The phone must function on US CDMA & GSM networks and OCONUS on GSM networks with the same functionality.

All data communications to/from the mobile device must go through the VPN tunnel to the VPN gateway in the infrastructure; no other communications in or out of the mobile device are permitted.

Applications on the phone additionally encrypt their communications to servers in infrastructure, or to other phones; all those communications must be tunneled through the VPN.

The more I look at mobile security, the more I think a secure tunnel is essential.
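The spec’s rule boils down to a simple policy: a packet may leave or enter the device only if it is wrapped in the VPN tunnel and addressed to the infrastructure gateway. A minimal sketch of that policy as a toy packet filter in Python (the gateway address and the `Packet` fields are hypothetical illustrations, not part of the NSA spec):

```python
from dataclasses import dataclass

VPN_GATEWAY = "10.0.0.1"  # hypothetical address of the infrastructure VPN gateway

@dataclass
class Packet:
    dst: str        # outer destination IP of the packet
    tunneled: bool  # True if the packet is wrapped in the VPN tunnel

def permitted(pkt: Packet) -> bool:
    """Allow a packet only if it is tunneled and addressed to the VPN
    gateway; drop everything else, per the 'no other communications
    in or out of the mobile device' rule."""
    return pkt.tunneled and pkt.dst == VPN_GATEWAY

# Tunneled traffic to the gateway passes; direct cleartext traffic does not.
print(permitted(Packet("10.0.0.1", True)))   # True
print(permitted(Packet("8.8.8.8", False)))   # False
```

Note that application-layer encryption (the second quoted requirement) sits inside this filter: even already-encrypted app traffic must still ride the tunnel to be permitted.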

Posted on March 7, 2012 at 1:35 PM

Mobile Malware Is Increasing

According to a report by Juniper, mobile malware is increasing dramatically.

In 2011, we saw unprecedented growth of mobile malware attacks with a 155 percent increase across all platforms. Most noteworthy was the dramatic growth in Android malware from roughly 400 samples in June to over 13,000 samples by the end of 2011. This amounts to a cumulative increase of 3,325 percent. Notable in these findings is a significant number of malware samples obtained from third-party application stores, which do not enjoy the benefit of protection from Google’s newly announced Android Market scanning techniques.

We also observed a new level of sophistication in many attacks. Malware writers used new and novel ways to exploit vulnerabilities. 2011 saw malware like Droid KungFu, which used encrypted payloads to avoid detection, and Droid Dream, which cleverly disguised itself as a legitimate application—both a sign of things to come.

News story.

I don’t think this is surprising at all. Mobile is the new platform. Mobile is a very intimate platform. It’s where the attackers are going to go.

Posted on February 23, 2012 at 6:27 AM

Carrier IQ Spyware

Spyware on many smart phones monitors your every action, including collecting individual keystrokes. The company that makes and runs this software on behalf of different carriers, Carrier IQ, freaked when a security researcher outed them. It initially claimed it didn’t monitor keystrokes—an easily refuted lie—and threatened to sue the researcher. It took EFF getting involved to get the company to back down. (A good summary of the details is here. This is pretty good, too.)

Carrier IQ is reacting really badly here. Threatening the researcher was a panic reaction, but I think it’s still clinging to the notion that it can keep the details of what it does secret, or hide behind statements such as:

Our customers select which metrics they need to gather based on their business need—such as network planning, customer care, device performance—within the bounds of the agreement they form with their end users.

Or behind the hair-splitting denials it’s been giving to the press.

In response to some questions from PCMag, a Carrier IQ spokeswoman said “we count and summarize performance; we do not record keystrokes, capture screen shots, SMS, email, or record conversations.”

“Our software does not collect the content of messages,” she said.

How then does Carrier IQ explain the video posted by Trevor Eckhart, which showed an Android-based phone running Carrier IQ in the background and grabbing data like encrypted Google searches?

“While ‘security researchers’ have identified that we examine many aspects of a device, our software does not store or transmit what consumers view on their screen or type,” the spokeswoman said. “Just because every application on your phone reads the keyboard does not make every application a key-logging application. Our software measures specific performance metrics that help operators improve the customer experience.”

The spokeswoman said Carrier IQ would record the fact that a text message was sent correctly, for example, but the company “cannot record what the content of the SMS was.” Similarly, Carrier IQ records where you were when a call dropped, but cannot record the conversation, and can determine which applications drain battery life but cannot capture screen shots, she said.

Several things matter here: 1) what data the CarrierIQ app collects on the handset, 2) what data the CarrierIQ app routinely transmits to the carriers, and 3) what data the CarrierIQ app can transmit to the carrier if asked. Can the carrier enable the logging of everything in response to a request from the FBI? We have no idea.

Expect this story to unfold considerably in the coming weeks. Everyone is pointing fingers of blame at everyone else, and Sen. Franken has asked the various companies involved for details.

One more detail is worth mentioning. Apple announced it no longer uses CarrierIQ in iOS5. I’m sure this means that they have their own surveillance software running, not that they’re no longer conducting surveillance on their users.

EDITED TO ADD (12/14): This is an excellent round-up of everything known about CarrierIQ.

Posted on December 5, 2011 at 6:05 AM

Android Malware

The Android platform is where the malware action is:

What happens when anyone can develop and publish an application to the Android Market? A 472% increase in Android malware samples since July 2011. These days, it seems all you need is a developer account (which is relatively easy to anonymize), $25, and you can post your applications.

[…]

In addition to an increase in the volume, the attackers continue to become more sophisticated in the malware they write. For instance, in the early spring, we began seeing Android malware that was capable of leveraging one of several platform vulnerabilities that allowed malware to gain root access on the device, in the background, and then install additional packages to the device to extend the functionality of the malware. Today, just about every piece of malware that is released contains this capability, simply because the vulnerabilities remain prevalent in nearly 90% of Android devices being carried around today.

I believe that smart phones are going to become the primary platform of attack for cybercriminals in the coming years. As the phones become more integrated into people’s lives—smart phone banking, electronic wallets—they’re simply going to become the most valuable device for criminals to go after. And I don’t believe the iPhone will be more secure because of Apple’s rigid policies for the app store.

EDITED TO ADD (11/26): This article is a good debunking of the data I quoted above. And also this:

“A virus of the traditional kind is possible, but not probable. The barriers to spreading such a program from phone to phone are large and difficult enough to traverse when you have legitimate access to the phone, but this isn’t Independence Day, a virus that might work on one device won’t magically spread to the other.”

DiBona is right. While some malware and viruses have tried to make use of Bluetooth and Wi-Fi radios to hop from device to device, it simply doesn’t happen the way security companies want you to think it does.

Of course he’s right. Malware on portable devices isn’t going to look or act the same way as malware on traditional computers. It isn’t going to spread from phone to phone. I’m more worried about Trojans, either on legitimate or illegitimate apps, malware embedded in webpages, fake updates, and so on. A lot of this will involve social engineering the user, but I don’t see that as much of a problem.

But I do see mobile devices as the new target of choice. And I worry much more about privacy violations. Your phone knows your location. Your phone knows who you talk to and—with a recorder—what you say. And when your phone becomes your digital wallet, your phone is going to know a lot more intimate things about you. All of this will be useful to both criminals and marketers, and we’re going to see all sorts of illegal and quasi-legal ways both of those groups will go after that information.

And securing those devices is going to be hard, because we don’t have the same low-level access to these devices we have with computers.

Anti-virus companies are using FUD to sell their products, but there are real risks here. And the time to start figuring out how to solve them is now.

Posted on November 25, 2011 at 6:06 AM

Smartphone Keystroke Logging Using the Motion Sensor

Clever:

“When the user types on the soft keyboard on her smartphone (especially when she holds her phone by hand rather than placing it on a fixed surface), the phone vibrates. We discover that keystroke vibration on touch screens are highly correlated to the keys being typed.”

Applications like TouchLogger could be significant because they bypass protections built into both Android and Apple’s competing iOS that prevent a program from reading keystrokes unless it’s active and receives focus from the screen. It was designed to work on an HTC Evo 4G smartphone. It had an accuracy rate of more than 70 percent on input typed into the number-only soft keyboard of the device. The app worked by using the phone’s accelerometer to gauge the motion of the device each time a soft key was pressed.
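The paper describes its own feature extraction and classifier; as a rough illustration of the underlying idea, here is a toy nearest-centroid classifier. The per-key centroids are invented stand-ins for the average motion readings you would collect during a training phase:

```python
import math

# Hypothetical per-key centroids: average (x, y) accelerometer readings
# observed while tapping each key of a number-only soft keyboard.
CENTROIDS = {
    "1": (-0.8, 0.9), "2": (0.0, 0.9), "3": (0.8, 0.9),
    "4": (-0.8, 0.0), "5": (0.0, 0.0), "6": (0.8, 0.0),
}

def guess_key(sample):
    """Return the key whose training centroid is nearest to this
    accelerometer sample (nearest-centroid classification)."""
    return min(CENTROIDS, key=lambda k: math.dist(sample, CENTROIDS[k]))

print(guess_key((0.75, 0.85)))  # a reading near the "3" centroid -> "3"
```

The point of the sketch is that the accelerometer needs no special permission to read, so a background app can run exactly this kind of inference while another app has keyboard focus.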

Paper here. More articles.

Posted on August 23, 2011 at 2:09 PM

Protecting Private Information on Smart Phones

AppFence is a technology—with a working prototype—that protects personal information on smart phones. It does this by either substituting innocuous information in place of sensitive information or blocking attempts by the application to send the sensitive information over the network.

The significance of systems like AppFence is that they have the potential to change the balance of power in privacy between mobile application developers and users. Today, application developers get to choose what information an application will have access to, and the user faces a take-it-or-leave-it proposition: users must either grant all the permissions requested by the application developer or abandon installation. Take-it-or-leave it offers may make it easier for applications to obtain access to information that users don’t want applications to have. Many applications take advantage of this to gain access to users’ device identifiers and location for behavioral tracking and advertising. Systems like AppFence could make it harder for applications to access these types of information without more explicit consent and cooperation from users.
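The two interventions described above—substitution and exfiltration blocking—can be sketched in a few lines. This is a toy model, not AppFence’s actual implementation; the function names and the fake-IMEI value are invented for illustration:

```python
FAKE_IMEI = "000000000000000"  # innocuous shadow value handed to the app

def read_device_id(real_id: str, shadow: bool) -> str:
    """Substitution: give the application a fake identifier in place
    of the real one, so it still runs but learns nothing sensitive."""
    return FAKE_IMEI if shadow else real_id

def network_send(payload: str, tainted: bool, block: bool) -> bool:
    """Exfiltration blocking: refuse to transmit data derived from
    sensitive sources; returns whether the send went through."""
    return not (tainted and block)

print(read_device_id("490154203237518", shadow=True))           # fake ID
print(network_send("tracking blob", tainted=True, block=True))  # False
```

The design choice between the two matters: substitution keeps apps working at the cost of feeding them lies, while blocking is honest but can break features that legitimately need the network.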

The problem is that the mobile OS providers might not like AppFence. Google probably doesn’t care, but Apple is one of the biggest consumers of iPhone personal information. Right now, the prototype only works on Android, because it requires flashing the phone. In theory, the technology can be made to work on any mobile OS, but good luck getting Apple to agree to it.

Posted on June 24, 2011 at 6:37 AM
