Protecting Private Information on Smart Phones

AppFence is a technology—with a working prototype—that protects personal information on smart phones. It does this by either substituting innocuous information in place of sensitive information or blocking attempts by the application to send the sensitive information over the network.
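To make the idea concrete, here is a toy sketch in Java of the two strategies (all names are made up and this is not AppFence’s actual code; the real prototype reportedly interposes on Android’s own APIs and tracks how sensitive data flows toward the network, which the sketch only approximates by passing labels around explicitly):

    import java.util.Set;

    // Toy illustration of the two strategies: hand back shadow data, or block the send.
    public class ShadowingSketch {
        // Data types the user has chosen to withhold (hypothetical labels).
        private static final Set<String> SHADOWED = Set.of("DEVICE_ID", "LOCATION");

        // Strategy 1: substitute innocuous data for the real value.
        static String deviceIdFor(String realImei) {
            return SHADOWED.contains("DEVICE_ID")
                    ? "000000000000000"   // plausible-looking but meaningless IMEI
                    : realImei;
        }

        // Strategy 2: refuse network sends whose payload derives from shadowed data.
        static boolean allowSend(Set<String> payloadLabels) {
            for (String label : payloadLabels) {
                if (SHADOWED.contains(label)) {
                    return false;         // block rather than leak
                }
            }
            return true;
        }

        public static void main(String[] args) {
            System.out.println(deviceIdFor("358240051111110"));  // prints the shadow IMEI
            System.out.println(allowSend(Set.of("LOCATION")));   // prints false
        }
    }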

The significance of systems like AppFence is that they have the potential to change the balance of power in privacy between mobile application developers and users. Today, application developers get to choose what information an application will have access to, and the user faces a take-it-or-leave-it proposition: users must either grant all the permissions requested by the application developer or abandon installation. Take-it-or-leave-it offers may make it easier for applications to obtain access to information that users don’t want applications to have. Many applications take advantage of this to gain access to users’ device identifiers and location for behavioral tracking and advertising. Systems like AppFence could make it harder for applications to access these types of information without more explicit consent and cooperation from users.

The problem is that the mobile OS providers might not like AppFence. Google probably doesn’t care, but Apple is one of the biggest consumers of iPhone personal information. Right now, the prototype only works on Android, because it requires flashing the phone. In theory, the technology can be made to work on any mobile OS, but good luck getting Apple to agree to it.

Posted on June 24, 2011 at 6:37 AM • 33 Comments

Comments

Peter June 24, 2011 7:34 AM

This sort of technology is overdue in the mobile OS arena. As for Apple going along with it, that has a snowball’s chance in hell.

CMike June 24, 2011 7:42 AM

I wonder if the folks on the Windows Phone 7 team have thought of building in customer privacy as a possible marketing advantage for their platform? Or does the research show that the dumb masses don’t care about privacy?

Anonymous 1 June 24, 2011 7:44 AM

Wouldn’t surprise me all that much if Google put it (or something like it) in a future version of Android.

iJailbreak June 24, 2011 7:50 AM

“The problem is that the mobile OS providers might not like AppFence.”

As Mr. John Perry Barlow famously stated in his response to the Telecom Reform Act:

“Well, fuck them.”

Google won’t have a choice about this due to Android’s core being open-source. Regarding Apple’s stance about not liking it, see my username. There are already firewall apps for completely blocking connections, but nothing as elegant as the ability to lie to a data-mining app.

Looking forward to badly skewing the signal-to-noise ratio in their marketing data.

iJailbreak June 24, 2011 7:52 AM

@Anonymous 1: “Wouldn’t surprise me all that much if Google put it (or something like it) in a future version of Android.”

It would surprise the hell out of me, since harvesting personal data for marketing/advertising is what Google loves most of all. They’re not going to kill their golden goose.

GoogleCares June 24, 2011 7:59 AM

Google’s sole revenue stream is based on collating and selling your information. This would be a much bigger hit on Google than Apple. In contrast, Apple has already gone up (without success) against magazine publishers who wanted to keep user info coming.

Firewall Lover June 24, 2011 8:16 AM

I use the Droidwall app on my rooted DroidX. It’s an easy-to-use firewall that keeps the apps you choose from accessing the network. I put all the games and some widgets behind it, plus some Verizon/Moto crapware apps that might leak info.

Jaime Magiera June 24, 2011 8:29 AM

Apple has added a new sandboxing feature in their Lion operating system, which will likely end up on the iOS side as well (though the media has portrayed this as similar to what iOS already does, it’s not quite the same). Essentially, when an app is submitted to Apple for the store, it is given a set of permission rights to access particular places on the drive and particular types of information in the OS. The app is then labeled, for the user’s benefit, with the permission set it has. If the app tries to access something outside those permissions, the OS throws a warning to the user (or the user can choose to ignore warnings). I think this feature will be carried over into iOS once it has proven itself on the desktop.

Anonymous 1 June 24, 2011 8:39 AM

Oh, I don’t think Google will be too willing to let people block sending data to their servers, but they aren’t the only ones collecting the stuff (and it is very likely that they would actually help with blocking other people’s tracking systems).

Might even help Google corner the advertising market on Android.

stvs June 24, 2011 8:57 AM

“good luck getting Apple to agree to it”

That’s why god^H^H^H^H the devteam gave us jailbreaking.

A related app for jailbroken iPhones is Firewall iP, which lets you review and then block or allow ALL outgoing connections. If you don’t believe you should have to connect to a developer’s site to run your app, block the connection. Likewise, if you don’t want to look at app ads/adware, block those connections too.

Norman June 24, 2011 8:59 AM

WhisperCore is working well if you have a Nexus S or Nexus One. It has included WhisperMonitor (a firewall) since v0.2, and the 0.5 release in the last few days added ‘Selective Permissions’ to spoof info without crashing the app.

N

lazlo June 24, 2011 9:02 AM

My guess would be that the only possibility for Apple would be for them to build their own system and include it with the OS – something that keeps all your data as private as you want it, just between you and Apple.

VidKid June 24, 2011 9:12 AM

“Google probably doesn’t care, but Apple is one of the biggest consumers of iPhone personal information.”

I fail to comprehend this statement. In what way does Apple ‘consume’ personal information that Google does not? Especially considering that Google’s business model is advertising based on reading my email and messages?

V.

SpamPol June 24, 2011 10:55 AM

“but Apple is one of the biggest consumers of iPhone personal information.”

I don’t get it either. How could Apple possibly be ahead of Google when it comes to collecting and exploiting user info? Google is the one trying to track everything you do on the web so they can build proper advertising profiles.

SpamPol June 24, 2011 11:00 AM

” but good luck getting Apple to agree to it.”

Apple will certainly not encourage users to root their phones. Not only is it not at all user-friendly, it would bring more problems than it solves (see the security issues with jailbreaking).

That is, however, completely orthogonal to the issue at hand, namely closely monitoring resource access by apps. Apple already does a limited form of this today with sandboxing, and I don’t see why it couldn’t get stricter in the future.

Remember, on iOS you have clear indicators of which apps accessed your GPS position in the last 24 hours. That could be extended too…

Seiran June 24, 2011 11:15 AM

I posted a rather long and windy comment about the Android security model at:
http://www.schneier.com/blog/archives/2011/01/trojan_steals_c.html

It’s fine and all to focus on privacy-impacting permissions, as these are the most obviously over-used/abused. However, the option of External Storage virtualized containers shouldn’t be forgotten, as it’s only a matter of time before malware starts messing with the SD card.

The latest Gingerbread from Google is also clamping down on power usage. Battery parasites aren’t malicious by nature, but enough poorly written apps are doing it that Google had to do something about it. I’ve read reports that Glympse and Hoccer could halve your runtime just by being installed.

Premium SMS is another one that, thankfully, I see being addressed by some third-party application vendors. But to truly get rid of SMS fraud, they ought to change the whole model to one that’s much more secure. Right now, it’s like the 1-900 world before the 1-900 rules came into effect.

A little off topic but somewhat relevant: My suggestion regarding Premium SMS is that carriers develop and install an application inside the phone that interacts with the SMS billing system to insert a user consent loop. Once the app has registered itself as “enabled” on a handset, all future SMS billing triggers a request to the app over the data channel for confirmation. The app checks to see whether the user has recently texted that number, and asks if they would like to pay. Subscriptions must be manually whitelisted.

There is already a delay before vendors can confirm whether or not an SMS payment was successful and the amount received, so slowing things down with pop-up notifications on the phone won’t break anything. It’s never used for critical payments such as utilities and mortgages anyway. If no response comes from the verifier app or the user within 90 seconds, just deny the payment.

This model can work on phones without a data connection, either by using an app that sends encrypted responses over SMS to a carrier-owned shortcode, or simply by having the cell company text the user: “TelecomMobility: Pay 31415 $5.99 once? Reply ur billing ZIP to allow; BLOCK to deny. ”
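To illustrate the decision logic in that consent loop, here is a rough sketch in Java (all class and method names are made up, the ten-minute “recently texted” window is just an assumption, and a real version would live in the carrier’s billing system):

    import java.time.Duration;
    import java.time.Instant;
    import java.util.Map;
    import java.util.Set;

    // Hypothetical consent check for premium-SMS billing, following the model above.
    public class PremiumSmsConsent {
        private final Set<String> whitelistedSubscriptions;   // shortcodes the user approved by hand
        private final Map<String, Instant> lastOutgoingText;  // shortcode -> when the user last texted it

        public PremiumSmsConsent(Set<String> whitelist, Map<String, Instant> lastTexts) {
            this.whitelistedSubscriptions = whitelist;
            this.lastOutgoingText = lastTexts;
        }

        // Returns true only if the charge should go through; no answer means deny.
        public boolean authorize(String shortcode, boolean userConfirmed, boolean isSubscription) {
            if (isSubscription) {
                // Subscriptions are never auto-approved.
                return whitelistedSubscriptions.contains(shortcode);
            }
            Instant last = lastOutgoingText.get(shortcode);
            boolean recentlyTexted = last != null
                    && Duration.between(last, Instant.now()).compareTo(Duration.ofMinutes(10)) < 0;
            // One-off charge: require a recent user-initiated text AND an explicit confirmation.
            return recentlyTexted && userConfirmed;
        }
    }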

What a wishlist. Mobile security has a long ways to go.

Mark R June 24, 2011 12:25 PM

Apple’s new sandboxing feature sounds like SELinux (except that the vendor sets the rules instead of the user; less control exchanged for avoiding the notorious headaches).

“Selective Permissions” functionality is something I had four years ago on a Nokia Symbian device.

Solutions to these problems are out there… I guess it’s just a question of who’s got the incentive (or dis-incentive) to implement them.

Dirk Praet June 24, 2011 1:43 PM

I like it. But what would really excite me is some app allowing you to spoof/fake pretty much any personal and device information being sent out. Like a random IMEI, modified geo-location data that puts you somewhere in Mongolia, fake caller ID (already exists for the iPhone), contact data pulled from a phone book in Azerbaijan, a keylogger providing dialogues from “Patriot Dames”, etc.

Back in the day, I had a nifty .cgi on most of my websites that, when accessed, served a page with a ton of randomly generated bogus email addresses and links pointing back to the same script. Any email-harvesting bot that didn’t respect robots.txt got caught in it, and I do hope it corrupted more than one spammer’s database.

To cut a long story short: although I have little to hide, it’s just nobody’s business what I do, where I go or how I live my life unless I explicitly choose to give out this information myself. That means Team Big Brother (Facebook, Google, Apple, governments, etc.) just as much as some average Joe with a pen and paper trying to extract such data out of me when hanging out at the bar, which for all practical purposes would be considered very creepy by any other patron too.

Nick P June 24, 2011 1:53 PM

@ Bruce

The description in your blog post of the substitution trick sounds much like the scheme that Mark Currie and his associates came up with around 2009, I think. We’ve been discussing his scheme’s applicability to online banking in the comments of your post “Court ruling on reasonable electronic banking security.” The key difference is that his was a cheap hardware device that protected the SSL connection and did critical-info substitution. This precedent means two things: the scheme has more credibility, and the AppFence folks are late to the party. 😉

NZ June 24, 2011 2:40 PM

@iJailbreak
“Google won’t have a choice about this due to Android’s core being open-source.”

The Android kernel is GPLed, so it will stay open source. Other parts of the OS are open only as long as Google wishes. And handset makers can block flashing of their handsets.

Richard Steven Hack June 24, 2011 4:30 PM

Sounds to me like these phones need “anti-forensics” tools easy enough for users to actually understand and use. Tools that block everything and/or erase everything they don’t want off the phone without their consent. Tools that understand and reach deep into the hidden OS stuff to make sure there’s no privacy violation going on in the background.

Hackers have such tools for the desktop, I’m sure they’ll develop them for the phones eventually. It’s just another arms race.

Richard Steven Hack June 24, 2011 5:12 PM

Relevant article:

LulzSec docs show Ariz. cops’ unhealthy obsession with iPhone
http://www.itworld.com/security/177409/lulzsec-docs-show-ariz-cops-unhealth-obsession-iphone

Quote:

Another document warned about remote-wipe capabilities in iOS v. 3, and recommends arresting officers isolate phones from radio signals in a Farraday bag or “some other nickel, copper and silver plated storage container (see figure 3). The device must be protected from any wireless connection/radio signal even throughout the forensic imaging process.”

End Quote

Timothy Schwer June 24, 2011 5:56 PM

Another +1 on WhisperCore. I’ve been running it for a while now, and it’s really everything I’ve wanted in terms of a mobile security story.

Nick P June 25, 2011 11:11 AM

@ Richard Steven Hack

That’s hilarious. I should leave an iPhone there just to watch them do that shit. Faraday cage? Could police really operate one properly? lol. They should just buy one of those jamming pouches:

Like this…
http://www.dragonext.com/cell-phone-signal-blocker-pouch-bag-with-anti-degaussing-and-anti-radiation-function.html

Although, I’d test some of these things first. “Anti-degaussing” and “anti-radiation”? It can stop gamma rays? Really? (Probably just the alpha particles, maybe beta…)

Anonymous 1 June 26, 2011 1:46 PM

Mobile phone pouches which block the radio waves would also render the phone inoperative (at least as a phone), so if they actually worked they’d be a pretty big waste of money.

If they merely blocked part of the signal, then the phone would just increase power to compensate, reducing battery life and keeping radiation levels constant (at least until it reached maximum power, at which point it drops out).

Of course when you consider that the negative health effects of mobile phones don’t actually exist there’s no point in even trying to reduce the signal (of course it seems easier to make money pandering to baseless fears than through honesty).

Also, given that magnetic stripes these days are pretty much all high-coercivity, you don’t really need to worry about stray magnetic fields wiping them (though a rare-earth magnet right against the card might do it).

If the pouch completely blocks the signal, then it might have some niche uses, like putting mobile phones you don’t know how to turn off in it when you don’t want them to work, but an ordinary metal box should do that job just as well (and even Chief Wiggum should be able to put a phone in a box).

Nick P June 28, 2011 3:25 AM

@ Anonymous 1

“If the pouch completely blocks the signal, then it might have some niche uses, like putting mobile phones you don’t know how to turn off in it when you don’t want them to work”

You can’t know when a mobile phone is truly off without measuring its electrical power consumption and EMF emissions. Many executives are known to take the batteries out of their cell phones when they want true privacy or untraceability; they just make periodic outgoing calls and check their voicemail for missed calls. A working, blocking pouch makes things so much more convenient.

“but an ordinary metal box should do that job just as well”

Metal is a conductor. Unless it’s in a properly grounded Faraday-cage configuration, metal transmits information quite well. How would it “do the job just as well” as a non-conducting, possibly EMF-absorbing material?
