Comments

Richard Steven Hack June 15, 2011 12:27 PM

No surprise, though. Given the state of software development in industry, ANYTHING running software is a security vulnerability by definition and should be treated as such.

Really, the meme “There is no security” needs to be tattooed on the back of everyone’s hand at birth. 🙂

chen June 15, 2011 12:48 PM

So, what does this mean for the image of open-source development? Speaking of open-source, couldn’t the code for these apps be scrutinized?

Dave Piscitello June 15, 2011 12:49 PM

Thanks for the pointer, Bruce.

Apps pose even more of a problem than PC/Mac malware. Far fewer mobile device users think of their phones as computing platforms than PC users do of their computers, and fewer still realize that mobile devices can be infected. I overheard someone chatting on the DC Metro a few weeks back saying “I use my mobile phone for banking, it’s much safer than my PC…”

With consumer attitudes like this, Google and other app-hosting sites have to step up and do a better job of assessing and filtering malicious code, or we’re in for a seriously rough time.

karrde June 15, 2011 1:10 PM

@chen,
While the Linux kernel is open-source, the Google Android toolkit is not.

Thus, any apps which use the Android Toolkit aren’t GPL, since the Android toolkit isn’t.

On the other hand, not many people are that conversant with the difference between Linux and a system using the Linux kernel.

Roflo June 15, 2011 1:11 PM

The title’s a bit misleading.

The malware is found in a few apps, not in Android itself.

Another completely different issue is the statement in the last section: 90% of users are running an outdated kernel, with known vulnerabilities.

@chen: I don’t know if the apps are open-sourced or not, but I’d bet it hurts the image of OSS development.

CLP June 15, 2011 1:33 PM

The takeaway is that an app’s inclusion in Google’s official App Market should not be seen as a guarantee that the app isn’t malicious. I view downloading apps from the App Market as similar to downloading shareware from BBSes in the late 80s and early 90s: most of the programs are going to be okay, but there is some risk of accidentally downloading malware. You have to be careful.

(Perhaps someone will come along and make an Android app store with only thoroughly vetted programs.)

I hate to say this, because I love my Android phone, but I don’t think it’s a good product for users who do not have at least a mild level of technical sophistication.

Brianary June 15, 2011 1:34 PM

The article seems like a screed advocating paternalistic, authoritarian control.

Kevin Peterson June 15, 2011 1:34 PM

According to the article, ‘Carrying titles such as “Angry Birds Rio Unlock,” the apps posed as legitimate programs.’

My concern: is Android secure enough for sophisticated users? I don’t think anything that is convenient enough to achieve significant market share is secure when operated by naive users.

It looks like they operated entirely within the permissions granted when the user installed them, unless I’m missing something.
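For what it’s worth, a rough sketch (standard Android SDK calls; the class and log-tag names are just illustrative) of how one can enumerate that permission envelope for any installed package. Under Android’s install-time model, everything an app requests is granted wholesale when the user taps Install:

```java
import android.content.Context;
import android.content.pm.PackageInfo;
import android.content.pm.PackageManager;
import android.util.Log;

public class PermissionAudit {
    // Logs every permission the named package requested at install time --
    // under the install-time model, these were all granted together.
    static void dumpPermissions(Context ctx, String pkg)
            throws PackageManager.NameNotFoundException {
        PackageManager pm = ctx.getPackageManager();
        PackageInfo info = pm.getPackageInfo(pkg, PackageManager.GET_PERMISSIONS);
        if (info.requestedPermissions != null) {
            for (String perm : info.requestedPermissions) {
                Log.i("PermissionAudit", pkg + " holds " + perm);
            }
        }
    }
}
```

A trojan that only uses what it was granted never trips any runtime check; the user approved the whole list up front.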

Richard Steven Hack June 15, 2011 1:44 PM

Just because a piece of software (ooh, that acronyms as “POS” which has another meaning!) is open source, which means it CAN be scrutinized, doesn’t mean it IS being scrutinized.

The article does say that Google has been pretty proactive about notifying users when they detect malicious apps. You don’t see Microsoft doing that (nor can they given the number of Windows apps in the world).

CLP: “I don’t think it’s a good product for users who do not have at least a mild level of technical sophistication.”

You could say that about all computing devices, as I indicated in my first post. As I said, ANYTHING running software is a security vulnerability by definition, because software at the present stage of software engineering is inherently insecure.

If it’s made by humans to do more than one simple thing, it’s insecure, somehow, some way – and someone will find a way to make it insecure or use it to develop further insecurity. It’s inherent in the nature of human engineering.

Dave: “I use my mobile phone for banking, it’s much safer than my PC…”

They’re right – at THIS point in time. There is far less malware directed at mobiles than at PCs. That’s the same reason various security agencies recommend using Macs or Linux (preferably in a VM) for online banking instead of Windows.

That will change, however, and quickly. Mobile apps will be huge – and the security problems will be huge.

But you’re right that people need to realize what I said: any computing device is insecure, and at the very least must be treated with care when doing sensitive work like banking.

C Wagner June 15, 2011 2:16 PM

@R.S.H:

“But you’re right that people need to realize what I said: any computing device is insecure, and at the very least must be treated with care when doing sensitive work like banking.”

I am not intending to sound sarcastic but how do you treat the computing device with care when doing banking?

You touch the buttons lightly?

I mean you can either do banking with it or not do banking with it. Otherwise it kinda sounds like PR-speak from the corporate IT security department.

Patrick W. Barnes June 15, 2011 3:20 PM

The title is misleading, maybe even sensationalistic. It is important to distinguish between Android and apps running on Android.

@chen et al: Nothing guarantees that Android apps are open source. It seems that most aren’t. If Android were like a typical Linux distribution, the situation would likely be very different.

@Richard Steven Hack: Using a separate system for banking, no matter what the operating system, is a wise practice. Using an OS that is not as heavily targeted as Windows PCs further reduces risk. Using VMs on a potentially compromised host guarantees nothing. Keyloggers and other threats on the host may still be effective against a banking session in a local VM.

One of the big benefits of Android is its openness, and it is that openness that leaves it vulnerable to this type of issue. I do not think the solution is to close off any aspect of the platform. I think user education, stronger vetting by Google and independent researchers, and the possible introduction of finer-grained access controls and stronger privilege separation (potentially including code signing) are all important to defending mobile users. No matter what the platform, as long as users want to run third-party code without any investigation of the application or its source, nothing platform vendors do can protect them completely. We have to find the right balance between security, cost and convenience.
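As a sketch of what “finer-grained access controls” could look like in practice (standard Android SDK calls; the permission string is hypothetical), a component can re-check a specific permission at the moment of use instead of trusting the install-time grant alone:

```java
import android.content.Context;
import android.content.pm.PackageManager;

public class GuardedAction {
    // Refuses the sensitive operation unless the caller explicitly holds
    // the (hypothetical) fine-grained permission, checked at time of use.
    static void sendPremiumSms(Context ctx) {
        if (ctx.checkCallingOrSelfPermission("com.example.permission.PREMIUM_SMS")
                != PackageManager.PERMISSION_GRANTED) {
            throw new SecurityException("caller lacks PREMIUM_SMS permission");
        }
        // ... perform the sensitive operation here ...
    }
}
```

Per-operation checks like this narrow the window a repackaged app can abuse, though they still depend on users understanding what they granted.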

mashiara June 15, 2011 3:21 PM

A shameless plug before I read all the comments:

maemo.org has a nice process for getting third-party (generally open source) applications into the “maemo.org extras” repository, which is enabled by default in the N900 application manager.

A few links:

http://wiki.maemo.org/Extras

http://wiki.maemo.org/Extras-testing#The_extras-testing_QA_queue_.26_you

http://en.wikipedia.org/wiki/Maemo

Full disclosure: I/my company was involved in building many of the maemo.org services.

mashiara June 15, 2011 3:29 PM

And then to hijack the thread: have Clive, Nick P or Richard (Steven Hack) (or anyone else who aspires to be like them) taken a look at Qubes? http://qubes-os.org/

This looks very interesting (and it tries very hard to mitigate the fact that “security via correctness” just is not going to happen).

Tim June 15, 2011 3:42 PM

“While the Linux kernel is open-source, the Google Android toolkit is not. Thus, any apps which use the Android Toolkit aren’t GPL, since the Android toolkit isn’t.”

So so wrong. Android is completely open source.

Alan June 15, 2011 4:21 PM

“So so wrong. Android is completely open source.”

Completely open source in none of the ways that make it useful. If it’s truly open source, show me how to build and install a working kernel. Show me how to integrate patches with mainline. Show me where the mainline code is developed. Google abused the open-source title by applying it to Android. The only thing ‘open’ about Android is that Google occasionally decides to release the software to the world. There’s no guarantee when or how they’ll do it. There’s also no guarantee that the code they release is the same as was used in any phone ROM.

Gweihir June 15, 2011 5:27 PM

Let’s face it: these devices are full-fledged, always-networked computers. You need to be careful what you install on them. On the other hand, with a locked-down “app store” such as Apple uses, people may not get what they want. I think this is neither new nor surprising, and just repeats history.

Personally, I would prefer Debian stable on a phone; that is a whole different breed of environment. But an open software repository targeted at the masses? What do you expect?

The one thing I find a bit disappointing is that Android was not made for easy automated patching. That aspect could have been done better. But time and again, people overestimate the actual software-engineering competence at Google. The “wild horde” developer model fueled by bright people is not cutting it anymore. And for a lot of highly capable and experienced people, Google lost its magic some time ago.

Nick P June 15, 2011 6:40 PM

@ mashiara on Qubes

QubesOS is interesting. It basically mimics the strategies and capabilities of one of the old Orange Book trusted-GUI schemes, but with less assurance. The high point is its strong support for modern functionality like power management. The low point is that it builds on hopelessly insecure foundations and components.

An amusing tidbit: I was once blasted with rhetoric by their project leader because someone quoted one of my posts saying essentially this. I also said that we don’t need another virtualization scheme: it’s a wasted effort. I was hoping the world’s most famous hardware/firmware flaw hunters would help improve the assurance of hardware, but instead they started extending the Xen platform. That led to a little flame war. Here are some links. (Note: do not bring this topic back up on their mailing list, because dragging out flame wars just wastes time that could go to improving their software for the people who use it.)

An innocent beginning
https://groups.google.com/group/qubes-devel/msg/e1e40f51c5c4e3b1

I respond to a dismissal, blasted with rhetoric
https://groups.google.com/group/qubes-devel/browse_thread/thread/6833ac3bb9b9a0d9

Final response called out fallacies. Not as civil…
https://groups.google.com/group/qubes-devel/browse_thread/thread/6ff18a1a677df992

Needless to say, I’ve kind of left them to their project. I stay on the mailing list to keep updated. A recent post indicates they are looking into a high-performance, secure cross-domain transfer mechanism. This is already offered in previous and existing trusted operating systems, including some I linked to. I’d say the project is mainly good for protecting web surfers from common malware and preventing accidental data leaks. The former is already doable just using Linux with Firefox+NoScript+Flashblock or Chrome, and/or Linux live distros.

Still, many people would find it a useful improvement in security over their existing setups, while still being easy to use. I think my points in the debate show clearly that Qubes can’t be considered medium to high assurance. So it’s not a good approach to protecting high-value assets from online attackers. Still gotta buy that kind of security from an oligopoly of vendors and customize it for a system. Using existing open-source, increased-assurance offerings may require significant customization, testing, and integration. A bleak situation for people on a budget.

jammit June 15, 2011 6:49 PM

This seems silly to me, but wouldn’t it be neat to make a secure app? An app that allows you to sandbox certain things, or requires the phone program to ignore all requests from other programs, or perhaps even make all programs run with limited accounts?

tommy June 15, 2011 6:58 PM

The article’s author lost all credibility with me for this statement:

“(Google) has also assembled a brain trust of some of the most respected security researchers in the world. Their work has gone a long way to developing a web browser, a stable of web-based applications, and other services whose security is second to none.”

Which Google is he talking about? The one headquartered in California, USA, Earth, or the one on some other planet?

The browser is known to be full of spyware — how could it not be, when Google’s main business revenue comes from being an advertising agency? They asked if NoScript could be ported to Chrome; the developer agreed, but it was dropped because Google never did give him a suitable API. The thought of all the google-syndication.com, google-analytics.com, and other scripts being blocked, or run with “surrogate” scripts that make the page happy but yield no data:

http://hackademix.net/2009/01/25/surrogate-scripts-vs-google-analytics/

surely killed that idea. They’re being sued by the EU for privacy violations. Gmail gets hacked as much as any other. Etc. Etc.

“The professor said they also contained a backdoor largely made possible by a weakness documented at a security conference 12 months ago that allows Android apps to be surreptitiously updated.”

Why didn’t those attendees scream this to the world, including all the 24/7 cable news channels in the US, instead of letting it remain in academia for a year while haxxors had a field day with it? The fact that it existed in the first place, and that they didn’t fix it until there were mass attacks a year later, speaks for itself. Yeah, nobody has “perfect” security, as R. S. Hack reminds us, but implying that Google is “the best in the world” would be laughable if the results weren’t so sad.

I avoid Google as much as possible, especially their search engine (https://www.scroogle.org is better), their browser, and, assuming that I were forced to make telephone calls on my computer and do Internet stuff on my telephone — n/m, it’s supposed to be the other way around, that’s all. Lots of recent news that no brand of smart phone is safe. I use a dumb one, and not for sensitive purposes.

Plugging electronic holes after the fact is a losing game, and can’t be compared to plugging physical holes, like this:

http://www.amiright.com/parody/70s/abba130.shtml

(Mild discretion advised)

tommy June 15, 2011 7:04 PM

@ Nick P.:

Sorry, I was composing while you posted. If you have Firefox + NoScript, why would you need Flashblock? NS blocks all Flash by default, and permits you to allow it on a per-need basis from sites or authors you trust, in addition to the many other protections it offers.

Dirk Praet June 15, 2011 7:13 PM

AFAIK the first public release of Android was somewhere in 2007. With Honeycomb we are now at version 3.1. Does anyone remember Windows 3.1?

The fact of the matter, and the point I’m trying to make, is that as an OS, Android is far from the “maturity level” Linux, BSD, Solaris, OS X or even Windows have reached today. And even that maturity is no guarantee whatsoever when a system is inadequately configured, as shown by the major havoc groups like LulzSec have recently been wreaking on a variety of outfits, up to the CIA. As previously discussed in other threads, there is still a lot of work to do on the Android security model and on product and application distribution management.

Too many people are blindly buying into marketing hype that wants to make them believe that, by definition, this shiny, all-purpose phone powered by a Google OS must be safe and secure to use. Well, it’s not. Their Android is not a phone but a mini-computer running an operating system that, in my opinion, is just not yet fit to do any trusted computing or transactions with. In its current state of development it still needlessly exposes owners to many of the childhood diseases and security vulnerabilities commonly associated with early versions of most industry-standard OSes. Given that iOS and Android have pretty much cornered the market, it should come as no surprise that criminals will primarily be targeting the one that is the easiest to exploit. Whether the apps are proprietary or open source does not make any difference.

However much I like to toy about with Androids, I’m still sticking to my old Motorola RAZR phone for now.

Nick P June 15, 2011 8:55 PM

@ tommy

Sometimes I have NoScript turned off because I’m not much worried about an infection. An example is when using a LiveCD. In that case, I just use Flashblock to stop many ads, annoying extra content, and, as a side benefit, Flash spyware or malware. It’s also easier to selectively activate content: one mouse click on a blocked Flash symbol that’s located in the place where they probably put the video.

Nick P June 15, 2011 8:58 PM

@ tommy

I also replied to you in that article “25% of…” You said you read some papers and wanted to post some responses. Go ahead and post them there.

tommy June 15, 2011 9:27 PM

@ Nick P.:

I don’t see the harm in leaving NoScript on even if you’re not worried about infection. A lot of malice can happen right there in the browser, including XSS, CSRF, cookie-stealing, etc.

“It’s also easier to selectively activate content: one mouse click on a blocked Flash symbol that’s located in the place where they probably put the video.”

Same with NoScript, although mine is configured to give a confirmation prompt. “Are you sure you want to allow …. ?” But you can just uncheck the box “Always ask for confirmation”, then you have the same one-click allow capability. FWIW.

Saw the post @ 25%, brief reply there; full reply when I can do the articles justice. Itching to get to it!

Nick P June 15, 2011 9:46 PM

@ tommy

Good points there. I’ve pretty much been using the same setup for about two years now. Maybe it’s time to finally stop procrastinating, review the current NS capabilities, and devise a more efficient browsing strategy.

Clive Robinson June 16, 2011 1:53 AM

One thing that does concern me is the attitude towards “code signing” by the article author and some of the researchers.

All code signing does is take a block of code (be it source or executable) and say that on such and such a day somebody used their private key to give an indication of the integrity of the block, nothing else.

So from a technical viewpoint if you have a block of code with a security issue in it “code signing” is not going to do anything about it.
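To make that concrete, a minimal sketch using the standard java.security APIs (the key pair is generated inline just to keep it self-contained). Verification proves only that the bytes are unchanged since signing, not that they are safe to run:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignDemo {
    public static void main(String[] args) throws Exception {
        byte[] code = "pretend this is an app binary".getBytes("UTF-8");

        // A throwaway key pair, standing in for a vendor's signing key.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair pair = kpg.generateKeyPair();

        // Signing binds the private-key holder to this exact byte
        // sequence at this moment in time, and to nothing else.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(code);
        byte[] sig = signer.sign();

        // Verification proves the bytes are unmodified since signing.
        // A signed block containing a vulnerability verifies just as happily.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(code);
        System.out.println("intact: " + verifier.verify(sig));
    }
}
```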

What you need for code signing to work is a bulletproof development and verification process behind it, structured in such a way that it cannot be subverted.

If you think on it for a few seconds you will realise that nobody has such a bulletproof system, for a couple of reasons: firstly, nobody has come up with one yet, and secondly, if they do, we know from experience it will be too expensive and unusable.

From a non-technical aspect, code signing is a disaster waiting to happen to a business. Effectively the expectation of signed code is that “it is good” because “the company has put its reputation on it”.

The minute you sign code or allow others to sign code you are effectively setting yourself up for a fall. To be able to defend yourself you need to have all sorts of processes in place that are correctly audited etc etc etc.

From the legal point of view code signing is a little like putting “best endeavors” into a contract.

And if code signing were put in place and all the signed code had no vulnerabilities to exploit, malicious code writers would simply find a way to go “upstream” of the process and ensure some app did have vulnerabilities.

For code signing (to paraphrase R.S.H.): “They can have no security, so you have to suck it up and cross your fingers”.

Clive Robinson June 16, 2011 3:42 AM

@ mashiara,

“This [QubesOS] looks very interesting (and it tries very hard to mitigate the fact that ‘security via correctness’ just is not going to happen).”

It is interesting and I wish them luck in their endeavours.

However, the result will “not be secure”, only “relatively secure”. This is due to the fact that the underlying system they work on is not secure, thus there will always be a way to do an end run around it (oh, and this also applies to many of the “Trusted Platform” ideas being pushed by those with DRM interests).

We already have one or two general-purpose OSes which are relatively secure (the *BSDs etc.) and they can be further hardened.

But you need to ask yourself what “relative security” means in what context.

Firstly, it does make an attacker’s job much harder, but not impossible or even improbable. So it will not be exploited unless either there is money in it or the majority of commodity OSes are equally secure.

The first case is an example of “directed attacks”, and the current classic example of this was Stuxnet. Somebody was prepared to throw a lot of resources at the attack for specific reasons. QubesOS, like all commodity and many security-hardened systems, will fail to this sort of attack. If you have need of a computing platform in a very high-value area then you should be taking other precautions, which are going to be orders of magnitude more expensive than getting speciality hardware and software.

The second case is, however, of more interest to the general PC user. For a long time people have been advising others to get a “Mac” as it’s “more secure”. It’s actually only “relatively” more secure and, importantly, “in limited use”; thus under the “low hanging fruit” and “mass market share” principles the OS and apps did not get attacked much. So “relative” to the Wintel system it was more secure for an average user.

So “relative security” can be summed up by the old joke about two people running away from a bear, where one stops to tie his shoelaces, and the other says “Why? You can’t outrun a bear”, to which the first says “I don’t need to outrun the bear, I only need to outrun you”.

The same logic applies to other commodity OS and *nix platforms.

I should at this point make the case for “probabilistic security”. The assumption behind it is that “NO OS is secure”; however, you can exploit certain aspects that make the attacker’s job nearly impossible.

The simplest example of this is using “live CD” OSes. The assumption here is not that the OS on the CD is secure (it’s not), but that it takes time to exploit it and you can use this to your advantage.

When you boot from a live CD it is vulnerable but not exploited; provided the user of the live CD does a single high-risk task immediately and quickly, the probability of the system having been exploited is considered small.

However, there are a number of problems with live CD use. The first is that you need a secure way to keep the live CD fully patched and up to date. The second is that the longer the system is turned on, the higher the probability that it will be “randomly exploited”.

Thirdly, what is seldom if ever mentioned is inline as opposed to random attacks. An inline attack is a directed attack between the client and server; these are a general class of attacks of which Man In The Middle is just one. One such attack is to get the live CD user to go not to the bank server but to some other lookalike server, where malware gets installed into the memory of the live CD client prior to any further interaction. Live CDs are generally more susceptible to this because they don’t get patched as frequently as properly maintained client systems (and yes, there is a whole can of worms behind “properly maintained”).

However, the idea behind “probabilistic security” is sound provided you implement it correctly. The issue with inline attacks is addressed by methods that ensure that the system memory is correct at appropriate times. To do this properly requires, at minimum, a security hypervisor running on a separate CPU, which is generally not available on commodity hardware.

However, even when not done properly, checking the memory state in the VM sandbox before sending a packet to the network, or after receiving one, would go a long way to preventing malware infection from either a random or an inline attack.
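As a toy illustration of that check, assuming some hypervisor-provided snapshot of guest memory (no commodity API offers one, which is the point above):

```java
import java.security.MessageDigest;

public class StateCheck {
    private final byte[] knownGoodDigest; // recorded from a known-clean boot

    StateCheck(byte[] knownGoodDigest) {
        this.knownGoodDigest = knownGoodDigest;
    }

    // Gate network I/O on the measured state matching the clean baseline.
    // The snapshot itself would have to come from a hypervisor hook running
    // outside the guest, or the malware could simply lie about it.
    boolean safeToSend(byte[] memorySnapshot) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] current = md.digest(memorySnapshot);
        return MessageDigest.isEqual(current, knownGoodDigest);
    }
}
```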

Danny Moules June 16, 2011 4:56 AM

“Lets face it: These devices are full-fledged always networked computers.”

This. People have been told by Google and Apple that they don’t need to treat their phones with any respect because phones ‘just do stuff’ without administrative intervention. They’ve been educated that a smartphone is a distinct thing from a computer; a self-contained unit somehow different from the operating systems they use every day. They have no appreciation for what a smartphone is, so they’ll happily adopt habits they’d never consider on a PC.

I recently bought a Maemo phone (N900). It is my first foray into smartphones and I’m quite delighted with it. I don’t see why I should purchase a computer system and then want to have 90% of its power hidden from me.

RobertT June 16, 2011 5:20 AM

Secure Android: that’s an oxymoron, right?

I personally have a lot of money riding on the concept of Joe Average embracing the secure smart phone and micro-payment systems. Unfortunately, for some unknown reason, instead of the shiny new “got-to-have-it” secure-by-design system, all I see is a sieve. The very trust model upon which the whole cell phone system is built is chronically insecure, and this system is responsible for updates….

jggimi June 16, 2011 7:35 AM

The Android OS is open source, to a point. Development is closed. Google releases the source code to a particular release (with code names like Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich) at a point in time of their choosing.

Key to these malware problems: applications are delivered to the Market as binary code packages, suitable for direct installation on Android devices.

Any source code hosting, for any purpose, is entirely the responsibility of the developers.

bob June 16, 2011 1:12 PM

Every day a new system comes out with a CPU and soft/firmware doing what had traditionally been done with simple dedicated circuits, thereby opening up a floodgate of new vulnerabilities that the mfr didn’t think about when they decided to ramp up the complexity.

For example, the environmental controls in my new Silverado are probably 107x as complicated as those in the 10-year-old one it replaced; it takes 9x as much work to get the same comfort (assuming the new system is even able to provide the same level of comfort as the old one did – not my experience so far), provides 11 new points of failure, and is probably susceptible to being hacked into providing a gateway into some “security by obscurity” internal bus that can crash the car, clean out your bank account, and make crank OnStar calls to the police in your name.

Andy June 16, 2011 6:14 PM

@Clive Robinson, “the use of RF to cause fault injection attacks ”

You could add to that: using voice recognition software on the phone might allow low- or high-frequency sounds to pwn a phone remotely and quietly 🙂

Nick P June 16, 2011 11:26 PM

@ mashiara

“thanks for the insight”

Anytime! One thing I’ve noticed is that many existing security issues have been solved in the past, even in deployed systems. It’s like history keeps repeating itself in various forms. For instance, just compare the mainframe + terminal model to the current cloud or virtualized server farm + web browser model. We went from centralized big iron + thin clients to decentralized and back to centralized. The problem: we forget the lessons history taught us. Well, the new generations do, probably in part due to information overload.

So, some of us like giving out these “Lessons Learned” to people who show up on this blog hoping that the next project won’t reinvent the wheel or reinvent an old mistake. Sadly, few in Corporate America listen.

Richard Steven Hack June 17, 2011 5:14 PM

C. Wagner: “how do you treat the computing device with care when doing banking?”

I was trying not to be TOO absolute! That’s what I get for nuance!

Seriously, one either has to use the computer for banking or not. If one doesn’t believe the computer is secure, don’t use it. Bank in person. That’s the way we ALL did it before the Internet. If that’s too hard, then following the second part of my meme: Suck it up! And use the computer. But be aware you’re taking a security risk.

Patrick Barnes: “Keyloggers and other threats on the host may still be effective against a banking session in a local VM.”

Yup. Correct. Another way might be to use a machine that is otherwise NEVER connected to either the home network or the Internet except when banking. But how many people will buy a dedicated netbook or tablet for that purpose? And what happens if your router is compromised and redirects you to a fake bank site?

Ahem – there is no security!

Mashiara: I thought I’d heard of Qubes before. That’s Joanna Rutkowska’s project. I haven’t reviewed it in detail. Nick’s comments are insightful.

Clive: “If you think on it for a few seconds you will realise that nobody has such a bullet proof system for a couple of reasons, firstly nobody has come up with one yet, and secondly if they do we know from experiance it will be to expensive and unusable.”

Exactly what I mean when I say the current state of the software industry makes it impossible to do secure software. The industry can’t get half its code to even run without crashing, how the hell is it going to be “secure”?

Also good points on live CDs. Better than VMs. Similar to my suggestion of using an entirely separate machine which is never connected to any network.

Basically users have to do what the military does – keep an entirely separate computer for “classified” work which is never attached to a network at all. Update offline. Then the only thing you have to worry about is the updates being compromised either at the source or during the download (on the machine that IS connected to the network.)

Even that has risks. Remember Mitnick trying to compromise a DEC machine by faking a mailed OS update right down to the DEC packaging?

Nick P June 17, 2011 10:44 PM

@ Richard Steven Hack

“Basically users have to do what the military does – keep an entirely separate computer for “classified” work which is never attached to a network at all. Update offline. Then the only thing you have to worry about is the updates being compromised either at the source or during the download (on the machine that IS connected to the network.)”

This is the best approach. It [mostly] works for the military. It can work for consumers. You need WORM media; CD-ROMs are the most prevalent, and there’s WORM USB now, too.
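A sketch of the one check the networked machine should run before anything goes onto the WORM media: hash the download and compare against a digest the vendor published over an independent channel (file name hypothetical).

```java
import java.io.FileInputStream;
import java.security.MessageDigest;

public class VerifyUpdate {
    public static void main(String[] args) throws Exception {
        // Hash the downloaded update before burning it to WORM media.
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        FileInputStream in = new FileInputStream("update.iso"); // hypothetical
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                md.update(buf, 0, n);
            }
        } finally {
            in.close();
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b & 0xff));
        }
        // Compare against the digest published out-of-band (print, phone,
        // a second independent channel) before trusting the update at all.
        System.out.println("sha256: " + hex);
    }
}
```

This only covers download-time tampering; as Richard notes, a compromise at the source still gets through.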

John Paul Donoghue June 26, 2011 5:03 PM

I was wondering if anybody has come up with the idea of quantum malware (malware that could get past or overpower quantum cryptographic systems).
