Schneier on Security
A blog covering security and security technology.
November 25, 2011
The Android platform is where the malware action is:
What happens when anyone can develop and publish an application to the Android Market? A 472% increase in Android malware samples since July 2011. These days, it seems all you need is a developer account, which is relatively easy to anonymize, and $25, and you can post your applications.
In addition to the increase in volume, attackers continue to become more sophisticated in the malware they write. For instance, in the early spring, we began seeing Android malware capable of leveraging one of several platform vulnerabilities to gain root access on the device, in the background, and then install additional packages to extend the functionality of the malware. Today, just about every piece of malware that is released contains this capability, simply because the vulnerabilities remain prevalent in nearly 90% of Android devices being carried around today.
I believe that smart phones are going to become the primary platform of attack for cybercriminals in the coming years. As the phones become more integrated into people's lives -- smart phone banking, electronic wallets -- they're simply going to become the most valuable device for criminals to go after. And I don't believe the iPhone will be more secure because of Apple's rigid policies for the app store.
EDITED TO ADD (11/26): This article is a good debunking of the data I quoted above. And also this:
"A virus of the traditional kind is possible, but not probable. The barriers to spreading such a program from phone to phone are large and difficult enough to traverse when you have legitimate access to the phone, but this isn't Independence Day; a virus that might work on one device won't magically spread to the other."
DiBona is right. While some malware and viruses have tried to make use of Bluetooth and Wi-Fi radios to hop from device to device, it simply doesn't happen the way security companies want you to think it does.
Of course he's right. Malware on portable devices isn't going to look or act the same way as malware on traditional computers. It isn't going to spread from phone to phone. I'm more worried about Trojans, in either legitimate or illegitimate apps, malware embedded in webpages, fake updates, and so on. A lot of this will involve social engineering the user, but I don't see that as much of a problem.
But I do see mobile devices as the new target of choice. And I worry much more about privacy violations. Your phone knows your location. Your phone knows who you talk to and -- with a recorder -- what you say. And when your phone becomes your digital wallet, your phone is going to know a lot more intimate things about you. All of this will be useful to both criminals and marketers, and we're going to see all sorts of illegal and quasi-legal ways both of those groups will go after that information.
And securing those devices is going to be hard, because we don't have the same low-level access to these devices we have with computers.
Anti-virus companies are using FUD to sell their products, but there are real risks here. And the time to start figuring out how to solve them is now.
Posted on November 25, 2011 at 6:06 AM
Part of the problem with Android devices is that manufacturers seem to be less keen to update the software after the handset has been sold than with the iPhone.
For iOS devices, analytics typically show that within one or two weeks of a new iOS release, the majority of supported handsets are updated.
For Android, Google may release an OS update, users may request the update be delivered to their phone, but handset manufacturers don't necessarily deliver.
So while I don't believe there's much of a difference in the security of each OS/device combination, the update policies of the different manufacturers mean that Apple can issue security fixes much more easily.
472%? Factoring, multiplying, more scribbling.... Hmm, there were initially 18, now there are 85?
The number of diseases in the world is irrelevant - it is the number of infections, and the harm caused by these infections, that is important.
Seems like scaremongering to me.
I think that Apple's gated App Store offers more security than Android's. At the least, they seem a lot quicker and more effective than the Android Marketplace in removing malware from their store once it is detected.
One of the problems in the Android Marketplace is that malware often has names and logos similar to those of legit apps, which is confusing to an end-user who wants Angry Birds the game and not Angry Birds the malware. This is something the App Store's approval process filters out, perhaps not 100% of the time, but close to it.
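The lookalike-name problem described above can be caught mechanically: compare each new submission's name against a catalogue of popular apps and flag near-matches that are suspiciously similar but not identical. A minimal sketch of the idea (the catalogue and the 0.85 threshold are invented for illustration):

```python
from difflib import SequenceMatcher

# Hypothetical catalogue of well-known app names.
POPULAR_APPS = ["Angry Birds", "Facebook", "Twitter"]

def lookalike(candidate, threshold=0.85):
    """Return the popular app this name appears to imitate, or None.

    Flags names that are very similar but not identical -- the
    classic typosquatting pattern (e.g. "Angry Birdz").
    """
    for name in POPULAR_APPS:
        ratio = SequenceMatcher(None, candidate.lower(), name.lower()).ratio()
        if ratio >= threshold and candidate.lower() != name.lower():
            return name
    return None
```

A real store would of course also compare icons and developer identities, but even this crude string check catches the obvious one-letter-off clones.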
Healthy scepticism is the best defence.
I haven't downloaded that many apps. The ones I have chosen generally have lots of other users, were released a long time ago and don't require too many permissions. It's not hard to find the legit Angry Birds, it's in the top 5!
iOS gives the impression that nothing can go wrong at any stage with any element. Users who believe this are in the long run more likely to fall victim to a scam of some sort even if not a malware app.
"A 472% increase in Android malware samples" - yes, but how many actual infections? How much actual lost data?
If the answer is nearly none, then that figure could be 10000% and it still wouldn't matter. What percentage of actual devices are compromised? Isn't it *orders of magnitude* below desktop devices running Windows?
Why don't you believe that app store policies will help protect iPhones? It is *really* hard to get arbitrary code onto an iPhone even with physical access.
Apple also has problems, but they are good at keeping people from looking into the numbers.
I think Apple's system is even worse: it is presented as secure, so no one will look.
So what will be the attack routes?
Possibilities I can think of, off the top of my head:
* Fake apps for download through legitimate app stores
* Web apps that attack through the phone's web browser
* Intercepting TCP or UDP connections used by apps on the phone
* Using fake/stolen certificates to enable some of the above
* Fake SIM cards?
Any other suggestions?
I agree that smartphones will become the most interesting target for online thieves who are looking to gain access to personal data.
However, for other kinds of cybercriminals computers might remain the best platform. For example, if you want to build a DDoS/Spam botnet, computers would be your best bet...
Google already released a response. All your skepticism is validated:
Googler: Android antivirus software is scareware from 'charlatans'
Chris DiBona, Google's open-source programs manager, unleashed his tirade after seeing a press report about "inherent" insecurity of open-source software, which is used not just in Android but also Apple's iOS. He argued that Android, iOS, and Research in Motion's BlackBerry OS don't need antivirus software.
I know, malware is not equal to viruses. But the scaremongers mix them up all the time too.
Malware is the same popularity contest on mobile it is on the desktop, but Android has far higher infection rates as well as more threats - disproportionate to its market share historically. Most of the Android infections are premium SMS trojans, in US, Russia and Asia (I've collated some stats at http://recombu.com/news/... The ability to grab APKs and infect and republish them is a bigger problem for Android; the curated Apple store makes that harder. But the real thing that protects the iPhone is the way jailbreakers 'waste' perfectly good security exploits on jailbreaks rather than on malware ;-)
For starters, all the performance built into smartphones goes to the feature set, and that's purely business common sense. This leaves little computing headroom for running a traditional protection scheme based on signatures... And that's good in a way, because we know it's an outdated approach.
Apple and Google are so busy running against one another for market shares, it's no surprise they *both* downplay the risks of malware on their platform.
Fact remains that if it's easier to develop on Android, and it's a pain to publish on iOS, guess where the baddies will go. Business common sense, on the "other side".
Apple will get itself into trouble eventually and will have to remove some serious malware; it's a matter of probabilities. With the current mechanism (iTunes, mostly), I believe Apple will have a better result on delivery, adding security features such as pre-emptive app removal, once we get there.
Google is in no way better, as they are currently affected and openly try to downplay the situation. Business is no different even when promising to do no evil; it takes an income plunge to react. (Wait for it...)
Maybe we'll have to bring in legislation to settle the liability question one day, just like with credit cards.
Otherwise, the only real player, if you consider them to be such, might be Microsoft. They got hit so badly with malware in the past that they had to get organized, and have done so rather well over the last 10 years. But in order to need security, you need a customer base.
Lastly, I agree the smartphone is the next favored target. With all the different types and purposes of malware, you can come up with a good number of plots that leverage key knowledge of malware design, attack vectors, and opportunities.
Bruce is wrong about the app store. Apps are simply not allowed the same access to the device as Android apps; iOS is sandboxed, and Apple can and does quickly pull apps. None of those things are true for Android. Look into it, Bruce.
I'm very interested in hearing WHY Bruce doesn't think the iPhone/iOS will be more secure - it seems so likely to me that the company examining each app for any malicious behavior/capability will let fewer malware apps into the store than the platform that doesn't.
So? Malware apps are exploiting bugs to gain access beyond what they are permitted to have. To claim that iPhones don't have bugs that will permit similar things is naive.
I think the next likely step in Android security is apps that whitelist or blacklist your other apps, preventing known malware from installing or running.
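The core of such a gatekeeper is simple: hash each package and check it against a feed of known-bad (or, in a stricter mode, known-good) digests. A minimal sketch, assuming a hypothetical feed of SHA-256 hashes of malicious APKs (the byte strings here are fabricated):

```python
import hashlib

# Hypothetical feed of SHA-256 digests of known-malicious APKs.
KNOWN_BAD = {
    hashlib.sha256(b"fake-malware-apk-bytes").hexdigest(),
}

def allow_install(apk_bytes, blacklist=KNOWN_BAD, whitelist=None):
    """Decide whether a package may be installed.

    Blacklist mode (default): reject only known malware.
    Whitelist mode (stricter): reject everything not explicitly approved.
    """
    digest = hashlib.sha256(apk_bytes).hexdigest()
    if whitelist is not None:
        return digest in whitelist
    return digest not in blacklist
```

The obvious weakness, which later comments touch on, is that repackaging an APK changes its hash, so a blacklist always lags behind the malware authors; a whitelist is safer but far less convenient.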
How does that change the fact that by design iOS allows less access to the system. Wouldn't an equal potential for bugs simply (theoretically) maintain the imbalance - simply at a less secure level than intended?
Not to mention that potential bugs don't change the fact that Apple tests for malware & blocks, and Google doesn't, and therefore Apple remains more likely to detect & prevent malware from getting into the store in the first place.
What am I missing?
@greg / @darwin - Uh - sorry, my last comment was aimed at Greg, not Darwin.
@Darwin: Then what do you say about that app of Moxie's with a live exploit that allowed him to send *any* new arbitrary code to the app?
Remember that the iPhone completely lacks any kind of fine-grained permissions of the kind Android has; a flashlight app with ads that use an exploit like that could suddenly access the user's contacts and start doing ANYTHING AND EVERYTHING that an iOS app can do on those devices!
The ONLY things that limit the capabilities of iOS apps are Apple's review process, remote uninstall, and the restrictions implemented on the phone, which are nothing but a "sandbox" (not the Android model). And that "sandbox" supposedly has a lot of structural holes according to iOS devs (due to it being rushed and security not being considered).
If Apple's review policy fails JUST ONCE, an app can easily use one of the many exploits and install rootkits that can't be removed remotely by Apple.
Then what? What do you say about that kind of security?
While I agree that the iOS app store does have a higher barrier for entry, and that can help, there are a few issues.
First, iOS runs on a smaller variety of hardware, meaning that once an exploit is discovered it can be used on a greater number of handsets. The exploits used for rooting phones (like Visionary does for the HTC Vision/G2) generally work on only a restricted set of hardware.
Second, there's no way that I know of to see the permissions an iOS app wants (I don't know if this has changed yet) and no way to selectively deny them (Android flavors like Cyanogen have this capability built-in).
So will we see a rash of virus scanners/blockers written for Android (or Apple), with all the competition and hoopla from the PC market?
@Darwin - apps in Android are sandboxed.
That's a good point. But "consider the source" works both ways. Anti-malware companies certainly have reason to stir up unjustified fear. But on the other hand, Google has an interest in downplaying justified fears.
I'd feel better if the "Nothing to see here, move along" pushback was coming from someone other than Google.
@Kevin - yes but they are sandboxed by the same operating system (and programming team ) that created the vulnerability in the first place!
Sandbox is great in theory, not always in practice.
Not sure why the fuss about Android or iOS in this case. Isn't it what we already have on millions of PCs anyway? The option and freedom to download or buy any software and install it on our computers. Those are computers too. Apple chose the no-freedom option, which won't guarantee no malware anyway; Android is much more open, and because of this I will choose Android any day.
I see a huge chance for Microsoft to turn the game because of this. Windows Phone 7 only runs signed managed code. And that's a good thing.
Bruce, as you imply in your article, where is it written that security is an absolute?
Lest you forget: security is a process, an endless process. Personally, I will continue to use and professionally recommend desktop and consumer-level products that, while there is still room for improvement, reflect a long-standing commitment to security, in both how the products are designed and in how they're actively managed: MacOS.
To do anything else would be sheer insanity.
I think a big problem with the Android Marketplace and associated apps is that even most legitimate apps with millions of users want a lot more permission than is really needed. I think Google's system could help by rejecting apps that request more access to the OS and data than common sense dictates.
For instance, why does a flashlight app need unrestricted Internet access and the ability to modify my SD? Or why did the conference schedule app I downloaded for a trade show want all of the above plus GPS? The whole conference was indoors!
This is probably laziness on the developer side (it's far easier to request any permission that might ever be needed up front), but it certainly makes life better for malware authors when users have been trained to give up full access to even the most trivial applications.
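The common-sense check suggested above can be expressed as a simple set difference between what an app requests and what its category plausibly needs. A sketch of the idea; the per-category baselines here are invented for illustration and a real store would need far richer ones:

```python
# Hypothetical baselines: permissions an app category plausibly needs.
EXPECTED = {
    "flashlight": {"CAMERA"},     # on many handsets the LED is driven via the camera
    "conference": {"INTERNET"},   # fetching the schedule is reasonable; GPS is not
}

def suspicious_permissions(category, requested):
    """Return the permissions requested beyond the category baseline.

    An empty result means the request list looks proportionate;
    anything returned is worth a closer look by a reviewer.
    """
    return set(requested) - EXPECTED.get(category, set())
```

Run against the flashlight example from the comment, this flags exactly the two permissions the commenter objects to.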
@Archlight: actually, it's for the ads...
According to the info on www.groklaw.net on the Barnes & Noble complaint to the ITC against Microsoft, Microsoft is forcing weird update-restricting NDA licenses down the throats of Manufacturers shipping Android devices.
I'm wondering whether that is one of the reasons why Android phones so rarely see OS updates?
The possibilities are almost endless: viruses and malware, worms, and specialty infections that we don't even have names for. Mobile platforms are designed to promiscuously associate with any signal they can recover. They reveal all sorts of secret metadata in just the handshake procedures. Imagine an app that used NFC (available on all next-gen smartphones) to purchase goods, unbeknownst to the phone owner, every time it could.
The mind boggles at the possibilities when so many open and poorly designed (security-wise) interfaces and protocols are lumped together with personal credit information. NFC, WiFi, Bluetooth, GPS, GSM, WCDMA... a veritable stew of leaky RF protocols, each adding to the possibilities for malfeasance.
Alright, there's quite a bit of speculation & misinformation about this. Let's start with the basics. Both iOS and Android are ridiculously easy to exploit compared to a modern Windows 7 or Linux PC. Mobile security is still in its infancy as the platforms themselves are still evolving. iOS malware is harder to spread due to Apple's quality review process, while it's easier to combat malware with Android due to permissions & ability to mod OS for increased security. Also, there haven't been many news reports of massive compromises & data/money loss for either platform. The last point means the current threats are exaggerated and that's quite typical of AV firms.
So, what can we do? Well, complexity will only increase in the firmware, drivers, OS, middleware & application spaces. The best we can do with COTS is isolate security critical code from the untrusted portions, maybe the whole OS. The good news is there are products & techniques available that can do that. They typically combine a robust microkernel, "trusted" apps/features running directly on it, and everything untrusted running in a virtualized OS. The most widespread & mature for mobile phones are the OK Labs OKL4 solutions, such as SecureIT or OK:Android para-VM. INTEGRITY & PikeOS have been used for securing mobile phones this way, as well. Finally, open source initiatives like Nizza Security Architecture & Perseus (Turaya) Security Framework take similar approaches, with code & examples available.
We need to use these efficient, effective mechanisms to decompose these platforms into isolated subsystems carefully interacting. This helped create unbroken systems in the past. It can do it in the present, esp. w/ modern tools. If the crypto, trusted path, whatever is protected by isolation kernels or CPU built-in checks, then it raises the bar very high for malware development. Auditing, containment, and updates are also easier when the untrusted stuff is all in an isolated VM that can be restored from scratch.
It's not that these companies don't have a solution: they are just sitting on quite a few of them. Fortunately, a few companies are working on products that utilize these technologies. INTEGRITY Global Security, Sirrix & OK Labs might already have some for custom orders (likely Enterprise or Gov.t). Anyone that needs something *now* should contact them.
My post ignored the protocol issues in part because they're inherently insecure & might initially require a physical presence to hit. Can't create a secure implementation from an insecure high-level design. Hence, I use isolation, monitoring & checks in my scheme. It's all we have unless we can beat the momentum of the IT industry depending on a popular, defacto or official standard.
The problem with Android handsets is that the carriers have too much power and often delay or block software updates.
I wish more mobile OEMs would follow Apple and take the power back from the carriers back to the handset makers to control updates.
The question of "smart phone" security is actually one of resource limitations and efficiency of use.
For those old enough to remember, Windows 3.1 ran on significantly restricted hardware and as a result ran like a dog on only two legs. It was very prone to falling over and just tottered backwards and forwards, very much incapable of doing anything.
Back then the main security threat was "sneaker nets" with malware in boot sectors etc. on 5.25" and 3.5" floppies; the AV industry was just getting going (anyone remember Dr Solomon?).
Then Win3.11 came along with networking... Most people found their PCs grinding into the dust; hardware upgrades were a major necessity in most cases.
Then network malware came along... and AV software got put into "always on" mode as opposed to "scan on disk change", and the PCs were crippled yet again.
Back then we accepted that sort of behaviour; we don't now, and smart phones daren't spoil the user experience, otherwise every review will slate them and they won't sell.
The problem is that smart phones are resource limited, and you have to "rob Peter to pay Paul", where Paul is the UI and Peter is security.
However, under this is another issue: efficiency. It is well known in the EmSec community that it is so difficult to get security and efficiency together that it's often judged not worth the bother.
Basically, up to now, security plus efficiency has meant very skilled hand coding to avoid opening up side channels and other vulnerabilities. Hand coding, skilled or otherwise, is just not going to happen on commercial products; we just don't have the time or the money.
It's why, to the usual maxim of "security or usability", I add "security or efficiency". They effectively make a triangle, with the "easy life" being in the middle at zero security, zero efficiency, and zero usability. The further you stray from the centre, the harder life gets.
The efficiency issue has mainly been obviated on PCs by throwing more resources at the problem, to the point where for most users the utilisation of their PC's resources is down in the very low percentages even when they are doing things like opening and closing apps.
Now, as Nick P has pointed out, there are developing methods by which we can start to get security without having to hand code software; however, without the appropriate resources this still has an overhead, that is, it does unavoidably impinge on efficiency.
Likewise, as RobertT has pointed out, the user experience demands the "easy life" for users, and thus the systems are designed to be very, very promiscuous. This further unavoidably swallows up resources and impinges on efficiency.
Smart phones are currently nowhere close to having resources to burn, and the UI is given a very, very high priority, so don't expect the platforms to be anything close to secure for quite some time to come (if ever).
So arguing that iOS is better than Android, or the other way around, is the equivalent of arguing over which leaky bucket is better; the simple answer is neither, because over the long haul both buckets are going to be empty when you get there.
Oh, and as a final note: iOS might use code signing and review by Apple, but so what?
Both are unreliable, easily avoidable non-security technologies; look at the industry history.
Code review is generally done by second- or third-rate programmers, because the commercial software industry demands that the best programmers develop new features as fast as possible. The code review process in the majority of commercial software shops must not get in the way of code release timescales. Thus the code reviews in general just don't stand a chance of picking up anything other than the most blatantly obvious bugs, and the best programmers are going to run circles around the whole process if they want to (and have).
Code signing is not really a software or security process; all it does is act as an unreliable witness to the unchanged nature of the code from the point of signing. Thus, firstly, any bugs or deliberately installed "security features" introduced before signing will still be there after signing. Secondly, the signing relies on a very, very short string of binary digits; securing this string of digits is a security nightmare, and I know of no commercial software shop that takes the required precautions to protect this "key to the kingdom's reputation".
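The signing point can be demonstrated in a few lines. Here a keyed HMAC stands in for real public-key signing (an assumption made purely to keep the sketch self-contained): the signature verifies happily no matter what the code does, because it only attests that the bytes haven't changed since signing.

```python
import hashlib
import hmac

# The "key to the kingdom's reputation": if this leaks, anyone can sign.
SIGNING_KEY = b"hypothetical-vendor-signing-key"

def sign(code: bytes) -> bytes:
    """Produce a signature over the code bytes (HMAC stand-in)."""
    return hmac.new(SIGNING_KEY, code, hashlib.sha256).digest()

def verify(code: bytes, signature: bytes) -> bool:
    """Check that the code is byte-identical to what was signed."""
    return hmac.compare_digest(sign(code), signature)

# A backdoor present BEFORE signing is faithfully preserved by signing.
backdoored = b"def run():\n    steal_contacts()  # malicious, but pre-signing\n"
sig = sign(backdoored)
```

Verification tells you only that nothing changed after signing: `verify(backdoored, sig)` is true, while flipping a single byte of the code makes it false. The malice was signed right along with the code.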
Security comes from careful, considered, well-tested, well-reviewed, and thus adequately resourced design. Likewise, the actual product needs to be carefully designed, well tested, and adequately resourced.
However, there is a catch: software, even at the compiled machine-code level, is in reality very high level. Below this is the CPU microcode; beneath this, the logical state-machine design; and below this, the arrangement of logical blocks consisting of sub-blocks, then logic gates, and the actual solid-state switches. All of which is "wired up" via the invisible layers within the System-on-a-Chip ICs we use as the fundamentals of the product design.
Most if not all of the base hardware is developed and produced in facilities in foreign countries, most of which we don't actually trust (think China).
Just accept the fact that smart phones are never, ever going to be even remotely secure as delivered from the supply chain, and work out how to mitigate this appropriately...
If Apple is supposed to have an "App quality review process" which is supposed to weed out all malware with nasty hidden features, how is it possible that the broken GMail app with a blatantly visible flaw was published? If something like that made it through the process, how can anyone be confident something nastier can't easily slip through as well?
It seems to me that the obvious malware vectors against smartphones will be web-based. If an ecosystem offers a trusted alternative to websites for sensitive operations (e.g. the mint app vs. the mint website) and users tend to use it then that platform will probably be more secure.
@NathanielL — if Apple allows a rootkit installing app into the App Store then it will, at worst, compromise the phones of the people who install the app before Apple figures it out. How many will that be? And, Apple will know precisely who has been affected and can send them to the Genius bar to have their phones cleaned.
Apple pays a bunch of people to check apps before letting them in the store. It's not perfect, but it's better than a free-for-all. Apple has sandboxing and is moving towards granular sandboxing (apps only get the privileges they ask for). It's not perfect, but... you get the idea. Apple knows who its customers are. Apple offers excellent person-to-person customer support. Apple takes responsibility for and stands behind its products. This sounds to me like multiple-layer, no-single-point-of-failure security.
Google's model is to build a machine in the clouds and wait for money to come out of it. If the machine randomly approves malware then maybe they'll fix it. If your phone gets hosed then good luck figuring out whom to blame or finding someone to help you fix it.
Let me give a simple analogy to what jailbreaking is (equivalent) to:
Let's say you live in a dangerous neighbourhood. So, you call a security company to install cameras, alarms, etc.
All's fine. They're putting in their SYSTEM, that they've DEVELOPED to PREVENT problems.
But you, the homeowner, decide to muck around with it; maybe you like to keep your windows (pun intended) open when you're out, while the system needs everything closed when you leave the premises in order to function optimally.
Then, your home gets broken into - through that window you left open.
Yes, I know how 'simplistic' this sounds, because that's how simplistic it IS.
I understand why someone might jailbreak their iOS device, BUT, by doing so, you inherently break the security level the iOS HAS.
If you do such a thing, and, you DO (you sooner or later, WILL), it's NOT the iOS's fault. It's YOURS.
I'm JUST speaking about jailbreaking. I'm NOT talking about such things as Charlie Miller's proof-of-concept 'evil app.'
However, if Mr. Miller's example IS possible, then I believe Apple will be prudent enough to develop possible safeguards.
In other words, as long as you maintain the iOS's integrity, you should be less susceptible to problems, as the iOS itself is properly updated, and maintained.
The same cannot be said of Android's many iterations. THAT job is the handset developer's, and complaints about the flaws WITHIN that Android variation should be directed at THEM, specifically, and NOT at Google.
"If you do such a thing, and, you DO (you sooner or later, WILL), it's NOT the iOS's fault. It's YOURS"
Err, no, it's Apple's, for not providing a sufficiently robust OS; and if some reports are to be believed, Apple quite deliberately weakened the security underlying iOS to make their life easier.
The point about only being able to get your apps through the Apple market is twofold:
1, Apple gets a nice big fat chunk of the action (for which they do very little)
2, They can prevent apps that are better than Apple's own native apps from being sold
Thus the only security Apple are actually interested in is the security of the "tied-market" income.
Don't fall into the marketing talk that a revenue protection mechanism is actually a security mechanism and you should be grateful for it, it's the same tired argument that is espoused over the likes of DRM re-dressed as "secure boot".
I could equally argue, as others have, that the Android sandbox is superior in a number of ways, but I won't, simply because it's pointless.
As I pointed out in my earlier post, neither Apple's iOS nor Google's Android is sufficiently secure to be of serious use for the applications people are currently talking about (digital wallets with near-field payment systems). Arguing one is better than the other is as pointless as discussing how your "fantasy football team" is better than somebody else's; I believe that in the US people refer to such activities as "pissing contests".
For what it's worth, I don't believe we can make smart phones sufficiently secure; we've failed with desktop PCs, so why on earth should we think we can do better with smart phones?
The simple fact is that a smart phone, for better or worse, is effectively a static target, while the attackers are highly mobile and intelligent adversaries who are going to seek out the next lowest-hanging-fruit target, be it the technology or the human holding it.
The technology is currently "not fit for purpose" as a "mobile phone" or "web browser" for a whole host of reasons, the most important being that neither was designed for the tasks it has been "overloaded" with. HTML and SMS were never designed for the sort of authentication required for Internet banking; just continuing to "overload" an already defective technology is only going to make things worse.
The solution is identify the key elements of security and abstract them out of the current designs.
One such area is authentication: not just of the device, nor of the user or communications channel, but of each transaction. And due to the weakness of the current and future designs, it needs to be not just abstracted out but actually removed from the existing model and properly tokenised in another device, with the user acting as the sole communication path between the token and the smart phone.
Yes, this is going to be far from convenient for the user, but it's going to be one heck of a sight less convenient for any attacker aiming to steal from the user.
However this is but one of many many problems that need to be solved as Bruce has noted above....
Near field communication, another security horror story in the making. More convenience over even less security.
"...securing those devices is going to be hard, because we don't have the same low level of access to these devices that we have with computers."
This is the point at which "Program Rewriting" - that is, automatically rewriting the code to produce a secure version - appears to be a good solution.
Apple doesn't thoroughly examine them all. I'm sure they run a suite of tests against apps, and that's probably 99% of the screening. They probably test more against their terms of service anyway, but that is purely speculation.
As one of the earlier commenters points out, lots of Android apps ask for tons of permissions so that they can deliver the ads that pay for them.
Wouldn't it help to add a special permissions label that handles ad delivery, rather than making all ad-supported apps demand full Internet access? I tend to read "internet access" as pretty much synonymous with "own everything."
One advantage of the Apple model is that it doesn't rely on a bunch of non-technical people poring over difficult-to-understand permissions specifications. The Android problem is exacerbated by the fact that we have all been carefully trained to click through such things by years of exposure to unreadable gobbledygook EULAs.
@Tonio Loewald: That's bad enough. Are you sure Apple will find out? The apps could very well carry the sneaky rootkit code for one update, and then the next could be "cleaned". To track the source of the rootkit, you'd actually have to detect it first on many phones and try to find something in common between them (since a rootkit can cover its tracks).
And in fact, the app does not have to contain the malicious code, as Moxie proved. All it needs is a subtle exploit that allows the malicious developer to send the malicious code to the phone after installation. By not sending the malware to all phones with the app (and specifically not to phones belonging to Apple itself), there's a high chance of avoiding detection if done right.
The only way to prevent this is higher security in the platform itself. Given that the iPhone has been jailbroken more than once through exploits in PDF font handling, in such a way that just visiting a hacked site is enough to pick up malware without knowing it, I don't really trust the security architecture of the iPhone. Nearly identical remote exploits twice? Really? You didn't learn anything the first time around, Apple?
As Clive Robinson said, code review is not always reliable. You need somebody doing it who knows security and has an incentive to get it right, every time. A lot of the process also needs to be automated, using fuzzing and various types of code analysis. Then the results must be analyzed by humans who also look at the code themselves.
A secure code review of a single app the size of most games could take several people months.
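For readers unfamiliar with the technique mentioned above, here is a minimal illustration of mutation fuzzing against a toy parser. The parser and its bug are invented for the example; real screening pipelines use far more sophisticated tooling, coverage feedback, and crash triage.

```python
# Minimal mutation-fuzzing sketch (illustrative only).
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Flip a few random bytes of a seed input."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def parse_record(data: bytes) -> int:
    """Toy parser under test: first byte says where the payload ends."""
    length = data[0]
    return data[1 + length]  # bug: trusts the length byte, can index out of range

def fuzz(seed: bytes, iterations: int = 1000) -> list[bytes]:
    """Feed mutated inputs to the parser, collecting any that raise."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    crashes = []
    for _ in range(iterations):
        case = mutate(seed, rng)
        try:
            parse_record(case)
        except Exception:
            crashes.append(case)  # a crash input worth triaging by hand
    return crashes

crashes = fuzz(b"\x03abcd")
```

Each crashing input is then handed to a human, which is exactly the division of labor the comment describes: automation finds candidates, people judge them.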
@U.N. Owen: No, it's more like this: You buy a house with an existing security system. You find flaws that criminals can exploit. Instead of letting them be, you exploit them yourself, to make modifications to the house that don't interfere with the system but weren't allowed anyway. Often you can also fix the exploit yourself, just as only jailbroken iPhones could use the tool that intercepted all PDFs before they were opened, back when the PDF exploit was still wide open. That means that ONLY jailbroken phones COULD BE SECURE against one of the worst types of remote exploit out there.
Apropos Android malware supposedly running inside of a sandbox...
I am not sure about that myself, but according to Trevor Eckhart (http://androidsecuritytest.com/features/logs-and-services/loggers/carrieriq/), Android phones sold in the USA have pre-installed spyware from CarrierIQ.
According to Trevor, the software from CarrierIQ launches automatically when the phone is turned on and cannot be disabled (I guess it is part of the firmware). The software supposedly saves a copy of all SMS messages received by the user and any keystrokes entered by the user. It also forwards the SMS messages to CarrierIQ.
I have not looked into these allegations myself. Trevor has posted a video at his site where he explains all of this, but that site is currently down.
The CarrierIQ company of course denies that their app does any spying.
...and sorry, forgot one more thing: CarrierIQ did post a counter-statement in which they deny the spying, here:
I was in a meeting a few days ago discussing the Android security problem. The conclusion (with one dissenter) was that specialized hardware was required, with inbuilt secret encryption algorithms and fancy TRNGs and PUFs. It seemed futile to point out that IF you cannot ensure the security of the whole application stack, THEN adding security to the link hardware is pointless. However, pointing out the futility of additional hardware to a chip maker is a pointless exercise, especially when you have no alternative solution.
Governments are demanding backdoors; banks are demanding specialized apps with fancy GUIs; phone makers are demanding that the chip guys produce full working systems (which reduces the phone makers' R&D), and BTW they also want lower costs (which forces the chip guys to find coding teams in India and China); advertisers and app developers want full access to provide a more "complete service" (clean out your bank accounts?). The most confusing aspect is that customers are paying a premium to be near the front of the queue.
Oh well... what's the expression? "A fool and his money...."
RobertT, I hear what you are saying, but the issue(s) would be diminished if the Android ecosystem were more managed (akin to the iPhone's).
Here is a recent article from Ars Technica:
Researcher demos threat of "transparent" smartphone botnets
In a presentation at TakeDownCon in Las Vegas today, security researcher Georgia Weidman demonstrated how malware on smartphones could be used to create smartphone "botnets" that could be used in the same way as PC botnets, providing hackers with a way to insert code between the operating system's security layers and the cell network. In an interview with Ars Technica, Weidman said that the approaches used by Carrier IQ developers to create phone monitoring software could be adopted by hackers as well to create botnets that could silently steal users' data, or send data without users' knowledge. "From what I've seen in Carrier IQ, they just didn't think about what they were going to do," Weidman said. "But malware writers are going to take advantage of those techniques."
I agree with you @zorro.
It's not about the actual number of infections, Paul, it's the RATE at which they are rising. Assume the rate itself is stable (which is unlikely). In one year, it won't be 85 - it'll be 1900, and that's going off your low estimate of 18 to begin with. Say it wasn't 18 but a more probable 100 victims to start with, six months ago. Today, it'd be 472. One year from now, it'd be 10,500. And that's assuming the rate of change stays the same, as if the Android platform isn't going to catch the eye of even more malware developers. In another year, it'll be 237,253. Everything starts somewhere, Paul.
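For what it's worth, the figures above follow from reading the quoted "472% increase" as a roughly 4.72x multiplier every six months and compounding it forward:

```python
# Reproducing the comment's back-of-the-envelope growth projection.
# Assumption (the comment's, not a measured fact): a constant ~4.72x
# increase per half-year.
GROWTH = 4.72

def project(start: float, half_years: int) -> int:
    """Compound the half-yearly growth factor forward from a starting count."""
    return round(start * GROWTH ** half_years)

# Starting from 18 victims six months ago:
print(project(18, 1))   # today: 85
print(project(18, 3))   # one year from now: 1893 (the comment's "1900")
# Starting from the "more probable" 100 victims:
print(project(100, 1))  # today: 472
print(project(100, 3))  # one year from now: 10515 (the comment's "10,500")
```

The comment's rounded figures match this compounding to within rounding error; the real point is how fast any exponential trend dwarfs its starting value.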
By the way, I am one of those "18," and from my experience, this malware thing is a lot less complicated than many are making it out to be. I was charged over $300 via GoogleCheckout, which, of course, is linked to the Android Market. Luckily, I monitor my bank accounts regularly and was able to catch it before they charged more.
It's all really simple. All you need... all anyone needs is the username and password that you use for Android. That's it. Honestly, that doesn't sound like such a hard task for malware developers (though I assume they do face obstacles in getting this info).
Think about why this is all they need: once you have someone else's Google/Android login and password, you grab your phone and set it up with those credentials. Now you can purchase apps using that person's account... which is good for you if you're the developer of said apps... the money goes straight to you. Even without a phone, they can go online from a computer and do the same, or use GoogleCheckout to purchase countless things from themselves with your money.
I reported my case to the IC3 (and filed a funds dispute with my bank and multiple complaints with Google) and was in touch with a local official. This theory was easy enough to test, and he was surprised at its efficacy. In my case, the perp went the phone route and attached a device to my account, using his device to charge me for ridiculous amounts of digital goods. The device is still showing up in my Google accounts panel; I've contacted Google multiple times about it, and they have not given me a way to "boot" the person's device (as you will find out with enough browsing, there is no way to take a device off of your account online). Here's where there's a serious lack of Google security. If it's so easy to add a device to my account, I want a simple email saying "you've just added your new droid to your Google account," so I can be like "wtf? I didn't add anything," and thus be able to go online, boot the device, change my passwords, etc.
It's pretty much like someone getting the credentials for your PayPal account... only people don't yet realize that after purchasing and setting up an Android phone with Market purchasing capability, their Google account is just like a PayPal account. I didn't even know about this GoogleCheckout stuff being set up with my bank card. I admit I didn't know exactly where my card info was stored for the purpose of purchasing apps from the Android Market, but I did not think it could be accessed so easily, and I did not think it'd be linked to my Google account (hint: it's the same login and password as your Gmail... cha-ching).
The hard part for perpetrators is finding ways to steal your login and password. I'm not much of a techie, so I wouldn't even begin to know the process involved in using apps to gather info, but I am confident that some minds will inevitably find their way around all obstacles. It's only a matter of time before that process is leaked and spread. I do not yet know which app my login creds were swiped from, and I don't even know if programs like Lookout Security will be able to identify and block such an app.
I do know that this happened to more than 18 people. I've seen enough threads where people's Google accounts were compromised right after purchasing and setting up an Android phone - exactly my case. It all happened within days of getting my first ever smartphone. Seeing as I've found the same story from quite a few others, I think my phone may have come pre-installed with something that sends out my info. If not, I guess I just downloaded a malicious app in a short amount of time. Those freebie games are popular and are probably a good disguise.
The only solution I was given for the moment was to take all my card info off of GoogleCheckout, change my passwords, and use the new 2-layer security system (gee, I wonder why they rolled that one out). I'm still working with the IC3 to identify the device (stupidly enough, there's a device ID showing up on my account). But no word yet on anything further.
If you don't buy apps, make sure you check your Google accounts panel, go through everything on that page, and unlink your financial account info. Also check your linked devices to see if there are any that aren't yours.
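The "2-layer security system" mentioned above is Google's 2-step verification, typically based on time-based one-time passwords (TOTP, RFC 6238). A minimal sketch of how such codes are derived, illustrative only - use a vetted authenticator in practice:

```python
# Sketch of TOTP (RFC 6238) code generation, the mechanism behind most
# 2-step verification apps. Illustrative only; not a hardened implementation.
import hmac, hashlib, struct, time
from typing import Optional

def totp(secret: bytes, timestamp: Optional[float] = None, step: int = 30) -> str:
    """Derive the 6-digit code for the current 30-second window."""
    now = time.time() if timestamp is None else timestamp
    counter = int(now // step)                   # which 30-second window we're in
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

# The secret is shared once at enrollment (e.g., via a QR code). After that,
# a stolen password alone is not enough: the attacker also needs the current
# code, which changes every 30 seconds.
code = totp(b"shared-secret-from-enrollment")
```

This is exactly why the commenter's advice helps: a purchase attempt with leaked credentials fails unless the attacker also holds the second factor.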
For sure, Android is where the malware action is right now! Some months ago I wrote an article about exploring Android malware that could be of interest here.
Please read it at http://www.simonroses.com/...
I agree that the smartphone arena is going to be the attackers' playground for
Simon Roses Femerling
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.