Malicious Barcode Scanner App

Interesting story about a barcode scanner app that has been pushing malware onto Android phones. The app is called Barcode Scanner. It’s been around since 2017 and is owned by the Ukrainian company Lavabird Ltd. But a December 2020 update included some new features:

However, a rash of malicious activity was recently traced back to the app. Users began noticing something weird going on with their phones: their default browsers kept getting hijacked and redirected to random advertisements, seemingly out of nowhere.

Generally, when this sort of thing happens it’s because the app was recently sold. That’s not the case here.

It is frightening that with one update an app can turn malicious while going under the radar of Google Play Protect. It is baffling to me that an app developer with a popular app would turn it into malware. Was this the scheme all along, to have an app lie dormant, waiting to strike after it reaches popularity? I guess we will never know.

Posted on February 16, 2021 at 6:13 AM • 28 Comments

Comments

Pete February 16, 2021 8:06 AM

Could be that someone has compromised the developer’s machines and pushed up a malicious version. Ukraine is/was under a country-wide assault of digital systems. This could be a result of that.

Or, one of the many levels of the supply chain?

Cley Faye February 16, 2021 9:08 AM

That’s interesting. When I first saw reports about this I looked into my phone and sure enough, I have an app called “Barcode Scanner” installed. Even though it has a different icon, the interface is strikingly similar, with the buttons positioned and named the same way, the transparent overlay being the same, etc.

I wonder if that’s not a case similar to domain squatting, where people carelessly install one instead of the other, and once the userbase is large enough it can start doing shady things.

Clive Robinson February 16, 2021 9:39 AM

@ Pete,

Could be that someone has compromised the developer’s machines and pushed up a malicious version.

If the article has it correctly, then it was not accidental in that an SDK library was augmented. The article indicates that quite some work was done to hide the malicious code.

Whilst this does not rule out a third party, it does make it less likely, because of the level of work involved.

Clive Robinson February 16, 2021 9:49 AM

@ ALL,

This is yet another nail in the coffin of “walled garden safety”.

The idea was originally that, in exchange for control being taken from the device purchaser and vested instead in the operating system owner, the purchaser would gain “security” against attack by malicious software.

Back when the idea was originally suggested, people indicated that it was not possible to do[1].

However the likes of Google got their way, and now the “proof” is coming home to roost by the flock.

[1] This was actually proved just under a century ago before electronic computers existed. Look up Alan Turing and “The Halting Problem”. Put simply he proved that you could not tell if a program would halt or not by examining the program or for that matter running it. Extending the notion of halting to another form of malicious activity is not exactly difficult.
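Clive’s footnote can be made concrete with a toy sketch. This is an illustration of Turing’s diagonal argument, not anyone’s real code: assume someone ships a claimed halting oracle `halts(f, x)` (here, a hypothetical one that always predicts “does not halt”), and build a program that does the opposite of whatever the oracle says about it.

```python
def halts(f, x):
    # Any claimed halting oracle; this hypothetical one always
    # predicts "will not halt" for every program and input.
    return False

def paradox(f):
    # Deliberately does the opposite of the oracle's prediction:
    # halts exactly when halts(f, f) claims f(f) does not halt.
    if halts(f, f):
        while True:   # predicted to halt, so loop forever
            pass
    return "halted"   # predicted to loop, so halt immediately

# The oracle predicted paradox(paradox) would never halt, yet it does.
prediction = halts(paradox, paradox)   # False: "will not halt"
result = paradox(paradox)              # halts immediately
assert prediction is False and result == "halted"
```

Whatever implementation of `halts` you substitute, `paradox` contradicts its prediction about itself; that is the whole of the impossibility argument.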

collins February 16, 2021 11:55 AM

so there is really no way to fully secure an active cellfone.

using a cellfone is inherently risky.

one can only lessen the risk.

Android devices seem the highest risk.

minimizing installed apps helps, but most all cellfone sellers install many non-deletable and unnecessary applications.

minimizing personal data on the cellfone helps, but also reduces functionality.
(perhaps separate cellfones for different purposes ?)

custom built cellfones probably improve security much, but are very expensive — and one must still trust 3rd Party software and hardware.

MikeA February 16, 2021 12:33 PM

My ideal for a “secure” cellphone would be a minimal phone (voice and maybe SMS) that allowed a “dial up networking” option, preferably via a minimal serial line. Other than adding SMS, it would be like the AMPS phones with a DB9 and Hayes commands that I used for remote data collection back in the day. That way the (as Clive puts it) Security Endpoints would be outside the (typically backdoored, if not now, soon) Communications Endpoints.

Serial rather than USB to shrink the attack surface. Bluetooth just not acceptable (even if your smartphone allows it at all in any meaningful way) Wifi equally iffy.

If one must watch too many cat videos for a dial-up grade connection, consider also getting a landfill android phone, purchased with cash by a homeless person and passed via a known CCTV-free dead-drop. And take out the battery while on the subway.

Is anything like this available? Without signing a deal with the Devil?

lurker February 16, 2021 1:19 PM

@collins
so there is really no way to fully secure an active cellfone.

using a cellfone is inherently risky.

one can only lessen the risk.

…but most all cellfone sellers install many non-deletable and unnecessary applications.

Agreed, but the subject problem of this story is an app that the user installed, then blindly updated with a malicious update. I have an app called Barcode Scanner. It has a different logo, and hasn’t had an update since July 2018. Whew! I might be mistaken, but I prefer to think that it’s because the app was made right first time. After reading the details of why updates are offered, I have a growing list of apps that are not updated.

I use my phone as a “computer in my pocket”. The fact it also has a phone, sms &c. is just an incidental nuisance. I started with an iPod touch and a burner Nokia, but Apple just kept making it harder to use the iPod…

Hugh A. February 16, 2021 3:27 PM

Users on Gizmodo are saying there is an older (meaning Lavabird copied it, not the other way around), almost identical app with the exact same name, “Barcode Scanner”, and almost identical interface which has -NOT- exhibited the malware behavior. Reportedly, the reviewers of this other app are slamming it with negative ratings and reviews, mistaking it for the Lavabird one. The clean version appears to be by “ZXing Team”.

So to steal someone else’s app and not even bother to change the name does indeed indicate, to my ignorant self anyway, that they intended the “nefarious purpose” all along.

Joe K February 16, 2021 6:27 PM

MalwareBytes article:

It is frightening that with one update an app can turn malicious while going under the radar of Google Play Protect. It is baffling to me that an app developer with a popular app would turn it into malware. Was this the scheme all along, to have an app lie dormant, waiting to strike after it reaches popularity? I guess we will never know.

frightening, baffling => totally banal

Is this pearl-clutching, or is it journalism? I gUeSs We WiLl NeVeR kNoW!

David February 16, 2021 7:56 PM

The ZXing library is used by many other QR and barcode readers.

I use an open-source version from SECUSO.

Erdem Memisyazici February 16, 2021 10:43 PM

Reminds me of the eslint package being hijacked to steal developer keys for Node.js. It’s unfortunate, as it is the software equivalent of a supply chain attack. Developers have code review practices for dependencies, but your average user does not know what a new update may bring.

It looks like Google Play’s vulnerability detection process wasn’t able to catch the difference either.

I should think a team of human reviewers could have a review process where parties must sign off on the changes to alleviate some of the pain. This would be no different than the average merge request review.

One thing that could work may be to give reviewed apps a “golden star” or whatever if they were subject to constant code review by a dedicated team the publisher pays to review their application.

JR February 16, 2021 10:50 PM

@ All

Now that we aren’t leaving our homes why do we need cell phones? Good ‘ol copper landlines are still available in the US. If this housebound thing extends to Spring I am having one installed and i might even disable call waiting and callers will get busy signals.

In the interim I’ve disabled every feature and permissions on my mobile. Deleted every app. I use my ancient but very accurate Garmin GPS in the car if I need directions. Phone never connects to WiFI. Only use it for talking and text. No email on it either. I sometimes even leave it home these days on the rare quick trip out of the home. I don’t see the sense in paying this much for something that I so distrust.

I receive so many spam calls I don’t even answer it anymore.

I even have tiny bandaids over the cameras. I wish phones would have sliding screens over the camera and mic and lights indicating when both are used. I also wish they’d make phones in the USA. I’m not buying a new one until they do.

xcv February 17, 2021 1:24 AM

@ Erdem Memisyazici • February 16, 2021 10:43 PM

Reminds me of the eslint package being hijacked to steal developer keys for Node.js

There are a lot of agreements or contracts, shrink-wrapped, paid for or signed somehow, with respect to that sort of proprietary software development. It’s not always clear what the agreements are, and what policies if any are being violated in such situations. Somebody’s preaching “do the right thing” as if that’s immediately obvious, but then there’s some other claim being brought in a court of law, not what you thought was the right thing to do.

It’s unfortunate as it is the software equivalent of a supply chain attack.

If you’re developing on top of paid-for proprietary software, then of course you are extremely vulnerable to a software dependency “supply chain attack.” FOSS is an important mitigation strategy for this.

Developers have code review practices for dependencies, but your average user does not know what a new update may bring.

Think of having “kids” assigned to do chores around the house — you can’t depend on everything being done precisely the same from one version to the next or exactly how you would do it if you were to code it yourself. So be flexible and open-minded, and avoid unnecessary assumptions when you make use of dependencies in your own coding. Learn to code robustly so that if there are bugs in the dependencies, they can be dealt with as minor annoyances rather than cascading catastrophic failures of the whole system.
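xcv’s advice about coding robustly against buggy dependencies can be sketched in a few lines. This is a hypothetical helper, not any real library’s API: the idea is simply that a call into a dependency is wrapped so that a bug in the dependency degrades into a logged fallback value rather than a cascading crash.

```python
def robust_call(fn, *args, fallback=None):
    """Call a dependency function defensively: if the dependency
    misbehaves, log it and return a fallback instead of crashing."""
    try:
        return fn(*args)
    except Exception as exc:
        print(f"dependency failed ({exc!r}); using fallback")
        return fallback

# The dependency behaves: we get its result.
assert robust_call(int, "42") == 42
# The dependency chokes on its input: we degrade gracefully.
assert robust_call(int, "not a number", fallback=0) == 0
```

The design choice is exactly the one xcv describes: minor annoyances in a dependency stay minor, instead of becoming failures of the whole system.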

JonKnowsNothing February 17, 2021 1:34 AM

@JR

re: lights indicating when phone features are used

The presence or absence of an indicator light does not necessarily tell you whether the feature is in use, or is being used by another aspect of the system.

LED indicators are an option set or not set by the design of the application and the implementation of the spec.

Your camera and mic might be in use even without a LED blinking at you. Just because the light is not “on” is no guarantee of security.

iirc(badly) Not long ago, Apple got caught activating the Geo-location feature of their phones even when that feature was turned off and not enabled for any app. There was a tiny icon representing the feature that would flash intermittently and some folks noticed. Apple was harvesting telemetry and turning on-off the feature during the process.

Or rather, the person in charge of writing the code for sucking the telemetry off the phone forgot to disable the indicator.

Garabaldi February 17, 2021 2:01 AM

@Clive Robinson

[1] This was actually proved just under a century ago before electronic computers existed. Look up Alan Turing and “The Halting Problem”. Put simply he proved that you could not tell if a program would halt or not by examining the program or for that matter running it. Extending the notion of halting to another form of malicious activity is not exactly difficult.

That is as bad a misstatement of Turing’s result as I have ever seen.
What was proven is that you cannot tell if an arbitrary program halts or not.
You can prove that some programs halt, and you can prove that others do not. There will remain some programs which can’t be decided. For example, you can decide whether a finite state machine halts or not. The technique for dealing with finite state machines will not help with general Turing machines, which have infinite states.

This misstatement of Turing’s result is very popular with people who think it is QA’s job to find all the bugs, rather than the programmer’s job to not write any bugs. It is also popular with programmers who think code reviewers should not be able to say “This is awful code, I cannot understand it so we cannot release it. Your talents would be better appreciated somewhere else, perhaps MS is hiring.”

FA February 17, 2021 2:57 AM

@clive

This was actually proved just under a century ago before electronic computers existed. … Extending the notion of halting to another form of malicious activity is not exactly difficult.

And such ‘intuitive’ generalisations are almost always invalid.

Apart from that, Turing’s theorem applies to programs having unlimited resources. Deciding if a program P, given input X, will terminate (or have some other property) in bounded time is decidable. And that is all that matters in practice.
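FA’s point that the *bounded* question is decidable is easy to demonstrate. A minimal sketch, modelling a program as a Python generator of execution steps (an assumption made purely for illustration): deciding “does it finish within N steps?” just means running it and counting.

```python
def halts_within(program, budget):
    """Decide whether `program` (a no-argument generator function)
    finishes within `budget` steps. Unlike the unbounded halting
    question, this bounded one is trivially decidable: run and count."""
    steps = 0
    for _ in program():
        steps += 1
        if steps > budget:
            return False   # did not finish within the budget
    return True

def finite():
    for i in range(10):    # ten steps, then stop
        yield i

def infinite():
    while True:            # never stops on its own
        yield 0

assert halts_within(finite, 100) is True
assert halts_within(infinite, 100) is False
```

Note that a `False` answer only means “did not halt within the budget”, which is exactly FA’s distinction: bounded behaviour is decidable, unbounded behaviour is not.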

If some malicious apps are able to pass under the radar, it’s not because of undecidability – much more mundane reasons are sufficient to explain this.

Jon February 17, 2021 6:30 AM

@JonKnowsNothing

Heehee. As I pointed out to several people some years ago, “What makes you think that un-checking that checkbox does a da[rn] thing besides displaying an un-checked box?”.

Grant Microsoft a few points for honesty when their ‘updates’ magically ‘re-tick’ the box… But not very many, because what makes you think that part of the software controls any other part?*

J.

* Granted, there are times when there’s feedback. Un-tick ‘fit to page’ and your printer starts spewing fractional layouts; you now have verifiable feedback that that tickbox does something. But as far as ‘send telemetry home’? Even if they DO put in a tickbox for that… Hmmm!

Clive Robinson February 17, 2021 8:36 AM

@ Garabaldi, FA,

That is as bad a misstatement of Turing’s result as I have ever seen.

You try putting it any better in a single sentence.

It’s a footnote, which actually invites you to go look it up,

Look up Alan Turing and “The Halting Problem”.

From a single sentence indicating that there is previous work based on the proof.

Back when the idea was originally suggested, people indicated that it was not possible to do[1].

This has been gone through on this blog in the past as well.

OK, as far as halting goes, Alan Turing’s and Alonzo Church’s independent works in effect show there are three outcomes,

1, Prove it halts.
2, Prove it does not halt.
3, Can not prove either.

The notion of “halting” is actually a class of “branching” or “failing to branch”.

That is the code performs a test and then jumps or does not jump to or away from the “exit code”.

That said there is nothing in the proof that actually says the “exit code” has to be singular or that it has to actually exit, because the proof is about how you get to or do not get to a given point of execution. Thus “The proof is about ‘the journey’, not the destination or what happens when you get there, that is just assumed”. Exactly the same logic applies even if the “exit code” just performs a function and jumps back to the start of the program. That function could just be a call to some obfuscated code that is in fact malware.

The salient point though, is that you can not prove software does not act maliciously even by running it for a finite period of time.

Which throws the problem onto actually being able to prove code is malicious or not. At the end of the day both code and data are the same thing, they are,

1, A bag of bits.
2, The bits convey information.

Therefore,

3, “data” is information used in a program.
4, “code” is information used in a program.

The difference is,

5, How the information is interpreted by the program.

That is, in a program.

6, Data can be code and code can be data.

7, By just examining the information you can not show if it is code or data.

But it goes further,

8, Data can also be code about code, or a program about a program etc.

At which point you hit the “Turtles all the way down” problem. Which means you can not show if a collection of bits (information) is or is not a program of some sort, or if it ever executes or not.

In fact we know that a valid ASCII string can be used as a “sledge” to put executable code into a program to change its execution when “Smashing the stack for fun and profit”[1].
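Points 6 and 7 above are easy to illustrate in a high-level language. A minimal Python sketch (unrelated to the stack-smashing paper itself): the very same ASCII bytes are inert data under one interpretation and executable code under another, and nothing in the bytes themselves tells you which they are.

```python
payload = "print(6 * 7)"   # a perfectly ordinary ASCII string

# Interpreted as data: just twelve printable characters.
assert len(payload) == 12 and payload.isascii()

# Interpreted as code: the same bytes execute.
exec(payload)   # prints 42
```

Whether `payload` is “data” or “code” is decided entirely by the program handling it, which is precisely why an examiner looking only at the bits cannot classify them.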

The upshot being those running walled gardens can not tell if a program they check is “not malicious”.

Do I need to amplify this any further?

The one thing I think we can all agree on however is, the above is not a single sentence, and trying to make it so would be a somewhat hard task.

As for,

Turing machines, which have infinite states.

Not exactly true: they can have a finite set of states, which they can enter in any order, as many or as few times, and in whatever apparently random order they care to. Having an alphabet of only two characters {0,1} does not stop you producing infinite sequences of them. Something Turing understood very well.

This misstatement of Turing’s result is very popular with people who think it is QA’s job to find all the bugs, rather than the programmer’s job to not write any bugs.

I would not know what other people might do in your eyes, but the point I was making is that the likes of Google and Apple,

“Can not claim to be able to stop malicious code entering their walled garden”. Which is an entirely different view point.

And such ‘intuitive’ generalisations are almost always invalid.

Neither was a generalisation which makes your comment at best moot.

Apart from that, Turing’s theorem applies to programs having unlimited resources.

You have fallen into your own “intuitive generalisations” trap…

If some malicious apps are able to pass under the radar, it’s not because of undecidability

You’ve entirely missed the point of both Turing’s argument and my use of it if you think that.

The point was not to show that any old crap could get into a walled garden, we already know that, and some of them were so obvious that just seeing that an already “known to be bad” library was used should have been sufficient. The point is to show that,

“No matter what a walled garden owner does, they can not stop malicious code getting in their walled garden”.

Which means that the promises made about walled gardens stopping malware on end users computers can never be true…

That is,

“It’s not a matter of intuition, speculation or opinion, it’s a matter of provable fact.”

Thus the security issue is not taken away from the user, no matter how much they may want the convenience and are prepared to exchange their rights and freedoms for such an unobtainable convenience.

[1] https://travisf.net/smashing-the-stack-today

xcv February 17, 2021 9:10 AM

@Garabaldi @Clive Robinson

[1] This was actually proved just under a century ago before electronic computers existed. Look up Alan Turing and “The Halting Problem”. Put simply he proved that you could not tell if a program would halt or not by examining the program or for that matter running it.

Right. There’s no algorithm or decision-making procedure to examine a program, and determine whether or not it’s going to halt.

You can run the program, but there’s no computable time limit such that, if a program has been running for that long without halting, you can assume it will keep running forever (in an endless loop).

There’s a “descending chain condition” on a partial order of program states that will tell you whether or not the program will halt, but there’s no algorithm to determine if the program states are partially ordered and satisfy the descending chain condition on a monotonically decreasing variant.

https://en.wikipedia.org/wiki/Loop_variant
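The loop-variant idea from the linked article can be shown concretely. A minimal sketch using Euclid’s algorithm: the variant `b` is a non-negative integer that strictly decreases on every iteration, which is exactly the descending chain condition xcv mentions, so the loop must terminate.

```python
def gcd(a, b):
    """Euclid's algorithm with an explicit loop variant.

    The variant is b: a non-negative integer that strictly
    decreases each iteration, so termination is guaranteed."""
    assert a >= 0 and b >= 0
    while b != 0:
        variant_before = b
        a, b = b, a % b
        # The descending chain condition: 0 <= b < previous b,
        # and a descending chain of non-negative integers is finite.
        assert 0 <= b < variant_before
    return a

assert gcd(48, 18) == 6
assert gcd(270, 192) == 6
assert gcd(7, 0) == 7
```

Finding such a variant proves termination for this particular loop; the undecidability result only says there is no mechanical procedure that finds (or rules out) a variant for every loop.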

Clive Robinson February 17, 2021 11:23 AM

@ xcv,

Thank you for the Wikipedia link. On reading the link we find,

And in any case, Kurt Gödel’s first incompleteness theorem and the halting problem imply that there are while loops that always terminate but cannot be proven to do so; thus it is unavoidable that any requirement for a formal proof of termination must reduce the expressive power of a programming language. While we have shown that every loop that terminates has a variant, this does not mean that the well-foundedness of the loop iteration can be proven.

Both proofs are from the 1930s, prior to electronic computers… they keep on popping up. Also the argument is broadly similar.

a_total_n00b February 17, 2021 5:09 PM

Interesting. But for some reason these are often on Android. What makes iPhone so different?

lurker February 17, 2021 7:04 PM

@a_total_n00b
What makes iPhone so different?

December 26.
For those who’ve forgotten their Xmas Carols, that’s St. Stephen’s Day, or as they say on the west-Atlantic Steve [Jobs]

Alan February 19, 2021 6:17 AM

In general, I only install apps (and very few of them) from USA companies, because they are subject to USA legal process and I’m in the USA. Under no circumstances do I install apps from Eastern European, African and most Asian countries…

Clive Robinson February 19, 2021 6:42 AM

@ Alan,

Under no circumstances do I install apps from Eastern European, African and most Asian countries…

Err, most times when you install only “from USA companies” that is actually what you are doing, due to outsourcing you can not see…

Have a look at what went on with SolarWinds. Do you really think few or no US corps do that?

Garabaldi February 20, 2021 1:58 PM

A correct single sentence statement of the halting problem is that “You could not tell if an arbitrary program would halt or not by examining the program or for that matter running it.”

However some programs can be proved to halt. Restricting ourselves to ones that can be proved to halt necessarily means that we rule out some computations. But every computation that has ever been performed has been run on a finite state machine (for very large values of finite). Since finite state machines can be proved to halt (or not) this is not, in theory, an undue restriction.
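Garabaldi’s claim about finite state machines is straightforward to demonstrate. A minimal sketch (the state names and `transition` table are invented for illustration): since there are only finitely many states, a deterministic run either reaches a halt state or revisits a state, and revisiting a state proves an infinite loop, so both outcomes are decidable.

```python
def fsm_halts(transition, start, halt_states):
    """Decide halting for a deterministic finite-state machine.
    Finitely many states means the run must either reach a halt
    state or revisit a state (and hence loop forever)."""
    seen = set()
    state = start
    while state not in halt_states:
        if state in seen:
            return False   # revisited a state: loops forever
        seen.add(state)
        state = transition[state]
    return True

# A machine that steps 0 -> 1 -> 2 -> HALT:
assert fsm_halts({0: 1, 1: 2, 2: "HALT"}, 0, {"HALT"}) is True
# A machine that cycles 0 -> 1 -> 0 forever:
assert fsm_halts({0: 1, 1: 0}, 0, {"HALT"}) is False
```

The catch, as the thread notes, is scale: real machines are finite only “for very large values of finite”, so enumerating their states this way is hopeless in practice even though it is possible in principle.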

Nobody needs to compute Ackermann’s function.

To paraphrase we can divide programs into:

1) Programs that we can prove are not malicious.
2) Programs that we can prove are malicious.
3) Programs that we do not know are malicious or not.

None of the three classes is empty.

I’d like to say the question is what to do with programs in the third class, but there is actual significant difference of opinion about whether to run programs in the second class.

xcv February 20, 2021 3:58 PM

@Garabaldi

However some programs can be proved to halt.

#1. Nobody disputes that in the case of formal logic for computer programming.
#2. Humans are not programs. Artificial intelligence has never been fully realized on a “Turing test” basis.
#3. Universities and colleges need to halt their pogroms for the elimination of social undesirables and mental defectives.

To paraphrase we can divide programs into:

1) Programs that we can prove are not malicious.
2) Programs that we can prove are malicious.
3) Programs that we do not know are malicious or not.

#4. You left out a possible fourth class of programs that can at the same time be proven to be malicious and not malicious. If there are any such programs, then the system of formal logic being used to classify them is inconsistent, and therefore all programs are of this class.
#5. You have not specified a well-formed formula in any system of formal logic to express as a valid predicate the notion that any given program “is malicious” or “is not malicious” as the case may be.
