Hacking a Phone Through a Replacement Touchscreen

Researchers demonstrated a really clever hack: they hid malware in a replacement smart phone screen. The idea is that you would naively bring your smart phone in for repair, and the repair shop would install this malicious screen without your knowledge. The malware is hidden in touchscreen controller software, which is trusted by the phone.

The concern arises from research that shows how replacement screens -- one put into a Huawei Nexus 6P and the other into an LG G Pad 7.0 -- can be used to surreptitiously log keyboard input and patterns, install malicious apps, and take pictures and e-mail them to the attacker. The booby-trapped screens also exploited operating system vulnerabilities that bypassed key security protections built into the phones. The malicious parts cost less than $10 and could easily be mass-produced. Most chilling of all, to most people, the booby-trapped parts could be indistinguishable from legitimate ones, a trait that could leave many service technicians unaware of the maliciousness. There would be no sign of tampering unless someone with a background in hardware disassembled the repaired phone and inspected it.

Academic paper. BoingBoing post.

Posted on August 28, 2017 at 6:22 AM • 31 Comments


0Laf • August 28, 2017 7:53 AM

Surely it would be better to compromise the manufacturer of the replacement screens; then you could compromise phones by the thousand, all installed by the (probably) completely innocent repair shops.

Job's a good 'un!

RogerBW • August 28, 2017 7:55 AM

So clearly what we need is DRM between phone CPU and touchscreen, and only genuine official branded parts will work (and they'll only cost three times as much as the same parts without the imprimatur).

Who? • August 28, 2017 9:17 AM

As other people should have noted, with physical access to a device all hope is lost. This one is just another example of "hardware implant."

CallMeLateForSupper • August 28, 2017 9:30 AM

This is Reason #666 for not owning a fartphone. Not really; I made up the number. But there really are many reasons.

I wish I had started keeping a list of fartphone vulnerabilities (both real and potential) a long time ago. I spout off about the cussed things all the time, and every time, the listener asks, "Vulnerabilities? Like what?". I struggle to get the old memory nodes firing, and it's not going to get easier with time. It'd be nice to have a sturdy, fold-out chart that could be easily carried in bag or, better, wallet.

Bob Dylan's Twisted Knee • August 28, 2017 9:54 AM

Why is it that really clever hacks always seem to outnumber really clever designs which prevent hacks?

Joel Franklin • August 28, 2017 10:32 AM

Clever implementation, but it strikes me as just one more example that physical access equals root access.

M. Welinder • August 28, 2017 10:41 AM

> This one is just another example of "hardware implant."

With the added note that the repair guy need not know he is compromising phones.

ShavedMyWhiskers • August 28, 2017 11:48 AM

This implies that all the new surface touch screen computers & tablets large and small have the same risk.

Phones, including fart phones, have trusted code in the radio module. The international nature of suppliers, and their allegiances to their nations or their benefactors, makes this interesting as heck.

Retired Secret Squirrel • August 28, 2017 1:30 PM

Why is this even getting any coverage?

I think we all know our devices are vulnerable to any number of hacks when we don't have them in our possession. Does this really surprise anyone?

David Leppik • August 28, 2017 2:22 PM

A real-world version of this would have to fit all the hardware in the phone without the user noticing. Phones are already designed to fit as much stuff as possible into the minimum space. Simply put, if there were room for one more chip, they'd add one more chip, or make the battery bigger. (Those exploding Samsung phones last year? Turned out to be sub-millimeter tolerance problems.)

Smartphones are a lot more secure than PCs or laptops in this respect.

Obvious circumvention • August 28, 2017 5:02 PM

@David Leppik

You seem to forget that instead of adding a new circumvention chip, one can replace original chips and parts with malware-laden replacements, or insert a microchip in a non-chip part, such as in the space created by reducing the volume of the original battery or by swapping in a smaller one.

Mike D. • August 28, 2017 7:05 PM


To be fair, the researchers "forgot" that too. They claim that it's "trivial" to stuff a new chip in there on the I2C bus to exploit the system. Working as an EE, I've heard similar claims from my boss of similar changes being "trivial" and they've been anything but. Like there's lots of space lying around in a cell phone.

Anyway, the problem with compromising a chip directly is that it's expensive to fab chips that work exactly like the target chip but have the compromise wired into them. That's state-level or corporate-level work right there; people won't be running those off their MakerBots. You can't make the chip bigger or it won't fit, and chip-scale packages of that type have next to no "wasted space" in the plastic, so a piggybacked chip would be quite difficult as well.

Buying space by taking a chunk out of a battery is a great way to start a fire. Assuming that there's an OEM smaller battery is even sillier; how many years has it been since any touchscreen phone has offered different size batteries in the same model? And you're assuming that the battery is next to whatever you're targeting.

And as many have noted, defending against physical access attacks is largely pointless. You'd need some pretty heavy use of TPM-style validation of everything, and people don't like their phones to brick when something's a tad off.

Nobody here is noting that this hack primarily relies on a buffer-overflow bug in the touchscreen driver. Without that, it would at most just waste bandwidth on the I2C bus.

Mike D. • August 28, 2017 7:07 PM

P.S. I am reminded of the favorite "This is left as an exercise for the reader" gag in academia.

Consumer Whores get Pimped • August 28, 2017 7:18 PM

"Why is it that really clever hacks always seem to outnumber really clever designs which prevent hacks?"

Because the best defenses are simple: Do not touch the stove if you don't need to.

Do not use a casual insecure device like a phone for mission/life critical things.
Do you even really need a smartphone? Most people don't but they RATIONALIZE it.
Oh it's so convenient. Oh it's so powerful. Oh it's so necessary.

But it isn't necessary, they've decided to touch the stove. Enter complexity.

The stove is hot. The stove does not care if you use mitts or are careful.
There is no 'foolproof' way of using the stove, it will always be a risk.

It is a complex situation that you entered into while imagining 'delicious food'.
Not unlike any mouse trap. Except you're paying for the cheese also.


JF • August 28, 2017 8:27 PM

from Roger BW, above...

"only genuine official branded parts will work (and they'll only cost three times as much as the same parts without the imprimatur)."

I would not get a cracked or broken screen replaced, but rather, migrate to a new phone. However, that does raise a hygiene question for me; does a factory reset sufficiently wipe the internal storage? Most documents and such I save to a removable microSD card which stays with me.

Anura • August 28, 2017 8:43 PM


The only way to be certain is to encrypt the data and then lose the key when you no longer need it.

Wael • August 29, 2017 12:39 AM

Interesting. Declassified Spook's manual of mounting the attack:

1: Drop a banana peel in the target's path
2: Spray the path with pebbles
3: When screen breaks, offer target discount repair
4: Return phone with a smile. If screen doesn't break, then
5: Repeat steps 1 - 4 with the next contact on the victim's list.
6: If no screen breaks, send a fake recall mail to target saying the phone had a defect and needs to be "upgraded" for free.

Or, since all bad guys are clumsy, just wait for the right moment and hope you don't miss any "chatter".

If all else fails, then switch to plan B, or a modified version: be a good citizen and assist the target in breaking the screen.

Other • August 29, 2017 2:55 AM

@Mike D.

> Nobody here is noting that this hack primarily relies on a buffer-overflow bug in the touchscreen driver.

Probably the most useful comment here. The problem is not the hack. The problem is the vulnerability.


> This is Reason #666 for not owning a fartphone.

Probably the most useless comment here. Using this reasoning, there are 6,666 reasons for not owning a car, 66,666 reasons for not traveling by plane and 666,666 reasons for not crossing the street. These are so dangerous...

veritrasheum • August 29, 2017 5:20 AM


However, there aren't (or weren't until recently) so many utterly useless applications for technology such as cars, planes or street crossings. Until recently, cars were cars and not connected or widely vulnerable to dubious or hostile networks, eg Michael Hastings. Until the turn of the century, it took more than a boxcutter to destroy a commercial plane. While arguably causing collateral damage, eg carbon, injuries and certainly fussing up the infrastructure of eg the US, cars weren't zombifying society while basking in gratuitous spyfests. Reasons for not owning a car are limited to safety, environment, and a comparatively small number of personal reasons, while also restricted by the economic consequences of not owning one. Smartphones are still mostly optional and more often superfluous. Reasons to not own a smartphone greatly exceed any such number so far proposed, being limited only by the number of absurd apps that can be conceived for them. Yeah, there is ALPR and there are street cameras, but none of these are resting on our loins, following us perpetually, ever listening, collating and conniving in our pockets or purses. The same cannot be argued for most other things. A car, until recently, could be taken apart and understood, modified and controlled. The closed nature of both hardware and software for the smartphone doesn't compare. Smartphones, in the direction they have thusly developed, are shit.

Clive Robinson • August 29, 2017 7:18 AM

This attack is really just a subset of the more general "supply chain" problem.

In essence we can not stop such attacks because we have no way to see them, except by testing to destruction or eternity, neither of which is a viable option.

So if they can not be viably stopped, the only viable solution is mitigation. A point I've been making for a while now (so much so it's allowed @Wael to rename it "C-v-P" ;-)

You can look at the problem this way,

    We want a system to be secure; part of that is to stop tampering, which in part means stopping other people from examining it for weaknesses. Because we can not build every part of the system we use subsystems. They in turn should be secure, which means we can not examine their contents.

      You end up with the "yogurt recipe"[1] issue in that something is always hidden from view and you can not get to the bottom of it. Which gives rise to the security issue of "bubbling up" attacks: if somebody gets in at a lower level they effectively become an "insider attack", which potentially becomes the "weakest link in the chain".

      As you can not stop it, the only solution is to mitigate it and catch the insider cheating, and thus terminate/remove them.

      [1] The simplified recipe for yogurt is "Take milk and add yogurt"; actually making yogurt, just like "sour dough", is beyond us. What we do is make "starter cultures" by letting milk get attacked by unseen bacteria. But there are thousands if not millions of different strains of bacteria, many of which are not good for us or at the very least will taste unpleasant. Thus we test each new seed culture until, by a mixture of random luck and selection, we end up with the right bacteria to make yogurt.

veritrasheum • August 29, 2017 8:10 PM

@Lacto Zymolysisyphus, Fermenter of Analogies aka Clive

You've slung your yogurt right over my head with that analogy, but I'm a layman among adepts, although I am a connoisseur of sourdough. Since it's not really off topic, I'm still hoping for some input on the previous link I dropped (this one). Those accelerometers and gyroscopes hold a lot of potential, particularly when they invite it.

Gunter Königsmann • August 30, 2017 12:49 PM

Things might be worse than they seem: creating a new chip from zero, I would buy a synthesizable ARM core that handles initialization and any complex protocol needed. Then I would add the gates necessary to control a whole screen generation, and would use this chip as often as I can...
...if anybody in the delivery chain has a JTAG debugger and my ARM has flash memory, there might be "original" screens that, when asked for their serial number, respond with a megabyte-long string that (being large) overwrites memory and thus installs malware before KASLR is started.

Clive Robinson • August 30, 2017 5:37 PM

@ veritrasheum,

> Since it's not really off topic, I'm still hoping for some input on the previous link I dropped

The authors kind of say the most important thing themselves with,

    Previous work in this space quickly lead us to believe it was possible, but we weren’t sure how robust these methods were or if we’d be able to recreate them. Generally, we’ve learned to be weary with academic descriptions of attack vectors — as sometimes they only work in a lab setting, expect certain conditions, or simply aren’t as practical as their authors make them out to be

I have a basic rule of thumb about attacks which is,

    If the laws of physics allow it, then someone will do it.

It's then only a question of when, which is more a technological cost issue than it is anything else.

To repurpose the Coco Chanel saying "Side channel attacks are the new crypto attacks".

The point is, as I mention occasionally, the security end point has to be beyond the communications end point. If it's not, then all an attacker has to do is an "end run" around the security end point and get at the plaintext directly.

Once you accept that it then, as with all side channels, becomes a question of "channel bandwidth" available to an attacker. Either directly in an "active attack" or indirectly as "leakage" in a "passive attack".

Thus the unseen battle between attacker and defender is for side channels and then how to exploit them.

After a little thinking you will see why I no longer talk of the old idea of "air gapping" but the more relevant these days "energy gapping".

Back in times past, when the SigInt agencies used to design their own equipment and systems before being pushed at COTS solutions by point-scoring politicians, there were certain design rules used. The first of which is almost a direct translation of the "KISS principle" and is "minimization" of function and thus complexity. Having minimized each function within a system, you encapsulate it for "segregation", with all communications between functional sub-components going through instrumented choke points.

It's easy to see why such systems were eye wateringly expensive. Conversely it's also easy to see why COTS systems are not going to be secure.

Which brings up the "Security -v- Efficiency" issue I likewise go on about from time to time. As a general rule, the more efficient you make a system, the more you open up side channels, both covert and overt. However, the opposite is not necessarily true. That is, a less efficient system is not generally any more secure than an efficient system unless specifically designed to be secure.

An example of this is power-signature analysis of an algorithm due to branch instructions that are unbalanced in time. It does not matter in general how efficiently the algorithm is written, because the "test and branch" behaviour is implicit in the algorithm. It's only when you are "in the know" about the power-signature analysis that you take the required action to balance the time in the two execution paths.

It's being "in the know" about what side channel an attacker is using, and how, that is the important thing. Which is why the likes of TEMPEST and EmSec techniques are usually classified, so defenders don't get to find out about vulnerabilities in their equipment. Because the "old school" passive attacks got hit by the likes of EMC in the '80s for a while[1], new attack side channels had to be found.

But high levels of integration of function and efficiency in "smart devices" have opened up a whole raft of transducers, thus new vectors of attack.

The thing to remember is that technical cost is strongly deflationary: you in effect get twice the power for the same price point every ten to eighteen months. This alone will take a theoretical attack and make it first practical to a select few, then expand geometrically until it's feasible to all. To see this effect in action, take a look at the history of "Software Defined Radio" (SDR).

As our host has noted, what is theoretical today becomes a PhD project tomorrow, then a common technique the day after.

And that is almost always due to the deflationary "technical cost"...

[1] Put simply, prior to EMC all electronics put out lots of EM radiation into the environment around them. These "emissions" often carried secret information, thus they were "compromising emissions". The problem was, and still is, that often "transducers are bidirectional" --a speaker is a mic and vice versa-- thus if a device or system radiates emissions, the chances are it is "susceptible" to other EM radiation. This showed up most notably with consumer radio equipment; as it came into wider use during and after the '60s, the problems became unmanageable. Thus the idea of ElectroMagnetic Compatibility (EMC), where equipment not only should not have EM emissions, it should also not be susceptible to equipment that did have EM emissions as part of its functionality. The EM emission / susceptibility was originally reduced by physical filtering components like capacitors and inductors. However, these are expensive, so manufacturers realised that with digital devices they could "cheat the test" by using spread-spectrum techniques to "whiten" the frequency spurs, spreading their energy across multiple frequencies and thus reducing the single-frequency amplitude[2].

[2] The problem with spread spectrum and other "Low Probability of Intercept" (LPI) techniques is that whilst you can spread the energy across multiple frequencies, the process is reversible if you know the "spreading code". Thus the use of simple short-length linear spreading codes and no filtering components by manufacturers to reduce costs from the '90s onwards was a gift to the SigInt agencies, as in effect it took them back to the good old days of high-level compromising emissions and high susceptibility to EM fault-injection attacks.

Wael • August 30, 2017 10:48 PM

@veritrasheum, @Clive Robinson,

> [...] air gapping" but the more relevant these days "energy gapping".

Got your "energy gap" right here, pal!

Show me an "energy gap", and I'll show you a "Quantum Leap" attack that can bridge it.

Clive Robinson • August 31, 2017 3:34 AM

@ Wael, veritrasheum,

You forgot to add the smiley at the end of,

> that can bridge it.

Wael • August 31, 2017 3:45 AM

@Clive Robinson,

> You forgot to add the smiley at the end of,...

I thought of it.

ab praeceptis • August 31, 2017 4:56 AM

Clive Robinson

Yes. It seems many don't understand that "fine" (not really at all) point: Intention is a crucial differentiator.

When, say, Bruce Schneier tries to attack a new crypto algorithm from, say, Aumasson, he does it with the intention of a) checking the algorithm's soundness and b) getting a better grip on the mechanisms, the domains and factors involved. In other words, his approach is a rather mathematical one, with a rather mathematical mind set. In particular, cryptologists are interested in worst-case security, something like "algo xyz will under all known and foreseeable circumstances be at least n bits secure".
A potential attacker, however, has a clear-cut - and rather different - mind set and goal. He cares about the algorithm only insofar as that's necessary for his goal to either break or circumvent it. One quite nice example is that of the group which found a way to (with good probability and feasible sample size) find out which SSL library any given communication was built on - to then be able to attack it more successfully (due to library-specific problems).

However, as you correctly state, real world attackers will look for ways to "cheat"; they will rarely attack the algo but rather the implementation (with complete circumvention being the ideal for them).

But there is an even much more promising level of cheating and, in a way, attack optimization: the full software stack with its rich set of attack surfaces. After all, the logic of "why attack the algorithm when attacking the implementation is good enough?" also applies at the more general level: "why attack the crypto implementation when attacking the 'wide open' OS or application also leads to the desired?".
After all, attackers typically aren't after some crypto stuff but after things like the user database.

To make matters worse, widely used crypto algorithms can be - and often are - limited in number and constantly analysed and optimized. But who on earth could limit the gazillions of vulnerabilities in the OSs and the thousands and thousands of poor-quality applications?

There is only one way, and it is a painful and expensive one: to completely change development - and the paradigms of development - in particular by properly specifying, by using sound languages (and not C/C++/Java and derivatives), and by extensively verifying.

Wael • August 31, 2017 10:45 PM

@Clive Robinson,

> You forgot to add the smiley at the end of,...

Ok, here it is, lest you think I was serious! A thousand :-)'s

Gordon • September 5, 2017 11:51 AM

Wow! So many analogies, so little time. We need better software designed & made by better practitioners of the art & science of crafting software!

Bad software sucks! Malware is a blood-sucker, and potentially a mosquito leaving behind a deadly disease. How do humans handle mosquitoes? Drain swamps, install mosquito netting, spray insecticide, light smudge fires, place sacrificial alternate hosts in range but away from our festivities, bio-engineer sterile mosquitoes, KILL THEM as they land on us and our loved ones. And other, more creative answers. All of these have parallels in cyberwarfare (and it IS warfare).

Malware is DESIGNED for the purposes it performs. Counteracting this Mal-design takes vigilance and appropriate, decisive action. Would you let a convicted sex offender date your minor child? Would you let a convicted embezzler be your personal or business accountant? Would you let someone convicted of treason be your business's off-shore `ambassador?` (Please understand I embrace forgiveness and new chances for people. I am also _vigilant_ to being scammed, screwed over, and sold down the river!) How do we `jail then rehabilitate` malware? Can we send it to an `infinite loop jail?` It's not alive with a soul, would we want to do anything other than guard against it and excise it when discovered?

Is anyone working on an Over-Monitor in software that analyzes software and hardware _behavior_ and alerts the human operator of out-of-band behavior, throwing it to the human for `Do you want to allow this UNUSUAL action?` Why not? General Electric has made (publicly released) strides in this area regarding power systems.

Too much of our increasingly complex human civilization is depending on Software. I put it to you all that IMHO Software needs to serve the needs of humanity for the best and greatest good. When it does not, terminate it with all due prejudice as quickly as possible and potentially prosecute the humans responsible for it.


Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.