Security Vulnerabilities in Certificate Pinning

New research found that many banks offer certificate pinning as a security feature, but fail to authenticate the hostname. This leaves the systems open to man-in-the-middle attacks.

From the paper:

Abstract: Certificate verification is a crucial stage in the establishment of a TLS connection. A common security flaw in TLS implementations is the lack of certificate hostname verification but, in general, this is easy to detect. In security-sensitive applications, the usage of certificate pinning is on the rise. This paper shows that certificate pinning can (and often does) hide the lack of proper hostname verification, enabling MITM attacks. Dynamic (black-box) detection of this vulnerability would typically require the tester to own a high security certificate from the same issuer (and often same intermediate CA) as the one used by the app. We present Spinner, a new tool for black-box testing for this vulnerability at scale that does not require purchasing any certificates. By redirecting traffic to websites which use the relevant certificates and then analysing the (encrypted) network traffic we are able to determine whether the hostname check is correctly done, even in the presence of certificate pinning. We use Spinner to analyse 400 security-sensitive Android and iPhone apps. We found that 9 apps had this flaw, including two of the largest banks in the world: Bank of America and HSBC. We also found that TunnelBear, one of the most popular VPN apps was also vulnerable. These apps have a joint user base of tens of millions of users.

News article.
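
The flaw class is easy to see in miniature: a pin check on the certificate chain can pass even when the leaf was issued for a completely different hostname. Below is a minimal Python sketch of the vulnerable pattern versus the correct one; all certificate bytes, hostnames, and function names are made up for illustration (real verification operates on DER-encoded X.509 chains), and this is not the paper's Spinner tool.

```python
import hashlib

def pin_matches(chain_der, pinned_sha256_hex):
    """True if any certificate in the presented chain matches the pin."""
    return any(hashlib.sha256(cert).hexdigest() == pinned_sha256_hex
               for cert in chain_der)

def vulnerable_verify(chain_der, pinned_sha256_hex):
    # The pattern the paper describes: the app pins a root/intermediate
    # but never checks whose name is on the leaf certificate.
    return pin_matches(chain_der, pinned_sha256_hex)

def correct_verify(chain_der, pinned_sha256_hex, leaf_names, expected_host):
    # Pinning must be *in addition to* hostname verification, never a
    # replacement for it.
    return (pin_matches(chain_der, pinned_sha256_hex)
            and expected_host in leaf_names)

# Stand-in "certificates": the bank app pins its intermediate CA.
intermediate = b"intermediate-ca-cert"
pin = hashlib.sha256(intermediate).hexdigest()

# A MITM presents a valid cert from the SAME CA, issued for attacker.example.
mitm_chain = [b"leaf-for-attacker.example", intermediate]

print(vulnerable_verify(mitm_chain, pin))  # True: the MITM cert is accepted
print(correct_verify(mitm_chain, pin, ["attacker.example"], "bank.example"))  # False
```

Because the pin matches, the vulnerable check accepts the attacker's chain; only the hostname check catches the mismatch, which is exactly why the flaw stays hidden behind pinning in black-box testing.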

Posted on December 8, 2017 at 6:15 AM • 18 Comments


Petre Peter December 8, 2017 10:44 AM

Remember! “Quis custodiet ipsos custodes”. i don’t remember the last time i verified the verifier. i am just not ready to trade convenience or risk retaliation. Certificates are a good idea, but the last US election did not peacefully convince me that the losing side lost. At a certain level, verification relies on multiplication, which is related to retaliation. At this level, quantity triumphs over quality and stagnation becomes the destination. For me, this triumph is the current definition of a certificate, until i learn how to write and store my own records: the beginning of proving that my liver is working.

Wael December 8, 2017 11:02 AM

This leaves the systems open to man-in-the-middle attacks.

Interesting topic! I agree with the paper’s assessment. More to say about this at a deeper level… Some of these ‘attacks’ also depend on OpSec weaknesses, such as fooling the user into installing a root certificate on the device. This could happen as a ‘condition’ for allowing connection to a rogue hotspot, for example. One thing to note: certificate pinning has its own set of deployment challenges. It’s also not sufficient to stop MiTM attacks, even if properly implemented. Payload encryption is another control that needs to be stacked on top of it (TLS alone is not sufficient for all flows!)

I may have more to say after I get a chance to read the full paper.

Clive Robinson December 8, 2017 1:24 PM

To be honest, I’ve lost any faith I might once have had in PKey Certs.

Part of this was the various web browser developers who tried to hide everything and in effect made any user control of root certificates well-nigh impossible.

Even when it was clear that people were getting detained, if not disappeared, due to governments’ use of phoney certificates, the browser developers were at best slow…

And so it has gone on: each potential improvement for users has taken more time than it should have to appear, and often in a way that allowed various companies to develop workarounds they could sell to such governments…

Clearly web browser and app developers need to up not just their game, but their speed of response.

However, we also have CAs who, let’s face it, either fall down on the job or can have their arms easily twisted.

But most importantly, we do not yet have a way to do “first contact” securely, thus there will almost always be an opening for MITM attacks.

There used to be an old industry truism of “Whatever the question, Microsoft is not the answer”; it needs to be updated, with Microsoft replaced by CAs…

65535 December 9, 2017 1:12 AM

@ Clive R. and others

I agree with your assessment of the public key system in general. But both public key systems and symmetric key systems (with a courier or other method of transporting the key) have their own problems. The only advantages of the public key system are ease of use and possibly perfect forward secrecy. I will say that privately signed certificates [PK] within a perimeter are probably good to a point. But privately signed certificates don’t transfer to the wider public internet.

I think we have had several discussions on public key encryption, certificate authorities, and methods of improving the system – but no real progress. Take a look at Krebs’s recent discussion of phishing sites and how major banks such as HSBC and BOA are targeted with realistic-looking emails and sites bearing real public key certificates [note: a lot of people are blaming the free Let’s Encrypt certificate maker for giving bad guys free certificates – which is a different subject]. I will note that using the PKI system probably gives away tons of data to the government and private sector.

Certificate pinning is good but not perfect, and the auditing of certificates by the now-defunct Perspectives add-on did not catch on.

Last is the problem of Kaspersky’s and Symantec’s anti-virus programs, which employ SSL stripping. This leads to the problem of the NSA person who carried home Top Secret documents and digital weapons on his laptop, only to have them show up in Kaspersky’s virus data bank.

I am sure the same documents are probably scanned by other anti-virus makers who employ SSL/TLS stripping for “the customers’ good” and so on.

Exactly how these anti-virus companies deploy their SSL/TLS stripping is not clear – but it probably should be. And the man-in-the-middle attacks continue, at least at the state level.

The general problem with public key systems is that the state can strong-arm the various Certificate Authorities and possibly get access to their private keys. The government can probably forge certificates at will.

Next is the “browser pre-load” certificate store from the major browser vendors – Chrome, Firefox, Internet Explorer, and Edge – and how to properly clean out dangerous certificates in said store. There seems to be no easy way.

Little has changed since a discussion started in 2015. Below is one of my earlier posts on the PKI system problems.

If anybody has a good solution to the PKI system problem please speak up.

Clive Robinson December 9, 2017 6:52 AM

@ 65535,

If anybody has a good solution to the PKI system problem please speak up.

If you listen carefully, all you will hear is the sound of tumbleweed blowing through an arid desert of no describable features…

Public key systems are primarily based on the notion not just of one-way functions (unproven) but of one-way functions with trapdoors (the old zero, one, infinity issue applies).

Thus any mathematical advance in one of many subfields, or likewise in physics, will render them defunct in short order. Thus a wise man would consider PubKey systems to be at best a fragile anomaly.

Symmetric systems have been shown in various ways to only be secure for “truly random keys of the same size as the plaintext, known only to the two communicating parties”, which rests on two assumptions… firstly, that there actually is something that is “truly random”, and secondly, that “all plaintexts are equiprobable thus indistinguishable”. In practice both can be shown to be theoretical, not practical, unless other constraints are put in place.

The upshot is that to get any measure of communications security in an open channel you need to share a secret to use to turn the open channel into a secure channel. The problem is that to share a secret requires some kind of secure channel… which is just one of the reasons why the One Time Pad is considered to be problematical.

The real problem with the Internet is that there are no traditional secure channels between the majority of its users. Thus either the majority of communications are insecure, or there has to be a way to securely create shared secrets on an open channel or via a trusted third party.
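
There is, of course, a well-known partial answer to “securely create shared secrets on an open channel”: Diffie-Hellman key exchange. The toy sketch below (deliberately tiny, insecure parameters, purely to show the mechanics) also shows exactly why it does not settle the matter: the exchange is unauthenticated, so a man-in-the-middle can simply run one exchange with each party.

```python
import secrets

# Toy Diffie-Hellman over a small prime field. These parameters are far too
# weak for real use (real deployments use e.g. a 2048-bit MODP group or
# elliptic curves); they only illustrate the mechanics.
P = 2**127 - 1   # a Mersenne prime
G = 3

def keypair():
    priv = secrets.randbelow(P - 3) + 2          # private exponent
    return priv, pow(G, priv, P)                 # (private, public)

# Alice and Bob each send only their public value over the open channel.
a_priv, a_pub = keypair()
b_priv, b_pub = keypair()

# Each side combines its own private key with the other's public value...
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b   # ...and both arrive at the same secret.

# The catch: nothing above proves WHO sent each public value. A MITM can run
# one exchange with Alice and another with Bob and relay traffic between
# them -- which is the unsolved "first contact" problem described here.
```

An eavesdropper who sees only the public values cannot recover the secret (assuming discrete logarithms are hard), but authenticating the endpoints still requires a prior secret or a trusted third party.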

Aside from the trust issue of a third party, there is the major hurdle of it having to be “on-line” to be usable when required.

Thus, despite the assumptions behind PubKey systems, they are currently the only way to get the likes of secure channels on the Internet, where all channels are open to observation…

We now know that there is no such thing as a trusted third party, even with life-or-death leverage over them and their loved ones. We can also see that there is an increasing probability that existing PubKey systems will have their assumptions kicked out from under them within a fairly short time period (or at least that is what “many in the know” believe).

We have seen quite a few “one way functions with trap doors” fail for various reasons, one of the original “Knapsack algorithms” being one that many remember[1]. However, other Knapsack problems may be among the few post-quantum-computer algorithms left to do two-party secret sharing in an open channel…

However, the fundamental problem of whether secure PubKey systems provably exist leaves us in a “Red Queen’s Race”, with the added temporal problem of “store it all”.

The only known two-party secure system is actually not the One Time Pad but the One Time Code/Phrase system. The reason is that its meaning is totally opaque: if used correctly it is indistinguishable from ordinary plaintext. Even if the phrase is suspected of being a code, that is no more than a third party’s assumption, because it has no relationship, by length or content, to the code phrase’s real meaning, if there actually is one. Which may or may not become apparent with time, which, as it is only used once, has no predictive value[2].


[2] However, that will not stop others trying to argue back from effect to cause. We see this with the long list of debunked forensic “proofs” and false claims made by prosecutors. Even if one party claims that a phrase in a conversation was a code, there is no substantive proof of this, thus you have to ask if there is a plea deal going on, which once would have rendered any such testimony inadmissible if not actual perjury.

65535 December 10, 2017 4:27 AM

@ Clive R.

I don’t disagree with your post in general. The real question is how to fix the problem in a viable way. Can you suggest any solutions?

Clive Robinson December 10, 2017 12:15 PM

@ 65535,

I don’t disagree with your post in general.

Yes, I understand that; however, we are chatting where many others listen, thus framing the problem helps everyone get on the same hymn sheet.

The real question is how to fix the problem in a viable way. Can you suggest any solutions?

I have a number of thoughts on the problem, but they all rub up against one or more of:

1, Centralized Authority.
2, On-line only use.
3, Third party involvement.

These are issues when a system is designed to be used in an open –to observation– environment.

Our host @Bruce once observed that we had secure crypto algorithms enough for our needs for the near future. What we did not have, and badly needed, were key management systems.

In that respect I don’t think things have changed in getting on for a quarter of a century. In which time attacks on PubKey have, as again predicted by our host, just got better.

My own viewpoint, based on what history has shown us since the original DES competition, is that no matter how well designed and tested, crypto algorithms of most kinds have at most a third of a century of lifetime; thus after at most a quarter of a century we really need to start over again…

Whilst this might be OK for personal computing, where equipment has at most a three-year lifetime (in the UK the Inland Revenue allowed the depreciation-to-zero period to be shortened to 18 months for ICT kit, from the more normal 3 years for office-grade equipment and 7 years for industrial), it’s not OK for industrial control and infrastructure equipment and medical implants, where the expected minimum usable lifetime is a quarter to half a century…

Which implies that such devices will have to be upgraded/patched, the cost of which, going from unit to unit, would be prohibitively expensive. However, the alternative of “soft embedded” code in Flash ROM etc. is considered insecure, not just by the likes of the energy industry from the fraud point of view, but also by the security community from the privacy aspect…

Which, to cut a long story shorter, currently means we do not know how to solve a very real problem we are hard-building into our lives, oh, and bodies…

Which is why I said,

    If you listen carefully, all you will hear is the sound of tumbleweed blowing through an arid desert of no describable features…

65535 December 10, 2017 5:25 PM

@ Clive R.

Are we tilting at windmills?

I do see your point about the fairly rapid life cycle of computer equipment. In the States it is about the same: a semi-accelerated depreciation of five years for computer equipment and a three-year semi-accelerated depreciation for software [say a Windows OS – other small programs can be expensed].

Yes, the IT industry moves fast, but other industries also move fast, or jerkily. It will take reforms in three main parts: the technical side, the political side, and the educational side [the last is the most important].

Here are my suggestions:

1] Any consumer equipment with a hard-coded password should be clearly labeled so the consumer knows the risks, e.g., IoT devices.

2] Any programs that strip SSL but fail to mention that fact should be reformed, e.g., the AV industry that does SSL stripping without telling the customer.

3] Any hidden back doors in smart phones should be clearly labeled for consumer safety.

4] Any fraudulent forging of certificates by any entity should be prosecuted as fraud at a minimum.

5] The full takes of consumer data from internet backbones or cell towers, including cell tower ‘metadata’, should only be done with a judge-issued warrant.

I will stop at five goals for now.

I think the problem of safe communication with digital equipment can be slowly fixed if each set of problems is handled in a concise and sound method. Sure, the SS7 system is full of holes. Sure, side-channel attacks are a huge risk for consumers, and so on. But each can probably be fixed if done in an orderly fashion.

Giving up is not the answer.

Tsohlacol December 10, 2017 9:53 PM


1.2 Our contribution
This paper presents a black-box method to detect apps (or devices in general) that, when using TLS, pin to a root or intermediate certificate but do not check the hostname of the host they connect to.

Um… I’m pretty darn sure that that’s not pinning. I’m pretty sure pinning is removing everything from your truststore and then inserting the leaf certificate – not a root, and not an intermediate cert.

That means that the research is really not some special way to undermine pinning, but rather your typical hostname verification issue. A good reminder but nothing really special.

Wael December 10, 2017 10:15 PM

@ Tsohlacol,

… I’m pretty sure pinning is removing everything from your truststore and then inserting the *leaf* certificate – not a root, and not an intermediate cert.

In pinning, you don’t remove any certs. You hardcode a cert fingerprint (or public key) in the application to say: only use this trust chain path — nothing else.

You can pin a root, an intermediate, or an end leaf certificate. Pinning the leaf is the most “secure”, but has challenges, for example when SAN certs are used. The next best to pin is the intermediate. Pinning the root also provides security so long as you trust the root not to issue rogue intermediates. Often the root and the intermediate will come from the same CA.

Actually, pinning was meant to protect against compromised CAs, but it can also be used to protect against rogue certs that give a path into the trust chain. If a rogue cert is installed, then a MiTM attack can be mounted where TLS sessions can be “decrypted by the rogue cert”. One would need something like SSLsplit and would then be able to extract credentials such as usernames/passwords. There are other methods, too.
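
The mechanics described above map onto a fairly small amount of code. Here is a hedged Python sketch; the pin value is illustrative, and for simplicity it hashes the whole certificate, whereas real implementations such as OkHttp’s CertificatePinner pin the SHA-256 of the SubjectPublicKeyInfo so the pin survives certificate renewal under the same key.

```python
import base64
import hashlib
import socket
import ssl

# Illustrative pin set -- in a real app these would be the base64 SHA-256
# hashes the developer hardcodes at build time. (This particular value is
# just the hash of the empty string, used as a placeholder.)
PINNED_FINGERPRINTS = {"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU="}

def fingerprint(der_cert: bytes) -> str:
    # Hash of the full DER certificate; production pinning usually hashes
    # just the SubjectPublicKeyInfo.
    return base64.b64encode(hashlib.sha256(der_cert).digest()).decode()

def connect_pinned(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()   # chain AND hostname checks stay ON
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            # The pin is an EXTRA check on top of normal verification,
            # never a replacement for it -- which is the point the paper's
            # vulnerable apps missed.
            if fingerprint(der) not in PINNED_FINGERPRINTS:
                raise ssl.SSLCertVerificationError(
                    f"certificate pin mismatch for {host}")
            return tls.version()
```

Note that `ssl.create_default_context()` keeps hostname verification enabled; the vulnerable apps in the paper effectively disabled that step while keeping only the pin comparison.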

Either way (pin or not,) you need payload encryption on top of TLS in some situations.

I don’t want to read the full paper… looks too much like work 😉

Clive Robinson December 10, 2017 10:54 PM

@ 65535,

I think the problem of safe communication with digital equipment can be slowly fixed if each set of problems is handled in concise and sound method.

We know the legislative way is most certainly not going to work. There are no ifs, buts, or maybes about that. They have already lied, cheated, and passed secret laws when challenged. Prior to that there used to be reciprocal arrangements. The US SigInt agency would not spy on US citizens; it would be the UK SigInt agency personnel who would do that, then hand it over to the US SigInt agency. Likewise, the UK SigInt agency would not spy on UK citizens; the US SigInt agency would. That way politicians could be told “we do not spy on our citizens”, and they in turn would tell other politicians/the public that.

The only way to stop it is by technical means that cannot be bypassed technically. Thus they get two choices: either stop, or make encryption illegal.

That is the all-or-nothing option; as long as they can avoid being forced into that position they will carry on lying & spying to/on the citizens.

If they are forced into that situation, the citizens can fight back, as there is a clear target to shoot at. But the citizens cannot fight back against deniable secret legislation and interpretations, because they would be shooting at clouds in the dark.

Thus it falls to the technologists to develop algorithms and methods that can not be broken by the SigInt agencies.

The reality is that the “first contact” step needs to be properly solved… That is, to establish a secure channel to exchange secrets, without first having to exchange secrets to establish a secure channel.

Until we can do that in a robust and provable way, we are stuck in the “Red Queen’s race”.

If we solve that, then the unfair advantage of the “silent attack”, which the founding fathers did not realise was possible, remains, and with it the Cardinal Richelieu issue remains. Because I really do not think the WASP nations are prepared to go the Thomas Jefferson way, with the “The tree of liberty must be refreshed” “with the blood of patriots and tyrants” option. Throwing off one tyrant –the English King– and another spilling of patriots’ blood –the American Civil War– should be more than enough to forge one nation… But we all appear to have forgotten the price of keeping the liberty of freedom, which is “Eternal Vigilance”. We “took our eye off the ball” and the tyrants installed themselves like the parasites they are, with “Fear, Uncertainty and Deception”.

As any pet owner or farmer will tell you, “getting infested with parasites is easy”, even when you are vigilant, but also “getting rid of parasites once established is hard, sometimes very hard”. In the UK a while ago there was an outbreak of foot-and-mouth disease in cattle; the solution was a “scorched earth” policy and pyres of carcasses burnt across the land… Sometimes the option is not “kill OR cure” but “kill TO cure”…

If we cannot come up with a technical “two party only” solution to secure secret exchange, then culling the parasite “third parties” may be the only choice left. Defunding the IC and SigInt agencies, and adding strong in-depth oversight, imperfect as it is, and mandatory criminal sanctions, may be the route we are forced to take, but the IC and SigInt agencies will not go quietly or quickly, or even at all. Thus a state of de facto suppressed war will exist, much as it does in prisons, unless we find the technical solution.

If you analyze what the experts say, they are in effect saying the same thing, and have been for at least a quarter of a century. They have not been blatant about it, as the general public are still not ready to accept the consequences of “knowing”.

Whilst “ignorance is bliss”, “what you don’t know will kill you through incaution”. An example of this is the twelve-foot rip-saw blade in a lumber mill or wood yard. Whilst you can see the solid part of the blade as it spins, the three inches of teeth you only see when the blade is not spinning. But in that rest state they are generally not harmful… Just because you cannot see the teeth when they spin does not mean they have gone away; the ignorance of acting that way will cost you a finger or two if you are lucky… It will be quick, it will be noisy, but you will probably survive. Unlike the “silent attack” of the SigInt agencies, which leaves you a dead man walking, as with terminal cancer, with no defence possible.

Until the majority of citizens not only really understand that, but are prepared to come to terms with what needs to be done, we will be cursed by the IC and SigInt agencies and whoever they choose to silently pass the information on to…

Thankfully a technical solution means that the citizens don’t have to come to terms with it, unless the IC and SigInt agencies come into plain sight to fight, which most definately be a battle ground of their choosing.

Anura December 10, 2017 10:58 PM

@Wael, Tsohlacol

I agree with Tsohlacol – this really has nothing to do with pinning; it’s just failing to check the hostname on the leaf certificate. Pinning itself is irrelevant; the apps just have trusted root or intermediate certificates in their store but are not checking the hostname – the proper term is trusted, not pinned. Pinning is about associating a specific identity with a specific certificate, and then rejecting any other certificate they might send, even if it is signed by a trusted root authority.

However, this isn’t really about that. The part where they actually talk about pinning is just introducing what their tool itself does, which is to detect when an app accepts a certificate with a different hostname. I stopped reading at that point.

Wael December 10, 2017 11:12 PM

@Anura, @Tsohlacol,

I intended to read it, but my eyes hurt, so I didn’t read the full paper. My comment was specific to what I quoted. Whether the paper has to do with pinning or not… I’ll take your (both of your) word for it.

Clive Robinson December 11, 2017 2:58 AM


In my above of,

    unless the IC and SigInt agencies come into plain sight to fight, which most definately be a battle ground of their choosing.

A couple of words got left out.

    unless the IC and SigInt agencies come into plain sight to fight, which most definately would not be a battle ground of their choosing.

Peter Panholzer December 12, 2017 4:27 AM

Although it is a great tool, the title and abstract of this paper are highly misleading and a cry for attention for something that has been established many times before. When asking security researchers what is pinned when using the term certificate pinning, almost everybody would state that it is the certificate or public key of the host, not of the CA. And in that case the attacks from the paper do not work: in most cases you don’t need to check the hostname when the host certificate is pinned.

If you pin the CA’s certificate, of course you have to check the hostname. But we do not call that certificate pinning, we call that putting the CA’s certificate in the trust store.

Please remove the term certificate pinning from the title and the abstract and save people’s time.
