Forged SSL Certificates Pervasive on the Internet

About 0.2% of all SSL certificates are forged. This is the first time I’ve ever seen a number based on real data. News article:

Of 3.45 million real-world connections made to Facebook servers using the transport layer security (TLS) or secure sockets layer protocols, 6,845, or about 0.2 percent of them, were established using forged certificates.

Actual paper.

EDITED TO ADD (6/13): I’m mis-characterizing the study. The study really says that 0.2% of HTTPS traffic to Facebook is intercepted and re-signed, and the vast majority of that interception and re-signing happens either on the user’s local computer (by way of trusted security software acting as a scanning proxy) or locally on a private network behind a corporation’s intercepting proxy/firewall. Only a small percentage of intercepted traffic is a result of malware or other nefarious activity.

Posted on May 16, 2014 at 6:43 AM • 47 Comments


Carlos May 16, 2014 7:17 AM

Hmm, I’m missing something.
HTTPS requires a server certificate, not a client one, right?
So if this is Facebook data, does it mean 0.2% of Facebook servers are forged?
And if so, how do we collect such data? I might turn red-faced when I hear the answer, but still…

Tom May 16, 2014 7:24 AM

No, it means that 0.2% of connections are intercepted by a man-in-the-middle attack which presents a certificate claiming to be for Facebook but which isn’t owned by Facebook. It’s detected by including some JavaScript in the page that independently checks the certificate the client has received for the server and reports back if this doesn’t match what Facebook thinks it should be.

My guess is that a very large fraction of these will be employers who MITM their employees to allow them to monitor their HTTPS browsing sessions.
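The detection idea, independently checking the certificate the client actually received and comparing it against what the site operator expects, can be sketched roughly. This Python sketch is illustrative only (the study’s real check ran inside the browser, not as a separate client), and `EXPECTED_FINGERPRINT` and the helper names are invented:

```python
import hashlib
import socket
import ssl

# Hypothetical pinned value: the hex SHA-256 of the DER certificate the
# site operator expects clients to see (placeholder, not a real value).
EXPECTED_FINGERPRINT = "0" * 64

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def observed_fingerprint(host: str, port: int = 443) -> str:
    """Fetch the certificate actually presented to *this* client."""
    ctx = ssl.create_default_context()
    # Verification is disabled on purpose: we want to observe even a
    # forged certificate rather than abort the handshake.
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return cert_fingerprint(tls.getpeercert(binary_form=True))

def looks_intercepted(host: str, expected: str = EXPECTED_FINGERPRINT) -> bool:
    """True if the observed certificate differs from the expected one."""
    return observed_fingerprint(host) != expected
```

A mismatch doesn’t say *who* is intercepting, only that the certificate the client saw is not the one the server actually serves.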

M@ May 16, 2014 7:25 AM

I’m with Carlos. The headline makes sense; the Facebook reference does not. Other things in the paper are about 10 degrees off, from a conceptual standpoint. Bad paper.

mike~acker May 16, 2014 7:43 AM

SSL, TLS, and X.509 certificates are based on using public key encryption to verify the credentials of a remote contact, such as when you connect to a web site or receive an e-mail.

For public key encryption to work, your computer needs a copy of the public key for the party whose credentials are to be checked.

Browsers get these public keys in the form of X.509 certificates. These certificates are loaded into your browser automatically by the browser software, and TRUST is also established automatically for you, using “Certificate Authorities”.

“Certificate Authorities” are companies set up specifically for this purpose.

Check out who you have been told to trust:



I can’t help you with IE; I use only Linux.

What should be done: you should get PGP, either GnuPG (free) or PGP Desktop purchased from Symantec. Generate your own key. Then vet and verify just those certificates you need to trust: your credit union, online shopping, the IRS….
Countersign just those selected certificates using your PGP key, and assign full trust, if you like.

but certificates loaded automatically, the way they are now, are just a hot mess, as we are starting to find out.

Andrew May 16, 2014 8:06 AM

You could sign X509 with PGP? Anyone pulled this craziness off?

’Cause in my mind PGP is the maze-like, decentralized public key system and X.509 is the heavy, centralized (kinda) monstrosity that secures the internets (if you believe in fairy tales, that is). And these two worlds don’t really talk to each other.

An interesting experiment is to install a certificate-pinning extension (e.g. Certificate Patrol for Firefox) and see how web site certificates change. Spoiler alert: it’s overwhelming. Big companies with multiple data centers around the globe will often serve different certificates from different centers, which can trigger certificate-change alerts as often as every visit. And then when you get annoyed and go searching for answers, a Google FAQ page will kindly explain to you that a) you’re doing it wrong and b) you’re an idiot.
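The alert fatigue described above is easy to reproduce with a toy trust-on-first-use pin store in the spirit of Certificate Patrol. All names here are invented for illustration:

```python
import json
import pathlib

def check_pin(store_path: str, host: str, fingerprint: str) -> str:
    """Record the certificate fingerprint seen for a host and classify it."""
    path = pathlib.Path(store_path)
    pins = json.loads(path.read_text()) if path.exists() else {}
    seen = set(pins.get(host, []))
    if not seen:
        status = "first-use"   # nothing pinned yet: remember this cert
    elif fingerprint in seen:
        status = "match"       # a certificate we have seen before
    else:
        status = "changed"     # rotation, a different data center... or a MITM
    seen.add(fingerprint)
    pins[host] = sorted(seen)
    path.write_text(json.dumps(pins))
    return status
```

A site served from multiple data centers keeps raising "changed" until every certificate it rotates through has been seen once, which is exactly the overwhelming experience described above.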

So in regard to the above experience, I wonder how your plan to sign google’s certificates would work…

Clive Robinson May 16, 2014 8:10 AM

It’s not the number or percentage of bogus certificates that counts but where they are on the network.

If your employer uses one, then it’s probably covered by the usage policy, and very probably not just legal but in some places effectively a requirement under business legislation designed to stop insider trading etc.

If it’s the government of the country you are in, it’s de facto legal.

BUT if, as in the case of the US Government deliberately manipulating routing such that communications between two parties are forced into a US Gov system, then it may well not be legal, and this is a point the EU is slowly raising.

Thus just one certificate on a MITM server in the US could conceivably catch anything up to 100% of SSL/TLS communications between two European countries… Such tricks have been alleged on the link between Spain and Cuba, and there appears to be evidence of this.

uh, Mike May 16, 2014 8:27 AM

Ethical question, if you’re connected to F using a forged cert, and F knows it, shouldn’t F tell you to watch out?

Not that I know much about F, I suppose F has a bunch of legalese, but still….

Greg Slepak May 16, 2014 9:11 AM

Also, the authors weren’t completely clear on whether they explicitly considered the use of old, revoked certificates for MITM.

x11739 May 16, 2014 9:23 AM

My guess is that a very large fraction of these will be employers who MITM their employees to allow them to monitor their HTTPS browsing sessions.

I think this hits the nail directly on the head. I’d also imagine that, in terms of the use of forged certificates, this approach would bias in favor of employer portal MITMs, rather than the ones people would be particularly concerned about like phishing and trojans. Probably wouldn’t be amazingly hard to trace these back and account for at least some of the “semi-legitimate” MITMs.

z May 16, 2014 9:27 AM

The whole CA system is a nightmare for today’s internet. Maybe it would work in a small internet with only a handful of sites that use SSL, but it still wouldn’t be that good.

It’s worse than a single point of failure. It’s multiple single points of failure.

Any CA that gets hacked is a trust catastrophe.

Any malicious use of the signing cert, whether required/coerced by the CA’s home government or voluntarily, is too.

Any government abusing its ability to act as a CA, especially when combined with privileged network positions and interception of all data, is even worse.

And as for a forged non-valid cert, the only thing preventing the system from failing is a popup window in the browser forcing my grandmother to make a security decision about a problem that she cannot hope to understand. She, like everyone else, is told that there might be a problem (but maybe not) and to click here to ignore it. What do you think will happen?

JD May 16, 2014 10:19 AM

While the number was initially surprising, the actual number of certificates in use by suspected malicious threat actors was quite low. Since the authors didn’t state how they were able to get their detection code onto Facebook (I assume it was through a Facebook game or something similar), this would necessarily skew the results. This information is definitely valuable, but some information was left out.

Marcos Dumay May 16, 2014 10:25 AM

@Greg Slepak
From your comment I get that 0.2% is probably a lower bound. Sites that are not part of PRISM may see higher rates.

Anyway, yes, much of that is probably caused by employers spying on their employees… But I just can’t stop thinking about all those times some random country in the middle of nowhere pushes a wrong routing table onto the net and gets to carry most of the world’s traffic for a while. There are plenty of CAs that could go rogue, and nobody would notice.

Benni May 16, 2014 11:08 AM

@Greg Slepak:

No, the NSA uses man-in-the-middle attacks very often, and specifically against Facebook. PRISM is just one way for them to get data.

NSA even puts up faked Facebook servers:

or impersonates Google:

All done in MITM attacks on SSL-encrypted traffic. A faked Facebook or LinkedIn server is then used to inject malware onto a target’s computer with a technique called QUANTUMINSERT:

James A May 16, 2014 11:12 AM

0.2% is very much a lower bound, since it doesn’t include firewalls that MITM and block port 843 (required for Flash raw sockets), e.g. school and corporate networks.

Carlos May 16, 2014 11:25 AM

Ok, after reading the paper I get it 🙂
The topic triggers some warning signs in me, mostly because users don’t fully (fully?) understand what it means to have SSL/TLS “secure” their connection.

Cisco’s ASA CX does this same thing, creating a surrogate certificate on the fly, BTW.

And that goes against what I guess is the user’s perception of what the lock sign means: I’m talking to the right server, confidentially, no MITM.

The real service provided is “you are talking to a referred party”, and you trust the introducer, somehow. AFAIK, this is bending the trust. It might break.

Oh well, not red faced after all.

Developer May 16, 2014 11:41 AM

I was just doing some testing of how my Facebook social sharing works within my iPhone app. To do this I wanted to look at all the connections made outbound from my app to Facebook. This is over SSL. So I set up a proxy on my laptop (“Charles Proxy”, which has a nice UI) with a fake cert and accepted it as such on the phone. I made my requests via the laptop’s proxy, debugged the situation, and was pretty pleased.

I’m sure I don’t account for 0.2% of traffic by myself 😉 But there are plenty of other developers needing to do similar things. Could this all be about nothing?

Greg Slepak May 16, 2014 11:57 AM

@Benni, perhaps instead of “doesn’t apply” I should have said something like “isn’t necessary”.

You’re right, the Intelligence Community (IC) probably does do MITM (sometimes), when they feel it’s necessary, but it doesn’t seem like they use it to do bulk-surveillance on Facebook or Google (usually). For that, they seem to rely on data requests as per PRISM, or perhaps direct extraction by hacking the company servers.

Note that the first link you gave about “faked facebook servers” is not what this study is talking about (MITM), but “man on the side” and their QUANTUMINSERT thingy, which again, isn’t MITM.

The 2nd link about Google though is genuine MITM.

Eric Lawrence May 16, 2014 12:28 PM

This is a horrifically misleading summary of the paper.

What the paper actually says is: 0.2% of HTTPS traffic to Facebook is intercepted and re-signed, and the vast majority of that interception and re-signing happens either on the user’s local computer (by way of trusted security software acting as a scanning proxy) or locally on a private network behind a corporation’s intercepting proxy/firewall. Only a small percentage of intercepted traffic is a result of malware or other nefarious activity.

Garrett Kajmowicz May 16, 2014 1:56 PM

I decided to have a little fun with this.

I decided to verify the certificate of a website through an independent channel. I decided to verify the SSL certificate of my bank.

I called my bank and was eventually connected to the online banking department. There was a deep need to verify my account information before providing me any assistance. I found it amusing that in order for me to perform a security verification I was required to provide a substantial amount of personally-identifying information, risking greater disclosure. I initially refused, as I was merely attempting to verify information which was publicly distributed (no logging in required).

The person on the end of the line was polite, but didn’t have a clue about what I was talking about. “SSL Server Certificate fingerprint? What is that?” She was very kind and patient, and made a few enquiries, but wasn’t able to get anywhere. I asked to speak to the IT staff directly. I was told I couldn’t without authenticating that I had an account with them. At this point I was 20 minutes into the call and figured I might as well go for it, so I did.

A bunch of information provided later, I was connected to level-2 support. A wonderfully nice man who sounded really too pleased to be talking to me. When I asked about the SSL Server Certificate fingerprint and signature, I was asked “what is that?” I provided a brief overview of what SSL was and how the trust tree worked (he’d never heard of Verisign), and I was passed up to level-3 support.

The level-3 support person at least seemed to have a clue about what I was trying to do, though had little idea on how to do it. I was put on hold while she spoke with a few people. Ultimately, I was told that there was no way to know which server I was connecting to and so no way to verify the certificate. However, I was told that as long as I was using https and had the little lock in the corner, everything would be secure.

I requested that a note be sent to a network engineer to get back to me with this information. That was agreeable, and I was told to expect a call back in the next few days. We’ll see how that works out.

Total time on the call: 47 minutes and 41 seconds.

Anura May 16, 2014 3:02 PM


The big problem is that a CA is an authority over certificates, not an authority over what they are issuing the certificates for. If you want to have a certificate that says someone is the owner of a given domain name, the authority needs to be an authority for the domain, not a third party. This is why CAs should be replaced with DNSSEC.
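Assuming DNSSEC-validated TLSA records are available, a DANE check replaces the CA’s signature with an association published in DNS (RFC 6698 defines matching types 0, 1, and 2). A minimal sketch of the comparison step, with the record bytes assumed to have been fetched and DNSSEC-validated elsewhere:

```python
import hashlib

# DANE-style check (RFC 6698): compare a server certificate against a TLSA
# "certificate association". This sketch assumes the TLSA record bytes were
# already fetched over DNS and DNSSEC-validated out of band.
def tlsa_matches(cert_der: bytes, matching_type: int, association: bytes) -> bool:
    """Matching type 0 = full cert, 1 = SHA-256, 2 = SHA-512 (RFC 6698)."""
    if matching_type == 0:
        return cert_der == association
    if matching_type == 1:
        return hashlib.sha256(cert_der).digest() == association
    if matching_type == 2:
        return hashlib.sha512(cert_der).digest() == association
    raise ValueError("unknown TLSA matching type")
```

The point of the design is exactly Anura’s: the entity asserting “this certificate belongs to this domain” is the domain’s own DNS hierarchy, not an unrelated third party.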

Benni May 16, 2014 3:30 PM

Regarding Bulk surveillance, here is a new justification for that.

News from the german secret service:

Apparently, the German secret service BND needs some justification for its tapping of international fibers.

As in the case of the NSA, preventing some imaginary threat suits the BND as justification for its access to tapped fibers of “partnered services” and its own fiber tapping.

The BND now plans to spend 300 million euros to create a program that should “actively prevent” industrial espionage.

By analyzing the data flowing in fibers on foreign ground, the BND figures, one could capture and delete malware targeting some German company before it arrives at the target.

The problem is just that this program will only work if you can search through every data packet that enters Germany at lightning speed.

The fact that the BND only needs 300 million to make something like this work suggests that the BND already has in place most of the hardware and software necessary for this.

So one has to assume that if a user contacts a foreign server, there is someone at the BND seeing this traffic, or someone at the NSA seeing this traffic and sharing it all with Germany.

In the former case, it would mean that we have some kind of mini-NSA in Germany, and in the latter case, the planned “active protection” system would be useless against malware from the NSA, since they certainly would not warn German services when they deploy their bugs on German targets.

But what is more dangerous here is the line of argumentation.

With this, the BND will certainly intercept some malware directed at German companies. It can then go to the press and rally much industrial support with the following argument:

Bulk surveillance is needed, because only if we sniff and analyze each data packet sent from and to Germany are we able to delete malware directed at German companies.

So if we ended our bulk collection, our important companies would be unprotected and helpless against these dangerous foreign agents from China and Russia….

mike~acker May 16, 2014 4:45 PM

=”Anyone pulled this craziness off?”

an argument of ridicule.

Perhaps security should be under the individual user’s control, not issued by the commercial system.

notice the use of the subjunctive.

an X.509 certificate does contain a public key signature, and does get validated by a “certificate authority”.

my point is: that’s not enough. I think the CA’s signature on the certificate would be better taken as marginal trust; full trust would then require the user’s approval. just because there’s a program missing doesn’t mean this should not be done. the first step in this is to get people thinking. I notice a lot of that going on in this thread, and that’s a Good Thing.

Hallie May 16, 2014 11:22 PM

You could sign X509 with PGP? Anyone pulled this craziness off?

Andrew, that sounds a lot like Monkeysphere:

“When you direct the browser to an https site […], if the certificate presented by the site does not pass the default browser validation [… the] agent then checks the public keyservers for keys with UIDs matching the site url [….]. If there is a trust path to that key, according to your own OpenPGP trust designations, the certificate is considered valid”
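The quoted flow can be modeled as a toy. Names and structures below are invented, not the real Monkeysphere API, and real OpenPGP trust paths may be several hops long, where this sketch accepts only directly trusted signers:

```python
# Toy model of the validation flow quoted above (all names invented).
def validate_https_cert(cert_passes_browser, keyserver_keys, site_url, trusted_signers):
    """Fall back to OpenPGP vouching when normal CA validation fails."""
    if cert_passes_browser:            # default browser validation succeeded
        return True
    # Look on the keyservers for keys whose UID matches the site URL...
    matching = [k for k in keyserver_keys if k["uid"] == site_url]
    # ...and accept the cert if any of them is signed by a key we trust.
    return any(s in trusted_signers
               for k in matching
               for s in k["signed_by"])
```

The interesting property is that the fallback trust decision is made against *your own* OpenPGP trust designations, not a browser-shipped CA list.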

Andrew May 17, 2014 1:14 AM

@Hallie: Hey that monkey-thing looks interesting. I think you just found me a new toy to play with. Thanks!

mike~acker May 17, 2014 7:28 AM

@Hallie: fascinating post!!

=”When you direct the browser to an https site […], if the certificate presented by the site does not pass the default browser validation [… the] agent then checks the public keyservers for keys with UIDs matching the site url [….]. If there is a trust path to that key, according to your own OpenPGP trust designations, the certificate is considered valid”

I am going to study this further. it may be very close to what i have in mind, the idea being to (1) redesignate the trust level of the general Certificate Authorities, as delivered by the browser software OEM, to marginal: not suitable for business use (anything involving money, e.g. credit union, investments, IRS, online shopping, etc.); (2) provide a means by which certificates needed for business use (as above) would be vetted and signed by the user. the separate verification of the certificate would be done by phone, US Mail, or a visit to the credit union or such.

if you think about this… how many certificates do you need to vet and sign in this manner? half a dozen, maybe?

doing this with browser plug-ins would give it an easy start by making it available to “those desiring” it.
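The by-phone verification step could be as simple as comparing a hashed fingerprint read out in short groups. A hypothetical helper (the function name and grouping are invented for illustration):

```python
import hashlib

# Hypothetical helper: format a certificate's SHA-256 fingerprint in short
# groups so it can be read out over the phone or printed on a statement.
def spoken_fingerprint(cert_der: bytes) -> str:
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))
```

Sixteen four-character groups is tedious but feasible to read aloud once, which is roughly the effort level the "half dozen certificates" scheme above assumes.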

Clive Robinson May 17, 2014 9:36 AM


What you are looking for is a secondary, out-of-band authentication channel, which neither SSL/TLS nor similar solutions provide.

The reason for this is that SSL was designed for what are in effect “one-off connections”, where the cost of such secondary channels would be prohibitively high and wasteful in terms of time and effort.

However, that was way back when e-commerce was at best a novelty; things have changed, and e-crime is way more sophisticated than would have been envisioned back then.

As others have noted, certificates don’t actually mean much, other than that money has been paid to the CA, and certainly not that any effort has gone into checking the legitimacy of the information supplied by the CA customer. Further, a little investigation shows there is little or no checking of CAs by the browser organisations.

Thus the level of trust in a CA-signed cert is at best minimal, at worst known to be untrustworthy. For some reason browser developers don’t make it easy for end users to remove trust from the CAs whose root certs they’ve nominally accepted, and thus appear complicit in any unacceptable use of illegitimate certificates. Nor do they appear to want to make any secondary checking of certificates easy to perform…

So such secondary checking of certificates will mean a third-party add-in, the quality or reliability of which will probably not be high or verifiable, as users will expect it to be free etc. Even if it were, it is still going to become a target for attackers, who will find attacking it profitable.

But even if that does not happen, getting support for secondary channels for checking certs will in all probability receive little or no support from commercial organisations, who will, as is currently normal practice in many places, either not support it or use contractual terms to push any liability onto the customer and then pay only lip service to it.

Which suggests legislation is required; however, don’t expect that any time soon, or even at all, with arguments still occurring about what jurisdiction applies and how it can be enforced for the likes of PPI ownership.

The issue of trust for non-local entities is one that is still regarded as “research in the future” by many. Basically the current solution is for the communicating entities to meet personally and exchange credentials face to face. Whilst you might be able to walk into your local branch of your bank to obtain their certificate fingerprint, this is unlikely for any other type of online business. Further, we know from pre-Internet “postal fraud” that secondary channels have to be chosen and mitigated with care, otherwise they just become another attack vector.

Thus you have three choices: (1) cross your fingers and hope, (2) use a trust broker, (3) travel to the supplier’s premises.

In many respects CAs were supposed to be reliable trust brokers, but free-market economics and lack of regulation made a complete mockery of that notion in short order. So personally I will not be placing trust in brokers until suitable legislation is in place for me to externalise my liabilities through.

yesme May 17, 2014 12:45 PM

Moxie Marlinspike[1] gave a presentation about SSL at Black Hat in 2011. He describes the origin and the real problems of CAs (they are related).

“SSL And The Future Of Authenticity”[2]

Seriously, if you want to know all about CAs, watch this!

(although his answers to the problems are of course HIS opinion)

A blog about this subject[3]


Albert May 17, 2014 3:57 PM

0.2% is pervasive?

From Merriam-Webster: “existing in or spreading through every part of something “

z May 17, 2014 8:22 PM


Unfortunately, Convergence isn’t working for me anymore. I’ve been using Perspectives for years and it works great, but I do think Convergence is a better solution.

His point about Comodo is very important. Even if a CA is no longer trustworthy, and even if your browser can easily be configured not to trust it, can you really stop using it when it breaks 1/4 of the internet? CAs are too deeply woven into the internet to stop trusting them when they are untrustworthy.

mike~acker May 18, 2014 7:24 AM


thank you for a very careful reply. i find no argument with any of your remarks.

i visualize progress as starting slowly, most likely with an open source plug-in, hopefully for Firefox & Chromium.

the plug-in would be attacked of course, as is software generally. we have been moving slowly toward adopting the practice of using digital signatures for software distributions, a critical step, as the software must be protected before there can be any discussion of protecting data.

the practice has started in some limited areas. “Authenticode” comes to mind, and in the Linux world we have an MD5 hash we can verify before we burn an ISO image for a new OS. PayPal had done some work in using PGP to authenticate e-mails, working with the major e-mail services: Google, MSFT, Yahoo… AAPL and Linux use approved libraries for application code, another good idea.

My thought on accessing the second source of authentication was to start by working through credit unions, which are much more customer-friendly than commercial elements, and readily accessible to customers.

I see some movement on this issue, which I see as a very good sign that things may well improve, sooner rather than later. it’s not that people don’t recognize the problem at this point… here’s Krebs’ scorecard for the Target Store Heist

Observer May 20, 2014 2:02 AM

@ mike~acker

“in the Linux World we have an MD5 hash we can verify before we burn an ISO image for a new OS.”

1.) MD5?

Hasn’t MD5 been deprecated for /years/ already?

Here’s what the GnuPG FAQ has to say about MD5:
“For many years it was one of the standard algorithms of the field, but it has not aged well and is widely considered to be completely obsolete.”

Does anyone actually dispute this?

2.) Even the most robust cryptographic hash, /alone/, cannot do more than verify the completeness/integrity of a file– not the /authenticity/.

For that, one must somehow establish trust in the /source/ of the hash.
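A tiny worked example of that distinction, with short byte strings standing in for ISO images:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A mirror publishes an ISO together with its hash.
legit_iso = b"legitimate distro image"
published_hash = sha256_hex(legit_iso)

# An attacker with write access to the mirror swaps BOTH files.
trojan_iso = b"trojaned distro image"
published_hash = sha256_hex(trojan_iso)  # the hash file is swapped too

# The download still "verifies": the hash proved integrity, not authenticity.
download_verifies = sha256_hex(trojan_iso) == published_hash
```

The check passes against the trojan, which is why the hash itself must be authenticated, e.g. via a signature from a key you already trust.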

mike~acker May 20, 2014 7:19 AM

= ” Hasn’t MD5 been deprecated for /years/ already? ”


but just because it could be cracked doesn’t mean it’s useless: it makes the process of posting a corrupted distro much more costly, and — helps to further the practice of protecting software by offering authentication

the software must be protected first, before there can be any discussion of protecting data.

Anura May 20, 2014 8:21 AM


It’s a lot harder to get an EV certificate issued by a legitimate 3rd party, but it doesn’t protect your website from being spoofed, as you don’t need an EV certificate to do so. If you are worried about government agencies, then it solves nothing as the concern is that they have recovered private keys for intermediate and root certificates and can spoof any certificate.

tjallen May 20, 2014 1:23 PM

A personal story: I bought an SSL certificate from a major hosting company to build an https-everywhere website. The Economy SSL certificate required that I move the site to a “dedicated domain.” Okay, paid for that, and they put the certificate in place. Then I noted that clicking on the lock in the browser address bar did not identify my company, but instead named the hosting company. Worse, the pop-up box said the site was run by “unknown” – ugh! – why would anyone trust that?

So I called the hosting company to ask why my company is not identified when I click the lock, and they explained that the less expensive certificate only guarantees that I am connected to the hosting company’s servers. However, if I purchased the more expensive Premium certificate, then someone would investigate my company, get a Lexis-Nexis report, a BBB report, and more, and then clicking on the browser lock would name my company. It would not be any more secure; it would just look better. This was a learning experience.

Back to lurking,

Mr. Pragma May 20, 2014 3:02 PM


I wouldn’t consider as authoritative anything about certificates, TLS, etc. coming from someone who states “The public key infrastructure is a beautiful, cleverly designed and highly scalable system. It’s one of the few things we got right as the Internet was being born.”

One ugly problem with EV certs, which per se might be nice (if only for psychological reasons), is that the NSA & Co. may still pervert and abuse the “trust” structure, and your client may still be fooled in diverse ways.

But then, in effect those EV certs are (and quite probably are meant as) what Bruce Schneier calls “security theater”, but as such they seem to enjoy much trust with Joe and Jane Smith.

Observer May 27, 2014 9:48 AM

@ mike~acker:
“just because it [MD5] could be cracked doesn’t mean it’s useless: it makes the process of posting a corrupted distro much more costly, and — helps to further the practice of protecting software by offering authentication”

Let’s say an attacker has gained sufficient access and privileges to a server to allow him to swap a legitimate file (in this case, an ISO for an OS) for a trojan. At that point, what would stop said attacker from simply swapping the corresponding hash file as well, for one containing the hash of the /rogue/ ISO? Absolutely trivial, no? Same for an attacker who sets-up a rogue mirror.

Thus, I do not see how a hash alone could make the process of posting a corrupted distro “much more costly”, as you asserted (or even just more costly at all).

This is not unique to MD5 either but is true, as I wrote in my initial post, for even the most robust cryptographic hash; it /alone/ cannot /authenticate/ a file.

It sounds to me as if you are confusing or conflating simply verifying file integrity*, on one hand, with /authentication/, on the other. This seems to be a common error that many people make. To illustrate, say I download fileXYZ from siteXYZ. I then generate a hash for the copy of fileXYZ that I have just downloaded. If the hash I generate matches what siteXYZ tells me is the correct hash for fileXYZ, then I can be reasonably certain that the file I downloaded is complete, i.e., identical to the one on the server. But, as I demonstrated in my initial paragraph, I would /not/ know whether the file on the server, the one that I had downloaded, was actually the file it was presenting itself as and not a trojan masquerading as the legitimate file.

In order for a hash to tell me /that/, I would have to first authenticate the /hash itself/. This is why distros such as Debian and the *buntus make available /GPG-signed/ files containing the hashes for a given release. (Tails, on the other hand, signs the ISO file itself. Tor Browser Bundle does both: signs the files themselves /and/ makes available signed files with the hashes.)

(*’integrity’, obviously, in only a very specific, /technical/, limited sense of the word here, as the whole point is that this alone does not and cannot verify the /broader/, /overall/ ‘integrity’ that is at issue here)

Inquirer a.k.a. Observer May 27, 2014 12:33 PM

@ Anura,
“It’s a lot harder to get an EV [extended validation] certificate issued by a legitimate 3rd party, but it doesn’t protect your website from being spoofed, as you don’t need an EV certificate to do so.”

What the source I quoted from claims, specifically (a little farther down on the page), is that (in Firefox and Chrome but not in Internet Explorer),

“Any EV site being intercepted will LOSE its green EV display status!
(It will show as “secure”, but it won’t show as EV.)”

Has any widely-respected authority confirmed or disputed this specific claim?

If the claim /is/ true, then wouldn’t EV still have considerable value in protecting against many, if not most, attacks short of the TLA-level ones? (See below for those)

“If you are worried about government agencies, then it solves nothing as the concern is that they have recovered private keys for intermediate and root certificates and can spoof any certificate.”

Valid point indeed. That I can find no mention of such concerns on the cited page would, itself, seem to be cause for concern.

@ Mr. Pragma:

I share your skepticism with regard to the credibility of Steve Gibson and GRC. And for a lot more reasons than just the quote you posted. More troubling and, it seems to me, more dangerous, is GRC’s password-strength calculator; see:

To say nothing of GRC’s ShieldsUp! firewall test. How many people have been needlessly made to worry or even panic simply for not getting the “TruStealth” result (the value of which is dubious, at best)?

Anura May 27, 2014 11:30 PM


The odds that you will notice a website was once EV and now is not are about slim to none. In my experience the only value is the potential to increase sales: if potential customers see your site has the green bar, they may be more inclined to feel safe buying from you. For major sites like Amazon, the value is absolutely nil.

Anura May 27, 2014 11:39 PM

It’s also kind of interesting to note that the original idea behind CAs was that they would perform all the kinds of validation that EV certificates require. They would have a lot more security value if EV certs were the only certificates issued by CAs. I worked for a company that resold SSL certificates, and there wasn’t even email validation; you placed the order, and as long as your billing information matched the domain registration, it was issued immediately… But they didn’t validate it; we passed that information through the API. So I could have put down whatever I wanted and got an SSL certificate for 99% of the domains out there.

Anura May 27, 2014 11:50 PM

That was incoherent – I blame the long island iced tea I’m drinking.

We passed the order information to the Certificate Authority – Domain name, billing information. They didn’t bill the customer, we did, but they accepted our word that the billing information was verified. If the billing information matched the whois information, it was issued immediately and we would receive it in a matter of minutes.

Inquirer a.k.a. Observer May 29, 2014 4:31 PM

@ Anura: “The odds that you will notice a website was once EV and now is not, is about slim to none.”

When one navigates to an ordinary, non-EV HTTPS site in Firefox*, a gray padlock displays within the URL bar, just before the URL itself. In contrast, for an EV site, a distinct, additional bar appears with the padlock, along with the name of the owner of the domain, both in a conspicuous green color.
(*I am fairly certain that the difference is similarly conspicuous in the other mainstream browsers as well, it’s just been a while since I’ve used any of them, so I can’t say for certain.)

Do you really mean to tell me that if one day, a site that had always appeared as EV for you (by the conspicuous indicators I just described, i.e., green padlock and owner’s name within a distinct bar) suddenly appeared with only the ordinary grey padlock, you might not notice the change? I find that difficult to imagine.

Or are you suggesting that an attacker /could/ forge the green padlock, etc. in order to make a non-EV site appear identical (or at least similar enough to fool a considerable number of people) to an EV one?

If so, all I ask is that you make this clear and that you provide some basis for your contention.

(I am skeptical about the claim in question because of who is making it, yes. But that doesn’t mean that I dismiss it /a priori/, either.)

Inquirer a.k.a. Observer June 1, 2014 12:10 AM

Let me also note that I appreciate other points made by Anura as well as tjallen.


Sidebar photo of Bruce Schneier by Joe MacInnis.