The NSA Warns of TLS Inspection

The NSA has released a security advisory warning of the dangers of TLS inspection:

Transport Layer Security Inspection (TLSI), also known as TLS break and inspect, is a security process that allows enterprises to decrypt traffic, inspect the decrypted content for threats, and then re-encrypt the traffic before it enters or leaves the network. Introducing this capability into an enterprise enhances visibility within boundary security products, but introduces new risks. These risks, while not inconsequential, do have mitigations.


The primary risk involved with TLSI’s embedded CA is the potential abuse of the CA to issue unauthorized certificates trusted by the TLS clients. Abuse of a trusted CA can allow an adversary to sign malicious code to bypass host IDS/IPSs or to deploy malicious services that impersonate legitimate enterprise services to the hosts.


A further risk of introducing TLSI is that an adversary can focus their exploitation efforts on a single device where potential traffic of interest is decrypted, rather than try to exploit each location where the data is stored. Setting a policy to enforce that traffic is decrypted and inspected only as authorized, and ensuring that decrypted traffic is contained in an out-of-band, isolated segment of the network, prevents unauthorized access to the decrypted traffic.


To minimize the risks described above, breaking and inspecting TLS traffic should only be conducted once within the enterprise network. Redundant TLSI, wherein a client-server traffic flow is decrypted, inspected, and re-encrypted by one forward proxy and is then forwarded to a second forward proxy for more of the same, should not be performed. Inspecting multiple times can greatly complicate diagnosing network issues with TLS traffic. Also, multi-inspection further obscures certificates when trying to ascertain whether a server should be trusted. In this case, the “outermost” proxy makes the decisions on what server certificates or CAs should be trusted and is the only location where certificate pinning can be performed. Finally, a single TLSI implementation is sufficient for detecting encrypted traffic threats; additional TLSI will have access to the same traffic. If the first TLSI implementation detected a threat, killed the session, and dropped the traffic, then additional TLSI implementations would be rendered useless since they would not even receive the dropped traffic for further inspection. Redundant TLSI increases the risk surface, provides additional opportunities for adversaries to gain unauthorized access to decrypted traffic, and offers no additional benefits.
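The decrypt, inspect, re-encrypt loop the advisory describes can be sketched in a few lines of Python. This is a toy illustration, not any vendor's implementation: the signature, exception, and helper names are all invented, and a real middlebox would additionally mint per-site certificates from its embedded CA and terminate TLS on both sides.

```python
import re
from typing import BinaryIO

# Hypothetical single-entry signature list; a real product ships thousands of rules.
BLOCKLIST = re.compile(rb"EICAR-STANDARD-ANTIVIRUS-TEST-FILE")

class ThreatDetected(Exception):
    """Raised when inspected plaintext matches a signature."""

def inspect(chunk: bytes) -> bytes:
    """The 'inspect' step: possible only because the proxy holds the plaintext."""
    if BLOCKLIST.search(chunk):
        raise ThreatDetected("session killed, traffic dropped")
    return chunk

def relay(src: BinaryIO, dst: BinaryIO, bufsize: int = 4096) -> None:
    """Copy one direction of a broken-open TLS flow through the inspector."""
    while chunk := src.read(bufsize):
        dst.write(inspect(chunk))
```

In a real deployment, `src` and `dst` would be `ssl.SSLSocket` objects: the client-facing one wrapped with a certificate minted by the enterprise CA, the server-facing one an ordinary outbound TLS connection.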

Nothing surprising or novel. No operational information about who might be implementing these attacks. No classified information revealed.

News article.

Posted on November 22, 2019 at 6:16 AM


Benjamin Kaduk November 22, 2019 7:22 AM

It’s an interesting thought experiment to consider what safety would be provided by using a name-constrained X.509 CA certificate for the middlebox’s embedded CA. (This is a thought experiment since I don’t know of any actual deployments that make use of such a scheme.) With TLS 1.2 the middlebox could see the actual certificate being presented by the server and see if (all the names in) it is compatible with the name constraints and thus if the middlebox is allowed to break and inspect, but in TLS 1.3 the server certificate is encrypted and the middlebox would need to use the SNI value and some heuristics to guess whether or not it is allowed to break and inspect. The encryption of the server certificate provides privacy improvements in the normal two-party case, and the ongoing work on encrypting SNI will provide further privacy improvements and effectively force such middleboxes to either break all connections or break none. In cases where regulatory requirements prevent the middleboxes from breaking connections to (e.g.) banking or healthcare websites, it seems that this would force the “break none” case.
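The matching rule such a scheme would rely on can be sketched under RFC 5280 dNSName semantics; the function names and the permitted/excluded lists below are hypothetical, illustrating the SNI heuristic the comment describes for TLS 1.3:

```python
def dns_constraint_matches(hostname: str, base: str) -> bool:
    """RFC 5280 dNSName constraint: 'example.com' permits example.com itself
    and any name formed by adding labels on the left (www.example.com)."""
    hostname, base = hostname.lower().rstrip("."), base.lower().rstrip(".")
    return hostname == base or hostname.endswith("." + base)

def may_break_and_inspect(sni: str, permitted: list[str], excluded: list[str]) -> bool:
    """Heuristic a TLS 1.3 middlebox could apply to the (cleartext) SNI:
    only break and inspect names inside the CA's permitted subtrees,
    never inside excluded ones."""
    if any(dns_constraint_matches(sni, e) for e in excluded):
        return False
    return any(dns_constraint_matches(sni, p) for p in permitted)
```

Under TLS 1.2 the middlebox could apply the same check to the names in the real server certificate; with an encrypted certificate it is reduced to trusting the SNI, and with encrypted SNI even that disappears.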

Wael November 22, 2019 8:19 AM

Nothing new… but some clarifying comments are in order:

To minimize the risks described above, breaking and inspecting TLS traffic should only be conducted once within the enterprise network.

First: Hmm! How does that square with the buzzword ZeroTrust — an old concept / principle that has been rebranded 😉

Second: There are TLS-inspecting solutions that operate on strong cipher suites that provide FS / PFS in the cloud as well.

Third: Payload encryption is recommended if TLS offloading happens by a less trusted entity or an entity that’s not under full and exclusive control of the owners — these are typically the session termination endpoints. Naturally, payload encryption will complicate TLSI and TLS deep packet inspection, but won’t fully eliminate it. So I hear.

@uh, Mike,

Is that person in the middle Trent, Alice or Mallory?

If video inspecting is the game, then Vohkinne “the Voyeur” is the name!

James November 22, 2019 11:01 AM

It’s almost as if engineering a back door into an encryption system opens that system up for attack. Interesting advice coming from the same executive branch as Bill Barr.

Dan November 22, 2019 11:51 AM

I was shocked when I first heard of these security products that man-in-the-middle SSL. I was even more shocked when I found out that many/most IT departments think they are a necessity. It is hard enough to prevent man-in-the-middle attacks by malicious software; it is really bad that we expect security software to perform them.

Such products also prevent the use of client-certificates. While client certificates aren’t frequently used, they ought to be. They are a very good security feature in some circumstances.

SpaceLifeForm November 22, 2019 4:03 PM

“Inspecting multiple times can greatly complicate diagnosing network issues with TLS traffic”

s/diagnosing network issues with/spying on/

Cris Perdue November 22, 2019 8:31 PM

So, is it wrong to believe that an ISP, such as Comcast just to pick an example, could unilaterally decide that all their customers are part of their “enterprise network” and inspect all customer HTTPS traffic?

It is not clear to me what is standing in the way of this.

John Other November 23, 2019 5:58 AM

This has been a known issue since forever, but the big one I remember was in 2012, in an article at The Register titled “Trustwave admits crafting SSL snooping certificate”, where Trustwave issued an intermediate CA for the * TLD, to Bluecoat IIRC. They were also pressured into revoking it. This is one of the reasons that Certificate Transparency exists.

Dan, these products don’t prevent the use of client certificates unless the destination web server validates that your login and client certificate match. Even then, there’s nothing preventing a company from purchasing a client certificate that simply identifies the company; after all, the company owns all of the inboxes.
After a short Google search I can’t find any CA that advertises multi-email-address client certificates, but there’s nothing that would prevent a company from providing their users a document that directs them to a CA’s client cert page in order for the employee to create a client certificate, while at the same time running an automated script that does the same thing via a different CA. In actuality, each employee would have two digital certificates: one stored locally on their computer and used to authenticate, and the other stored centrally (in an HSM?) to be used by the web proxies for the external connection.

This is why, if you’re using certificate authentication, that you should restrict it by signing CAs, and also validate the fields.
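That advice can be sketched against the certificate dict that Python's `ssl.SSLSocket.getpeercert()` returns; the specific issuer/OU policy below is a made-up example, not a standard:

```python
def client_cert_authorized(cert: dict, expected_issuer_cn: str, allowed_ou: str) -> bool:
    """Check a peer certificate, in the dict format returned by
    ssl.SSLSocket.getpeercert(), against a signing-CA allowlist and a
    subject-field policy. The fields checked here are illustrative."""
    # getpeercert() nests RDNs as tuples of (key, value) pairs; flatten them.
    issuer = {k: v for rdn in cert.get("issuer", ()) for (k, v) in rdn}
    subject = {k: v for rdn in cert.get("subject", ()) for (k, v) in rdn}
    return (issuer.get("commonName") == expected_issuer_cn
            and subject.get("organizationalUnitName") == allowed_ou)
```

Restricting by signing CA (the issuer check) is what defeats the "second certificate from a different CA" trick described above; validating subject fields catches certs that merely identify the company rather than the person.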

Also, these inspecting proxies are a requirement for some industries and are stipulated by the regulations under which they’re allowed to operate. They’re also useful if you employ anyone who plans to break their employment contract and post your company’s data to pastebin.

On the other hand, they’re also great for e-book, music, and video piracy — if you’re an unscrupulous administrator.

Who? November 23, 2019 8:18 AM

@ uh, Mike

Is that person in the middle Trent, Alice or Mallory?

In most cases he is Trent, our ISP acting in the name of our Government. Other times he is Mallory. Sometimes even Trent becomes Mallory, if there is something tasty in our communications.

Thanks, Bruce, for sharing this NSA/CSS CSI. I have been collecting these great documents for years, but have not looked for new ones in a month or so. The NSA is doing a great job improving security with these documents full of good tips and [sometimes unusual] common sense.

Wael November 23, 2019 10:02 AM


Is there any way to avoid the effective use of TLSI?

Long answer: Depends on many factors and various design and architectural choices or constraints. Also depends on actors and owners (or delegated owners, such as a cloud provider) in control.

Short answer: Strong cipher suites + Perfect Forward Secrecy + strong payload encryption and protected endpoints. But this won’t work in all flow configurations — see Long answer.
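For the "strong cipher suites + PFS" part, a minimal client-side sketch using the stdlib `ssl` module; the cipher string is one reasonable choice among several, not the only correct one:

```python
import ssl

def pfs_only_client_context() -> ssl.SSLContext:
    """Client context restricted to forward-secret key exchange:
    ECDHE suites for TLS 1.2; every TLS 1.3 suite is forward-secret anyway."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")  # no static-RSA key exchange
    return ctx
```

Note the caveat implicit in the long answer: forward secrecy defeats passive decryption with a stolen server key, but an inline TLSI proxy that terminates both TLS legs still sees plaintext, because it is an endpoint of each session.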

If you are in control of end points, then reduction of the attack surface to protocol-only (line of attack, rather than surface of attack) is your option. And that’s not easy to achieve. Think of it as a “Theoretical Max Protection design goal”. We briefly touched on that not too long ago.

If you’re just a consumer, and not a designer, then you need to operate under the assumption that TLS is not as secure as you’d like it to be.

RealFakeNews November 23, 2019 11:44 AM

Call me naïve, uninformed, or whatever, but this is basically saying TLS can’t be considered remotely secure?

So why the hell are we still using it?

It’s bad enough that MITM occurs by WiFi providers and others, but this makes it appear that TLS is designed to be MITMed?

Just what the hell am I missing in all of this?

Mike November 23, 2019 12:43 PM

If you want to communicate securely in a corporate enterprise setting with friends and family, you can always download a browser like Firefox/Chrome yourself rather than using the preinstalled one on your workplace PC with the administratively installed certificates.

SpaceLifeForm November 23, 2019 1:34 PM

@ Mike

You have completely missed the point.

It is the traffic that is being MITMed.

Whatever browser is used, does not matter.

Note that one of the main purposes of this MITM, is not to watch normal website traffic, but to watch email traffic.

Which could be browser based. Or POP3 and/or IMAP.


It’s not really SSL/TLS that is the problem.

It has much more to do with domain certificates.

DNS, CAs, all centralised.

Eliminating DNS and CAs would help.

Mike November 23, 2019 1:51 PM


TLS traffic security is in part based on the authenticity of installed certificates. As I said you could just download Firefox, which comes with its own certificates, and use it to communicate with friends and family without getting snooped on through MITM attacks.

Jonathan Wilson November 23, 2019 3:27 PM

If you tried to use an alternative browser without the MITM proxy’s special root certificate installed, all you would see is a “hey, this is signed with a CA I don’t recognize” error (pointing at the MITM proxy’s certificate).

ISPs like Comcast wouldn’t be able to get away with doing this since it would break so many things customers use. In a corporate environment they have more control and can avoid using devices that aren’t compatible with the MITM proxy or find other ways to ensure those devices can’t be used for the kind of things the MITM proxy is being used to detect.

Who? November 23, 2019 6:09 PM

@ Wael

Thanks for the detailed answer. I will look at possible approaches but I agree with you, it will be hard to do right. In some cases I am in control of both end points, but not in all cases. Sometimes I need to use services provided by third parties from affected workstations.

I suspect there has been some sort of inspection of TLS traffic at my University since last summer, as I am unable to log into a few servers that run OpenBSD and pf (using /etc/pf.os to allow access from other OpenBSD hosts only). These machines now reject access from my OpenBSD-based workstations at the University, while others placed outside that network are allowed. I suspect the University is using FortiGates to inspect TLS traffic.

None of these workstations has digital certificates owned by the University, but the problem looks deeper than just traffic normalization.

Dave November 23, 2019 7:32 PM

@Dan: No shock, many organisations operate under regulatory requirements to monitor traffic in their facilities, financial institutions being an immediate example. This is standard practice, there’s nothing shocking about it, they’ve been doing it for decades (long before SSL, e.g. with keystroke-level auditing of everything done on the banking mainframe).

Megalomaniac Corporate Boss November 23, 2019 7:44 PM

Transport Layer Security Inspection (TLSI), also known as TLS break and inspect, is a security process that allows enterprises to decrypt traffic, inspect the decrypted content for threats, and then re-encrypt the traffic before it enters or leaves the network.

Anyone with a corporate “management” level of clearance or higher has access to TLS-encrypted traffic on a “must have” basis without end-user or consumer permission.

This has always been the case, since who knows how many X.509 root CAs are trusted by the average web browser by default.

It’s a bit like the town locksmith and municipal police department with access to all the local brick-and-mortar businesses as well as, nowadays, their video cameras and computerized inventory control systems.

But much, much worse, because hackers from around the world are always busy in the comfort and privacy of their own homes social-engineering and cracking into anything online.

They downgrade the security because guns are banned, hate speech is banned, marijuana’s legal, there’s plenty of peace and love to go around, and hackers are just experimenting, not harming your business like real skinhead crackers.

Firefly November 24, 2019 2:43 PM

I understand the matter in focus are products like Blue Coat:

I work for a corporation where a “fake” root CA was distributed to and installed on all machines, making the browsers trust fake TLS certificates. These are created on the fly for various sites.

A user does not see any warning on his side. Only when inspecting the certificate is it visible that it was created and signed by Blue Coat.

Curious November 25, 2019 2:19 AM

What would be the difference between so called “TLS inspection” and hacking/cracking?

‘TLS inspection’ was quoted as being “a process”, which sounds awfully vague to me.

Curious November 25, 2019 2:22 AM

To correct my former post:

The more accurate quotation from the original post at the top would have been “a security process”, not just “a process”.

Joe November 25, 2019 11:16 AM


As was pointed out in the comments, if you fear you’re being monitored in a corporate environment without any warning, then download a browser like Firefox that comes with its own certificates and browse familiar sites. If you get a popup that the certificate is not recognized then you know you’re being MITM’ed.

Firefly November 25, 2019 11:25 AM


Firefox, as well as Chrome, respects any customized CA installed at the OS level.
It is a valid feature, abused by the corporation.

Personally I’m using my private phone for private transactions such as banking. Another habit is to peek at the certificate signature before form submission.

ME November 26, 2019 4:14 AM


I wouldn’t call it a “backdoor”; TLS inspection is just a user-friendly name for the good ol’ MITM attack. It more than doubles the number of attack vectors: it adds one more TLS session with all its attack vectors, and adds new attack vectors in the inspection process and the server/CPU/VM it’s running on.

“Playing with fire” fits perfectly.

SpaceLifeForm November 26, 2019 3:09 PM

@ Curious

“What would be the difference between so called “TLS inspection” and hacking/cracking?”

Money, time, resources.

Big players have that.

Hackers, script kiddies, not so much.

Weather November 26, 2019 10:48 PM

@Pocono Chuck
If they have the private key, they can decode the traffic from the handshake, etc. If you are asking about someone tapping in parallel rather than an actual MITM, then as an attacker you can make them use your key. But if you are the MITM and present your own key to both the client and the website, they will have to use other means to detect the tap: for example, send malformed encrypted data with the TTL (number of hops before drop) set so it should expire at the endpoint, and see if it disappears; set the MTU (maximum transmission unit) to a low value and see if the web server asks to increase it; or use timing. If the web server is under your control, you can set up a verification system, not necessarily requiring knowledge of each public key. You can also use the web server’s (as well as the client’s) selection of port numbers to work out the OS.

Ambrose Eslick November 28, 2019 11:57 AM

There seems to be some misunderstanding about how TLSI works and the kind of impact / scalability we can expect.

Before panic runs wild: TLSI only works if someone (e.g. your company’s sysadmin) has installed the right kind of CA in your box. If your computer does not have the right CA, TLSI won’t work. No: an ISP can’t flick a switch and perform instant TLSI on all its customers.

In a nutshell: it’s basically the technology that Bluecoat have been selling for ages, and yes: using your own browser to circumvent the pre-configured CA will break the chain. In particular, Tor-Browser (ideally with a bridge) will do the trick nicely.

John November 28, 2019 12:14 PM

@ Ambrose Eslick

Apparently you can’t use “your own” browser or just any browser. As you correctly mentioned the Tor bundled browser would work. But downloading any browser out there thinking it will bypass TLSI won’t work, as they respect the already installed OS level certificates.

MxRip November 28, 2019 2:34 PM

An off-the-shelf installation of Firefox on a “compromised” OS won’t bypass TLSI, but you will get a warning message (something along the lines of: unrecognized or misconfigured certificate, are you sure you want to continue?). The Tor browser would bypass TLSI.

John November 28, 2019 6:14 PM

@ MxRip

Unfortunately you won’t even get a warning popup message if you install a regular browser on a “compromised” system. As was indicated earlier, these browsers respect the installed OS level certificates. You won’t be warned, but you can look at the certificate in the page padlock icon to see if there’s anything funny going on. The only other option is to use Tor browser.
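The "look at the certificate" check can be automated by pinning a fingerprint: record the SHA-256 fingerprint of a site's certificate from a network you trust, then compare it with what your machine receives. A minimal stdlib sketch (the helper names are mine):

```python
import hashlib
import ssl

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, in the hex form
    browsers display in the certificate viewer."""
    return hashlib.sha256(der_cert).hexdigest()

def server_cert_fingerprint(host: str, port: int = 443) -> str:
    """Fingerprint of whatever certificate is presented to *this* client.
    If it differs from the value recorded on an uninspected network,
    something in the path is rewriting the chain."""
    pem = ssl.get_server_certificate((host, port))
    return fingerprint(ssl.PEM_cert_to_DER_cert(pem))
```

`ssl.get_server_certificate` retrieves the presented leaf without requiring chain validation, which is what you want when the chain itself may have been replaced by a proxy.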

SpaceLifeForm November 29, 2019 3:26 PM

Do not use Tor in corp env.

Just don’t.

You will stand out like a sore thumb.

Only use corp VPN if off-site and doing actual work.

Otherwise, if off-site, you takes your chances.

But, I still say, no VPN or Tor.

Just be part of the noise.

Clive Robinson November 30, 2019 2:30 AM

@ SpaceLifeForm,

But, I still say, no VPN or Tor. Just be part of the noise.

“Hiding in the noise” is all we’ve got currently, and it’s nowhere near good enough.

Tor could be so much better, but this century it’s pretty much been “too little, too late”.

We know how to make things way better but…

I guess people should be asking “Why?” but they either don’t ask or they get ignored.

Charette December 1, 2019 9:46 AM

@ SpaceLifeForm

I have been using tor in my workplace for over a decade. I guess I probably do stand out like a sore thumb to the IT guys, but I don’t care because I use tor for privacy, not to commit any illegal activity. If I ever had to do something nefarious more discreetly (and for some bizarre reason I decided my office was the place to do it), it would probably be a different story.

AtAStore December 1, 2019 4:58 PM

@Charette, SpaceLifeForm

“… for privacy …”

You’ve probably already considered cameras, keystroke loggers, etc., and if you are in the USA, AFAIK, you have very little in the way of rights regarding employers’ surveillance of their employees.

Anybody want to flesh out what etc. might include?

TRX December 3, 2019 11:05 AM

> you can always download a browser like Firefox/Chrome yourself

Except the ones preinstalled in Firefox in your fresh download are pwned and haXORed to start with. Some of them are only “questionable.”

> Call me naïve, uninformed, or whatever, but this is basically saying TLS can’t be considered remotely secure?

The basic idea of TLS wasn’t too bad, but the system broke down due to lack of trust with the Certificate Authorities.

In practice, it’s like using an escrow account, except instead of using only a trusted bank or broker, you include any random crackhead who says he’s trustworthy.

TLS is better than nothing – at least it keeps casual snoopers at bay – but it would be unwise to have that as your only security if your data is important.

B December 3, 2019 4:08 PM

Examination of a certificate chain generated by a Cyberoam DPI device shows that all such devices share the same CA certificate and hence the same private key. It is therefore possible to intercept traffic from any victim of a Cyberoam device with any other Cyberoam device – or to extract the key from the device and import it into other DPI devices, and use those for interception.

I’ve heard that Cyberoam (which is now Sophos, btw) is not the only proxy with this issue.

Jeffrey Deutsch December 8, 2019 12:54 PM

How secure do you consider Gmail’s TLS? And what would you be willing to email over that?

Does it make a difference whether it’s on a laptop/desktop using Windows or a smartphone or tablet using Android?

(I’m talking about ordinary personal Gmail, not G Suite.)

PS: Thanks, RealFakeNews and TRX!

Matthijs van Duin December 15, 2019 9:57 AM

Call me naïve, uninformed, or whatever, but this is basically saying TLS can’t be considered remotely secure?

You misunderstood. TLS interception requires that the client systems trust the key used by the TLS middlebox. System administrators that want to use TLS interception on their network have to install a custom* root CA on each client system whose TLS traffic they want to intercept.

Is there any way to avoid the effective use of TLSI?

TLSI can’t be used against you unless the custom* root CA certificate for the TLSI middlebox is installed on your computer. This is typically only done on company-owned computers where the company wants to do TLSI.

* TLSI using public roots has happened on rare occasion, but it tends to be detected quickly and to cause the certificate authority responsible to be distrusted by browsers, effectively revoking all certs issued directly or indirectly by that root. This is obviously not something any certificate authority wants to happen to them, considering the long and arduous process of getting trusted in the first place.
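One practical corollary: you can audit the roots your own machine trusts and look for an unexpected enterprise or vendor name. A stdlib sketch; note it inspects only the store Python/OpenSSL loads, which may differ from a browser's bundled store:

```python
import ssl

def trusted_root_names() -> list[str]:
    """Names of the CA certificates in the trust store Python/OpenSSL sees.
    An unfamiliar corporate or appliance-vendor name here is worth a look."""
    ctx = ssl.create_default_context()  # loads the platform's default CAs
    names = []
    for ca in ctx.get_ca_certs():
        # get_ca_certs() nests RDNs as tuples of (key, value) pairs; flatten.
        subject = {k: v for rdn in ca.get("subject", ()) for (k, v) in rdn}
        names.append(subject.get("commonName") or subject.get("organizationName", "<unnamed>"))
    return sorted(set(names))
```

Running this on a managed corporate machine (against the OS store the browsers actually use) is where a TLSI middlebox's custom root would show up.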

Chrome has also introduced a new way to protect against misbehaviour of certificate authorities (including TLSI) called “Certificate Transparency” (CT), which requires certificates to carry cryptographic proof that they’ve been submitted to public logs. Needless to say, this makes certificate misissuance especially risky since proof of it will forever be on public record.

Unfortunately, the CT requirement can currently still be worked around by backdating the certificate to make it appear to be issued before this requirement went into effect (30 April, 2018), but this will cease to be possible 1 June, 2021.
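The backdating loophole can be expressed as a toy check; this deliberately reduces Chrome's real CT policy, which has more conditions, to the single cutoff date mentioned above:

```python
from datetime import datetime, timezone

# Simplified cutoff from the comment above: certificates claiming issuance
# after this date must embed SCTs to be accepted by Chrome.
CT_ENFORCEMENT = datetime(2018, 4, 30, tzinfo=timezone.utc)

def sct_required(not_before: datetime) -> bool:
    """Toy version of the CT policy: a cert whose notBefore is after the
    enforcement date must carry SCT proofs. Backdating notBefore to before
    the cutoff dodges the check, until the loophole closes in June 2021."""
    return not_before >= CT_ENFORCEMENT
```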

None of this applies to custom roots obviously.
