WhatsApp Vulnerability

A new vulnerability in WhatsApp has been discovered:

...the researchers unearthed far more significant gaps in WhatsApp's security: They say that anyone who controls WhatsApp's servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Matthew Green has a good description:

If all you want is the TL;DR, here's the headline finding: due to flaws in both Signal and WhatsApp (which I single out because I use them), it's theoretically possible for strangers to add themselves to an encrypted group chat. However, the caveat is that these attacks are extremely difficult to pull off in practice, so nobody needs to panic. But both issues are very avoidable, and tend to undermine the logic of having an end-to-end encryption protocol in the first place.

Here's the research paper.

EDITED TO ADD (2/12): Commentary from Moxie Marlinspike, the developer of the protocol.

Posted on January 25, 2018 at 6:47 AM • 21 Comments


Zoë R. • January 25, 2018 7:10 AM

Isn't this old news? I have no idea why the cited paper is dated January 8, 2018, but I'm sure I read about this somewhere last summer. This is the analysis where Threema came out on top, right?

Phaete • January 25, 2018 8:40 AM

So if you control the servers that an app uses, you can change the data that an app uses from that server?
Isn't this how it is supposed to work?

afrin • January 25, 2018 8:55 AM

This is sensationalist at best. Here is the response from Moxie (developer of Signal) on the topic:

Here's how WhatsApp group messaging works: membership is maintained by the server. Clients of a group retrieve membership from the server, and clients encrypt all messages they send e2e to all group members. If someone hacks the WhatsApp server, they can obviously alter the group membership. If they add themselves to the group:

1. The attacker will not see any past messages to the group; those were e2e encrypted with keys the attacker doesn't have.

2. All group members will see that the attacker has joined. There is no way to suppress this message.

Given the alternatives, I think that's a pretty reasonable design decision, and I think this headline pretty substantially mischaracterizes the situation. I think it would be better if the server didn't have metadata visibility into group membership, but that's a largely unsolved problem, and it's unrelated to confidentiality of group messages.

In contrast, Telegram does no encryption at all for group messages, even though it advertises itself as an encrypted messenger, and even though Telegram users think that group chats are somehow secure. An attacker who compromises the Telegram server can, undetected, recover every message that was sent in the past and receive all messages transmitted in the future without anyone receiving any notification at all.

There's no way to publish an academic paper about that, though, because there's no "attack" to describe, because there's no encryption to begin with. Without a paper there will be no talks at conferences, which means there will be no inflammatory headlines like this one.

To me, this article reads as a better example of the problems with the security industry and the way security research is done today, because I think the lesson to anyone watching is clear: don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not. It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing.

via: https://news.ycombinator.com/item?id=16117487
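The design Moxie describes above can be sketched in a few lines. This is a toy model, not WhatsApp's actual code: the `pair_key`/`pairwise_encrypt` helpers are stand-ins for a real pairwise e2e session (the XOR "cipher" is for illustration only). What it shows is the trade-off under discussion: the server holds the roster, so whoever controls the server can grow the group, but a newly inserted member never receives ciphertexts sent before they joined.

```python
import hashlib

def pair_key(sender, recipient):
    # Stand-in for an established pairwise e2e session key.
    return hashlib.sha256(f"{sender}|{recipient}".encode()).digest()

def pairwise_encrypt(key, plaintext):
    # Toy XOR "cipher" for illustration only (symmetric, so it also decrypts).
    return bytes(p ^ k for p, k in zip(plaintext, key))

class Server:
    """Maintains group membership -- the property under discussion."""
    def __init__(self):
        self.membership = {}

    def add_member(self, group, user):
        # Anyone who controls the server can call this at will.
        self.membership.setdefault(group, set()).add(user)

def send_group_message(server, group, sender, plaintext):
    # The client fetches the roster from the server, then encrypts
    # end-to-end to each current member individually (fan-out).
    return {member: pairwise_encrypt(pair_key(sender, member), plaintext)
            for member in server.membership[group] if member != sender}

server = Server()
for u in ("alice", "bob", "carol"):
    server.add_member("friends", u)

msgs = send_group_message(server, "friends", "alice", b"hi all")
# A "mallory" inserted by the server only starts receiving ciphertexts
# from the *next* message onward; this one was never encrypted to her.
```

This makes Moxie's point 1 concrete: compromising the roster does not retroactively decrypt anything, because past messages were encrypted under pairwise keys the intruder does not hold.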

Clive Robinson • January 25, 2018 9:12 AM

The key thing to get into your mental model from Matt Green's blog is,

    ... the standard Signal protocol doesn’t work quite as well for group messaging, primarily because it’s not optimized for broadcasting messages to many users.

It's kind of understated, because it's both a major Key Management (KeyMan) issue and a routed traffic issue.

The first point to get to grips with is that, as currently built, the Internet is a routed-traffic system and thus not inherently suitable as a "Broadcast" system... With the result that, without care, it is quite susceptible to "Traffic Analysis" at several levels.

That is, it is unlike "Home to Out-Station" radio systems that genuinely "Broadcast", needing no response from the "Out-Stations" and no routing information; thus the out-stations, as passive observers, remain effectively "covert". Which is why the German U-Boat organisation used it during WWII, as did the KGB and other intel organisations both during WWII and long afterwards. It's why many believe, rightly or wrongly, that "Numbers Stations" are doing the same thing even today.

The Internet, whilst not circuit-switch based, is certainly packet routed at the lowest levels; thus "Out-Stations" are not covert to any entity that can see traffic originating from a "Home-Station" or an "Out-Station" behaving as a "Broadcast-Station", without further quite considerable precautions (it's why we need store-and-forward Mix-Nets, and fixed-rate, fixed-pace circuits with FullCap padding, to get even close).

But none of that solves the KeyMan issue of sending an encrypted message to more than one out-station which is what a "group-chat" is trying to do.

One way is to encrypt the message under each individual out-station's key, which immediately gives away the number of out-stations to an attacking observer, thus is not the way you would want to go. Another way is to have a "group-key" that you encrypt under, but this is problematic as often a message is only meant for a subset of the group, thus you end up with a significant number of keys. Which means the management burden is high and very likely to get misused in one way or another, which could leak information.

Whilst there are solutions to "group messaging", they all have disadvantages one way or another. Even the use of Public Key, or other ways of using ephemeral symmetric keys for the message, have issues with traffic analysis or information leakage.
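The trade-off Clive describes between the two keying options can be shown in a toy sketch (the names and the XOR "cipher" are illustrative only, not any real protocol): per-recipient encryption multiplies the traffic by the group size, which a passive observer can count, while a single group key produces constant traffic but forces a fresh key for every subset you might want to exclude.

```python
import os
import hashlib

def toy_encrypt(key, msg):
    # Toy keystream derived from the key -- a stand-in for a real cipher.
    stream = hashlib.sha256(key).digest() * (len(msg) // 32 + 1)
    return bytes(m ^ s for m, s in zip(msg, stream))

members = ["bob", "carol", "dave"]
pair_keys = {m: os.urandom(32) for m in members}
group_key = os.urandom(32)
msg = b"meet at noon"

# Option 1: one ciphertext per member.
# Traffic volume reveals the group size to a passive observer.
fanout = [toy_encrypt(pair_keys[m], msg) for m in members]

# Option 2: one ciphertext under a shared group key.
# Constant volume, but excluding any subset of recipients now
# requires minting and distributing a new key per subset.
broadcast = toy_encrypt(group_key, msg)
```

So the observer sees `len(members)` ciphertexts in the first case and one in the second, which is exactly the leakage-versus-key-management tension the comment points at.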

So don't go too heavy on Moxie Marlinspike and friends over this; even though it's a known and thus expected problem, effectively solving it is more than a little difficult.

A number of years ago our host @Bruce noted that whilst we had sort of solved the basic encryption algorithm issue, nobody was researching KeyMan issues. One of the reasons for that was that, back long ago, a certain leading web browser company came up with what is now HTTPS to allow e-Commerce. It did not solve the KeyMan issue at all, let alone in a secure way; it just came up with a really, really terrible trust model based on a hierarchy...

As history has shown over and over again, hierarchical structures concentrate power, and thus corruption, at the higher levels; thus they can under no circumstances ever be trusted...

Alan • January 25, 2018 10:57 AM

@Clive - Sometimes I think I come here to read your comments as much as I come here to read Bruce. The informed comments are a big part of what makes this blog so useful.

echo • January 25, 2018 1:03 PM


The UK domain name registrar is turning itself into an investment company and has been accused of abusing its market position. The only companies I am aware of with hire-and-fire recruitment at board level and cross-subsidising are all fairly shady companies more concerned with protecting their rice bowl than their customers.


Special report Nominet, which runs the UK's domain-name registry, has abandoned its own charitable foundation, raising questions about the organization's direction and accountability.

Tony H. • January 25, 2018 2:54 PM


[broadcasting is] why many believe rightly or wrongly that "Numbers Stations" are doing the same thing even today.

Is there any other serious theory as to what purpose the numbers stations could be serving?

Clive Robinson • January 25, 2018 3:04 PM

@ echo,

I don't know if you read Private Eye and its "In the back" section?

But if you do, you might have read about the horror story that was the Department For Overseas Development. In essence it was the UK department providing "grass roots" investment in what some call third world countries. Well, it became subject to a "management buyout"; now it's more or less a hedge fund to any dodgy third world dictator/tyrant or "Nigerian General" or "Arab Prince" type, making ridiculous levels of profit that disappear back to those involved with the management buyout.

Thus I suspect Nominet is going to head down the same "profit for the managers" type of behaviour. Which begs the question of what "goods and services" they will have to offer.

If you look around you will find other Domain Registrars that have also gone private. One thing that appears to be the case with some is that the dispute resolution process favours those who spend the most. Thus if you have a website dedicated to, say, bird watching, and its domain is birds.com, you might find a well known Powdered Custard manufacturer chasing you down to grab your domain name. Even though you might have had it for some time, they will "get preference", and the next thing you know you are stripped of your domain, with possibly a big fat set of legal fees to fork over...

So I think it's a very bad idea; after all, how likely are they to see your point of view, when their bonus system is dependent on getting big corp money through the door for hundreds if not thousands of domains "to protect their product"... Thus a protest site against BT Open Reach would get killed very quickly...

echo • January 25, 2018 4:10 PM


Sorry, no, I don't read Private Eye. An old friend did and bought me a Private Eye annual as a Christmas present. I must add reading it to my to-do list, or buy a copy of the latest issue when I remember (which may be easier, as I have no clue where I stored it). I follow major policy initiatives when I can. I only have half a dim clue about some issues now the deckchairs have been shuffled so many times.

Yes, I am worried about corruption, especially of the "controlling minds" variety you mention from time to time. Exposing delinquency in the public sector isn't easy, and this is getting a bit too far off topic.

With regard to all this, the WhatsApp vulnerability, although largely theoretical in practice, could be very worrying.

Clive Robinson • January 25, 2018 4:47 PM

@ Tony H,

Is there any other serious theory as to what purpose the numbers stations could be serving?

Yes quite a few and some are quite scary.

During the Cold War the UK Nuclear Deterrent in subs had standing instructions to take "Positive Action" when C&C was lost. As you went down the list, one item involved BBC Radio 4 on longwave (200KHz back then). If it could be heard, it would be doing the old "Now some messages for our friends" with One Time Phrase codes that were kept in a sealed envelope. If it could not be heard, then the Sub Captain was instructed "To open the Prime Minister's orders letter and act on their contents".

This was one of a number of "Dead Hand" systems to ensure MAD would happen.

It just so happens that very recently North Korea has started up a couple of Numbers Stations again, and it has been suggested that, now they have nukes that could be put on submarines and ships, these are a "Dead Hand" system rather than for controlling espionage rings.

The simple fact is numbers stations could just as probably be encrypted weather reports, air-sea rescue and all sorts of other things, including "spoofs"[1] and station keeping.

We just do not know, and in all probability neither do those who make the recordings or run the transmitters. Which gives rise to the possibility that some serve no actual purpose at all any longer, and because somebody forgot to say stop, they are "dead men walking".

[1] This one I happen to know is true, because a couple of friends and I put one out in the 50meter band back when AM transmitters still used valves/bottles in their PA to give more than 10watts up the antenna feed.

[2] I actually know of a true story that demonstrates this. There was an Army barracks in the UK that had once been at the edge of town but was now surrounded by houses. Anyway, on each change of the guard there was a check list that had to be checked as part of "standing orders". One of the items to be kept available in the Guard house was a pair of rubber gloves. Nobody asked why, they just checked it off. Eventually the rubber gloves, being made of rubber, perished and fell apart, but were still kept in place.

Unfortunately yours truly made the mistake of saying to the QM that they were only fit for the bin, and asked if they could be binned. To receive a near panicked response; when I asked what the stressing was all about, it turned out absolutely nobody knew what the gloves were for, but everybody agreed it was vitally important they be kept in the guard house...

Now, as some know, I can be "more stubborn than a donkey in hob nail boots", so I made it my mission to find out WTF the rotted old gloves were doing in the guard house. Well, it took a long time and a freak chance, but I found out.

I happened to be at the local library for a historical talk from the local industrial archaeological society about the now long gone narrow gauge railway that had been put in during WWI, when it had been a big Garrison Town. Part of the talk was a snippet about the electric tram that had turned a corner at the barracks, and how it had been taken out as it was considered too dangerous to the community. After the talk I asked the old boy, who must have been in his eighties, why the tram was dangerous.

Well, to cut a long story short, for some reason the overhead cable had broken several times at the corner outside where the guard house was, and would fall to the ground, causing lots of sparks, scaring horses and potentially electrocuting people. He had some old newspaper clippings, and lo and behold the reason for the gloves was revealed: the soldiers were to put on the gloves, grab the cable, and drag it safely out of the way... It turns out that 60 years after the trams were taken away, the army still required the rubber gloves in the guard house...

When I wrote up a report, with photocopies of the newspaper clippings about both the gloves and the removal of the tram, requesting that the rubber gloves be removed from the guard house, I actually got quizzed as to whether I knew what I was talking about, as it was "those important rubber gloves" that absolutely nobody had any idea about, except for me and the old boy from the local industrial archaeology society... What finally killed the rubber gloves off was when I found a Health and Safety Executive notice about no longer using natural rubber gloves, because not only did they perish, they did not stop modern chemicals used for cleaning etc... Thus one very, very, very tiny victory for sanity over ingrained institutional bureaucracy B-)

As for the rubber gloves, well, there's no longer a guard house to put them in, or a barracks; it's all "Executive Housing". But the library is still there, the industrial archaeological society still gives talks, and in their records is this old boy telling the story of "The Rubber Gloves"...

maqp • January 25, 2018 5:50 PM

@Afrin, (and Moxie)

"If someone hacks the WhatsApp server, they can obviously alter the group membership."

This "duh, obviously the proprietary app using the Signal protocol has a problem where its implementation of the Signal spec differs from the original open source library in a way that gives the server the ability to add contacts that can eavesdrop on communication" is so obvious. How could I have assumed anything different after Moxie said WhatsApp uses the same protocol as Signal.

"All group members will see that the attacker has joined. There is no way to suppress this message."

Moxie misses the fact that some group chats consist of communities where not everyone knows each other. While such groups do have a different expectation of privacy for messages, that's no reason not to have security from nation states. And it's not impossible to join one without anyone noticing, especially since the attacker can forge, for each user, the message about who added them. Nobody's going to tell everyone to be quiet and interrogate the new buddy of a buddy. Very few actually care about what they share in a group with contacts they don't know IRL. It's easy not to think about those contacts.

"I think it would be better if the server didn't have metadata visibility into group membership, but that's a largely unsolved problem"

Metadata about who's in the group isn't the problem here. Ability to add members to group is.

"In contrast, Telegram does no encryption at all for group messages"

True. But this is also whataboutism. We should not tolerate Durov's "Signal is funded by the US government" accusations, and we shouldn't accept pointing fingers from Moxie's side when discussing this issue. This was a screw-up by the WhatsApp developers, not Moxie, and I don't understand why he would stand behind their backs.

"There's no way to publish an academic paper about that, though, because there's no attack to describe, because there's no encryption to begin with."

It was only this week that Tinder made the headlines for not using any encryption at all. Also, there was no attack to describe in Signal, yet somehow they managed to publish a formal Signal audit. It probably didn't make the headlines back in 2016, but even today it is extremely valuable proof of security. An audit that makes note of Telegram's crappy TLS group messaging would not only convince some users, it could also be used as a source in debates, and there's a chance it could make headlines. One big issue with Telegram currently is its outdated evaluations. It's not clear which protocol versions the audits apply to, or which attacks, like the infamous 64-bit precomputation MITM attack, still apply to the client.

"don't build security into your products, because that makes you a target for researchers, even if you make the right decisions, and regardless of whether their research is practically important or not"

There's nothing overly impractical about this attack. We consider Telegram's encryption broken when all it lacks is IND-CCA security. All that means is you can modify a ciphertext without changing what it decrypts into. That's no different from messing with an imaginary ECC bundled into the ciphertext. So why don't we consider a protocol (implementation) broken when there's a good chance several end-to-end encrypted messages might leak to an adversary who is able to join the conversation?
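For readers unfamiliar with why missing ciphertext integrity matters at all, here is a toy illustration (an unauthenticated XOR stream cipher, not Telegram's actual scheme) of the general class of problem: without an integrity check, an attacker who knows the message layout can edit a ciphertext in transit, and the receiver decrypts the tampered version with no way to detect it.

```python
import os

# One-time keystream shared by sender and receiver; the attacker never sees it.
keystream = os.urandom(16)

def xor(data, stream):
    return bytes(d ^ s for d, s in zip(data, stream))

plaintext = b"pay alice 100 $$"          # 16 bytes
ciphertext = xor(plaintext, keystream)

# Attacker tampers without knowing the key: XORing a ciphertext byte with
# (old_char ^ new_char) flips the corresponding plaintext byte predictably.
tampered = bytearray(ciphertext)
tampered[10] ^= ord("1") ^ ord("9")      # byte 10 is the amount's first digit
forged = xor(bytes(tampered), keystream)  # receiver decrypts the edit silently
```

An authenticated mode (e.g. an AEAD) would make `forged` fail verification instead of decrypting cleanly; that detection is precisely what IND-CCA-style security buys you.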

It's true it's hard to write stories about Telegram that raise eyebrows, especially with media fixated on Durov as a celebrity. But if enough experts agree on how Durov's claims about distributed cross-jurisdictional encrypted cloud storage are full of shit, it might change things.

"It's much more effective to be Telegram: just leave cryptography out of everything, except for your marketing."

This sums up my feelings about Telegram exactly. Everything they do could work on the Signal protocol. But it's too easy to beat the competition for an ignorant user-base with invisible insecurity that enables much faster message delivery and feature development.

Joshua Bowman • January 25, 2018 11:21 PM

@maqp, it sounds as if you're asking the protocol developers to solve social problems ("Nobody's going to tell everyone to be quiet and interrogate the new buddy of buddy.") with some technological means, and castigating them for not having done so, without offering up any kind of solution, or showing that you even understand the scope of the problem you're ranting about.

Thoth • January 25, 2018 11:39 PM


Retroshare, Signal, WhatsApp, Telegram, Threema, Skype et al. are not designed to be robust enough in security to resist nation state attackers. The most they can prevent is the neighbour or the kids.

Even a moderately funded hacker backed by a government or some powerful organisation would be able to hack into the servers routing WhatsApp messages or any other messages to execute the compromise.

To prevent the notification from showing up that a new covert user has been added, an endpoint attack can be used to manipulate the GUI or, in the case of Signal, to extract the 128-bit Group ID.

But the more realistic attack on any secure messaging is still endpoint attacks; hence @Markus Ottela's research into TFC, and @Clive Robinson's recommendation of using flammable paper and pencil, sometimes with discussions of invisible ink, and also using QR codes for air-gapped and energy-gapped secure installations to complement the use of TFC.

It is much cheaper and easier to infect your smart devices and PCs, and with access you could have the plaintext, ciphertext, screenshots, long-term keys, session keys and everything else. No amount of encryption could be of use once you have been compromised on the endpoints. Up till now, the methods in the industry still do not bake security into the core, and you have what @Clive Robinson repeatedly describes: attacks that bubble up from the lowest layers, where a "security-enhanced" kernel or some "Secure OS" could not do much to prevent them.

Link: https://arxiv.org/pdf/1801.07800.pdf

echo • January 26, 2018 4:31 AM


I agree with you. There is significant legal precedent within adequacy and discrimination to raise the threshold and counter societal factors leading to direct or indirect abuses. This is exactly why I found the trivialisation of the exploit worrying.

This kind of thing is bad enough in the West (especially within some organisations or regions who haven't got with the programme), and within non-Western countries it can lead to graphic consequences.


When you are a member of a discriminated group and experience this discrimination in real life, you might have an understanding of how awful and pernicious abuses of power can be. There have been studies done on oppressed group bias, and I can assure you that an "outsider" who doesn't withstand scrutiny will be watched for. In this respect the group is self-protecting, but this also feeds a climate of powerlessness and invisibility.

I don't want to be overwrought about this, but nor do I believe trivialising it, whether unintended or otherwise, empathises with at-risk groups.

M • January 26, 2018 6:44 AM

@Thoth "To prevent notification from showing up that a new covert user have been added, an endpoint attack can be used to manipulate the GUI " - If an endpoint has been compromised there's no need to compromise the server and add someone to the group, and they can even get access to previous messages.

maqp • January 26, 2018 11:13 AM

@Joshua Bowman

In my post above I already implied this, but let me be more clear: the Signal app has already solved this particular problem in its original protocol implementation. The server cannot add members to the group. In Signal there is another problem with knowing the group ID, but that's a much smaller problem.

So I did indeed offer a solution in that post. I am also a developer offering a solution that's much more secure than Signal:

TFC (the messaging tool I'm working on) has taken this into account in its group chats since day 1. When you generate a group, you define the contacts your TxM multicasts messages to. The only way to add a contact to the group is if you have already exchanged keys with them and you _want_ them in your group. So you always know which members you send your message to. Your RxM will receive the group creation command from your TxM, encrypted with a local key, and that and only that defines which of your contacts are allowed to redirect their private messages to the group's window. So if you remove a contact from your side of the group, your TxM will stop sending messages to that contact, and your RxM will discard all messages the contact tries to redirect to the group on your side.

Groups are kept in sync with group management messages (the TxM prompts you to publish your executed group management command to new/existing group members). So when Alice creates a group with Bob, Charlie and David, Alice can share the information that she's generated that group, and share the XMPP accounts of her group's members with one another. Bob and Charlie already know each other, so they'll see each other's nicks. Bob and David do not know each other, so they will see each other's XMPP accounts, and they can then add each other as contacts, perform a key exchange, and add each other to the group, _if they want_.

In case you're wondering why this is not fully automatic, it's because of the unidirectional links between the TxM, RxM and NH. The RxM can't automatically send data about contacts' groups to the TxM, because that would invalidate the endpoint security properties.

Considering the constraints the endpoint security of TFC imposes, it's (based on my conjecture and 5 years of development, temporary CLI UI issues aside) the best possible solution. It solves the main issue: it prevents Bob, who is writing a message to Alice, Charlie and David, from accidentally also sending the message to Eve, whom Alice adds just before Bob sends the message. It also solves the "join by knowing the group ID" issue of Signal. The only major problem is the uniqueness of group names. This means a user cannot be in two groups with the exact same name. However, that hasn't been an issue on any of my messaging apps, ever.
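The core of the sender-side design maqp describes can be sketched as follows. This is my reading of the description above, not TFC's actual code; the class and method names are hypothetical. The property it demonstrates is that the roster lives only on the sending device and can only contain contacts whose keys were already exchanged, so there is no server-held membership list for an attacker to grow.

```python
class TxM:
    """Toy model of a sending device that owns its own group rosters."""

    def __init__(self):
        self.keys = {}      # contact -> pairwise key, set up out of band
        self.groups = {}    # group name -> set of member contacts

    def add_key(self, contact, key):
        self.keys[contact] = key

    def create_group(self, name, contacts):
        # Only contacts with an existing key exchange can ever be members.
        unknown = [c for c in contacts if c not in self.keys]
        if unknown:
            raise ValueError(f"no keys exchanged with: {unknown}")
        self.groups[name] = set(contacts)

    def send(self, group, message):
        # Multicast: one copy per member, each under the pairwise key.
        # No server holds the roster, so no server can add a recipient.
        return {c: (self.keys[c], message) for c in self.groups[group]}

tx = TxM()
tx.add_key("alice", b"key-a")
tx.add_key("charlie", b"key-c")
tx.create_group("team", ["alice", "charlie"])
outgoing = tx.send("team", b"hello")   # goes to exactly these two, no others
```

Attempting `tx.create_group("bad", ["eve"])` raises, which is the "you must already _want_ them in your group" property in miniature.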

Clive Robinson • January 26, 2018 1:11 PM

@ M,

If an endpoint has been compromised there's no need to compromise the server and add someone to the group, and they can even get access to previous messages.

You might want to think on that a bit more.

An endpoint compromise -- as an end-run attack -- does not of necessity give the attacker access to everything.

In particular a GUI, or more specifically a "plaintext" UI compromise, may only be very limited. That is, it gets to see the screen through the hardware drivers via shims, but not the OS above the shim or the application above the OS.

Therefore, and most importantly, the attackers only get to see the message "if and when" the user chooses to look at it in "plaintext", which might be never on the "Communications End Point Device". That is, they might decide to export the ciphertext to another unit that provides the "Security End Point"[1] that does the decryption.

Thus adding a user to the group on the server may be the attacker's only feasible option to get at the plaintext of messages.

I actually expect people to finally wake up -- over the next decade -- to the realisation that having the comms and security end points on the same consumer grade device is very bad news for their security. Thus, as with "Home Banking", get used to the idea of having the security end point and plaintext UI on a separate token.

I realised the end point issue and started talking about it last century[1]; people are only starting to catch up with the idea[3], often very badly, with "Home Banking" in the past few years.

If we write apps that do not allow for this move to "two endpoint division across two devices" operation, then we are being fools to ourselves.

Which means the likes of the SigInt agencies can carry on getting their Intel either by making "targeted attacks", or, as they have done in the past, just hoovering up the "plaintext" forwarding that the likes of CarrierIQ did in the name of "tech support" several years ago.

[1] If you are a frequent reader of this blog you will know that I regularly bang on about the idiocy of having the Security End Point on the Communications End Point. In fact, if you look back as far as you like on this blog, I've been going on about this since the mid 1990's[2], with authentication of individual financial transactions, not just authenticating the communications channel, with "Home Banking". Further stressing the importance of doing the authentication on a separate immutable device that only communicates through "the human" acting as, in effect, a firewall[3].

[2] I have, as I've mentioned before, the misfortune to have designed and made public not just the idea of using a mobile phone SMS as a side channel to transfer login credentials, but more importantly how to get around the fact that SMS is considered a secondary, not primary, service by the service providers, and thus avoid the up to 8 hours' time to deliver an SMS they used to have.

[3] It's taken the banking industry the better part of two decades to still do this ineffectively in most cases.

Thoth • January 27, 2018 5:51 AM


I did mention the same statement just below what you quoted, but @Clive Robinson is also correct in that there are varying degrees of compromise, and an attacker may not have access to the full stack.

65535 • January 27, 2018 9:55 PM

Here is what Matt Green identifies as the problem in WhatsApp:

How do members know when to add a new user to their chat? Here is where things get problematic. WhatsApp’s implementation is somewhat worse than Signal. Here I’ll break them down. From a UX perspective, the idea is that only one person actually initiates the adding of a new group member. This person is called the “administrator”. This administrator is the only human being who should actually do anything — yet, her one click must cause some automated action on the part of every other group members’ devices. That is, in response to the administrator’s trigger, all devices in the group chat must send their keys to this new group member… Ok, what’s the problem?... group management message contains the “group ID” (a long, unpredictable number), along with the identity of the person I’m adding. Signal protocol does authenticate that the group management comes from me, it doesn’t actually check that I am a member of the group — and thus authorized to add the new user! In short, if this finding is correct, it turns out that any random Signal user in the world can send you a message of the form “Add Mallory to the Group 8374294372934722942947”, and (if you happen to belong to that group) your app will go ahead and try to do it. The good news is that in Signal the attack is very difficult to execute…. messages are sent using the Signal (pairwise) protocol, they should be implicitly authenticated as coming from me — because authenticity is a property that the pairwise Signal protocol already offers... group management messages are not end-to-end encrypted or signed. They’re sent to and from the WhatsApp server using transport encryption, but not the actual Signal protocol [which is minor problem –ed].”- Matt Green

[Jump WhatsApp]

“When an administrator wishes to add a member to a group, it sends a message to the server identifying the group and the member to add. The server then checks that the user is authorized to administer that group, and (if so), it sends a message to every member of the group indicating that they should add that user. The flaw here is obvious: since the group management messages are not signed by the administrator, a malicious WhatsApp server can add any user it wants into the group. This means the privacy of your end-to-end encrypted group chat is only guaranteed if you actually trust the WhatsApp server. This undermines the entire purpose of end-to-end encryption. …[The fix] In WhatsApp, make sure that the group management messages are signed by an administrator.*… Note *The challenge here is that since WhatsApp itself determines who the administrators are, this isn’t quite so simple. But at very least you can ensure that someone in the group was responsible for the addition…. the main lesson here is: test, test, test.”-Matt Green
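The fix Green describes -- have clients verify that group-management messages are authenticated by an admin, rather than trusting the server -- can be sketched as a toy model. This is not WhatsApp's code, and HMAC with a shared admin key stands in for a real signature scheme (a proper fix would use an asymmetric signature so members hold only a verification key); the group ID reuses the example number from Green's quote.

```python
import hmac
import hashlib

# Stand-in for the admin's signing key; illustrative only.
admin_key = b"admin-signing-key"

def sign_add(group_id, new_member):
    # Admin authenticates the management message before it goes anywhere.
    msg = f"{group_id}|add|{new_member}".encode()
    tag = hmac.new(admin_key, msg, hashlib.sha256).digest()
    return msg, tag

def member_accepts(msg, tag):
    # Each member verifies before acting on an "add". A malicious server
    # relaying its own unsigned/forged "add" is simply ignored.
    expected = hmac.new(admin_key, msg, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

msg, tag = sign_add("8374294372934722942947", "dave")
legit = member_accepts(msg, tag)            # admin-authorised add
forged = member_accepts(msg, b"\x00" * 32)  # server-injected add, rejected
```

Under this check, the server reverts to being a dumb relay for membership changes, which is exactly the trust boundary an end-to-end encrypted group chat is supposed to enforce.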



“…many pro facebook reveiwers especially facebook themselves try to claim whats app is signal system just under its control… facebook controls the whatsapp server which knows group id to all groups; an admin can add a data collection account to every group if it wanted to thus being a member of every single group message on the entire whatsapp system. If they wanted to or maybe the gov agency wanted them to!?!!”- Ttim

[See above link]

I see what both Green and Ttim are saying. The confidence game is inferring that the Signal protocol is the exact same thing as WhatsApp's protocol running on Facebook's servers. That is a huge difference. It is blindly trusting Facebook to securely handle "end-2-end" encryption. That is a bad bet.

I seem to recall Facebook joining the NSA program PRISM on 6-3-2009. Did they un-join?

Even though PRISM may be shut down, I would guess it just changed its name and is run under Section 702 laws, with multiple exceptions from warrants, including the worn out "immediate threat of death or destruction" legal loophole to avoid warrants.




@ Thoth, Clive and others.

You make good sense given the old “Least Truthful Statement” thing.

Next, we get to the collection of BULLRUN systems, including TURMOIL, requests to Certificate Authorities, and LONGHORN... which twist the arms of USA companies and probably all of the 5-eye countries into doing their bidding. This includes playing the "route the packet around the World" game to bypass the USA Constitution without proper search warrants.




I think Matt Green is justified in his concerns. I don't trust that FaceCrook has stopped using the NSA connection after seeing those actual NSA slides. I do not use FaceCrook or WhatsThat.

Excuse all the grammar errors and other errors. I just banged this out.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.