Cell Phone Location Privacy

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication tokens (using Chaum blind signatures) for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
  4. On demand: The user uses the phone normally.
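The token issuance in step 2 relies on Chaum blind signatures. Here is a minimal RSA blind-signature sketch in Python (toy key sizes and a made-up token format, purely illustrative of the mechanism, not the paper’s implementation):

```python
import hashlib
import secrets
from math import gcd

# Toy RSA key for the MVNO (illustration only; real deployments need >= 2048-bit keys)
p, q = 3557, 2579
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # MVNO's private signing exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The user hashes a token naming a time slice, then blinds it with random r.
m = h(b"time-slice:2021-01-15T10")
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# 2. The MVNO signs the blinded value without ever seeing m.
s_blinded = pow(blinded, d, n)

# 3. The user unblinds; s is now a valid signature on m.
s = (s_blinded * pow(r, -1, n)) % n

# 4. At connection time, the MVNO backend verifies the token is genuine.
assert pow(s, e, n) == m
```

Because the MVNO only ever sees the blinded value at signing time, it cannot later link the token presented at a tower to the customer who paid the bill.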

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

Posted on January 15, 2021 at 6:36 AM • 29 Comments


Alex January 15, 2021 8:04 AM

In the 90s I would have been strongly in favor of using these types of systems. Now, I’m not so sure.

Leo January 15, 2021 9:21 AM

One thing I’m not getting: there has to be some way to uniquely identify each phone, to make sure subscriber A doesn’t receive subscriber B’s phone calls.

I’m assuming that’s the IMSI.

Why can’t the IMSI location be tracked? Isn’t that a unique identifier per cell tower?

Clive Robinson January 15, 2021 9:32 AM

@ Bruce, All,

Remember, under current GSM requirements both the serial number of the SIM and the serial number of the phone are sent to the cell tower.

Thus it is possible to have SIMless operation with full tracking.

Rebraem January 15, 2021 10:01 AM


… and the extensive government regulatory bureaucracy has yet to approve this new privacy workaround.

In US, the original GPS capability in all cell phones was a MANDATE from the Federal government. The government likes to keep track of everyone.

Winnie January 15, 2021 10:14 AM

The minute they get a letter from the Feds is the minute that all of this comes crashing down. Lest people forget what happened to Lavabit.

In fact, it’s worse than that, because users will flock to what they believe is a “secure” solution, only to find out otherwise when it’s too late.

Which is to say that “secure” is another word for “honeypot.” A branding term to lure in the suckers and raise prices.

MSB January 15, 2021 10:28 AM

I agree with @Leo.

I have deployed GSM/LTE for a CLEC and the radios track IMSI/IMEI (SIM/PHONE serial numbers) at the tower. They don’t necessarily track Lat/Long data off the handset, but they do track distance from tower so they know when to hand off to another cell. If you are within range of any three towers, your location can be calculated pretty quickly.
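The multi-tower ranging MSB describes amounts to trilateration: subtract the circle equations pairwise and the handset position falls out of a 2x2 linear solve. A small sketch with hypothetical tower coordinates and ranges:

```python
def trilaterate(towers, dists):
    """Estimate handset (x, y) from three known tower positions and
    measured ranges, by subtracting circle equations to get a linear system."""
    (x0, y0), (x1, y1), (x2, y2) = towers
    d0, d1, d2 = dists
    # For i in {1, 2}: 2(xi-x0)x + 2(yi-y0)y = d0^2 - di^2 + (xi^2+yi^2) - (x0^2+y0^2)
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a11 * a22 - a12 * a21  # nonzero when the towers are not collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# Hypothetical towers at (0,0), (4,0), (0,3) km; ranges consistent with a handset at (1,1)
x, y = trilaterate([(0, 0), (4, 0), (0, 3)], (2**0.5, 10**0.5, 5**0.5))
```

In practice the ranges are noisy, so operators fit over more than three towers, but the principle is the same.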

This also does nothing to prevent identification by Stingray or other types of IMSI catchers. Your phone responds with its IMSI to all requests from your network name, as there is no authentication process for a radio. The authentication happens at the network layer, because the operator is concerned about their security, not yours.

There is no reason to trust one M(V)NO over any other. Any licensed CLEC must conform to lawful intercept rules (Yep, the men with black suits show up when you apply for a CLEC license.)

So at the end of the day, burner phones paid in cash are still your best form of anonymity.

The Flower of Marxism January 15, 2021 10:29 AM

You still have things like lawful intercept requirements and RRLP.

“… since the [RRLP] protocol does not require any authentication, and it can be used outside a voice call or SMS transfer, its use is not restricted to emergency calls and can be used by law enforcement to pinpoint the exact geolocation of the target’s mobile phone.” (wiki)

David Leppik January 15, 2021 12:47 PM


According to the paper, 5G allows encrypted IMSIs, which conceal the identity of the user being reached. By generating new IMSIs frequently, it is possible for an MVNO to keep the MNO from knowing the identity of the user.

This is clearly a step in the right direction. That said, this form of anonymization depends on having a large enough population of users from a particular MVNO. If you are the only one in your neighborhood who uses a particular MVNO, you won’t be protected.

It’s like how Privacy Badger no longer uses custom tracker lists by default and the major browsers have disabled do-not-track requests. They reduce anonymity instead of increasing it.

Until a major cell phone player, i.e. Apple or Google, starts its own MVNO, this will only be useful if you happen to live in a dense neighborhood filled with other people using the same MVNO.

Clive Robinson January 15, 2021 1:05 PM

@ Winnie,

In fact, it’s worse than that, because users will flock to what they believe is a “secure” solution, only to find out otherwise when it’s too late.

Yes anyone remember “Crypto AG” of Zug Switzerland?

I guess a “honeypot” might even give itself away…

Look at it this way, anyone claiming to offer “security as a service” that lasts for more than a little while is going to be suspect one way or another.

The now old saying about “If X is made illegal, then only illegals will have it” is only partly true these days.

As I’ve mentioned before if you want real “information privacy” you need to take it entirely off of the communications device.

But at the end of the day the intelligence agencies are more interested in traffic analysis than they are in message content[1]. It’s LEOs that are the other way around… Trying to explain traffic flow to an average jury is going to be a tough sell, and it’s not really even circumstantial evidence in a court. Provable and undeniable message content is much more like real evidence; if the defence are not up to the job, the prosecution might get away with claiming it’s the equivalent of a signed confession[2]…

[1] Traffic analysis gives faster, generally more useful results. After all, if you spend a billion dollars cracking TLS and then discover the traffic content is OTP-encrypted, you are a billion bucks and god alone knows how long down for a “no show”.

[2] Some years ago “The Grugq” wrote an article on why you should never sign electronic communications; it’s worth reading if you can find it (my search-fu has let me down today).

SpaceLifeForm January 17, 2021 3:44 AM

@ Clive

This ring any bells?

ht t ps://twitter.com/thegrugq/status/588767424289243136

Or this?

h t tps://twitter.com/thegrugq/status/784129130946179072

I also can not pin down what you are referencing.

Or, were you thinking of the issues of Sign/Encrypt vs Encrypt/Sign?

Which, as you may recall, you need to do both. At least one twice. I prefer

S(E(S(E(payload)))) vs E(S(E(S(payload))))

Duchess Gloriana XII of Grand Fenwick January 17, 2021 4:06 AM

In an age of mass surveillance, facial recognition and, soon, track-all government digital currencies, it’s hard to believe that this kind of research, while interesting, would actually be allowed to increase the privacy of ordinary citizens. For something like this to succeed it would require a preexisting environment of respect for privacy, especially from a legal perspective. Does this exist? For example, 4G/5G could have been designed with privacy in mind. Was this done?

SpaceLifeForm January 17, 2021 4:12 AM

@ –

It’s a bot with a handler. I would not waste much more effort on it.

As long as readers can mouse over the link, and see it, they can ignore.

As I said before, the site has no history or reputation.

Clive Robinson January 17, 2021 8:50 AM

@ SpaceLifeForm, Winnie, Who?, ALL,

Or, were you thinking of the issues of Sign/Encrypt vs Encrypt/Sign?

No, and I cannot find it either, though The Grugq does partially mention it in the slide deck @Who? linked to.

It’s about the difference between authentication and attribution.

You have to consider that in every “assumed to be” two-party communication, a third party may be eavesdropping at the time, or one of the two communicating parties may rat the other out, either intentionally or to save their skin[1].

Thus comes a trade-off between one party authenticating to the other, but still having full deniability to all other third parties, no matter how sophisticated.

In a way it harks back to the notion of “duress tells” used by SOE and other operators during WWII. That is, the two parties have an agreement to modify a message in a particular, apparently innocuous way, such that the other party knows the sender has been captured and is now under the control of a third party.

The important thing is that it must be sufficiently deniable that even if the method is revealed, it is insufficient to act as proof to a third party or anyone else such as a jury, whilst being sufficiently strong to authenticate the originator to the second party.

There are three important things to note,

1, It must not require any magic numbers or other information for you to use it, as possession of these is in effect proof positive.

2, Whatever the method is, it must be able to survive a “messages in depth” attack. That is, if the third party has any number of previous messages, the authentication method should not show up as a verifiable correlation.

3, The method should not stand out in any message. That is, if the message is “chatty” the method should not be “formal” or “stylistic” etc.

Surprisingly, as difficult as this sounds, it can be done with “common knowledge” used as a code. So “met A at B’s” actually conveys nothing to a third party, but if the second party knows what A and B are, it has meaning to them.

As I’ve mentioned before you can do this with codes that are reordered by “One Time Pad”.

So the expression,

“We should AAA for a XXX YYY Z”

So, AAA could be – meet, meet up, go out, get together

Which gives you two bits of information.

XXX could be – tea, coffee, drink, chat

Which gives you two more bits.

Similarly, YYY could be a list of times or places to give another couple of bits, and Z, a final choice of question mark, full stop, comma or nothing, gives you another two bits.

So you have 8 bits, or a one in 256 chance for a forger to authenticate correctly. Obviously these indicators would have to change randomly, which is why you use a one time pad to encrypt your authenticator.

So provided you use the one time pad correctly and destroy used values, you have deniability; even if the second party turns traitor or is coerced and produces their OTP, all they are doing is tying themselves to the messages, not you.
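As a concrete illustration, the word-choice authenticator sketched above can be written down directly (the word lists and time/place slot are made-up placeholders; the pad byte stands in for one fresh one-time-pad value, used once and destroyed):

```python
import secrets

# Four options per slot = 2 bits per slot; 4 slots = 8 bits total.
AAA = ["meet", "meet up", "go out", "get together"]
XXX = ["tea", "coffee", "drink", "chat"]
YYY = ["at noon", "this evening", "on Friday", "next week"]  # hypothetical time/place slot
Z   = ["?", ".", ",", "!"]                                   # punctuation slot

def authenticator(pad_byte: int) -> str:
    """Encode one fresh one-time-pad byte as innocuous word choices."""
    a = pad_byte & 3
    x = (pad_byte >> 2) & 3
    y = (pad_byte >> 4) & 3
    z = (pad_byte >> 6) & 3
    return f"We should {AAA[a]} for a {XXX[x]} {YYY[y]}{Z[z]}"

def verify(sentence: str, pad_byte: int) -> bool:
    """A forger without the pad guesses the right sentence with probability 1/256."""
    return sentence == authenticator(pad_byte)

pad = secrets.randbits(8)  # one fresh pad byte per message; destroy after use, never reuse
msg = authenticator(pad)
```

To a third party every output is an equally plausible chatty sentence; only the holder of the matching pad value can check it.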

A wise person will however do other things that the second party is not aware of to give them further deniability.

As these codes work without the need for other encryption of the messages, they can be sent in plain text over an open broadcast network such as a morse code radio circuit without attracting attention.

[1] In the slide deck, the old joke about “having better sneakers so you can run fast enough to outrun your companion when the bear comes charging through” is used. Only the twist is that the slow one is the one the bear gives a chance to rat out everyone else… A point that everyone using any kind of communications with others should remember at all times. There is an old saying about “snitches and wires”, which is,

A friendly chat is a deadly chat.

Clive Robinson January 17, 2021 9:21 AM

Just noticed I could have made,

“It’s about the difference between authentication and attribution.”

A bit more clear,

“It’s about the difference between authentication information in a message and unwanted attribution to the person who sends the information.”


“The first party wants the second party to accept the information as authenticated or valid BUT only to them and them alone. To the third and all other parties the first party wants full deniability or non attribution to them.”

In effect, the electronic equivalent of “a note dropped in the pocket” in an old-school tradecraft “brush-off”.

littleknown January 18, 2021 1:47 AM

How difficult would it be to do the last mile yourself? A lot of people spend a majority of their time within, say, a 50 km radius of their home/office. Have a base station at home, and an RF link in the ISM bands to the handset. For example, the Motorola T800 claims a range of 35 miles in open areas at 467 MHz. I am sure some system like this could be used to establish a data link offering decent bandwidth. Yes, the handset will be like a beacon in the dark, but only if someone is following/trailing you. The automated IMSI-tracking method won’t work unless you are already a suspect and being tracked. The base station can take the usual privacy precautions on the internet.

Peter A. January 18, 2021 3:16 AM

I stopped updating my knowledge about cellular systems at 3G networks and I am not aware of current developments, so the article is partly unclear to me. But I see the concept: use identical IMSIs (permanent identifier) and keys for each customer (programming all SIMs to be identical), then let them all in (no duplicate detection) and assign them a TMSI (temporary identifier) for the connection, so the network can route L2/L3 traffic to individual handsets, page them, etc. Then they would authenticate data connections – solely for billing purposes – by some kind of one-time passwords. This would require a special ‘dialer’ or similar app. Then they’d assign customers IP addresses and route/NAT their IP traffic as normal. On top of that, users would use some (more or less) secure comms app. In this way users could hide in a mob of indistinguishable clones.

This has several drawbacks:
– you need to trust the MVNO not to track which bunch of OTPs were given to which customer, so they cannot be forced to reveal your identity
– you need to fight the network retrieving the handset serial number (ESN/IMEI) somehow: either modify the network firmware (generally not possible) or reprogram your handset to have the same serial number as all other customers (the MVNO staff needs to be competent enough)
– there needs to be an actual mob to hide in, i.e. a fairly large number of customers in a particular area, and your movement pattern should not stand out from the crowd

On top of that, legal frameworks mandate spying, so such an MVNO would simply not be allowed to exist.

Clive Robinson January 18, 2021 4:01 AM

@ Peter A.,

For the system to work all handsets would have to be identical in every way. Even temporary IDs will give away which handset is which just by normal physical movement.

That is, most of us sleep in the same bed each night, sit at the same desk most work days, shop at the same shops, often at the same times every week, etc. Thus a temporary ID can be tied to an individual’s phone by that individual’s standard behaviours in geo-physical space.

All the mobile networks need to know where a handset is in their network so calls can be routed to it; otherwise the broadcast bandwidth would be excessive and a complete waste of spectrum space.

It’s just one of many reasons I keep mentioning “Security-v-Efficiency”. Generally the more efficient a system is the less secure it is.

Normally efficiency makes a system more transparent and opens more side channels. This case, however, shows a different issue. To get higher efficiency, resources are reused, thus you need to somehow separate the reuse instances to stop them interfering with each other. This reuse separation can be done in different ways, such as time, space, or code/identity, but the result is the same… For the system to work it has to know where individual units are, to synchronize the resource reuse and to ensure it uses the correct resource-reuse instance.

When you sit and think about it you realise there is no usable way around the issue.

I’ve known this for four decades, and I’m still looking for a way to do a version of it on a routed network, where mobile units have to change their network IDs/addresses when they move. How do you establish peer-to-peer communication anonymously? That is, can you come up with an anonymous Rendezvous Protocol, given that you cannot use a broadcast model?

SpaceLifeForm January 19, 2021 3:21 AM

@ Clive

I’ve hinted at what I believe is the answer before. It requires a federated cloud. And there is no way it can be realtime. Nearly realtime maybe. Like maybe the speed of SMS. Probably better than SMS. Probably more reliable than SMS. But it has to be connectionless between Alice and Bob.

Clive Robinson January 19, 2021 9:40 AM

@ SpaceLifeForm,

I’ve hinted at what I believe is the answer before. It requires a federated cloud.

The problem with “federated” is it means different things to different people.

However there is one major failing in all the common federated systems,

Trust Me because of “token”.

That is each domain has to trust another domain because it has some kind of secret that is the token.

Such a shared secret token can be issued by Alice’s Domain to Bob’s Domain (or the other way around). Provided Alice and Bob trust each other not to lose or betray the “shared secret” they can use it as a root of trust.

However such a method very quickly suffers from a database-size issue: the number of entries m in the shared-secret database is m = 0.5(n^2 - n), so it grows with the square of the number of domains n. So 100 domains needs 4950 (~5k) entries and 200 domains needs 19900 (~20k) entries: twice the number of domains, four times the entries, as expected. Now IPv4 in theory allowed ~2^24 network domains of 254 members each, so ~1.4×10^14 domain entries, which is ~2^47, a little under 2^48: a very large database… The long answer short,

One-to-one shared-secret systems are not scalable, thus not practical.

And that’s before talking about the practicalities of actually sharing a secret between two domains. And how it has to be done prior to any electronic communications, etc, etc, etc.
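The pairwise scaling argument is easy to check numerically, a quick sketch:

```python
def shared_secret_entries(n: int) -> int:
    """Pairwise shared secrets needed among n domains: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the number of domains roughly quadruples the database.
print(shared_secret_entries(100))    # 4950
print(shared_secret_entries(200))    # 19900
# ~2^24 IPv4 /24-sized domains: ~1.4e14 entries, just under 2^48
print(shared_secret_entries(2**24))
```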

There are two commonly known ways to resolve this. One is a “hierarchy” system such as CAs, which has a serious corruption-at-the-top issue. The second is a “reputational” system such as a web of trust, which has a very serious range issue beyond two degrees and becomes increasingly fragile, thus easier to pervert. Oh, and whilst both are pervertible by simple corruption, and differ wildly for just one perversion, as the number of perversions rises the costs fairly quickly converge to about the same for industrial-scale surveillance.

So the question arises of can a federated service be avoided.

Well, not really: take it to the ultimate conclusion and your “address book” is in effect a federated reputational system.

The problem is assessing and weighting trust for hierarchy -v- reputation. In general, the bigger a “conspiracy” is, the higher the probability that one member betrays it, so people think hierarchical systems are more trustworthy, as they would hear about a betrayal sooner. The problem is that, like a door, that argument swings both ways: they are also more easily secretly corrupted. In fact, as I mentioned with corruption costs, after a limited number of corruptions both hierarchical and reputational systems are about the same for mass surveillance.

So how to get reliable trust is the key question, and no matter how loud you shout it, the answers are few, mostly negative, and lost in the background noise.

- January 19, 2021 3:33 PM

@ Moderator,

The above from “Auloa” is unsolicited advertising, and a repeat offender at that.

SpaceLifeForm January 19, 2021 6:21 PM

@ Clive

I should not have used ‘federated’.

What I meant was autonomous servers that agree to peer with each other.

No tokens, no DNS. They are store-and-forward. Think NNTP via IP address.

The servers have zero concept of users. They are only there to store-and-forward chunks of data. The servers have no keys. The servers are blind as to any meaning about the chunks of data.

Alice creates the chunk and gives it to a server or multiple that are part of the peering network.

Bob will ‘find’ a chunk destined to him. (Bob will also ‘find’ chunks that have zero meaning to Bob).
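A minimal sketch of that blind store-and-forward idea (the sealing scheme and all names here are mine, purely illustrative; a real system would use a proper AEAD cipher rather than this toy hash keystream):

```python
import hashlib
import hmac
import secrets

class BlindServer:
    """Stores opaque chunks; has no concept of users, keys, or meaning."""
    def __init__(self):
        self.chunks = []
    def store(self, blob: bytes):
        self.chunks.append(blob)
    def dump(self) -> list:
        return list(self.chunks)

def seal(shared_key: bytes, payload: bytes) -> bytes:
    """Toy sealing: per-chunk nonce, recognition tag, hash-derived keystream.
    Payloads limited to 32 bytes here; real use needs an AEAD scheme."""
    nonce = secrets.token_bytes(16)
    tag = hmac.new(shared_key, nonce, hashlib.sha256).digest()[:8]
    stream = hashlib.sha256(shared_key + nonce).digest()
    body = bytes(a ^ b for a, b in zip(payload, stream))
    return nonce + tag + body

def try_open(shared_key: bytes, blob: bytes):
    """Bob trial-opens every chunk; returns a payload only if it's for him."""
    nonce, tag, body = blob[:16], blob[16:24], blob[24:]
    expect = hmac.new(shared_key, nonce, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(expect, tag):
        return None  # someone else's chunk: meaningless noise to Bob
    stream = hashlib.sha256(shared_key + nonce).digest()
    return bytes(a ^ b for a, b in zip(body, stream))

server = BlindServer()
alice_bob_key = secrets.token_bytes(32)   # pre-shared out of band
other_key = secrets.token_bytes(32)
server.store(seal(other_key, b"not for bob"))
server.store(seal(alice_bob_key, b"hello bob"))
found = [m for blob in server.dump()
         if (m := try_open(alice_bob_key, blob)) is not None]
```

Note the recognition tag is keyed on a per-chunk nonce, so two chunks for the same pair are unlinkable; the servers see only opaque blobs, as described.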

Chris Drake February 15, 2021 3:40 AM

What MSB said – this does nothing to prevent the original triangulation-tracking that’s been around since before mobile data was even invented…

James February 16, 2021 8:18 AM

PGP is crackable since 2010. Using supercomputer power i could grab the key in less than 1 hour. PM me for access.

xcv February 16, 2021 9:16 AM


PGP is crackable since 2010. Using supercomputer power i could grab the key in less than 1 hour. PM me for access

I don’t doubt it for a minute. Since year 2000, even. There’s too much vice, and it’s never admissible as such in a court of law.

That means it’s breakable. Not to mention the cops never seem to have any problems with it in the field. Especially as it yields a “fingerprint” to trace any message back to its author.

A crew of Germans has taken over GnuPG development, Chaos Computer Club and others, all licensed and registered “hackers” under the direction of Angela Merkel’s administration.

james February 16, 2021 5:48 PM


Yes, all “HACKERS” from Chaos Computer Club are now the Directors of Cyberintelligence. (funny). Hackers are promoted to government.

I am a government coder and have access to that information.
PGP is crackable, easy to do it nowadays.

xcv February 16, 2021 6:28 PM


Yes, all “HACKERS” from Chaos Computer Club are now the Directors of Cyberintelligence. (funny). Hackers are promoted to government.

Slaves to the government in any event. Merkel’s regime keeps them under such close supervision online and off — German government agents who direct certain online activities find domestic hackers more or less “useful” on a CIA-like contract basis.

I am a government coder and have access to that information.
PGP is crackable, easy to do it nowadays

Probably much easier to crack the devices that host the said keys than by cracking the mathematical algorithms (RSA, DSA, etc.) the “honorable” way. But who says you can’t cheat in a game where all’s fair in love and war?

Mellowin May 31, 2021 3:07 AM

Privacy settings in iOS and iPadOS allow you to control app access to information stored on your device. For example, you can allow a social networking application to use the camera so that you can take pictures and send them using that application. You can also allow access to your contacts so that the messaging app can find friends who are already using the app.
