Determining Physical Location on the Internet

Interesting research: “CPV: Delay-based Location Verification for the Internet”:

Abstract: The number of location-aware services over the Internet continues growing. Some of these require the client’s geographic location for security-sensitive applications. Examples include location-aware authentication, location-aware access policies, fraud prevention, complying with media licensing, and regulating online gambling/voting. An adversary can evade existing geolocation techniques, e.g., by faking GPS coordinates or employing a non-local IP address through proxy and virtual private networks. We devise Client Presence Verification (CPV), a delay-based verification technique designed to verify an assertion about a device’s presence inside a prescribed geographic region. CPV does not identify devices by their IP addresses. Rather, the device’s location is corroborated in a novel way by leveraging geometric properties of triangles, which prevents an adversary from manipulating measured delays. To achieve high accuracy, CPV mitigates Internet path asymmetry using a novel method to deduce one-way application-layer delays to/from the client’s participating device, and mines these delays for evidence supporting/refuting the asserted location. We evaluate CPV through detailed experiments on PlanetLab, exploring various factors that affect its efficacy, including the granularity of the verified location, and the verification time. Results highlight the potential of CPV for practical adoption.
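In rough outline, the verifiers form a triangle enclosing the asserted location, measure one-way delays to the client’s device, and check those delays for geometric consistency. Here is a minimal Python sketch of that idea; it is not the authors’ implementation, and the verifier coordinates, delay-to-distance constant, and tolerance are all illustrative assumptions.

```python
# Minimal sketch of the CPV idea (not the authors' code): three verifiers
# forming a triangle check whether measured one-way delays are consistent
# with a client's asserted location inside that triangle.
import math

# Hypothetical verifier coordinates (km, on a local planar projection).
A, B, C = (0.0, 0.0), (400.0, 0.0), (200.0, 300.0)

SPEED_KM_PER_MS = 100.0  # assume ~2/3 of c, a common figure for fiber

def area(p, q, r):
    """Unsigned area of triangle p-q-r (shoelace formula)."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2.0

def plausible(one_way_delays_ms, asserted, tolerance=0.01):
    """Accept the asserted point only if (a) no delay is too small for the
    verifier-to-point distance, and (b) the point lies inside the verifier
    triangle: the three sub-triangle areas must sum to the whole area."""
    for v, d_ms in zip((A, B, C), one_way_delays_ms):
        if d_ms * SPEED_KM_PER_MS < math.dist(v, asserted):
            return False  # claimed delay is physically impossible
    whole = area(A, B, C)
    parts = (area(A, B, asserted) + area(B, C, asserted)
             + area(C, A, asserted))
    return abs(parts - whole) <= tolerance * whole

print(plausible([2.5, 2.8, 2.2], (180.0, 120.0)))  # True: consistent
print(plausible([2.5, 2.8, 2.2], (600.0, 120.0)))  # False: rejected
```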

News articles.

Posted on February 12, 2016 at 6:19 AM • 17 Comments

Comments

keiner February 12, 2016 7:00 AM

Is the system able to detect if I’m not routing my Internet traffic through a VPN, but instead using a remote computer via VPN?

Clive Robinson February 12, 2016 8:01 AM

I remember the UK’s Cambridge labs looking at the “telling distance” problem to try and stop CC card MITM attacks etc.

It’s a hard, almost “Russian Doll” problem for a number of reasons, one of which is that it only works on the likes of “tokens”, not humans, thus giving rise to various other problems.

I shall have to have a think on this to see if I can spot ways around it 😉

steven February 12, 2016 8:24 AM

The speed of communication over the Internet keeps getting faster, and is some fraction of the speed of light.

You can measure the RTT (round-trip time) of a TCP three-way handshake. Assuming low latency and no congestion, you can prove the initiator is within a certain distance based on how long that takes.
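As a rough illustration of that bound (not from the paper), the sketch below times a TCP connect and converts half the RTT into an upper limit on distance; the target host and the fiber-speed constant are placeholder assumptions.

```python
# Time a TCP handshake and turn RTT/2 into an upper bound on distance.
# Purely illustrative; real paths are longer than line-of-sight.
import socket
import time

HOST, PORT = "example.com", 443   # hypothetical target
FIBER_KM_PER_MS = 100.0           # assume ~2/3 of c in fiber

ip = socket.gethostbyname(HOST)   # resolve first so DNS isn't timed
start = time.perf_counter()
with socket.create_connection((ip, PORT), timeout=5):
    rtt_ms = (time.perf_counter() - start) * 1000.0

# One-way delay is at most RTT/2, so the peer can be no farther than this.
print(f"RTT {rtt_ms:.1f} ms -> peer within ~{rtt_ms / 2 * FIBER_KM_PER_MS:.0f} km")
```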

But in practice, some VPNs, most proxies, and all Tor exit relays initiate the real TCP connection to the server themselves, so you’re not seeing the full end-to-end latency in that case.

If you can do some cryptographic challenge that is really end-to-end, like how HTTPS usually works, with something running in the browser or on the user’s machine, you can prove the endpoints are within a certain path distance. Using a VPN in another country would likely increase that distance. A multi-hop anonymity network like Tor would increase it even more. You really could block those, or far-away countries, by setting a threshold on how fast the response must come back.
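A toy version of such a timed challenge-response is sketched below; the pre-shared key and the threshold value are assumptions for illustration, and a real deployment would bind the challenge to the TLS session rather than a standalone key.

```python
# Toy timed challenge-response: the verifier sends a random nonce, the
# client returns an HMAC over it, and the answer must be both correct
# and fast enough. Illustrative only.
import hashlib, hmac, os, time

KEY = os.urandom(32)      # pre-shared key, assumed for this sketch
THRESHOLD_MS = 40.0       # assumed latency budget

def client_answer(nonce: bytes) -> bytes:
    return hmac.new(KEY, nonce, hashlib.sha256).digest()

def verify_presence(send_challenge) -> bool:
    nonce = os.urandom(16)
    start = time.perf_counter()
    answer = send_challenge(nonce)           # round trip to the client
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    expected = hmac.new(KEY, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(answer, expected) and elapsed_ms < THRESHOLD_MS

print(verify_presence(client_answer))  # local call: passes easily
```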

In reality, there will be congestion and processing delays; Internet traffic may legitimately take other paths from time to time; and the end user may be using wireless connectivity where latency can get high due to interference and retransmits.

So it sounds like it would be a useful heuristic, but that’s all really.

steven February 12, 2016 8:40 AM

@keiner: remotely operating a machine that is physically close to the server would easily circumvent this kind of access control, yes. Some state actors and (other) cybercriminals already control thousands of machines inside target countries.

This might be more appropriate when smartcards are doing the actual crypto challenge. You could determine that the smartcard and server are within a certain proximity.

I’d be surprised if NFC or contactless payment systems don’t employ such a technique between smartcard and terminal to ensure they’re really in proximity. Though I think I read they screwed this up, and you actually can put relays between them.
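For what it’s worth, card/terminal proximity is exactly what distance-bounding protocols (in the spirit of Brands-Chaum) try to prove: many rapid single-bit challenge rounds whose worst-case round-trip time bounds the distance. A toy sketch follows, with every constant illustrative (real implementations run at the radio layer with nanosecond timing, not in Python):

```python
# Toy distance-bounding rounds: time single-bit challenge/response
# exchanges and convert the worst round trip into a distance bound.
import os, time

LIGHT_M_PER_NS = 0.3   # light travels ~0.3 m per nanosecond
ROUNDS = 32

def prover(challenge: int, r0: int, r1: int) -> int:
    # An honest prover answers instantly with one of two committed bits.
    return r1 if challenge else r0

def bound_distance_m() -> float:
    worst_rtt_ns = 0
    for _ in range(ROUNDS):
        r0, r1 = os.urandom(1)[0] & 1, os.urandom(1)[0] & 1
        challenge = os.urandom(1)[0] & 1
        start = time.perf_counter_ns()
        response = prover(challenge, r0, r1)
        rtt_ns = time.perf_counter_ns() - start
        if response != (r1 if challenge else r0):
            raise ValueError("wrong response: abort")
        worst_rtt_ns = max(worst_rtt_ns, rtt_ns)
    return worst_rtt_ns / 2 * LIGHT_M_PER_NS

print(f"prover is within ~{bound_distance_m():.0f} m")
```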

Joel Sommers February 12, 2016 9:08 AM

The analysis behind this technique may be reasonable, but the experimental methodology (i.e., does it work?) is completely unsound. Using PlanetLab for accurate delay measurement has known flaws, and the platform simply cannot be relied upon for evaluating any Internet measurement technique requiring accurate delays (or accurate packet-loss measurement). A few searches on Google Scholar or other academic indexers will turn up references (full disclosure: I wrote one of the papers that analyzes the poor performance of PlanetLab hosts for packet delay and loss measurement). Even worse, the measurements were done in April 2013, a time of year known to yield especially poor results on PlanetLab due to an academic conference paper deadline in May (ACM Internet Measurement Conference).
Unfortunately, the authors appear to be completely oblivious to these problems, and reviewers of the paper apparently did not do due diligence on that aspect of the paper. As a consequence, my personal opinion is that the results of the paper cannot be accepted.

Shazbot February 12, 2016 9:53 AM

From the linked article: guy developing a way to enforce outdated, artificial, false-scarcity-based geographic pricing models for licensing of media content. Seeks to protect exploitive corporate interests by further restricting the free internet.

Thanks guy. You’re a real sport.

Neill Miller February 12, 2016 10:47 AM

The danger is that any equipment change will also change the timing by fractions of a millisecond, potentially disabling devices thereafter.

I do know locations where CAT3 with 10M half duplex is being used that was maybe installed during the Ramses II period; others are on 802.11b (yes, even with WEP).

MrC February 12, 2016 11:03 AM

Looking at this from the perspective of a user over a proxy: Perhaps I’m missing something here, but what’s to stop a next-generation proxy from identifying the verifiers* and then just interacting with them directly (i.e., not proxying those particular connections and therefore not incurring the telltale lag the system depends on)?

* Although I can envision more complicated schemes (e.g., a browser extension identifies the verifiers when the webpage is parsed, then the browser instructs the proxy to initiate direct connections to them), I’d expect that cost factors would keep the rate at which verifiers get moved around low enough that proxy operators could keep up with a hand-maintained, hard-coded list.

Anonymous Cow February 12, 2016 12:13 PM

…locations where CAT3 with 10M half duplex is being used that was maybe installed during the Ramses II period…

In the late 1990s my parents signed up with the local telco’s TV offerings (since discontinued). When I looked at the setup, everything looked fine up to the demarc on the house. Opening that up, I found the house service was 20/4 stranded wire, not labeled as CAT-anything! I couldn’t prove it, but I suspected they were actually sending the signal wirelessly, because I could not understand how that puny 4-conductor wire (actually 2; the other two wires were disconnected) could handle the amount and quality of signal they were getting (over 100 video and 50 audio channels going to 2 distribution boxes, with PPV capability).

r February 12, 2016 12:32 PM

@Clive,

That’s what it struck me as, but I don’t really see it working at any layer against bank/CC fraud, considering the use of RDP/VNC and that recent Belgian bug that literally had every transfer pre-scripted…

It may help against BGP, DNS, and route hacking though… it could still be modulated for higher-latency windows if an adversary had the high ground first, in a preemptive fashion, if it’s a ‘learning’ implementation…

Just thoughts, I’m not a network engineer.

r February 12, 2016 12:42 PM

Maybe if the banking apps ran in the new ‘ring -1’ and were fully encrypted HTTPS with the banks’ certs pre-installed, like how Microsoft’s keys are inside UEFI… Then I suppose the only problem is if there is globally accessible video memory… Nvidia recently had a problem clearing it, and I think some of the Android exploits for Samsung were global read/writes?

Daniel February 12, 2016 4:18 PM

My reaction is that it is going to generate too many false positives, and then one is going to have a version of the “fraud vs. insult” rate problem. In other words, even if it were to have some technological usefulness, the way around it (@Clive) will be social engineering.

MonBullet Google App February 14, 2016 2:18 PM

Google App MyShake provides SAFETY via large sensor networks on smartphones.
FRANCE: MonBullet, or ‘Mon’ Criminal=Terrorist Paris ‘Shake’, provides SAFETY. The French programmers have NO SUCH APP!

Quel dommage. What a pity.
MYSHAKE MYBULLET?

The requirements:
1.) Even in the dark, a person can wave or shake it towards the sounds of bullets or a person shouting HATE.
2.) Real-time mobile sensor platforms. Where exactly are the terrorists moving to?
3.) Meets standards of S.M.A.R.T.
4.) Discriminates against accidental panic-button presses or ‘butt dialing.’ Morse code S.O.S. is three short, three long, three short.
MYSHAKE MYBULLET?

Perhaps I missed it, but VIP persons like Schneier, etc., go to HOTEL CONFERENCES in FRANCE. French programmers have NOTHING, NADA (fill in the French word here) similar to MyShake.

Thank you, Berkeley and university scientists. Merci. MYSHAKE MYBULLET?

PS: It would be nice to see code not in USA closed-source Microsoft Visual Basic (the 10-year-old version, of course) but rather in OCaml, Haskell, Rust, etc.

Perhaps the French programmers have heard of OCaml?

No, please do not consider posting this on HACKER NEWS!
