Entries Tagged "tracking"


RFID Tags Protecting Hotel Towels

The stealing of hotel towels isn’t a big problem in the scheme of world problems, but it can be expensive for hotels. Sure, we have moral prohibitions against stealing—that’ll prevent most people from stealing the towels. Many hotels put their name or logo on the towels. That works as a reputational societal security system; most people don’t want their friends to see obviously stolen hotel towels in their bathrooms. Sometimes, though, this has the opposite effect: making towels and other items into souvenirs of the hotel and thus more desirable to steal. It’s against the law to steal hotel towels, of course, but with the exception of large-scale thefts, the crime will never be prosecuted. (This might be different in third world countries. In 2010, someone was sentenced to three months in jail for stealing two towels from a Nigerian hotel.) The result is that more towels are stolen than hotels want. And for expensive resort hotels, those towels are expensive to replace.

The only thing left for hotels to do is take security into their own hands. One system that has become increasingly common is to set prices for towels and other items—this is particularly common with bathrobes—and charge the guest for them if they disappear from the rooms. This works with some things, but it’s too easy for the hotel to lose track of how many towels a guest has in his room, especially if piles of them are available at the pool.

A more recent system, still not widespread, is to embed washable RFID chips into the towels and track them that way. The one data point I have for this is an anonymous Hawaii hotel that claims they’ve reduced towel theft from 4,000 a month to 750, saving $16,000 in replacement costs monthly.

Assuming the RFID tags are relatively inexpensive and don’t wear out too quickly, that’s a pretty good security trade-off.
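The article's numbers let us check that trade-off back-of-envelope. A minimal sketch, using the figures reported above; the implied per-towel cost and the $1 tag price are my own derivation and assumption, not from the article:

```python
# Back-of-envelope check of the hotel's reported numbers.
towels_before = 4000      # stolen per month, pre-RFID (from the article)
towels_after = 750        # stolen per month, post-RFID (from the article)
monthly_savings = 16000   # dollars, as reported

towels_saved = towels_before - towels_after              # 3250 towels/month
implied_cost_per_towel = monthly_savings / towels_saved  # ~$4.92 per towel

# Hypothetical break-even: if a washable RFID tag costs about $1 and lasts
# the towel's whole service life, tagging pays off as long as the tag price
# stays below the per-towel replacement cost.
tag_cost = 1.00  # dollars, assumed for illustration
print(f"implied replacement cost per towel: ${implied_cost_per_towel:.2f}")
print(f"tagging worthwhile: {tag_cost < implied_cost_per_towel}")
```

At roughly $5 per towel, the tags only need to cost a fraction of that, and survive repeated industrial washing, for the system to pay for itself within a month or two.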

Posted on May 11, 2011 at 11:01 AM

Software as Evidence

Increasingly, chains of evidence include software steps. It’s not just the RIAA suing people—and getting it wrong—based on automatic systems to detect and identify file sharers. It’s forensic programs used to collect and analyze data from computers and smart phones. It’s audit logs saved and stored by ISPs and websites. It’s location data from cell phones. It’s e-mails and IMs and comments posted to social networking sites. It’s tallies from digital voting machines. It’s images and meta-data from surveillance cameras. The list goes on and on. We in the security field know the risks associated with trusting digital data, but this evidence is routinely assumed by courts to be accurate.

Sergey Bratus is starting to look at this problem. His paper, written with Ashlyn Lembree and Anna Shubina, is “Software on the Witness Stand: What Should it Take for Us to Trust it?”:

We discuss the growing trend of electronic evidence, created automatically by autonomously running software, being used in both civil and criminal court cases. We discuss trustworthiness requirements that we believe should be applied to such software and platforms it runs on. We show that courts tend to regard computer-generated materials as inherently trustworthy evidence, ignoring many software and platform trustworthiness problems well known to computer security researchers. We outline the technical challenges in making evidence-generating software trustworthy and the role Trusted Computing can play in addressing them.

From a presentation he gave on the subject:

Constitutionally, criminal defendants have the right to confront accusers. If software is the accusing agent, what should the defendant be entitled to under the Confrontation Clause?

[…]

Witnesses are sworn in and cross-examined to expose biases & conflicts—what about software as a witness?

Posted on April 19, 2011 at 6:47 AM

Pinpointing a Computer to Within 690 Meters

This is impressive, and scary:

Every computer connected to the web has an internet protocol (IP) address, but there is no simple way to map this to a physical location. The current best system can be out by as much as 35 kilometres.

Now, Yong Wang, a computer scientist at the University of Electronic Science and Technology of China in Chengdu, and colleagues at Northwestern University in Evanston, Illinois, have used businesses and universities as landmarks to achieve much higher accuracy.

These organisations often host their websites on servers kept on their premises, meaning the servers’ IP addresses are tied to their physical location. Wang’s team used Google Maps to find both the web and physical addresses of such organisations, providing them with around 76,000 landmarks. By comparison, most other geolocation methods only use a few hundred landmarks specifically set up for the purpose.

The new method zooms in through three stages to locate a target computer. The first stage measures the time it takes to send a data packet to the target and converts it into a distance—a common geolocation technique that narrows the target’s possible location to a radius of around 200 kilometres.

Wang and colleagues then send data packets to the known Google Maps landmark servers in this large area to find which routers they pass through. When a landmark machine and the target computer have shared a router, the researchers can compare how long a packet takes to reach each machine from the router; converted into an estimate of distance, this time difference narrows the search down further. “We shrink the size of the area where the target potentially is,” explains Wang.

Finally, they repeat the landmark search at this more fine-grained level: comparing delay times once more, they establish which landmark server is closest to the target. The result can never be entirely accurate, but it’s much better than trying to determine a location by converting the initial delay into a distance or the next best IP-based method. On average their method gets to within 690 metres of the target and can be as close as 100 metres—good enough to identify the target computer’s location to within a few streets.
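The core primitive in all three stages is converting a measured delay into a distance bound, then using shared-router delay comparisons to rank landmarks. A minimal sketch of those two steps; the conversion factor and all measurement values are illustrative assumptions, not taken from the paper:

```python
# Stage one: an RTT measurement bounds how far away the target can be.
# Signals travel through fiber at roughly 2/3 the speed of light,
# i.e. about 200 km per millisecond (a common rule of thumb).
SPEED_IN_FIBER_KM_PER_MS = 200.0

def max_distance_km(rtt_ms: float) -> float:
    """Upper bound on distance: the packet spent at most half the RTT
    travelling one way, at no more than the propagation speed of fiber."""
    return (rtt_ms / 2.0) * SPEED_IN_FIBER_KM_PER_MS

# Stages two and three: among landmarks that share a router with the
# target, pick the one whose delay from that router is closest to the
# target's delay from the same router.
def closest_landmark(target_delay_ms, landmark_delays_ms):
    return min(landmark_delays_ms, key=lambda kv: abs(kv[1] - target_delay_ms))

# Hypothetical delays (in ms) from a shared router to three landmarks:
landmarks = [("university-A", 1.9), ("business-B", 0.4), ("business-C", 3.2)]
print(max_distance_km(2.0))              # a 2 ms RTT bounds the target to 200 km
print(closest_landmark(0.5, landmarks))  # business-B is the nearest landmark
```

The accuracy of the final answer is then limited by landmark density: with 76,000 landmarks there is usually one within a few hundred metres of the target, which is why the median error drops to 690 metres.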

Posted on April 8, 2011 at 6:22 AM

Identifying Tor Users Through Insecure Applications

Interesting research: “One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users”:

Abstract: Tor is a popular low-latency anonymity network. However, Tor does not protect against the exploitation of an insecure application to reveal the IP address of, or trace, a TCP stream. In addition, because of the linkability of Tor streams sent together over a single circuit, tracing one stream sent over a circuit traces them all. Surprisingly, it is unknown whether this linkability allows in practice to trace a significant number of streams originating from secure (i.e., proxied) applications. In this paper, we show that linkability allows us to trace 193% of additional streams, including 27% of HTTP streams possibly originating from “secure” browsers. In particular, we traced 9% of Tor streams carried by our instrumented exit nodes. Using BitTorrent as the insecure application, we design two attacks tracing BitTorrent users on Tor. We run these attacks in the wild for 23 days and reveal 10,000 IP addresses of Tor users. Using these IP addresses, we then profile not only the BitTorrent downloads but also the websites visited per country of origin of Tor users. We show that BitTorrent users on Tor are over-represented in some countries as compared to BitTorrent users outside of Tor. By analyzing the type of content downloaded, we then explain the observed behaviors by the higher concentration of pornographic content downloaded at the scale of a country. Finally, we present results suggesting the existence of an underground BitTorrent ecosystem on Tor.

Posted on March 25, 2011 at 6:38 AM

The FBI is Tracking Whom?

They’re tracking a college student in Silicon Valley. He’s 20, partially Egyptian, and studying marketing at Mission College. He found the tracking device attached to his car. As near as he could tell, what he did to warrant the FBI’s attention was to be the friend of someone who did something to warrant the FBI’s attention.

Afifi retrieved the device from his apartment and handed it over, at which point the agents asked a series of questions: did he know anyone who traveled to Yemen or was affiliated with overseas training? One of the agents produced a printout of a blog post that Afifi’s friend Khaled allegedly wrote a couple of months ago. It had “something to do with a mall or a bomb,” Afifi said. He hadn’t seen it before and doesn’t know the details of what it said. He found it hard to believe Khaled meant anything threatening by the post.

Here’s the Reddit post:

bombing a mall seems so easy to do. i mean all you really need is a bomb, a regular outfit so you arent the crazy guy in a trench coat trying to blow up a mall and a shopping bag. i mean if terrorism were actually a legitimate threat, think about how many fucking malls would have blown up already.. you can put a bag in a million different places, there would be no way to foresee the next target, and really no way to prevent it unless CTU gets some intel at the last minute in which case every city but LA is fucked…so…yea…now i’m surely bugged : /

Here’s the device. Here’s the story, told by the student who found it.

This weird story poses three sets of questions.

  1. Is the FBI’s car surveillance technology that lame? Don’t they have bugs that are a bit smaller and less obtrusive? Or are they surveilling so many people that they’re forced to use the older models as well as the newer, smaller, stuff?

    From a former FBI agent:

    The former agent, who asked not to be named, said the device was an older model of tracking equipment that had long ago been replaced by devices that don’t require batteries. Batteries die and need to be replaced if surveillance is ongoing so newer devices are placed in the engine compartment and hardwired to the car’s battery so they don’t run out of juice. He was surprised this one was so easily found.

    “It has to be able to be removed but also stay in place and not be seen,” he said. “There’s always the possibility that the car will end up at a body shop or auto mechanic, so it has to be hidden well. It’s very rare when the guys find them.”

  2. If they’re doing this to someone so tangentially connected to a vaguely bothersome post on an obscure blog, just how many of us have tracking devices on our cars right now—perhaps because of this blog? Really, is that blog post plus this enough to warrant surveillance?

    Afifi’s father, Aladdin Afifi, was a U.S. citizen and former president of the Muslim Community Association here, before his family moved to Egypt in 2003. Yasir Afifi returned to the United States alone in 2008, while his father and brothers stayed in Egypt, to further his education he said. He knows he’s on a federal watchlist and is regularly taken aside at airports for secondary screening.

  3. How many people are being paid to read obscure blogs, looking for more college students to surveil?

Remember, the Ninth Circuit Court recently ruled that the police do not need a warrant to attach one of these things to your car. That ruling holds true only for the Ninth Circuit right now; the Supreme Court will probably rule on this soon.

Meanwhile, the ACLU is getting involved:

Brian Alseth from the American Civil Liberties Union in Washington state contacted Afifi after seeing pictures of the tracking device posted online and told him the ACLU had been waiting for a case like this to challenge the ruling.

“This is the kind of thing we like to throw lawyers at,” Afifi said Alseth told him.

“It seems very frightening that the FBI have placed a surveillance-tracking device on the car of a 20-year-old American citizen who has done nothing more than being half-Egyptian,” Alseth told Wired.com.

Posted on October 13, 2010 at 6:20 AM

Hacking Cars Through Wireless Tire-Pressure Sensors

Still minor, but this kind of thing is only going to get worse:

The new research shows that other systems in the vehicle are similarly insecure. The tire pressure monitors are notable because they’re wireless, allowing attacks to be made from adjacent vehicles. The researchers used equipment costing $1,500, including radio sensors and special software, to eavesdrop on, and interfere with, two different tire pressure monitoring systems.

The pressure sensors contain unique IDs, so merely eavesdropping enabled the researchers to identify and track vehicles remotely. Beyond this, they could alter and forge the readings to cause warning lights on the dashboard to turn on, or even crash the ECU completely.

More:

Now, Ishtiaq Rouf at the USC and other researchers have found a vulnerability in the data transfer mechanisms between CANbus controllers and wireless tyre pressure monitoring sensors which allows misleading data to be injected into a vehicle’s system and allows remote recording of the movement profiles of a specific vehicle. The sensors, which are compulsory for new cars in the US (and probably soon in the EU), each communicate individually with the vehicle’s on-board electronics. Although a loss of pressure can also be detected via differences in the rotational speed of fully inflated and partially inflated tyres on the same axle, such indirect methods are now prohibited in the US.
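The tracking risk described above comes from the sensors broadcasting fixed unique IDs in the clear. A minimal sketch of how a passive listener could exploit that; the locations, IDs, and log format are invented for illustration:

```python
# Each tire-pressure sensor broadcasts a constant unique ID. A sniffer
# that logs (location, sensor_id) pairs can link sightings of the same
# vehicle across locations -- no cryptography to break, just a static
# identifier transmitted in the clear.
from collections import defaultdict

# Hypothetical observations from two roadside receivers:
observations = [
    ("highway-exit-3", 0xA1B2C3), ("highway-exit-3", 0xA1B2C4),
    ("parking-garage", 0xA1B2C3), ("parking-garage", 0xD4E5F6),
]

sightings = defaultdict(set)
for location, sensor_id in observations:
    sightings[sensor_id].add(location)

# Any sensor ID seen at more than one location ties those sightings
# to a single vehicle; here, ID 0xA1B2C3 links both locations.
tracked = {sid: locs for sid, locs in sightings.items() if len(locs) > 1}
print(tracked)
```

Randomizing or encrypting the sensor IDs would break this linkage, but the sensors described in the articles do neither.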

Paper here. This is a previous paper on automobile computer security.

EDITED TO ADD (8/25): This is a better article.

Posted on August 17, 2010 at 6:42 AM

Tracking Location Based on Water Isotope Ratios

Interesting:

…water molecules differ slightly in their isotope ratios depending on the minerals at their source. …researchers found that water samples from 33 cities across the United States could be reliably traced back to their origin based on their isotope ratios. And because the human body breaks down water’s constituent atoms of hydrogen and oxygen to construct the proteins that make hair cells, those cells can preserve the record of a person’s travels.

Here’s the paper.

Posted on July 5, 2010 at 10:00 AM

AT&T's iPad Security Breach

I didn’t write about the recent security breach that disclosed tens of thousands of e-mail addresses and ICC-IDs of iPad users because, well, there was nothing terribly interesting about it. It was yet another web security breach.

Right after the incident, though, I was being interviewed by a reporter who wanted to know what the ramifications of the breach were. He specifically wanted to know if anything could be done with those ICC-IDs, and if the disclosure of that information was worse than people thought. He didn’t like the answer I gave him, which is that no one knows yet: that it’s too early to know the full effects of that information disclosure, and that both the good guys and the bad guys would be figuring it out in the coming weeks. And that it’s likely that there were further security implications of the breach.

Seems like there were:

The problem is that ICC-IDs—unique serial numbers that identify each SIM card—can often be converted into IMSIs. While the ICC-ID is nonsecret—it’s often found printed on the boxes of cellphone/SIM bundles—the IMSI is somewhat secret. In theory, knowing an ICC-ID shouldn’t be enough to determine an IMSI. The phone companies do need to know which IMSI corresponds to which ICC-ID, but this should be done by looking up the values in a big database.

In practice, however, many phone companies simply calculate the IMSI from the ICC-ID. This calculation is often very simple indeed, being little more complex than “combine this hard-coded value with the last nine digits of the ICC-ID.” So while the leakage of AT&T’s customers’ ICC-IDs should be harmless, in practice, it could reveal a secret ID.

What can be done with that secret ID? Quite a lot, it turns out. The IMSI is sent by the phone to the network when first signing on to the network; it’s used by the network to figure out which call should be routed where. With someone else’s IMSI, an attacker can determine the person’s name and phone number, and even track his or her position. It also opens the door to active attacks—creating fake cell towers that a victim’s phone will connect to, enabling every call and text message to be eavesdropped.
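The derivation described in the quote above is easy to picture. A sketch of the weakness, assuming the simple "prefix plus last nine digits" mapping the article describes; the operator prefix and sample ICC-ID are made up, and real operators' constants differ:

```python
# Deriving the "somewhat secret" IMSI directly from the public ICC-ID,
# as the quoted article says some operators do. Both the prefix and the
# sample ICC-ID below are hypothetical values for illustration.
def imsi_from_iccid(iccid: str, operator_prefix: str = "310410") -> str:
    """Combine a hard-coded operator prefix with the last nine digits of
    the ICC-ID -- the kind of trivial mapping the article describes. If an
    operator really does this, every leaked ICC-ID leaks an IMSI too."""
    last_nine = iccid[-9:]
    return operator_prefix + last_nine

sample_iccid = "89014103211118510720"  # hypothetical 20-digit ICC-ID
print(imsi_from_iccid(sample_iccid))   # a plausible-looking 15-digit IMSI
```

The point is not the specific formula but its reversibility: if the mapping is a fixed calculation rather than a database lookup, the "nonsecret" identifier and the "secret" one are effectively the same identifier.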

More to come, I’m sure.

And that’s really the point: we all want to know—right away—the effects of a security vulnerability, but often we don’t and can’t. It takes time before the full effects are known, sometimes a lot of time.

And in related news, the image redaction that went along with some of the breach reporting wasn’t very good.

Posted on June 21, 2010 at 5:27 AM

