Entries Tagged "privacy"


Hacking Fitbit

This is impressive:

“An attacker sends an infected packet to a fitness tracker nearby at bluetooth distance then the rest of the attack occurs by itself, without any special need for the attacker being near,” Apvrille says.

“[When] the victim wishes to synchronise his or her fitness data with FitBit servers to update their profile … the fitness tracker responds to the query, but in addition to the standard message, the response is tainted with the infected code.

“From there, it can deliver a specific malicious payload on the laptop, that is, start a backdoor, or have the machine crash [and] can propagate the infection to other trackers (Fitbits).”

That’s attacker to Fitbit to computer.

Posted on October 22, 2015 at 1:20 PM

Mapping FinFisher Users

Citizen Lab continues to do excellent work exposing the world’s cyber-weapons arms manufacturers. Its latest report attempts to track users of Gamma International’s FinFisher:

This post describes the results of Internet scanning we recently conducted to identify the users of FinFisher, a sophisticated and user-friendly spyware suite sold exclusively to governments. We devise a method for querying FinFisher’s “anonymizing proxies” to unmask the true location of the spyware’s master servers. Since the master servers are installed on the premises of FinFisher customers, tracing the servers allows us to identify which governments are likely using FinFisher. In some cases, we can trace the servers to specific entities inside a government by correlating our scan results with publicly available sources. Our results indicate 32 countries where at least one government entity is likely using the spyware suite, and we are further able to identify 10 entities by name. Despite the 2014 FinFisher breach, and subsequent disclosure of sensitive customer data, our scanning has detected more servers in more countries than ever before.
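The excerpt doesn't spell out Citizen Lab's probes, but fingerprint scanning of this kind generally works by sending the same request to many hosts and flagging those whose responses match a distinctive signature. The sketch below illustrates only that matching step; the signature string and IP addresses are invented for the example.

```python
# Hypothetical signature a spyware proxy might return; a real scan would
# derive this from observed behavior of known servers.
FINFISHER_SIGNATURE = "Hallo Steffi"

def looks_like_finfisher_proxy(response_body: str) -> bool:
    """True if a host's response contains the example signature."""
    return FINFISHER_SIGNATURE in response_body

def scan(responses: dict) -> list:
    """Given {ip: response body} from a sweep, return the matching IPs."""
    return [ip for ip, body in responses.items()
            if looks_like_finfisher_proxy(body)]
```

In practice the hard part is the step this sketch skips: devising a query whose response reliably distinguishes the spyware's proxies from ordinary web servers.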

Here’s the map of suspected FinFisher users, including some pretty reprehensible governments.

Two news articles.

Posted on October 16, 2015 at 2:33 PM

Automatic Face Recognition and Surveillance

ID checks were a common response to the terrorist attacks of 9/11, but they’ll soon be obsolete. You won’t have to show your ID, because you’ll be identified automatically. A security camera will capture your face, and it’ll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition. Those who have access to databases of identified photos will have the power to identify us. Yes, it’ll enable some amazing personalized services; but it’ll also enable whole new levels of surveillance. The underlying technologies are being developed today, and there are currently no rules limiting their use.

Walk into a store, and the salesclerks will know your name. The store’s cameras and computers will have figured out your identity, and looked you up in both their store database and a commercial marketing database they’ve subscribed to. They’ll know your name, salary, interests, what sort of sales pitches you’re most vulnerable to, and how profitable a customer you are. Maybe they’ll have read a profile based on your tweets and know what sort of mood you’re in. Maybe they’ll know your political affiliation or sexual identity, both predictable by your social media activity. And they’re going to engage with you accordingly, perhaps by making sure you’re well taken care of or possibly by trying to make you so uncomfortable that you’ll leave.

Walk by a policeman, and she will know your name, address, criminal record, and with whom you routinely are seen. The potential for discrimination is enormous, especially in low-income communities where people are routinely harassed for things like unpaid parking tickets and other minor violations. And in a country where people are arrested for their political views, the use of this technology quickly turns into a nightmare scenario.

The critical technology here is computer face recognition. Traditionally it has been pretty poor, but it's slowly improving. A computer is now as good as a person. Already Google's algorithms can accurately match child and adult photos of the same person, and Facebook has an algorithm that works by recognizing hair style, body shape, and body language, and that works even when it can't see faces. And while we humans are pretty much as good at this as we're ever going to get, computers will continue to improve. Over the next few years, they'll continue to get more accurate, making better matches using even worse photos.

Matching photos with names also requires a database of identified photos, and we have plenty of those too. Driver's license databases are a gold mine: all shot face forward, in good focus and even light, with accurate identity information attached to each photo. The enormous photo collections of social media and photo archiving sites are another. They contain photos of us from all sorts of angles and in all sorts of lighting conditions, and we helpfully do the identifying step for the companies by tagging ourselves and our friends.

Maybe this data will appear on handheld screens. Maybe it'll be automatically displayed on computer-enhanced glasses. Imagine salesclerks—or politicians—being able to scan a room and instantly see wealthy customers highlighted in green, or policemen seeing people with criminal records highlighted in red.
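None of the commercial systems are public, but most modern face recognizers reduce a photo to a numeric "embedding" vector and identify a face by finding the closest labeled vector in a database. A minimal sketch, with invented vectors and names:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(embedding, database, threshold=0.8):
    """Return the best-matching name, or None if nothing is close enough."""
    best_name, best_score = None, threshold
    for name, known in database.items():
        score = cosine_similarity(embedding, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

The privacy implications follow directly from the structure: whoever holds the database of labeled embeddings decides who can be identified, and the lookup itself is cheap.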

Science fiction writers have been exploring this future in both books and movies for decades. Ads followed people from billboard to billboard in the movie Minority Report. In John Scalzi’s recent novel Lock In, characters scan each other like the salesclerks I described above.

This is no longer fiction. High-tech billboards can target ads based on the gender of whoever is standing in front of them. In 2011, researchers at Carnegie Mellon pointed a camera at a public area on campus and were able to match live video footage with a public database of tagged photos in real time. Already government and commercial authorities have set up facial recognition systems to identify and monitor people at sporting events, music festivals, and even churches. The Dubai police are working on integrating facial recognition into Google Glass, and more US local police forces are using the technology.

Facebook, Google, Twitter, and other companies with large databases of tagged photos know how valuable their archives are. They see all kinds of services powered by their technologies: services they can sell to businesses like the stores you walk into and the governments you might interact with.

Other companies will spring up whose business models depend on capturing our images in public and selling them to whoever has use for them. If you think this is farfetched, consider a related technology that’s already far down that path: license-plate capture.

Today in the US there's a massive but invisible industry that records the movements of cars around the country. Cameras mounted on cars and tow trucks capture license plates along with date/time/location information, and companies use that data to find cars that are scheduled for repossession. One company, Vigilant Solutions, claims to collect 70 million scans in the US every month. The companies that engage in this business routinely share that data with the police, giving the police a steady stream of surveillance information on innocent people that they could not legally collect on their own. And the companies are already looking for other profit streams, selling that surveillance data to anyone else who thinks they have a need for it.
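The matching step in a plate-reader pipeline is trivial, which is part of why the industry scaled so fast: each camera scan is checked against a "hot list" of wanted plates. A sketch, with invented plates, timestamps, and locations:

```python
from dataclasses import dataclass

@dataclass
class Scan:
    plate: str       # e.g. "ABC1234" (invented)
    timestamp: str   # when the camera saw it
    location: str    # where the camera was

def flag_scans(scans, hot_list):
    """Return the scans whose plate appears on the hot list."""
    wanted = set(hot_list)
    return [s for s in scans if s.plate in wanted]
```

Note that every scan is retained whether or not it matches; the hot list only decides which scans trigger an action today. The accumulated non-matching scans are the surveillance database.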

This could easily happen with face recognition. Finding bail jumpers could even be the initial driving force, just as finding cars to repossess was for license plate capture.

Already the FBI has a database of 52 million faces, and describes its integration of facial recognition software with that database as “fully operational.” In 2014, FBI Director James Comey told Congress that the database would not include photos of ordinary citizens, although the FBI’s own documents indicate otherwise. And just last month, we learned that the FBI is looking to buy a system that will collect facial images of anyone an officer stops on the street.

In 2013, Facebook had a quarter of a trillion user photos in its database. There’s currently a class-action lawsuit in Illinois alleging that the company has over a billion “face templates” of people, collected without their knowledge or consent.

Last year, the US Department of Commerce tried to prevail upon industry representatives and privacy organizations to write a voluntary code of conduct for companies using facial recognition technologies. After 16 months of negotiations, all of the consumer-focused privacy organizations pulled out of the process because industry representatives were unable to agree on any limitations on something as basic as nonconsensual facial recognition.

When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It's sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and—most of all—fast and accurate face recognition software.

Don’t expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It’s just for those who can either demand or pay for access to the required technologies—most importantly, the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren’t going away, and we can’t uninvent these capabilities. But we can ensure that they’re used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.

This essay previously appeared on Forbes.com.

EDITED TO ADD: Two articles that say much the same thing.

Posted on October 5, 2015 at 6:11 AM

How GCHQ Tracks Internet Users

The Intercept has a new story from the Snowden documents about the UK’s surveillance of the Internet by the GCHQ:

The mass surveillance operation—code-named KARMA POLICE—was launched by British spies about seven years ago without any public debate or scrutiny. It was just one part of a giant global Internet spying apparatus built by the United Kingdom’s electronic eavesdropping agency, Government Communications Headquarters, or GCHQ.

[…]

One system builds profiles showing people’s web browsing histories. Another analyzes instant messenger communications, emails, Skype calls, text messages, cell phone locations, and social media interactions. Separate programs were built to keep tabs on “suspicious” Google searches and usage of Google Maps.

[…]

As of March 2009, the largest slice of data Black Hole held—41 percent—was about people’s Internet browsing histories. The rest included a combination of email and instant messenger records, details about search engine queries, information about social media activity, logs related to hacking operations, and data on people’s use of tools to browse the Internet anonymously.

Lots more in the article. The Intercept also published 28 new top secret NSA and GCHQ documents.

Posted on September 29, 2015 at 6:16 AM

Anonymous Browsing at the Library

A rural New Hampshire library decided to install Tor on its computers and allow anonymous Internet browsing. The Department of Homeland Security pressured them to stop:

A special agent in a Boston DHS office forwarded the article to the New Hampshire police, who forwarded it to a sergeant at the Lebanon Police Department.

DHS spokesman Shawn Neudauer said the agent was simply providing “visibility/situational awareness,” and did not have any direct contact with the Lebanon police or library. “The use of a Tor browser is not, in [or] of itself, illegal and there are legitimate purposes for its use,” Neudauer said, “However, the protections that Tor offers can be attractive to criminal enterprises or actors and HSI [Homeland Security Investigations] will continue to pursue those individuals who seek to use the anonymizing technology to further their illicit activity.”

When the DHS inquiry was brought to his attention, Lt. Matthew Isham of the Lebanon Police Department was concerned. “For all the good that a Tor may allow as far as speech, there is also the criminal side that would take advantage of that as well,” Isham said. “We felt we needed to make the city aware of it.”

The good news is that the library is resisting the pressure and keeping Tor running.

This is an important issue for reasons that go beyond the New Hampshire library. The goal of the Library Freedom Project is to set up Tor exit nodes at libraries. Exit nodes help every Tor user in the world; the more of them there are, the harder it is to subvert the system. The Kilton Public Library isn’t just allowing its patrons to browse the Internet anonymously; it is helping dissidents around the world stay alive.
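For a sense of how little configuration an exit relay actually requires, here is a minimal torrc sketch. The nickname and contact address are placeholders, and a real library deployment would also want bandwidth limits and an exit policy vetted with its ISP:

```
# Minimal example torrc for a Tor exit relay (values are placeholders)
Nickname librarytestrelay
ContactInfo sysadmin@example.org
ORPort 9001
ExitRelay 1
# Allow only web traffic to exit; refuse everything else
ExitPolicy accept *:80
ExitPolicy accept *:443
ExitPolicy reject *:*
# This machine is a relay, not a local client
SocksPort 0
```

The restrictive exit policy is a common compromise: the relay still helps the network carry ordinary web traffic while limiting the abuse complaints the operator has to field.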

Librarians have been protecting our privacy for decades, and I’m happy to see that tradition continue.

EDITED TO ADD (10/13): As a result of the story, more libraries are planning to run Tor nodes.

Posted on September 16, 2015 at 1:40 PM

Drone Self-Defense and the Law

Last month, a Kentucky man shot down a drone that was hovering near his backyard.

WDRB News reported that the camera drone’s owners soon showed up at the home of the shooter, William H. Merideth: “Four guys came over to confront me about it, and I happened to be armed, so that changed their minds,” Merideth said. “They asked me, ‘Are you the S-O-B that shot my drone?’ and I said, ‘Yes I am,'” he said. “I had my 40 mm Glock on me and they started toward me and I told them, ‘If you cross my sidewalk, there’s gonna be another shooting.'” Police charged Merideth with criminal mischief and wanton endangerment.

This is a trend. People have shot down drones in southern New Jersey and rural California as well. It’s illegal, and they get arrested for it.

Technology changes everything. Specifically, it upends long-standing societal balances around issues like security and privacy. When a capability becomes possible, or cheaper, or more common, the changes can be far-reaching. Rebalancing security and privacy after technology changes capabilities can be very difficult, and take years. And we’re not very good at it.

The security threats from drones are real, and the government is taking them seriously. In January, a man lost control of his drone, which crashed on the White House lawn. In May, another man was arrested for trying to fly his drone over the White House fence, and another last week for flying a drone into the stadium where the U.S. Open was taking place.

Drones have attempted to deliver drugs to prisons in Maryland, Ohio and South Carolina—so far.

There have been many near-misses between drones and airplanes. Many people have written about the possible terrorist uses of drones.

Defenses are being developed. Both Lockheed Martin and Boeing sell anti-drone laser weapons. One company sells shotgun shells specifically designed to shoot down drones.

Other companies are working on technologies to detect and disable them safely. Some of those technologies were used to provide security at this year’s Boston Marathon.

Law enforcement can deploy these technologies, but under current law it’s illegal to shoot down a drone, even if it’s hovering above your own property. In our society, you’re generally not allowed to take the law into your own hands. You’re expected to call the police and let them deal with it.

There’s an alternate theory, though, from law professor Michael Froomkin. He argues that self-defense should be permissible against drones simply because you don’t know their capabilities. We know, for example, that people have mounted guns on drones, which means they could pose a threat to life. Note that this legal theory has not been tested in court.

Increasingly, government is regulating drones and drone flights both at the state level and by the FAA. There are proposals to require that drones have an identifiable transponder, or no-fly zones programmed into the drone software.
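A no-fly zone check of the kind those proposals imagine is conceptually simple: before accepting a flight command, firmware compares the drone's GPS fix against a list of restricted areas. A sketch using circular zones and the haversine distance formula; the coordinates and radii below are invented for illustration:

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def in_no_fly_zone(lat, lon, zones):
    """zones: list of (lat, lon, radius_m) circles. True if inside any."""
    return any(haversine_m(lat, lon, zlat, zlon) <= radius
               for zlat, zlon, radius in zones)
```

The hard problems are not the geometry but the policy and enforcement questions: who maintains the zone list, how it gets updated in the field, and how to keep the check from being disabled by the owner.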

Still, a large number of security issues remain unresolved. How do we feel about drones with long-range listening devices, for example? Or drones hovering outside our property and photographing us through our windows?

What’s going on is that drones have changed how we think about security and privacy within our homes, by removing the protections we used to get from fences and walls. Of course, being spied on and shot at from above is nothing new, but access to those technologies was expensive and largely the purview of governments and some corporations. Drones put these capabilities into the hands of hobbyists, and we don’t know what to do about it.

The issues around drones will get worse as we move from remotely piloted aircraft to true drones: aircraft that operate autonomously from a computer program. For the first time, autonomous robots—with ever-increasing intelligence and capabilities at an ever-decreasing cost—will have access to public spaces. This will create serious problems for society, because our legal system is largely based on deterring human miscreants rather than their proxies.

Our desire to shoot down a drone hovering nearby is understandable, given its potential threat. Society’s need for people not to take the law into their own hands—and especially not to fire guns into the air—is also understandable. These two positions are increasingly coming into conflict, and will require increasing government regulation to sort out. But more importantly, we need to rethink our assumptions of security and privacy in a world of autonomous drones, long-range cameras, face recognition, and the myriad other technologies that are increasingly in the hands of everyone.

This essay previously appeared on CNN.com.

Posted on September 11, 2015 at 6:45 AM

The Security Risks of Third-Party Data

Most of us get to be thoroughly relieved that our e-mails weren’t in the Ashley Madison database. But don’t get too comfortable. Whatever secrets you have, even the ones you don’t think of as secret, are more likely than you think to get dumped on the Internet. It’s not your fault, and there’s largely nothing you can do about it.

Welcome to the age of organizational doxing.

Organizational doxing—stealing data from an organization’s network and indiscriminately dumping it all on the Internet—is an increasingly popular attack against organizations. Because our data is connected to the Internet, and stored in corporate networks, we are all in the potential blast radius of these attacks. While the risk that any particular bit of data gets published is low, we have to start thinking about what could happen if a larger-scale breach affects us or the people we care about. It’s going to get a lot uglier before security improves.

We don’t know why anonymous hackers broke into the networks of Avid Life Media, then stole and published 37 million—so far—personal records of AshleyMadison.com users. The hackers say it was because of the company’s deceptive practices. They expressed indifference to the “cheating dirtbags” who had signed up for the site. The primary target, the hackers said, was the company itself. That philanderers were exposed, marriages were ruined, and people were driven to suicide was apparently a side effect.

Last November, the North Korean government stole and published gigabytes of corporate e-mail from Sony Pictures. This was part of a much larger doxing—a hack aimed at punishing the company for making a movie parodying the North Korean leader Kim Jong-un. The press focused on Sony’s corporate executives, who had sniped at celebrities and made racist jokes about President Obama. But also buried in those e-mails were loves, losses, confidences, and private conversations of thousands of innocent employees. The press didn’t bother with those e-mails—and we know nothing of any personal tragedies that resulted from their friends’ searches. They, too, were caught in the blast radius of the larger attack.

The Internet is more than a way for us to get information or connect with our friends. It has become a place for us to store our personal information. Our e-mail is in the cloud. So are our address books and calendars, whether we use Google, Apple, Microsoft, or someone else. We store to-do lists on Remember the Milk and keep our jottings on Evernote. Fitbit and Jawbone store our fitness data. Flickr, Facebook, and iCloud are the repositories for our personal photos. Facebook and Twitter store many of our intimate conversations.

It often feels like everyone is collecting our personal information. Smartphone apps collect our location data. Google can draw a surprisingly intimate portrait of what we’re thinking about from our Internet searches. Dating sites (even those less titillating than Ashley Madison), medical-information sites, and travel sites all have detailed portraits of who we are and where we go. Retailers save records of our purchases, and those databases are stored on the Internet. Data brokers have detailed dossiers that can include all of this and more.

Many people don’t think about the security implications of this information existing in the first place. They might be aware that it’s mined for advertising and other marketing purposes. They might even know that the government can get its hands on such data, with different levels of ease depending on the country. But it doesn’t generally occur to people that their personal information might be available to anyone who wants to look.

In reality, all these networks are vulnerable to organizational doxing. Most aren’t any more secure than Ashley Madison or Sony were. We could wake up one morning and find detailed information about our Uber rides, our Amazon purchases, our subscriptions to pornographic websites—anything we do on the Internet—published and available. It’s not likely, but it’s certainly possible.

Right now, you can search the Ashley Madison database for any e-mail address, and read that person’s details. You can search the Sony data dump and read the personal chatter of people who work for the company. Tempting though it may be, there are many reasons not to search for people you know on Ashley Madison. The one I most want to focus on is context. An e-mail address might be in that database for many reasons, not all of them lascivious. But if you find your spouse or your friend in there, you don’t necessarily know the context. It’s the same with the Sony employee e-mails, and the data from whatever company is doxed next. You’ll be able to read the data, but without the full story, it can be hard to judge the meaning of what you’re reading.

Even so, of course people are going to look. Reporters will search for public figures. Individuals will search for people they know. Secrets will be read and passed around. Anguish and embarrassment will result. In some cases, lives will be destroyed.

Privacy isn’t about hiding something. It’s about being able to control how we present ourselves to the world. It’s about maintaining a public face while at the same time being permitted private thoughts and actions. It’s about personal dignity.

Organizational doxing is a powerful attack against organizations, and one that will continue because it’s so effective. And while the network owners and the hackers might be battling it out for their own reasons, sometimes it’s our data that’s the prize. Having information we thought private turn out to be public and searchable is what happens when the hackers win. It’s a result of the information age that hasn’t been fully appreciated, and one that we’re still not prepared to face.

This essay previously appeared on the Atlantic.

Posted on September 9, 2015 at 8:42 AM

