Entries Tagged "databases"

Voter Surveillance

There hasn’t been that much written about surveillance and big data being used to manipulate voters. In Data and Goliath, I wrote:

Unique harms can arise from the use of surveillance data in politics. Election politics is very much a type of marketing, and politicians are starting to use personalized marketing’s capability to discriminate as a way to track voting patterns and better “sell” a candidate or policy position. Candidates and advocacy groups can create ads and fund-raising appeals targeted to particular categories: people who earn more than $100,000 a year, gun owners, people who have read news articles on one side of a particular issue, unemployed veterans…anything you can think of. They can target outraged ads to one group of people, and thoughtful policy-based ads to another. They can also fine-tune their get-out-the-vote campaigns on Election Day, and more efficiently gerrymander districts between elections. Such use of data will likely have fundamental effects on democracy and voting.

A new research paper looks at the trends:

Abstract: This paper surveys the various voter surveillance practices recently observed in the United States, assesses the extent to which they have been adopted in other democratic countries, and discusses the broad implications for privacy and democracy. Four broad trends are discussed: the move from voter management databases to integrated voter management platforms; the shift from mass-messaging to micro-targeting employing personal data from commercial data brokerage firms; the analysis of social media and the social graph; and the decentralization of data to local campaigns through mobile applications. The de-alignment of the electorate in most Western societies has placed pressures on parties to target voters outside their traditional bases, and to find new, cheaper, and potentially more intrusive, ways to influence their political behavior. This paper builds on previous research to consider the theoretical tensions between concerns for excessive surveillance, and the broad democratic responsibility of parties to mobilize voters and increase political engagement. These issues have been insufficiently studied in the surveillance literature. They are not just confined to the privacy of the individual voter, but relate to broader dynamics in democratic politics.

Posted on November 23, 2015 at 12:03 PM

Personal Data Sharing by Mobile Apps

Interesting research:

“Who Knows What About Me? A Survey of Behind the Scenes Personal Data Sharing to Third Parties by Mobile Apps,” by Jinyan Zang, Krysta Dummit, James Graves, Paul Lisker, and Latanya Sweeney.

We tested 110 popular, free Android and iOS apps to look for apps that shared personal, behavioral, and location data with third parties.

73% of Android apps shared personal information such as email address with third parties, and 47% of iOS apps shared geo-coordinates and other location data with third parties.

93% of Android apps tested connected to a mysterious domain, safemovedm.com, likely due to a background process of the Android phone.

We show that a significant proportion of apps share data from user inputs such as personal information or search terms with third parties without Android or iOS requiring a notification to the user.
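
The study’s basic method is easy to picture: intercept each app’s HTTP traffic and flag any request that carries personal data to a host other than the app vendor’s own. Here is a minimal Python sketch of that kind of check; the domains, captured requests, and patterns are illustrative assumptions, not the study’s actual data or pipeline.

import re
from urllib.parse import urlparse

# Hypothetical requests captured by an intercepting proxy: (url, body) pairs.
CAPTURED = [
    ("https://ads.tracker-example.com/collect", "email=alice%40example.com&id=42"),
    ("https://api.weatherapp-example.com/forecast", "lat=42.3601&lon=-71.0589"),
    ("https://metrics-example.net/hit", "lat=42.3601&lon=-71.0589"),
]
FIRST_PARTY = "weatherapp-example.com"  # the app vendor's own domain

EMAIL = re.compile(r"[\w.+-]+(?:@|%40)[\w-]+\.[\w.-]+")
GEO = re.compile(r"lat=-?\d+\.\d+&lon=-?\d+\.\d+")

for url, body in CAPTURED:
    host = urlparse(url).hostname or ""
    if host == FIRST_PARTY or host.endswith("." + FIRST_PARTY):
        continue  # traffic to the vendor itself isn't third-party sharing
    leaked = [name for name, pattern in [("email", EMAIL), ("location", GEO)]
              if pattern.search(body)]
    if leaked:
        print(f"{host} received: {', '.join(leaked)}")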

EDITED TO ADD: News article.

Posted on November 13, 2015 at 6:08 AM

Police Want Genetic Data from Corporate Repositories

Both the FBI and local law enforcement are trying to get the genetic data stored at companies like 23andMe.

No surprise, really.

As NYU law professor Erin Murphy told the New Orleans Advocate regarding the Usry case, gathering DNA information is “a series of totally reasonable steps by law enforcement.” If you’re a cop trying to solve a crime, and you have DNA at your disposal, you’re going to want to use it to further your investigation. But the fact that your signing up for 23andMe or Ancestry.com means that you and all of your current and future family members could become genetic criminal suspects is not something most users probably have in mind when trying to find out where their ancestors came from.

Posted on October 22, 2015 at 6:40 AM

Automatic Face Recognition and Surveillance

ID checks were a common response to the terrorist attacks of 9/11, but they’ll soon be obsolete. You won’t have to show your ID, because you’ll be identified automatically. A security camera will capture your face, and it’ll be matched with your name and a whole lot of other information besides. Welcome to the world of automatic facial recognition. Those who have access to databases of identified photos will have the power to identify us. Yes, it’ll enable some amazing personalized services; but it’ll also enable whole new levels of surveillance. The underlying technologies are being developed today, and there are currently no rules limiting their use.

Walk into a store, and the salesclerks will know your name. The store’s cameras and computers will have figured out your identity, and looked you up in both their store database and a commercial marketing database they’ve subscribed to. They’ll know your name, salary, interests, what sort of sales pitches you’re most vulnerable to, and how profitable a customer you are. Maybe they’ll have read a profile based on your tweets and know what sort of mood you’re in. Maybe they’ll know your political affiliation or sexual identity, both predictable by your social media activity. And they’re going to engage with you accordingly, perhaps by making sure you’re well taken care of or possibly by trying to make you so uncomfortable that you’ll leave.

Walk by a police officer, and she will know your name, address, criminal record, and with whom you routinely are seen. The potential for discrimination is enormous, especially in low-income communities where people are routinely harassed for things like unpaid parking tickets and other minor violations. And in a country where people are arrested for their political views, the use of this technology quickly turns into a nightmare scenario.

The critical technology here is computer face recognition. Traditionally it has been pretty poor, but it’s slowly improving. A computer is now as good as a person. Already Google’s algorithms can accurately match child and adult photos of the same person, and Facebook has an algorithm that works by recognizing hair style, body shape, and body language—and works even when it can’t see faces. And while we humans are pretty much as good at this as we’re ever going to get, computers will continue to improve. Over the next years, they’ll continue to get more accurate, making better matches using even worse photos.

Matching photos with names also requires a database of identified photos, and we have plenty of those too. Driver’s license databases are a gold mine: all shot face forward, in good focus and even light, with accurate identity information attached to each photo. The enormous photo collections of social media and photo archiving sites are another. They contain photos of us from all sorts of angles and in all sorts of lighting conditions, and we helpfully do the identifying step for the companies by tagging ourselves and our friends. Maybe this data will appear on handheld screens. Maybe it’ll be automatically displayed on computer-enhanced glasses. Imagine salesclerks—or politicians—being able to scan a room and instantly see wealthy customers highlighted in green, or policemen seeing people with criminal records highlighted in red.
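
Mechanically, identification is a nearest-neighbor search. A model converts each face photo into a numeric vector, and a live capture is matched against the vectors computed from the tagged database. A toy Python sketch of the matching step, with random vectors standing in for a real face-embedding model:

import numpy as np

# Hypothetical embeddings: in a real system, a neural network maps each face
# image to a vector so that photos of the same person land close together.
rng = np.random.default_rng(0)
tagged_db = {
    "Alice": rng.normal(size=128),  # e.g., from a driver's-license photo
    "Bob": rng.normal(size=128),    # e.g., from a tagged social-media photo
}

def identify(face, db, threshold=0.8):
    """Return the best-matching name by cosine similarity, or None."""
    best_name, best_score = None, threshold
    for name, stored in db.items():
        score = np.dot(face, stored) / (np.linalg.norm(face) * np.linalg.norm(stored))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# A camera frame of "Alice" yields a nearby vector, and the match succeeds:
probe = tagged_db["Alice"] + rng.normal(scale=0.1, size=128)
print(identify(probe, tagged_db))  # Alice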

Science fiction writers have been exploring this future in both books and movies for decades. Ads followed people from billboard to billboard in the movie Minority Report. In John Scalzi’s recent novel Lock In, characters scan each other like the salesclerks I described above.

This is no longer fiction. High-tech billboards can target ads based on the gender of who’s standing in front of them. In 2011, researchers at Carnegie Mellon pointed a camera at a public area on campus and were able to match live video footage with a public database of tagged photos in real time. Already government and commercial authorities have set up facial recognition systems to identify and monitor people at sporting events, music festivals, and even churches. The Dubai police are working on integrating facial recognition into Google Glass, and more US local police forces are using the technology.

Facebook, Google, Twitter, and other companies with large databases of tagged photos know how valuable their archives are. They see all kinds of services powered by their technologies—services they can sell to businesses like the stores you walk into and the governments you might interact with.

Other companies will spring up whose business models depend on capturing our images in public and selling them to whoever has use for them. If you think this is farfetched, consider a related technology that’s already far down that path: license-plate capture.

Today in the US there’s a massive but invisible industry that records the movements of cars around the country. Cameras mounted on cars and tow trucks capture license plates along with date/time/location information, and companies use that data to find cars that are scheduled for repossession. One company, Vigilant Solutions, claims to collect 70 million scans in the US every month. The companies that engage in this business routinely share that data with the police, giving the police a steady stream of surveillance information on innocent people that they could not legally collect on their own. And the companies are already looking for other profit streams, selling that surveillance data to anyone else who thinks they have a need for it.
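
The data layer behind this is trivial, which is part of the point: the same plate-indexed table that answers a repossession lookup also reconstructs anyone’s movement history. A small Python sketch with made-up records:

from collections import defaultdict
from datetime import datetime

# Hypothetical scan records: (plate, timestamp, latitude, longitude).
scans = [
    ("ABC1234", datetime(2015, 9, 1, 8, 15), 29.951, -90.072),
    ("XYZ9876", datetime(2015, 9, 1, 9, 2), 29.940, -90.080),
    ("ABC1234", datetime(2015, 9, 2, 18, 40), 29.963, -90.051),
]

by_plate = defaultdict(list)
for plate, ts, lat, lon in scans:
    by_plate[plate].append((ts, lat, lon))

# The repossession query: where was this car last seen?
for plate in {"ABC1234"} & by_plate.keys():
    print(f"hit: {plate} last seen at {max(by_plate[plate])[0]}")

# The same index, shared onward, is a movement history of an innocent driver:
for ts, lat, lon in sorted(by_plate["ABC1234"]):
    print(f"ABC1234 at ({lat}, {lon}) on {ts:%Y-%m-%d %H:%M}")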

This could easily happen with face recognition. Finding bail jumpers could even be the initial driving force, just as finding cars to repossess was for license plate capture.

Already the FBI has a database of 52 million faces, and describes its integration of facial recognition software with that database as “fully operational.” In 2014, FBI Director James Comey told Congress that the database would not include photos of ordinary citizens, although the FBI’s own documents indicate otherwise. And just last month, we learned that the FBI is looking to buy a system that will collect facial images of anyone an officer stops on the street.

In 2013, Facebook had a quarter of a trillion user photos in its database. There’s currently a class-action lawsuit in Illinois alleging that the company has over a billion “face templates” of people, collected without their knowledge or consent.

Last year, the US Department of Commerce tried to prevail upon industry representatives and privacy organizations to write a voluntary code of conduct for companies using facial recognition technologies. After 16 months of negotiations, all of the consumer-focused privacy organizations pulled out of the process because industry representatives were unable to agree on any limitations on something as basic as nonconsensual facial recognition.

When we talk about surveillance, we tend to concentrate on the problems of data collection: CCTV cameras, tagged photos, purchasing habits, our writings on sites like Facebook and Twitter. We think much less about data analysis. But effective and pervasive surveillance is just as much about analysis. It’s sustained by a combination of cheap and ubiquitous cameras, tagged photo databases, commercial databases of our actions that reveal our habits and personalities, and—most of all—fast and accurate face recognition software.

Don’t expect to have access to this technology for yourself anytime soon. This is not facial recognition for all. It’s just for those who can either demand or pay for access to the required technologies—most importantly, the tagged photo databases. And while we can easily imagine how this might be misused in a totalitarian country, there are dangers in free societies as well. Without meaningful regulation, we’re moving into a world where governments and corporations will be able to identify people both in real time and backwards in time, remotely and in secret, without consent or recourse.

Despite protests from industry, we need to regulate this budding industry. We need limitations on how our images can be collected without our knowledge or consent, and on how they can be used. The technologies aren’t going away, and we can’t uninvent these capabilities. But we can ensure that they’re used ethically and responsibly, and not just as a mechanism to increase police and corporate power over us.

This essay previously appeared on Forbes.com.

EDITED TO ADD: Two articles that say much the same thing.

Posted on October 5, 2015 at 6:11 AM

Stealing Fingerprints

The news from the Office of Personnel Management hack keeps getting worse. In addition to the personal records of over 20 million US government employees, we’ve now learned that the hackers stole fingerprint files for 5.6 million of them.

This is fundamentally different from the data thefts we regularly read about in the news, and should give us pause before we entrust our biometric data to large networked databases.

There are three basic kinds of data that can be stolen. The first, and most common, is authentication credentials. These are passwords and other information that allow someone else access to our accounts and—usually—our money. An example would be the 56 million credit card numbers hackers stole from Home Depot in 2014, or the 21.5 million Social Security numbers hackers stole in the OPM breach. The motivation is typically financial. The hackers want to steal money from our bank accounts, process fraudulent credit card charges in our name, open new lines of credit, or apply for fraudulent tax refunds.

It’s a huge illegal business, but we know how to deal with it when it happens. We detect these hacks as quickly as possible, and update our account credentials as soon as we detect an attack. (We also need to stop treating Social Security numbers as if they were secret.)

The second kind of data stolen is personal information. Examples would be the medical data stolen and exposed when Sony was hacked in 2014, or the very personal data from the infidelity website Ashley Madison stolen and published this year. In these instances, there is no real way to recover after a breach. Once the data is public, or in the hands of an adversary, it’s impossible to make it private again.

This is the main consequence of the OPM data breach. Whoever stole the data—we suspect it was the Chinese—got copies of the security-clearance paperwork of all those government employees. This documentation includes the answers to some very personal and embarrassing questions, and now opens these employees up to blackmail and other types of coercion.

Fingerprints are another type of data entirely. They’re used to identify people at crime scenes, but increasingly they’re used as an authentication credential. If you have an iPhone, for example, you probably use your fingerprint to unlock your phone. This type of authentication is increasingly common, replacing a password—something you know—with a biometric: something you are. The problem with biometrics is that they can’t be replaced. So while it’s easy to update your password or get a new credit card number, you can’t get a new finger.

And now, for the rest of their lives, 5.6 million US government employees need to remember that someone, somewhere, has their fingerprints. And we really don’t know the future value of this data. If, in twenty years, we routinely use our fingerprints at ATMs, that fingerprint database will become very profitable to criminals. If fingerprints start being used on our computers to authorize our access to files and data, that database will become very profitable to spies.

Of course, it’s not that simple. Fingerprint readers employ various technologies to prevent being fooled by fake fingers: detecting temperature, pores, a heartbeat, and so on. But this is an arms race between attackers and defenders, and there are many ways to fool fingerprint readers. When Apple introduced its iPhone fingerprint reader, hackers figured out how to fool it within days, and have continued to fool each new generation of phone readers equally quickly.

Not every use of biometrics requires the biometric data to be stored in a central server somewhere. Apple’s system, for example, only stores the data locally: on your phone. That way there’s no central repository to be hacked. And many systems don’t store the biometric data at all, only a mathematical function of the data that can be used for authentication but can’t be used to reconstruct the actual biometric. Unfortunately, OPM stored copies of actual fingerprints.
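
For exact-match credentials, “store only a mathematical function” is the same trick as password hashing. A minimal Python sketch follows, with one big caveat: real fingerprint readings never repeat exactly, so deployed biometric systems use error-tolerant constructions (fuzzy extractors and secure sketches) rather than the plain salted hash shown here.

import hashlib
import hmac
import os

def enroll(template: bytes):
    """Store a salted one-way digest of a quantized biometric template."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", template, salt, 100_000)
    return salt, digest  # safe to store: it doesn't reveal the template

def verify(template: bytes, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", template, salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# Hypothetical, exactly-reproducible template bytes:
salt, digest = enroll(b"quantized-minutiae-template-v1")
print(verify(b"quantized-minutiae-template-v1", salt, digest))  # True
print(verify(b"some-other-finger", salt, digest))               # False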

Ashley Madison has taught us all the dangers of entrusting our intimate secrets to a company’s computers and networks, because once that data is out there’s no getting it back. All biometric data, whether it be fingerprints, retinal scans, voiceprints, or something else, has that same property. We should be skeptical of any attempts to store this data en masse, whether by governments or by corporations. We need our biometrics for authentication, and we can’t afford to lose them to hackers.

This essay previously appeared on Motherboard.

Posted on October 2, 2015 at 6:35 AM

The Security Risks of Third-Party Data

Most of us get to be thoroughly relieved that our e-mails weren’t in the Ashley Madison database. But don’t get too comfortable. Whatever secrets you have, even the ones you don’t think of as secret, are more likely than you think to get dumped on the Internet. It’s not your fault, and there’s largely nothing you can do about it.

Welcome to the age of organizational doxing.

Organizational doxing—stealing data from an organization’s network and indiscriminately dumping it all on the Internet—is an increasingly popular attack against organizations. Because our data is connected to the Internet, and stored in corporate networks, we are all in the potential blast radius of these attacks. While the risk that any particular bit of data gets published is low, we have to start thinking about what could happen if a larger-scale breach affects us or the people we care about. It’s going to get a lot uglier before security improves.

We don’t know why anonymous hackers broke into the networks of Avid Life Media, then stole and published 37 million—so far—personal records of AshleyMadison.com users. The hackers say it was because of the company’s deceptive practices. They expressed indifference to the “cheating dirtbags” who had signed up for the site. The primary target, the hackers said, was the company itself. That philanderers were exposed, marriages were ruined, and people were driven to suicide was apparently a side effect.

Last November, the North Korean government stole and published gigabytes of corporate e-mail from Sony Pictures. This was part of a much larger doxing—a hack aimed at punishing the company for making a movie parodying the North Korean leader Kim Jong-un. The press focused on Sony’s corporate executives, who had sniped at celebrities and made racist jokes about President Obama. But also buried in those e-mails were loves, losses, confidences, and private conversations of thousands of innocent employees. The press didn’t bother with those e-mails—and we know nothing of any personal tragedies that resulted from their friends’ searches. They, too, were caught in the blast radius of the larger attack.

The Internet is more than a way for us to get information or connect with our friends. It has become a place for us to store our personal information. Our e-mail is in the cloud. So are our address books and calendars, whether we use Google, Apple, Microsoft, or someone else. We store to-do lists on Remember the Milk and keep our jottings on Evernote. Fitbit and Jawbone store our fitness data. Flickr, Facebook, and iCloud are the repositories for our personal photos. Facebook and Twitter store many of our intimate conversations.

It often feels like everyone is collecting our personal information. Smartphone apps collect our location data. Google can draw a surprisingly intimate portrait of what we’re thinking about from our Internet searches. Dating sites (even those less titillating than Ashley Madison), medical-information sites, and travel sites all have detailed portraits of who we are and where we go. Retailers save records of our purchases, and those databases are stored on the Internet. Data brokers have detailed dossiers that can include all of this and more.

Many people don’t think about the security implications of this information existing in the first place. They might be aware that it’s mined for advertising and other marketing purposes. They might even know that the government can get its hands on such data, with different levels of ease depending on the country. But it doesn’t generally occur to people that their personal information might be available to anyone who wants to look.

In reality, all these networks are vulnerable to organizational doxing. Most aren’t any more secure than Ashley Madison or Sony were. We could wake up one morning and find detailed information about our Uber rides, our Amazon purchases, our subscriptions to pornographic websites—anything we do on the Internet—published and available. It’s not likely, but it’s certainly possible.

Right now, you can search the Ashley Madison database for any e-mail address, and read that person’s details. You can search the Sony data dump and read the personal chatter of people who work for the company. Tempting though it may be, there are many reasons not to search for people you know on Ashley Madison. The one I most want to focus on is context. An e-mail address might be in that database for many reasons, not all of them lascivious. But if you find your spouse or your friend in there, you don’t necessarily know the context. It’s the same with the Sony employee e-mails, and the data from whatever company is doxed next. You’ll be able to read the data, but without the full story, it can be hard to judge the meaning of what you’re reading.

Even so, of course people are going to look. Reporters will search for public figures. Individuals will search for people they know. Secrets will be read and passed around. Anguish and embarrassment will result. In some cases, lives will be destroyed.

Privacy isn’t about hiding something. It’s about being able to control how we present ourselves to the world. It’s about maintaining a public face while at the same time being permitted private thoughts and actions. It’s about personal dignity.

Organizational doxing is a powerful attack against organizations, and one that will continue because it’s so effective. And while the network owners and the hackers might be battling it out for their own reasons, sometimes it’s our data that’s the prize. Having information we thought private turn out to be public and searchable is what happens when the hackers win. It’s a result of the information age that hasn’t been fully appreciated, and one that we’re still not prepared to face.

This essay previously appeared on the Atlantic.

Posted on September 9, 2015 at 8:42 AM

Are Data Breaches Getting Larger?

This research says that data breaches are not getting larger over time.

“Hype and Heavy Tails: A Closer Look at Data Breaches,” by Benjamin Edwards, Steven Hofmeyr, and Stephanie Forrest:

Abstract: Recent widely publicized data breaches have exposed the personal information of hundreds of millions of people. Some reports point to alarming increases in both the size and frequency of data breaches, spurring institutions around the world to address what appears to be a worsening situation. But, is the problem actually growing worse? In this paper, we study a popular public dataset and develop Bayesian Generalized Linear Models to investigate trends in data breaches. Analysis of the model shows that neither size nor frequency of data breaches has increased over the past decade. We find that the increases that have attracted attention can be explained by the heavy-tailed statistical distributions underlying the dataset. Specifically, we find that data breach size is log-normally distributed and that the daily frequency of breaches is described by a negative binomial distribution. These distributions may provide clues to the generative mechanisms that are responsible for the breaches. Additionally, our model predicts the likelihood of breaches of a particular size in the future. For example, we find that in the next year there is only a 31% chance of a breach of 10 million records or more in the US. Regardless of any trend, data breaches are costly, and we combine the model with two different cost models to project that in the next three years breaches could cost up to $55 billion.

The paper was presented at WEIS 2015.
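
The distributional claims are straightforward to check against any breach dataset: if sizes are log-normal, their logarithms are normal, and a negative binomial can be fit to daily counts by matching moments. A Python sketch on synthetic numbers (not the paper’s data, and simple method-of-moments fits rather than its Bayesian models):

import numpy as np
from math import erf, log, sqrt

# Synthetic stand-ins for a breach dataset (hypothetical numbers).
rng = np.random.default_rng(1)
breach_sizes = rng.lognormal(mean=7.0, sigma=2.0, size=500)  # records exposed
daily_counts = rng.negative_binomial(n=2, p=0.5, size=365)   # breaches per day

# Log-normal fit: sizes are log-normal iff log(sizes) is normal.
mu, sigma = np.log(breach_sizes).mean(), np.log(breach_sizes).std()

# Negative binomial fit by moments: p = mean/var, r = mean^2/(var - mean).
m, v = daily_counts.mean(), daily_counts.var()
p_hat, r_hat = m / v, m * m / (v - m)

# Tail probability that a single breach exposes 10 million or more records:
z = (log(10_000_000) - mu) / sigma
print("P(size >= 10M):", 0.5 * (1 - erf(z / sqrt(2))))
print(f"lognormal mu={mu:.2f}, sigma={sigma:.2f}; nbinom r={r_hat:.2f}, p={p_hat:.2f}")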

Posted on August 25, 2015 at 6:27 AM

More about the NSA's XKEYSCORE

I’ve been reading through the 48 classified documents about the NSA’s XKEYSCORE system released by the Intercept last week. From the article:

The NSA’s XKEYSCORE program, first revealed by The Guardian, sweeps up countless people’s Internet searches, emails, documents, usernames and passwords, and other private communications. XKEYSCORE is fed a constant flow of Internet traffic from fiber optic cables that make up the backbone of the world’s communication network, among other sources, for processing. As of 2008, the surveillance system boasted approximately 150 field sites in the United States, Mexico, Brazil, United Kingdom, Spain, Russia, Nigeria, Somalia, Pakistan, Japan, Australia, as well as many other countries, consisting of over 700 servers.

These servers store “full-take data” at the collection sites—meaning that they captured all of the traffic collected—and, as of 2009, stored content for 3 to 5 days and metadata for 30 to 45 days. NSA documents indicate that tens of billions of records are stored in its database. “It is a fully distributed processing and query system that runs on machines around the world,” an NSA briefing on XKEYSCORE says. “At field sites, XKEYSCORE can run on multiple computers that gives it the ability to scale in both processing power and storage.”

There seem to be no access controls at all restricting how analysts can use XKEYSCORE. Standing queries—called “workflows”—and new fingerprints have an approval process, presumably for load issues, but individual queries are not approved beforehand, though they may be audited after the fact. These queries are supposed to be low latency, and you can’t have an approval process for low-latency analyst queries. Since a query can get at the recorded raw data, a single query is effectively a retrospective wiretap.

All this means that the Intercept is correct when it writes:

These facts bolster one of Snowden’s most controversial statements, made in his first video interview published by The Guardian on June 9, 2013. “I, sitting at my desk,” said Snowden, could “wiretap anyone, from you or your accountant, to a federal judge to even the president, if I had a personal email.”

You’ll only get the data if it’s in the NSA’s databases, but if it is there you’ll get it.

Honestly, there’s not much in these documents that’s a surprise to anyone who studied the 2013 XKEYSCORE leaks and knows what can be done with a highly customizable Intrusion Detection System. But it’s always interesting to read the details.

One document—“Intro to Context Sensitive Scanning with X-KEYSCORE Fingerprints” (2010)—talks about some of the queries an analyst can run. A sample scenario: “I want to look for people using Mojahedeen Secrets encryption from an iPhone” (page 6).

Mujahedeen Secrets is an encryption program written by al Qaeda supporters. It has been around since 2007. Last year, Stuart Baker cited its increased use as evidence that Snowden harmed America. I thought the opposite, that the NSA benefits from al Qaeda using this program. I wrote: “There’s nothing that screams ‘hack me’ more than using specially designed al Qaeda encryption software.”

And now we see how it’s done. In the document, we read about the specific XKEYSCORE queries an analyst can use to search for traffic encrypted by Mujahedeen Secrets. Here are some of the program’s fingerprints (page 10):

encryption/mojahaden2
encryption/mojahaden2/encodedheader
encryption/mojahaden2/hidden
encryption/mojahaden2/hidden2
encryption/mojahaden2/hidden44
encryption/mojahaden2/secure_file_cendode
encryption/mojahaden2/securefile

So if you want to search for all iPhone users of Mujahedeen Secrets (page 33):

fingerprint('demo/scenario4') =
fingerprint('encryption/mojahaden2') and fingerprint('browser/cellphone/iphone')

Or you can search for the program’s use in the encrypted text, because (page 37): “…many of the CT Targets are now smart enough not to leave the Mojahedeen Secrets header in the E-mails they send. How can we detect that the E-mail (which looks like junk) is in fact Mojahedeen Secrets encrypted text.” Summary of the answer: there are lots of ways to detect the use of this program that users can’t detect. And you can combine the use of Mujahedeen Secrets with other identifiers to find targets. For example, you can specifically search for the program’s use in extremist forums (page 9). (Note that the NSA wrote that comment about Mujahedeen Secrets users increasing their opsec in 2010, two years before Snowden supposedly told them that the NSA was listening in on their communications. Honestly, I would not be surprised if the program turned out to have been a US operation to get Islamic radicals to make their traffic stand out more easily.)
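
The underlying idea isn’t exotic. Ciphertext with its identifying header stripped still looks statistically different from ordinary mail: a long base64 blob that decodes to near-random bytes. A crude Python illustration of that kind of content fingerprint (an illustration of the concept only, not an actual XKEYSCORE rule):

import base64
import math
import os
import re

B64_RUN = re.compile(r"[A-Za-z0-9+/=\s]{200,}")

def entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n)
                for c in (data.count(b) for b in set(data)) if c)

def looks_like_stripped_ciphertext(body: str) -> bool:
    """Long base64 run decoding to near-random bytes, no header needed."""
    for match in B64_RUN.finditer(body):
        try:
            raw = base64.b64decode("".join(match.group().split()), validate=True)
        except ValueError:
            continue
        if len(raw) >= 256 and entropy(raw) > 7.0:
            return True
    return False

# A headerless encrypted blob trips the detector; plain chatty text doesn't.
blob = base64.encodebytes(os.urandom(400)).decode()
print(looks_like_stripped_ciphertext("FYI, see below.\n" + blob))  # True
print(looks_like_stripped_ciphertext("see you at noon tomorrow"))  # False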

It’s not just Mujahedeen Secrets. Nicholas Weaver explains how you can use XKEYSCORE to identify co-conspirators who are all using PGP.

And these searches are just one example. Other examples from the documents include:

  • “Targets using mail.ru from behind a large Iranian proxy” (here, page 7).
  • Usernames and passwords of people visiting gov.ir (here, page 26 and following).
  • People in Pakistan visiting certain German-language message boards (here, page 1).
  • HTTP POST traffic from Russia in the middle of the night—useful for finding people trying to steal our data (here, page 16).
  • People doing web searches on jihadist topics from Kabul (here).

E-mails, chats, web-browsing traffic, pictures, documents, voice calls, webcam photos, web searches, advertising analytics traffic, social media traffic, botnet traffic, logged keystrokes, file uploads to online services, Skype sessions and more: if you can figure out how to form the query, you can ask XKEYSCORE for it. For an example of how complex the searches can be, look at this XKEYSCORE query published in March, showing how New Zealand used the system to spy on the World Trade Organization: automatically track any email body with any particular WTO-related content for the upcoming election. (Good new documents to read include this, this, and this.)

I always read these NSA documents with an assumption that other countries are doing the same thing. The NSA is not made of magic, and XKEYSCORE is not some super-advanced NSA-only technology. It is the same sort of thing that every other country would use with its surveillance data. For example, Russia explicitly requires ISPs to install similar monitors as part of its SORM Internet surveillance system. As a home user, you can build your own XKEYSCORE using the public-domain Bro Security Monitor and the related Network Time Machine attached to a back-end data-storage system. (Lawrence Berkeley National Laboratory uses this system to store three months’ worth of Internet traffic for retrospective surveillance—it used the data to study Heartbleed.) The primary advantage the NSA has is that it sees more of the Internet than anyone else, and spends more money to store the data it intercepts for longer than anyone else. And if these documents explain XKEYSCORE in 2009 and 2010, expect that it’s much more powerful now.
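
The retrospective part is just a query over stored traffic records. A minimal Python sketch against Zeek/Bro-style connection logs exported as CSV; the file name and field names are my assumptions about such an export, not a fixed format.

import csv
from datetime import datetime, timezone

SUSPECT = "203.0.113.7"  # hypothetical address of interest

def retro_query(path, suspect, since):
    """Yield stored connections involving a suspect address after a date."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            ts = datetime.fromtimestamp(float(row["ts"]), tz=timezone.utc)
            if ts >= since and suspect in (row["orig_h"], row["resp_h"]):
                yield ts, row["orig_h"], row["resp_h"], row["service"]

# "Who talked to this host since the Heartbleed disclosure?"
since = datetime(2014, 4, 7, tzinfo=timezone.utc)
for ts, src, dst, svc in retro_query("conn.csv", SUSPECT, since):
    print(f"{ts:%Y-%m-%d %H:%M} {src} -> {dst} ({svc})")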

Back to encryption and Mujahedeen Secrets. If you want to stay secure, whether you’re trying to evade surveillance by Russia, China, the NSA, criminals intercepting large amounts of traffic, or anyone else, try not to stand out. Don’t use some homemade specialized cryptography that can be easily identified by a system like this. Use reasonably strong encryption software on a reasonably secure device. If you trust Apple’s claims (pages 35-6), use iMessage and FaceTime on your iPhone. I really like Moxie Marlinspike’s Signal for both text and voice, but worry that it’s too obvious because it’s still rare. Ubiquitous encryption is the bane of listeners worldwide, and it’s the best thing we can deploy to make the world safer.

Posted on July 7, 2015 at 6:38 AM

Office of Personnel Management Data Hack

I don’t have much to say about the recent hack of the US Office of Personnel Management, which has been attributed to China (and seems to be getting worse all the time). We know that government networks aren’t any more secure than corporate networks, and might even be less secure.

I agree with Ben Wittes here (although not the imaginary double standard he talks about in the rest of the essay):

For the record, I have no problem with the Chinese going after this kind of data. Espionage is a rough business and the Chinese owe as little to the privacy rights of our citizens as our intelligence services do to the employees of the Chinese government. It’s our government’s job to protect this material, knowing it could be used to compromise, threaten, or injure its people—not the job of the People’s Liberation Army to forbear collection of material that may have real utility.

Former NSA Director Michael Hayden says much the same thing:

If Hayden had had the ability to get the equivalent Chinese records when running CIA or NSA, he says, “I would not have thought twice. I would not have asked permission. I’d have launched the star fleet. And we’d have brought those suckers home at the speed of light.” The episode, he says, “is not shame on China. This is shame on us for not protecting that kind of information.” The episode is “a tremendously big deal, and my deepest emotion is embarrassment.”

My question is this: Has anyone thought about the possibility of the attackers manipulating data in the database? What are the potential attacks that could stem from adding, deleting, and changing data? I don’t think they can add a person with a security clearance, but I’d like someone who knows more than I do to assess the risks.
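
One standard defense against silent record manipulation, offered as a sketch and not as a description of OPM’s actual systems, is to keep a message authentication code over each record under a key the database server never holds, and to re-verify out of band:

import hashlib
import hmac
import json

KEY = b"kept-offline-never-on-the-db-server"  # hypothetical offline key

def seal(record: dict) -> str:
    """Compute a MAC over a canonical encoding of the record."""
    blob = json.dumps(record, sort_keys=True).encode()
    return hmac.new(KEY, blob, hashlib.sha256).hexdigest()

def verify(record: dict, tag: str) -> bool:
    return hmac.compare_digest(seal(record), tag)

rec = {"name": "J. Doe", "clearance": "Secret"}
tag = seal(rec)
rec["clearance"] = "Top Secret"  # an attacker quietly edits the database
print(verify(rec, tag))          # False: the tampering is detectable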

Posted on July 1, 2015 at 6:32 AM
