Entries Tagged "secrecy"


Automatically Identifying Government Secrets

Interesting research: “Using Artificial Intelligence to Identify State Secrets,” by Renato Rocha Souza, Flavio Codeco Coelho, Rohan Shah, and Matthew Connelly.

Abstract: Whether officials can be trusted to protect national security information has become a matter of great public controversy, reigniting a long-standing debate about the scope and nature of official secrecy. The declassification of millions of electronic records has made it possible to analyze these issues with greater rigor and precision. Using machine-learning methods, we examined nearly a million State Department cables from the 1970s to identify features of records that are more likely to be classified, such as international negotiations, military operations, and high-level communications. Even with incomplete data, algorithms can use such features to identify 90% of classified cables with <11% false positives. But our results also show that there are longstanding problems in the identification of sensitive information. Error analysis reveals many examples of both overclassification and underclassification. This indicates both the need for research on inter-coder reliability among officials as to what constitutes classified material and the opportunity to develop recommender systems to better manage both classification and declassification.
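The details are in the paper, but the core technique is ordinary supervised text classification: train a model on declassified records whose original classification markings are known, then measure how well it separates classified from unclassified material. Below is a minimal sketch of that general approach in Python, using scikit-learn and a tiny placeholder corpus; it is not the authors’ actual pipeline, feature set, or data.

    # Sketch only: a bag-of-words classifier trained on cables labeled by their
    # original classification level, scored by recall on classified cables and
    # by false-positive rate on unclassified ones.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    # Placeholder examples: (cable text, 1 = originally classified, 0 = not)
    cables = [
        ("negotiations with soviet delegation on arms limitation", 1),
        ("briefing on planned military operations in the region", 1),
        ("high-level communications regarding treaty drafting", 1),
        ("assessment of upcoming bilateral trade negotiations", 1),
        ("visa application procedures for exchange students", 0),
        ("embassy holiday schedule and staffing notice", 0),
        ("routine shipment of consular supplies acknowledged", 0),
        ("press summary of local newspaper coverage", 0),
    ]
    texts, labels = zip(*cables)

    X_train, X_test, y_train, y_test = train_test_split(
        texts, labels, test_size=0.5, random_state=0, stratify=labels)

    # Simple word features; the real study examines richer features of each record.
    vectorizer = TfidfVectorizer()
    model = LogisticRegression()
    model.fit(vectorizer.fit_transform(X_train), y_train)

    pred = model.predict(vectorizer.transform(X_test))
    tn, fp, fn, tp = confusion_matrix(y_test, pred, labels=[0, 1]).ravel()
    print("recall:", tp / (tp + fn), "false-positive rate:", fp / (fp + tn))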

Posted on November 11, 2016 at 1:18 PM

Intelligence Oversight and How It Can Fail

Former NSA attorneys John DeLong and Susan Hennessey have written a fascinating article describing a particular incident of oversight failure inside the NSA. Technically, the story hinges on a definitional difference between the NSA’s and the FISA Court’s meanings of the word “archived.” (For the record, I would have defaulted to the NSA’s interpretation, which feels more accurate technically.) But while the story is worth reading, what’s especially interesting are the broader issues about how a nontechnical judiciary can provide oversight of a very technical data collection-and-analysis organization—especially if that oversight must largely be conducted in secret.

From the article:

Broader root cause analysis aside, the BR FISA debacle made clear that the specific matter of shared legal interpretation needed to be addressed. Moving forward, the government agreed that NSA would coordinate all significant legal interpretations with DOJ. That sounds like an easy solution, but making it meaningful in practice is highly complex. Consider this example: a court order might require that “all collected data must be deleted after two years.” NSA engineers must then make a list for the NSA attorneys:

  1. What does deleted mean? Does it mean make inaccessible to analysts or does it mean forensically wipe off the system so data is gone forever? Or does it mean something in between?
  2. What about backup systems used solely for disaster recovery? Does the data need to be removed there, too, within two years, even though it’s largely inaccessible and typically there is a planned delay to account for mistakes in the operational system?
  3. When does the timer start?
  4. What’s the legally-relevant unit of measurement for timestamp computation—a day, an hour, a second, a millisecond?
  5. If a piece of data is deleted one second after two years, is that an incident of noncompliance? What about a delay of one day? ….
  6. What about various system logs that simply record the fact that NSA had a data object, but no significant details of the actual object? Do those logs need to be deleted too? If so, how soon?
  7. What about hard copy printouts?

And that is only a tiny sample of the questions that need to be answered for that small sentence fragment. Put yourself in the shoes of an NSA attorney: which of these questions—in particular the answers—require significant interpretations to be coordinated with DOJ and which determinations can be made internally?

Now put yourself in the shoes of a DOJ attorney who receives from an NSA attorney a subset of this list for advice and counsel. Which questions are truly significant from your perspective? Are there any questions here that are so significant they should be presented to the Court, so that the government can be sufficiently confident that the Court understands how the two-year rule is really being interpreted and applied?
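Every one of those questions eventually has to be answered as a literal engineering parameter. As a purely hypothetical sketch (nothing here reflects actual NSA systems), this is roughly what a two-year deletion check looks like once the lawyers’ interpretations are pinned down; each constant encodes one of the interpretive choices in the quoted list.

    # Hypothetical sketch: legal interpretation turned into engineering parameters.
    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=2 * 365)   # "two years": 730 days, or two calendar years?
    GRACE = timedelta(days=1)             # is a one-second or one-day overrun an incident?
    DELETE_BACKUPS = False                # must disaster-recovery copies be purged too?
    DELETE_LOGS = False                   # do logs that merely record existence count?

    @dataclass
    class Record:
        collected_at: datetime            # when does the timer start?
        is_backup: bool = False
        is_log: bool = False

    def out_of_compliance(rec: Record, now: datetime) -> bool:
        """True if this record should already have been deleted."""
        if rec.is_backup and not DELETE_BACKUPS:
            return False
        if rec.is_log and not DELETE_LOGS:
            return False
        # Comparison happens at microsecond precision -- the unit of measurement
        # is itself one of the choices that has to be made.
        return now - rec.collected_at > RETENTION + GRACE

    now = datetime.now(timezone.utc)
    old = Record(collected_at=now - timedelta(days=800))
    print(out_of_compliance(old, now))    # True: past two years plus the grace period

Which of those constants an engineer can choose alone, and which require coordination with DOJ or the Court, is exactly the question the article wrestles with.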

In many places I have separated two kinds of oversight: are we doing things right, versus are we doing the right things? This story is very much about the first: is the NSA complying with the rules the courts impose on it? I believe that the NSA tries very hard to follow the rules it’s given, while at the same time being very aggressive about how it interprets any ambiguities and using its nonadversarial relationship with its overseers to its advantage.

The only possible solution I can see to all of this is more public scrutiny. Secrecy is toxic here.

Posted on October 18, 2016 at 2:29 PM

The Mathematics of Conspiracy

This interesting study builds a mathematical model for the continued secrecy of conspiracies and tries to predict how long it would take before they are revealed to the general public, whether through deliberate or accidental disclosure.

The equation developed by Dr Grimes, a post-doctoral physicist at Oxford, relied upon three factors: the number of conspirators involved, the amount of time that has passed, and the intrinsic probability of a conspiracy failing.

He then applied his equation to four famous conspiracy theories: the belief that the Moon landing was faked, the belief that climate change is a fraud, the belief that vaccines cause autism, and the belief that pharmaceutical companies have suppressed a cure for cancer.

Dr Grimes’s analysis suggests that if these four conspiracies were real, most of them would very likely have been revealed as such by now.

Specifically, the Moon landing “hoax” would have been revealed in 3.7 years, the climate change “fraud” in 3.7 to 26.8 years, the vaccine-autism “conspiracy” in 3.2 to 34.8 years, and the cancer “conspiracy” in 3.2 years.
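The underlying model is essentially a simple failure process: each conspirator has some small chance per year of exposing the secret, deliberately or by accident, so the more people involved and the more time that passes, the closer the probability of exposure gets to one. Here is a minimal sketch of a model along these lines; the parameter values are illustrative, not the paper’s exact estimates.

    # Sketch of an exponential leak model: with N conspirators and a small
    # per-person annual leak probability p, the chance the secret has been
    # exposed within t years approaches 1 as N and t grow.
    import math

    def exposure_probability(n_conspirators: int, years: float,
                             p_leak: float = 4e-6) -> float:
        """Probability that the conspiracy has been exposed within `years`.

        p_leak is an illustrative per-person, per-year leak probability.
        """
        # Chance that at least one of N people leaks in a given year.
        annual_hazard = 1.0 - (1.0 - p_leak) ** n_conspirators
        return 1.0 - math.exp(-annual_hazard * years)

    # Illustration: a faked Moon landing would have required on the order of
    # 400,000 Apollo-program workers to stay quiet.
    print(exposure_probability(400_000, 3.7))   # roughly 0.95 with these toy numbers

For small p the annual hazard is approximately N × p, which is why the number of people who have to keep the secret dominates the result.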

He also ran the model against two actual conspiracies: the NSA’s PRISM program and the Tuskegee syphilis experiment.

From the paper:

Abstract: Conspiratorial ideation is the tendency of individuals to believe that events and power relations are secretly manipulated by certain clandestine groups and organisations. Many of these ostensibly explanatory conjectures are non-falsifiable, lacking in evidence or demonstrably false, yet public acceptance remains high. Efforts to convince the general public of the validity of medical and scientific findings can be hampered by such narratives, which can create the impression of doubt or disagreement in areas where the science is well established. Conversely, historical examples of exposed conspiracies do exist and it may be difficult for people to differentiate between reasonable and dubious assertions. In this work, we establish a simple mathematical model for conspiracies involving multiple actors with time, which yields failure probability for any given conspiracy. Parameters for the model are estimated from literature examples of known scandals, and the factors influencing conspiracy success and failure are explored. The model is also used to estimate the likelihood of claims from some commonly-held conspiratorial beliefs; these are namely that the moon-landings were faked, climate-change is a hoax, vaccination is dangerous and that a cure for cancer is being suppressed by vested interests. Simulations of these claims predict that intrinsic failure would be imminent even with the most generous estimates for the secret-keeping ability of active participants—the results of this model suggest that large conspiracies (≥1000 agents) quickly become untenable and prone to failure. The theory presented here might be useful in counteracting the potentially deleterious consequences of bogus and anti-science narratives, and examining the hypothetical conditions under which sustainable conspiracy might be possible.

Lots on the psychology of conspiracy theories here.

EDITED TO ADD (3/12): This essay debunks the above research.

Posted on March 2, 2016 at 12:39 PM

UK Government Promoting Backdoor-Enabled Voice Encryption Protocol

The UK government is pushing something called the MIKEY-SAKKE protocol to secure voice communications. Basically, it’s an identity-based system that necessarily requires a trusted key-distribution center. So key escrow is inherently built in, and there’s no perfect forward secrecy. The only reasonable explanation for designing a protocol with these properties is third-party eavesdropping.
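To see why that design amounts to built-in escrow, consider this deliberately simplified toy sketch; it is not the actual SAKKE scheme, which is built on elliptic-curve pairings, but it shows the structural point: every user’s private key is derived from the key-distribution center’s master secret, so whoever controls that center can regenerate any user’s key, and therefore decrypt any call, at any time.

    # Toy illustration of the escrow property of identity-based key distribution.
    # NOT the real SAKKE algorithm -- only the structural point that the center
    # holding the master secret can re-derive every user's private key.
    import hashlib
    import hmac
    import os

    class KeyDistributionCenter:
        def __init__(self) -> None:
            self.master_secret = os.urandom(32)   # one secret for the whole domain

        def private_key_for(self, identity: str) -> bytes:
            # Deterministic derivation: the center can recompute this at will.
            return hmac.new(self.master_secret, identity.encode(), hashlib.sha256).digest()

    kdc = KeyDistributionCenter()
    alice_key = kdc.private_key_for("alice@example.gov")   # provisioned to Alice's phone

    # Years later, without Alice's cooperation or knowledge, the center (or anyone
    # who compromises it) regenerates the same key -- and with no forward secrecy,
    # recorded calls remain readable.
    assert kdc.private_key_for("alice@example.gov") == alice_key

Contrast this with protocols that negotiate ephemeral keys end to end, where no central party ever holds material that can decrypt past calls.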

Steven Murdoch has explained the details. The upshot:

The design of MIKEY-SAKKE is motivated by the desire to allow undetectable and unauditable mass surveillance, which may be a requirement in exceptional scenarios such as within government departments processing classified information. However, in the vast majority of cases the properties that MIKEY-SAKKE offers are actively harmful for security. It creates a vulnerable single point of failure, which would require huge effort, skill and cost to secure—requiring resource beyond the capability of most companies. Better options for voice encryption exist today, though they are not perfect either. In particular, more work is needed on providing scalable and usable protection against man-in-the-middle attacks, and protection of metadata for contact discovery and calls. More broadly, designers of protocols and systems need to appreciate the ethical consequences of their actions in terms of the political and power structures which naturally follow from their use. MIKEY-SAKKE is the latest example to raise questions over the policy of many governments, including the UK, to put intelligence agencies in charge of protecting companies and individuals from spying, given the conflict of interest it creates.

And GCHQ previously rejected a more secure standard, MIKEY-IBAKE, because it didn’t allow undetectable spying.

Both the NSA and GCHQ repeatedly choose surveillance over security. We need to reject that decision.

Posted on January 22, 2016 at 2:23 PM

Replacing Judgment with Algorithms

China is considering a new “social credit” system, designed to rate everyone’s trustworthiness. Many fear that it will become a tool of social control—but in reality it has a lot in common with the algorithms and systems that score and classify us all every day.

Human judgment is being replaced by automatic algorithms, and that brings with it both enormous benefits and risks. The technology is enabling a new form of social control, sometimes deliberately and sometimes as a side effect. And as the Internet of Things ushers in an era of more sensors and more data—and more algorithms—we need to ensure that we reap the benefits while avoiding the harms.

Right now, the Chinese government is watching how companies use “social credit” scores in state-approved pilot projects. The most prominent one is Sesame Credit, and it’s much more than a financial scoring system.

Citizens are judged not only by conventional financial criteria, but by their actions and associations. Rumors abound about how this system works. Various news sites are speculating that your score will go up if you share a link from a state-sponsored news agency and go down if you post pictures of Tiananmen Square. Similarly, your score will go up if you purchase local agricultural products and down if you purchase Japanese anime. Right now the worst fears seem overblown, but they could certainly come to pass in the future.

This story has spread because it’s just the sort of behavior you’d expect from the authoritarian government in China. But there’s little about the scoring systems used by Sesame Credit that’s unique to China. All of us are being categorized and judged by similar algorithms, both by companies and by governments. While the aim of these systems might not be social control, it’s often the byproduct. And if we’re not careful, the creepy results we imagine for the Chinese will be our lot as well.

Sesame Credit is largely based on a US system called FICO. That’s the system that determines your credit score. You actually have a few dozen different ones, and they determine whether you can get a mortgage, car loan or credit card, and what sorts of interest rates you’re offered. The exact algorithm is secret, but we know in general what goes into a FICO score: how much debt you have, how good you’ve been at repaying your debt, how long your credit history is and so on.

There’s nothing about your social network, but that might change. In August, Facebook was awarded a patent on using a borrower’s social network to help determine if he or she is a good credit risk. Basically, your creditworthiness becomes dependent on the creditworthiness of your friends. Associate with deadbeats, and you’re more likely to be judged as one.

Your associations can be used to judge you in other ways as well. It’s now common for employers to use social media sites to screen job applicants. This manual process is increasingly being outsourced and automated; companies like Social Intelligence, Evolv and First Advantage automatically process your social networking activity and provide hiring recommendations for employers. The dangers of this type of system—from discriminatory biases resulting from the data to an obsession with scores over more social measures—are many.

The company Klout tried to make a business of measuring your online influence, hoping its proprietary system would become an industry standard used for things like hiring and giving out free product samples.

The US government is judging you as well. Your social media postings could get you on the terrorist watch list, affecting your ability to fly on an airplane and even get a job. In 2012, a British tourist’s tweet caused the US to deny him entry into the country. We know that the National Security Agency uses complex computer algorithms to sift through the Internet data it collects on both Americans and foreigners.

All of these systems, from Sesame Credit to the NSA’s secret algorithms, are made possible by computers and data. A couple of generations ago, you would apply for a home mortgage at a bank that knew you, and a bank manager would make a determination of your creditworthiness. Yes, the system was prone to all sorts of abuses, ranging from discrimination to an old-boy network of friends helping friends. But the system also couldn’t scale. It made no sense for a bank across the state to give you a loan, because they didn’t know you. Loans stayed local.

FICO scores changed that. Now, a computer crunches your credit history and produces a number. And you can take that number to any mortgage lender in the country. They don’t need to know you; your score is all they need to decide whether you’re trustworthy.

This score enabled the home mortgage, car loan, credit card and other lending industries to explode, but it brought with it other problems. People who don’t conform to the financial norm—having and using credit cards, for example—can have trouble getting loans when they need them. The automatic nature of the system enforces conformity.

The secrecy of the algorithms further pushes people toward conformity. If you are worried that the US government will classify you as a potential terrorist, you’re less likely to friend Muslims on Facebook. If you know that your Sesame Credit score is partly based on your not buying “subversive” products or being friends with dissidents, you’re more likely to overcompensate by not buying anything but the most innocuous books or corresponding with the most boring people.

Uber is an example of how this works. Passengers rate drivers and drivers rate passengers; both risk getting booted out of the system if their rankings get too low. This weeds out bad drivers and passengers, but also results in marginal people being blocked from the system, and everyone else trying to not make any special requests, avoid controversial conversation topics, and generally behave like good corporate citizens.

Many have documented a chilling effect among American Muslims, who avoid certain discussion topics lest they be taken the wrong way. Even if nothing would happen because of it, their free speech has been curtailed because of the secrecy surrounding government surveillance. How many of you are reluctant to Google “pressure cooker bomb”? How many are a bit worried that I used it in this essay?

This is what social control looks like in the Internet age. The Cold-War-era methods of undercover agents, informants living in your neighborhood, and agents provocateurs are too labor-intensive and inefficient. These automatic algorithms make possible a wholly new way to enforce conformity. And by accepting algorithmic classification into our lives, we’re paving the way for the same sort of thing China plans to put into place.

It doesn’t have to be this way. We can get the benefits of automatic algorithmic systems while avoiding the dangers. It’s not even hard.

The first step is to make these algorithms public. Companies and governments both balk at this, fearing that people will deliberately try to game them, but the alternative is much worse.

The second step is for these systems to be subject to oversight and accountability. It’s already illegal for these algorithms to have discriminatory outcomes, even if the discrimination isn’t deliberately designed in. This concept needs to be expanded. We as a society need to understand what we expect from the algorithms that automatically judge us and ensure that those expectations are met.

We also need to provide manual systems for people to challenge their classifications. Automatic algorithms are going to make mistakes, whether it’s by giving us bad credit scores or flagging us as terrorists. We need the ability to clear our names if this happens, through a process that restores human judgment.

Sesame Credit sounds like a dystopia because we can easily imagine how the Chinese government can use a system like this to enforce conformity and stifle dissent. Our own systems seem safer, because we don’t believe the corporations and governments that run them are malevolent. But the dangers are inherent in the technologies. As we move into a world where we are increasingly judged by algorithms, we need to ensure that they do so fairly and properly.

This essay previously appeared on CNN.com.

Posted on January 8, 2016 at 5:21 AM

A History of Privacy

This New Yorker article traces the history of privacy from the mid-1800s to today:

As a matter of historical analysis, the relationship between secrecy and privacy can be stated in an axiom: the defense of privacy follows, and never precedes, the emergence of new technologies for the exposure of secrets. In other words, the case for privacy always comes too late. The horse is out of the barn. The post office has opened your mail. Your photograph is on Facebook. Google already knows that, notwithstanding your demographic, you hate kale.

Posted on November 30, 2015 at 12:47 PM

"The Declining Half-Life of Secrets"

Several times I’ve mentioned Peter Swire’s concept of “the declining half-life of secrets.” He’s finally written it up:

The nature of secrets is changing. Secrets that would once have survived the 25 or 50 year test of time are more and more prone to leaks. The declining half-life of secrets has implications for the intelligence community and other secretive agencies, as they must now wrestle with new challenges posed by the transformative power of information technology innovation as well as the changing methods and targets of intelligence collection.

Posted on September 3, 2015 at 8:43 AM

No-Fly List Uses Predictive Assessments

The US government has admitted that it uses predictive assessments to put people on the no-fly list:

In a little-noticed filing before an Oregon federal judge, the US Justice Department and the FBI conceded that stopping US and other citizens from travelling on airplanes is a matter of “predictive assessments about potential threats,” the government asserted in May.

“By its very nature, identifying individuals who ‘may be a threat to civil aviation or national security’ is a predictive judgment intended to prevent future acts of terrorism in an uncertain context,” Justice Department officials Benjamin C Mizer and Anthony J Coppolino told the court on 28 May.

“Judgments concerning such potential threats to aviation and national security call upon the unique prerogatives of the Executive in assessing such threats.”

It is believed to be the government’s most direct acknowledgement to date that people are not allowed to fly because of what the government believes they might do and not what they have already done.

When you have a secret process that can judge and penalize people without due process or oversight, this is the kind of thing that happens.

Posted on August 20, 2015 at 6:19 AM
