Entries Tagged "operational security"


Government Secrecy and the Generation Gap

Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5m people in the US have security clearances, with 1.4m at the top-secret level or higher, according to the Office of the Director of National Intelligence.

Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley who was convicted for giving material to WikiLeaks. But a lot of them do—and that may prove the Achilles heel of government. Keeping secrets is an act of loyalty as much as anything else, and that sort of loyalty is becoming harder to find in the younger generations. If the NSA and other intelligence bodies are going to survive in their present form, they are going to have to figure out how to reduce the number of secrets.

As the writer Charles Stross has explained, the old way of keeping intelligence secrets was to make it part of a life-long culture. The intelligence world would recruit people early in their careers and give them jobs for life. It was a private club, one filled with code words and secret knowledge.

You can see part of this in Mr Snowden’s leaked documents. The NSA has its own lingo—the documents are riddled with codenames—its own conferences, its own awards and recognitions. An intelligence career meant that you had access to a new world, one to which “normal” people on the outside were completely oblivious. Membership of the private club meant people were loyal to their organisations, which were in turn loyal back to them.

Those days are gone. Yes, there are still the codenames and the secret knowledge, but a lot of the loyalty is gone. Many jobs in intelligence are now outsourced, and there is no job-for-life culture in the corporate world any more. Workforces are flexible, jobs are interchangeable and people are expendable.

Sure, it is possible to build a career in the classified world of government contracting, but there are no guarantees. Younger people grew up knowing this: there are no employment guarantees anywhere. They see it in their friends. They see it all around them.

Many will also believe in openness, especially the hacker types the NSA needs to recruit. They believe that information wants to be free, and that security comes from public knowledge and debate. Yes, there are important reasons why some intelligence secrets need to be secret, and the NSA culture reinforces secrecy daily. But this is a crowd that is used to radical openness. They have been writing about themselves on the internet for years. They have said very personal things on Twitter; they have had embarrassing photographs of themselves posted on Facebook. They have been dumped by a lover in public. They have overshared in the most compromising ways—and they have got through it. It is a tougher sell convincing this crowd that government secrecy trumps the public’s right to know.

Psychologically, it is hard to be a whistleblower. There is an enormous amount of pressure to be loyal to our peer group: to conform to their beliefs, and not to let them down. Loyalty is a natural human trait; it is one of the social mechanisms we use to thrive in our complex social world. This is why good people sometimes do bad things at work.

When someone becomes a whistleblower, he or she is deliberately eschewing that loyalty. In essence, they are deciding that allegiance to society at large trumps that to peers at work. That is the difficult part. They know their work buddies by name, but “society at large” is amorphous and anonymous. Believing that your bosses ultimately do not care about you makes that switch easier.

Whistleblowing is the civil disobedience of the information age. It is a way that someone without power can make a difference. And in the information age—when everything is stored on computers and potentially accessible with a few keystrokes and mouse clicks—whistleblowing is easier than ever.

Mr Snowden is 30 years old; Manning is 25. They are members of the generation we taught not to expect anything long-term from their employers. As such, employers should not expect anything long-term from them. It is still hard to be a whistleblower, but for this generation it is a whole lot easier.

A lot has been written about the problem of over-classification in the US government. It has long been thought of as anti-democratic and a barrier to government oversight. Now we know that it is also a security risk. Organisations such as the NSA need to change their culture of secrecy, and concentrate their security efforts on what truly needs to remain secret. Their default practice of classifying everything is not going to work any more.

Hey, NSA, you’ve got a problem.

This essay previously appeared in the Financial Times.

EDITED TO ADD (9/14): Blog comments on this essay are particularly interesting.

Posted on September 9, 2013 at 1:30 PM

Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition fails to occur. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong, and the machine portion doesn’t communicate well, the human portion trusts it a little less.
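
As a concrete illustration, here is a minimal sketch of that principle in Python. Everything in it (the BadgeCheckpoint class, the state names, the alert wording) is hypothetical; it is not how the EEOB’s system actually works. The point is only that the machine reports which expected transition failed, instead of silently refusing to open the gate:

    # Hypothetical checkpoint: track the expected "check in -> drop badge
    # -> exit" sequence, and explain any missing transition to the guard.
    class BadgeCheckpoint:
        def __init__(self):
            self.outstanding = set()  # badges issued but not yet returned

        def check_in(self, badge_id):
            self.outstanding.add(badge_id)

        def badge_returned(self, badge_id):
            self.outstanding.discard(badge_id)

        def request_exit(self, badge_id):
            if badge_id in self.outstanding:
                # Communicate state, not just refusal.
                return f"HOLD: badge {badge_id} was issued but never dropped in the slot"
            return "OK: exit cleared"

    gate = BadgeCheckpoint()
    gate.check_in("V-1234")
    print(gate.request_exit("V-1234"))  # HOLD, with a reason the guard can act on
    gate.badge_returned("V-1234")
    print(gate.request_exit("V-1234"))  # OK: exit cleared

A gate that can only refuse leaves the guards guessing; a gate that explains its state lets them troubleshoot in real time.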

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will necessarily be more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM

Opsec Details of Snowden Meeting with Greenwald and Poitras

I don’t like stories about the personalities in the Snowden affair, because it detracts from the NSA and the policy issues. But I’m a sucker for operational security, and just have to post this detail from their first meeting in Hong Kong:

Snowden had instructed them that once they were in Hong Kong, they were to go at an appointed time to the Kowloon district and stand outside a restaurant that was in a mall connected to the Mira Hotel. There, they were to wait until they saw a man carrying a Rubik’s Cube, then ask him when the restaurant would open. The man would answer their question, but then warn that the food was bad.

Actually, the whole article is interesting. The author is writing a book about surveillance and privacy, one of probably a half dozen about the Snowden affair that will come out this year.

EDITED TO ADD (8/31): While we’re on the topic, here’s some really stupid opsec on the part of Greenwald and Poitras:

  • Statement from senior Cabinet Office civil servant to #miranda case says material was 58000 ‘highly classified UK intelligence documents’
  • Police who seized documents from #miranda found among them a piece of paper with the decryption password, the statement says
  • This password allowed them to decrypt one file on his seized hard drive, adds Oliver Robbins, Cabinet Office security adviser #miranda

You can’t do this kind of stuff when you’re playing with the big boys.

Posted on August 30, 2013 at 1:54 PM

Protecting Against Leakers

Ever since Edward Snowden walked out of a National Security Agency facility in May with electronic copies of thousands of classified documents, the finger-pointing has concentrated on the government’s security failures. Yet the debacle illustrates the challenge of trusting people in any organization.

The problem is easy to describe. Organizations require trusted people, but they don’t necessarily know whether those people are trustworthy. These individuals are essential, but they can also betray the organization.

So how does an organization protect itself?

Securing trusted people requires three basic mechanisms (as I describe in my book Beyond Fear). The first is compartmentalization. Trust doesn’t have to be all or nothing; it makes sense to give relevant workers only the access, capabilities and information they need to accomplish their assigned tasks. In the military, even if they have the requisite clearance, people are only told what they “need to know.” The same policy occurs naturally in companies.

This isn’t simply a matter of always granting more senior employees a higher degree of trust. For example, only authorized armored-car delivery people can unlock automated teller machines and put money inside; even the bank president can’t do so. Think of an employee as operating within a sphere of trust—a set of assets and functions he or she has access to. Organizations act in their best interest by making that sphere as small as possible.

The idea is that if someone turns out to be untrustworthy, he or she can only do so much damage. This is where the NSA failed with Snowden. As a system administrator, he needed access to many of the agency’s computer systems—and he needed access to everything on those machines. This allowed him to make copies of documents he didn’t need to see.
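
In code, compartmentalization amounts to default-deny with minimal grants. Here is a toy sketch; the roles and assets are hypothetical, not any real organization’s access policy:

    # Hypothetical spheres of trust: each role gets the smallest set of
    # assets its task requires; everything else is denied by default.
    SPHERE_OF_TRUST = {
        "sysadmin": {"server_config", "patch_queue"},  # administer, not read, the documents
        "analyst": {"intel_reports"},
        "armored_car_crew": {"atm_cash_vault"},
    }

    def can_access(role, asset):
        # Unknown roles and unlisted assets get nothing.
        return asset in SPHERE_OF_TRUST.get(role, set())

    assert can_access("armored_car_crew", "atm_cash_vault")
    assert not can_access("bank_president", "atm_cash_vault")  # seniority isn't access
    assert not can_access("sysadmin", "intel_reports")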

The second mechanism for securing trust is defense in depth: Make sure a single person can’t compromise an entire system. NSA Director Gen. Keith Alexander has said he is doing this inside the agency by instituting what is called two-person control: There will always be two people performing system-administration tasks on highly classified computers.

Defense in depth reduces the ability of a single person to betray the organization. If this system had been in place and Snowden’s superior had been notified every time he downloaded a file, Snowden would have been caught well before his flight to Hong Kong.
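
A rough sketch of what that could look like in software follows; the names are hypothetical, since the NSA’s actual controls aren’t public. The privileged operation requires a second, distinct administrator, and every attempt, allowed or denied, leaves an audit record a supervisor can be notified about:

    # Hypothetical two-person control with an audit trail.
    import logging

    logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
    audit = logging.getLogger("audit")

    def run_privileged(action, operator, witness):
        if operator == witness:
            audit.warning("DENIED %s: operator and witness must be different people", action)
            return False
        audit.info("ALLOWED %s: operator=%s witness=%s", action, operator, witness)
        # ... perform the sensitive action here ...
        return True

    run_privileged("bulk_download", "admin_a", "admin_a")  # denied, still logged
    run_privileged("bulk_download", "admin_a", "admin_b")  # allowed, logged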

The final mechanism is to try to ensure that trusted people are, in fact, trustworthy. The NSA does this through its clearance process, which at high levels includes lie-detector tests (even though they don’t work) and background investigations. Many organizations perform reference and credit checks and drug tests when they hire new employees. Companies may refuse to hire people with criminal records or noncitizens; they might hire only those with a particular certification or membership in certain professional organizations. Some of these measures aren’t very effective—it’s pretty clear that personality profiling doesn’t tell you anything useful, for example—but the general idea is to verify, certify and test individuals to increase the chance they can be trusted.

These measures are expensive. It costs the U.S. government about $4,000 to qualify someone for top-secret clearance. Even in a corporation, background checks and screenings are expensive and add considerable time to the hiring process. Giving employees access to only the information they need can hamper them in an agile organization in which needs constantly change. Security audits are expensive, and two-person control is even more expensive: it can double personnel costs. We’re always making trade-offs between security and efficiency.

The best defense is to limit the number of trusted people needed within an organization. Alexander is doing this at the NSA—albeit too late—by trying to reduce the number of system administrators by 90 percent. This is just a tiny part of the problem; in the U.S. government, as many as 4 million people, including contractors, hold top-secret or higher security clearances. That’s far too many.

More surprising than Snowden’s ability to get away with taking the information he downloaded is that there haven’t been dozens more like him. His uniqueness—along with the few who have gone before him and how rare whistle-blowers are in general—is a testament to how well we normally do at building security around trusted people.

Here’s one last piece of advice, specifically about whistle-blowers. It’s much harder to keep secrets in a networked world, and whistle-blowing has become the civil disobedience of the information age. A public or private organization’s best defense against whistle-blowers is to refrain from doing things it doesn’t want to read about on the front page of the newspaper. This may come as a shock in a market-based system, in which morally dubious behavior is often rewarded as long as it’s legal, and illegal activity is rewarded as long as you can get away with it.

No organization, whether it’s a bank entrusted with the privacy of its customer data, an organized-crime syndicate intent on ruling the world, or a government agency spying on its citizens, wants to have its secrets disclosed. In the information age, though, it may be impossible to avoid.

This essay previously appeared on Bloomberg.com.

EDITED TO ADD (8/22): A commenter on the Bloomberg site added another security measure: pay your people more. Better-paid people are less likely to betray the organization that employs them. I should have added that, especially since I make that exact point in Liars and Outliers.

Posted on August 26, 2013 at 1:19 PM

Management Issues in Terrorist Organizations

Terrorist organizations have the same management problems as other organizations, and new ones besides:

Terrorist leaders also face a stubborn human resources problem: Their talent pool is inherently unstable. Terrorists are obliged to seek out recruits who are predisposed to violence—that is to say, young men with a chip on their shoulder. Unsurprisingly, these recruits are not usually disposed to following orders or recognizing authority figures. Terrorist managers can craft meticulous long-term strategies, but those are of little use if the people tasked with carrying them out want to make a name for themselves right now.

Terrorist managers are also obliged to place a premium on bureaucratic control, because they lack other channels to discipline the ranks. When Walmart managers want to deal with an unruly employee or a supplier who is defaulting on a contract, they can turn to formal legal procedures. Terrorists have no such option. David Ervine, a deceased Irish Unionist politician and onetime bomb maker for the Ulster Volunteer Force (UVF), neatly described this dilemma to me in 2006. “We had some very heinous and counterproductive activities being carried out that the leadership didn’t punish because they had to maintain the hearts and minds within the organization,” he said….

EDITED TO ADD (9/13): More on the economics of terrorism.

Posted on August 16, 2013 at 7:31 AM

NSA Implements Two-Man Control for Sysadmins

In an effort to lock the barn door after the horse has escaped, the NSA is implementing two-man control for sysadmins:

NSA chief Keith Alexander said his agency had implemented a “two-man rule,” under which any system administrator like Snowden could only access or move key information with another administrator present. With some 15,000 sites to fix, Alexander said, it would take time to spread across the whole agency.

[…]

Alexander said that server rooms where such data is stored are now locked and require a two-man team to access them—safeguards that he said would be implemented at the Pentagon and intelligence agencies after a pilot at the NSA.

This kind of thing has happened before. After U.S. Navy Chief Warrant Officer John Walker sold encryption keys to the Soviets, the Navy implemented two-man control for key material.

It’s an effective, if expensive, security measure—and an easy one for the NSA to implement while it figures out what it really has to do to secure information from IT insiders.

Posted on July 24, 2013 at 6:18 AM

The US Uses Vulnerability Data for Offensive Purposes

Companies allow US intelligence to exploit vulnerabilities before they patch them:

Microsoft Corp. (MSFT), the world’s largest software company, provides intelligence agencies with information about bugs in its popular software before it publicly releases a fix, according to two people familiar with the process. That information can be used to protect government computers and to access the computers of terrorists or military foes.

Redmond, Washington-based Microsoft (MSFT) and other software or Internet security companies have been aware that this type of early alert allowed the U.S. to exploit vulnerabilities in software sold to foreign governments, according to two U.S. officials. Microsoft doesn’t ask and can’t be told how the government uses such tip-offs, said the officials, who asked not to be identified because the matter is confidential.

No word on whether these companies would delay a patch if asked nicely—or if there’s any way the government can require them to. Anyone feel safer because of this?

Posted on June 20, 2013 at 6:04 AM

Bad CIA Operational Security

I have no idea if this story about CIA spies in Lebanon is true, and it will almost certainly never be confirmed or denied:

But others inside the American intelligence community say sloppy “tradecraft”—the method of covert operations—by the CIA is also to blame for the disruption of the vital spy networks.

In Beirut, two Hezbollah double agents pretended to go to work for the CIA. Hezbollah then learned of the restaurant where multiple CIA officers were meeting with several agents, according to the four current and former officials briefed on the case. The CIA used the codeword “PIZZA” when discussing where to meet with the agents, according to U.S. officials. Two former officials describe the location as a Beirut Pizza Hut. A current US official denied that CIA officers met their agents at Pizza Hut.

Posted on November 30, 2011 at 6:57 AM

Bin Laden Maintained Computer Security with an Air Gap

From the Associated Press:

Bin Laden’s system was built on discipline and trust. But it also left behind an extensive archive of email exchanges for the U.S. to scour. The trove of electronic records pulled out of his compound after he was killed last week is revealing thousands of messages and potentially hundreds of email addresses, the AP has learned.

Holed up in his walled compound in northeast Pakistan with no phone or Internet capabilities, bin Laden would type a message on his computer without an Internet connection, then save it using a thumb-sized flash drive. He then passed the flash drive to a trusted courier, who would head for a distant Internet cafe.

At that location, the courier would plug the memory drive into a computer, copy bin Laden’s message into an email and send it. Reversing the process, the courier would copy any incoming email to the flash drive and return to the compound, where bin Laden would read his messages offline.

I’m impressed. It’s hard to maintain this kind of COMSEC discipline.

It was a slow, toilsome process. And it was so meticulous that even veteran intelligence officials have marveled at bin Laden’s ability to maintain it for so long. The U.S. always suspected bin Laden was communicating through couriers but did not anticipate the breadth of his communications as revealed by the materials he left behind.

Navy SEALs hauled away roughly 100 flash memory drives after they killed bin Laden, and officials said they appear to archive the back-and-forth communication between bin Laden and his associates around the world.

Posted on May 18, 2011 at 8:45 AM
