Entries Tagged "history of security"

1971 FBI Burglary

Interesting story:

…burglars took a lock pick and a crowbar and broke into a Federal Bureau of Investigation office in a suburb of Philadelphia, making off with nearly every document inside.

They were never caught, and the stolen documents that they mailed anonymously to newspaper reporters were the first trickle of what would become a flood of revelations about extensive spying and dirty-tricks operations by the F.B.I. against dissident groups.

Video article. And the book.

Interesting precursor to Edward Snowden.

Posted on January 10, 2014 at 6:45 AM

World War II Anecdote about Trust and Security

This is an interesting story from World War II about trust:

Jones notes that the Germans doubted their system because they knew the British could radio false orders to the German bombers with no trouble. As Jones recalls, “In fact we did not do this, but it seemed such an easy countermeasure that the German crews thought that we might, and they therefore began to be suspicious about the instructions that they received.”

The implications of this are perhaps obvious but worth stating nonetheless: a lack of trust can exist even if an adversary fails to exploit a weakness in the system. More importantly, this doubt can become a shadow adversary. According to Jones, “…it was not long before the crews found substance to their theory [that is, their doubt].” In support of this, he offers the anecdote of a German pilot who, returning to base after wandering off course, grumbled that “the British had given him a false order.”

I think about this all the time with respect to our IT systems and the NSA. Even though we don’t know which companies the NSA has compromised — or by what means — knowing that they could have compromised any of them is enough to make us mistrustful of all of them. This is going to make it hard for large companies like Google and Microsoft to get back the trust they lost. Even if they succeed in limiting government surveillance. Even if they succeed in improving their own internal security. The best they’ll be able to say is: “We have secured ourselves from the NSA, except for the parts that we either don’t know about or can’t talk about.”

Posted on December 13, 2013 at 11:20 AM

Government Secrecy and the Generation Gap

Big-government secrets require a lot of secret-keepers. As of October 2012, almost 5m people in the US have security clearances, with 1.4m at the top-secret level or higher, according to the Office of the Director of National Intelligence.

Most of these people do not have access to as much information as Edward Snowden, the former National Security Agency contractor turned leaker, or even Chelsea Manning, the former US army soldier previously known as Bradley who was convicted for giving material to WikiLeaks. But a lot of them do — and that may prove the Achilles heel of government. Keeping secrets is an act of loyalty as much as anything else, and that sort of loyalty is becoming harder to find in the younger generations. If the NSA and other intelligence bodies are going to survive in their present form, they are going to have to figure out how to reduce the number of secrets.

As the writer Charles Stross has explained, the old way of keeping intelligence secrets was to make it part of a life-long culture. The intelligence world would recruit people early in their careers and give them jobs for life. It was a private club, one filled with code words and secret knowledge.

You can see part of this in Mr Snowden’s leaked documents. The NSA has its own lingo — the documents are riddled with codenames — its own conferences, its own awards and recognitions. An intelligence career meant that you had access to a new world, one to which “normal” people on the outside were completely oblivious. Membership of the private club meant people were loyal to their organisations, which were in turn loyal back to them.

Those days are gone. Yes, there are still the codenames and the secret knowledge, but a lot of the loyalty is gone. Many jobs in intelligence are now outsourced, and there is no job-for-life culture in the corporate world any more. Workforces are flexible, jobs are interchangeable and people are expendable.

Sure, it is possible to build a career in the classified world of government contracting, but there are no guarantees. Younger people grew up knowing this: there are no employment guarantees anywhere. They see it in their friends. They see it all around them.

Many will also believe in openness, especially the hacker types the NSA needs to recruit. They believe that information wants to be free, and that security comes from public knowledge and debate. Yes, there are important reasons why some intelligence secrets need to be secret, and the NSA culture reinforces secrecy daily. But this is a crowd that is used to radical openness. They have been writing about themselves on the internet for years. They have said very personal things on Twitter; they have had embarrassing photographs of themselves posted on Facebook. They have been dumped by a lover in public. They have overshared in the most compromising ways — and they have got through it. It is a tougher sell convincing this crowd that government secrecy trumps the public’s right to know.

Psychologically, it is hard to be a whistleblower. There is an enormous amount of pressure to be loyal to our peer group: to conform to their beliefs, and not to let them down. Loyalty is a natural human trait; it is one of the social mechanisms we use to thrive in our complex social world. This is why good people sometimes do bad things at work.

When someone becomes a whistleblower, he or she is deliberately eschewing that loyalty. In essence, they are deciding that allegiance to society at large trumps that to peers at work. That is the difficult part. They know their work buddies by name, but “society at large” is amorphous and anonymous. Believing that your bosses ultimately do not care about you makes that switch easier.

Whistleblowing is the civil disobedience of the information age. It is a way that someone without power can make a difference. And in the information age — when everything is stored on computers and potentially accessible with a few keystrokes and mouse clicks — whistleblowing is easier than ever.

Mr Snowden is 30 years old; Manning 25. They are members of the generation we taught not to expect anything long-term from their employers. As such, employers should not expect anything long-term from them. It is still hard to be a whistleblower, but for this generation it is a whole lot easier.

A lot has been written about the problem of over-classification in the US government. It has long been thought of as anti-democratic and a barrier to government oversight. Now we know that it is also a security risk. Organizations such as the NSA need to change their culture of secrecy, and concentrate their security efforts on what truly needs to remain secret. Their default practice of classifying everything is not going to work any more.

Hey, NSA, you’ve got a problem.

This essay previously appeared in the Financial Times.

EDITED TO ADD (9/14): Blog comments on this essay are particularly interesting.

Posted on September 9, 2013 at 1:30 PM

"The Next Generation Communications Privacy Act"

Orin Kerr envisions what the ECPA should look like today:

Abstract: In 1986, Congress enacted the Electronic Communications Privacy Act (ECPA) to regulate government access to Internet communications and records. ECPA is widely seen as outdated, and ECPA reform is now on the Congressional agenda. At the same time, existing reform proposals retain the structure of the 1986 Act and merely tinker with a few small aspects of the statute. This Article offers a thought experiment about what might happen if Congress repealed ECPA and enacted a new privacy statute to replace it.

The new statute would look quite different from ECPA because overlooked changes in Internet technology have dramatically altered the assumptions on which the 1986 Act was based. ECPA was designed for a network world with high storage costs and only local network access. Its design reflects the privacy threats of such a network, including high privacy protection for real-time wiretapping, little protection for non-content records, and no attention to particularity or jurisdiction. Today’s Internet reverses all of these assumptions. Storage costs have plummeted, leading to a reality of almost total storage. Even United States-based services now serve a predominantly foreign customer base. A new statute would need to account for these changes.

The Article contends that a next generation privacy act should contain four features. First, it should impose the same requirement on access to all contents. Second, it should impose particularity requirements on the scope of disclosed metadata. Third, it should impose minimization rules on all accessed content. And fourth, it should impose a two-part territoriality regime with a mandatory rule structure for United States-based users and a permissive regime for users located abroad.

Posted on August 26, 2013 at 7:02 AM

Stories from MI5

This essay is filled with historical MI5 stories — often bizarre, sometimes amusing. My favorite:

It was recently revealed that back in the 1970s — at the height of the obsession with traitors — MI5 trained a specially bred group of gerbils to detect spies. Gerbils have a very acute sense of smell and they were used in interrogations to tell whether the suspects were releasing adrenaline — because that would show they were under stress and lying.

Then they tried the gerbils to see if they could detect terrorists who were about to carry a bomb onto a plane. But the gerbils got confused because they couldn’t tell the difference between the terrorists and ordinary people who were frightened of flying who were also pumping out adrenaline in their sweat.

So the gerbils failed as well.

Posted on August 14, 2013 at 12:06 PM

Pre-9/11 NSA Thinking

This quote is from the Spring 1997 issue of CRYPTOLOG, the internal NSA newsletter. The writer is William J. Black, Jr., the Director’s Special Assistant for Information Warfare.

Specifically, the focus is on the potential abuse of the Government’s applications of this new information technology that will result in an invasion of personal privacy. For us, this is difficult to understand. We are “the government,” and we have no interest in invading the personal privacy of U.S. citizens.

This is from a Seymour Hersh New Yorker interview with NSA Director General Michael Hayden in 1999:

When I asked Hayden about the agency’s capability for unwarranted spying on private citizens — in the unlikely event, of course, that the agency could somehow get the funding, the computer scientists, and the knowledge to begin making sense out of the Internet — his response was heated. “I’m a kid from Pittsburgh with two sons and a daughter who are closet libertarians,” he said. “I am not interested in doing anything that threatens the American people, and threatens the future of this agency. I can’t emphasize enough to you how careful we are. We have to be so careful — to make sure that America is never distrustful of the power and security we can provide.”

It’s easy to assume that both Black and Hayden were lying, but I believe them. I believe that, 15 years ago, the NSA was entirely focused on intercepting communications outside the US.

What changed? What caused the NSA to abandon its non-US charter and start spying on Americans? From what I’ve read, and from a bunch of informal conversations with NSA employees, it was the 9/11 terrorist attacks. That’s when everything changed, the gloves came off, and all the rules were thrown out the window. That the NSA’s interests coincided with the business model of the Internet is just a — lucky, in their view — coincidence.

Posted on June 27, 2013 at 11:49 AM

Hacking the Papal Election

As the College of Cardinals prepares to elect a new pope, security people like me wonder about the process. How does it work, and just how hard would it be to hack the vote?

The rules for papal elections are steeped in tradition. John Paul II last codified them in 1996, and Benedict XVI left the rules largely untouched. The “Universi Dominici Gregis on the Vacancy of the Apostolic See and the Election of the Roman Pontiff” is surprisingly detailed.

Every cardinal younger than 80 is eligible to vote. We expect 117 to be voting. The election takes place in the Sistine Chapel, directed by the church chamberlain. The ballot is entirely paper-based, and all ballot counting is done by hand. Votes are secret, but everything else is open.

First, there’s the “pre-scrutiny” phase.

“At least two or three” paper ballots are given to each cardinal, presumably so that a cardinal has extras in case he makes a mistake. Then nine election officials are randomly selected from the cardinals: three “scrutineers” who count the votes; three “revisers” who verify the results of the scrutineers; and three “infirmarii” who collect the votes from those too sick to be in the chapel. Different sets of officials are chosen randomly for each ballot.

Each cardinal, including the nine officials, writes his selection for pope on a rectangular ballot paper “as far as possible in handwriting that cannot be identified as his.” He then folds the paper lengthwise and holds it aloft for everyone to see.

When everyone has written his vote, the “scrutiny” phase of the election begins. The cardinals proceed to the altar one by one. On the altar is a large chalice with a paten — the shallow metal plate used to hold communion wafers during Mass — resting on top of it. Each cardinal places his folded ballot on the paten. Then he picks up the paten and slides his ballot into the chalice.

If a cardinal cannot walk to the altar, one of the scrutineers — in full view of everyone — does this for him.

If any cardinals are too sick to be in the chapel, the scrutineers give the infirmarii a locked empty box with a slot, and the three infirmarii together collect those votes. If a cardinal is too sick to write, he asks one of the infirmarii to do it for him. The box is opened, and the ballots are placed onto the paten and into the chalice, one at a time.

When all the ballots are in the chalice, the first scrutineer shakes it several times to mix them. Then the third scrutineer transfers the ballots, one by one, from one chalice to another, counting them in the process. If the total number of ballots is not correct, the ballots are burned and everyone votes again.

To count the votes, each ballot is opened, and the vote is read by each scrutineer in turn, the third one aloud. Each scrutineer writes the vote on a tally sheet. This is all done in full view of the cardinals.

The total number of votes cast for each person is written on a separate sheet of paper. Ballots with more than one name (overvotes) are void, and I assume the same is true for ballots with no name written on them (undervotes). Illegible or ambiguous ballots are much more likely, and I presume they are discarded as well.
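The void-ballot rules can be sketched as a simple tally. This is an illustrative model only — each ballot is reduced to the list of names it contains, which is my simplification, not part of the actual procedure:

```python
def tally(ballots):
    """Count votes, voiding overvotes (more than one name) and
    undervotes (no name), per the rules described above.
    Each ballot is modeled as a list of names -- a simplification."""
    counts = {}
    for names in ballots:
        if len(names) != 1:  # overvote or undervote: void
            continue
        counts[names[0]] = counts.get(names[0], 0) + 1
    return counts

print(tally([["A"], ["A"], ["B"], [], ["A", "B"]]))  # {'A': 2, 'B': 1}
```

Note that the void ballots still count toward the total-ballots check in the previous step; they are discarded only at the tallying stage.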

Then there’s the “post-scrutiny” phase. The scrutineers tally the votes and determine whether there’s a winner. We’re not done yet, though.

The revisers verify the entire process: ballots, tallies, everything. And then the ballots are burned. That’s where the smoke comes from: white if a pope has been elected, black if not — the black smoke is created by adding water or a special chemical to the ballots.

Being elected pope requires a two-thirds plus one vote majority. This is where Pope Benedict made a change. Traditionally a two-thirds majority had been required for election. Pope John Paul II changed the rules so that after roughly 12 days of fruitless votes, a simple majority was enough to elect a pope. Benedict reversed this rule.
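As a back-of-the-envelope check, here is what the two-thirds-plus-one threshold works out to for the expected 117 electors. The rounding rule is my reading of the requirement, not a quote from the document:

```python
import math

def election_threshold(electors):
    """Votes needed under a two-thirds-plus-one rule:
    two-thirds of the electorate, rounded up, plus one vote.
    (The exact canonical rounding is an assumption.)"""
    return math.ceil(2 * electors / 3) + 1

print(election_threshold(117))  # 79 of 117 votes
```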

How hard would this be to hack?

First, the system is entirely manual, making it immune to the sorts of technological attacks that make modern voting systems so risky.

Second, the small group of voters — all of whom know each other — makes it impossible for an outsider to affect the voting in any way. The chapel is cleared and locked before voting. No one is going to dress up as a cardinal and sneak into the Sistine Chapel. In short, the voter verification process is about as good as you’re ever going to find.

A cardinal can’t stuff ballots when he votes. The complicated paten-and-chalice ritual ensures that each cardinal votes once — his ballot is visible — and also keeps his hand out of the chalice holding the other votes. Not that they haven’t thought about this: The cardinals are in “choir dress” during the voting, which has translucent lace sleeves under a short red cape, making sleight-of-hand tricks much harder. Additionally, a stuffed ballot would throw off the total count.

The rules anticipate this in another way: “If during the opening of the ballots the scrutineers should discover two ballots folded in such a way that they appear to have been completed by one elector, if these ballots bear the same name, they are counted as one vote; if however they bear two different names, neither vote will be valid; however, in neither of the two cases is the voting session annulled.” This surprises me, as it seems more likely to happen by accident and to result in two cardinals’ votes not being counted.

Ballots from previous votes are burned, which makes it harder to use one to stuff the ballot box. But there’s one wrinkle: “If however a second vote is to take place immediately, the ballots from the first vote will be burned only at the end, together with those from the second vote.” I assume that’s done so there’s only one plume of smoke for the two elections, but it would be more secure to burn each set of ballots before the next round of voting.

The scrutineers are in the best position to modify votes, but it’s difficult. The counting is conducted in public, and there are multiple people checking every step. It’d be possible for the first scrutineer, if he were good at sleight of hand, to swap one ballot paper for another before recording it. Or for the third scrutineer to swap ballots during the counting process. Making the ballots large would make these attacks harder. So would controlling the blank ballots better, and only distributing one to each cardinal per vote. Presumably cardinals change their minds often enough during the voting process that distributing extra blank ballots makes sense.

There’s so much checking and rechecking that it’s just not possible for a scrutineer to misrecord the votes. And since they’re chosen randomly for each ballot, the probability of a cabal being selected is extremely low. More interesting would be to try to attack the system of selecting scrutineers, which isn’t well-defined in the document. Influencing the selection of scrutineers and revisers seems a necessary first step toward influencing the election.
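To put a rough number on that, here is a quick probability sketch of the chance that all three scrutineers for a single ballot are drawn from a colluding subset of the 117 electors. The cabal size is a hypothetical number of mine, not a figure from the rules:

```python
from math import comb

def cabal_controls_scrutineers(cabal_size, electors=117, scrutineers=3):
    """Probability that all scrutineers, chosen at random for one
    ballot, belong to a cabal of the given size."""
    return comb(cabal_size, scrutineers) / comb(electors, scrutineers)

# Even a 10-cardinal cabal controls all three scrutineers on a
# single ballot less than 0.05% of the time:
print(f"{cabal_controls_scrutineers(10):.5f}")  # 0.00046
```

And since the officials are redrawn for every ballot, the chance of a cabal controlling every round of a multi-ballot conclave is smaller still.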

If there’s a weak step, it’s the counting of the ballots.

There’s no real reason to do a precount, and it gives the scrutineer doing the transfer a chance to swap legitimate ballots with others he previously stuffed up his sleeve. Shaking the chalice to randomize the ballots is smart, but putting the ballots in a wire cage and spinning it around would be more secure — albeit less reverent.

I would also add some kind of white-glove treatment to prevent a scrutineer from hiding a pencil lead or pen tip under his fingernails. Although the requirement to write out the candidate’s name in full provides some resistance against this sort of attack.

Probably the biggest risk is complacency. What might seem beautiful in its tradition and ritual during the first ballot could easily become cumbersome and annoying after the twentieth ballot, and there will be a temptation to cut corners to save time. If the Cardinals do that, the election process becomes more vulnerable.

A 1996 change in the process lets the cardinals go back and forth from the chapel to their dorm rooms, instead of being locked in the chapel the whole time, as was done previously. This makes the process slightly less secure but a lot more comfortable.

Of course, one of the infirmarii could do what he wanted when transcribing the vote of an infirm cardinal. There’s no way to prevent that. If the infirm cardinal were concerned about that but not privacy, he could ask all three infirmarii to witness the ballot.

There are also enormous social — religious, actually — disincentives to hacking the vote. The election takes place in a chapel and at an altar. The cardinals swear an oath as they are casting their ballot — further discouragement. The chalice and paten are the implements used to celebrate the Eucharist, the holiest act of the Catholic Church. And the scrutineers are explicitly exhorted not to form any sort of cabal or make any plans to sway the election, under pain of excommunication.

The other major security risk in the process is eavesdropping from the outside world. The election is supposed to be a completely closed process, with nothing communicated to the world except a winner. In today’s high-tech world, this is very difficult. The rules explicitly state that the chapel is to be checked for recording and transmission devices “with the help of trustworthy individuals of proven technical ability.” That was a lot easier in 2005 than it will be in 2013.

What are the lessons here?

First, open systems conducted within a known group make voting fraud much harder. Every step of the election process is observed by everyone, and everyone knows everyone, which makes it harder for someone to get away with anything.

Second, small and simple elections are easier to secure. This kind of process works to elect a pope or a club president, but quickly becomes unwieldy for a large-scale election. The only way manual systems could work for a larger group would be through a pyramid-like mechanism, with small groups reporting their manually obtained results up the chain to more central tabulating authorities.

And third: When an election process is left to develop over the course of a couple of thousand years, you end up with something surprisingly good.

This essay previously appeared on CNN.com, and is an update of an essay I wrote for the previous papal election in 2005.

Posted on February 22, 2013 at 11:12 AM

Hacking Marconi's Wireless in 1903

A great story:

Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. At first, it spelled out just one word repeated over and over. Then it changed into a facetious poem accusing Marconi of “diddling the public”. Their demonstration had been hacked — and this was more than 100 years before the mischief playing out on the internet today. Who was the Royal Institution hacker? How did the cheeky messages get there? And why?

Posted on December 29, 2011 at 9:47 AM