Entries Tagged "history of security"


History of Hacktivism

Nice article by Dorothy Denning.

Hacktivism emerged in the late 1980s, at a time when hacking for fun and profit was becoming a noticeable threat. Initially it took the form of computer viruses and worms that spread messages of protest. A good example of early hacktivism is “Worms Against Nuclear Killers (WANK),” a computer worm that anti-nuclear activists in Australia unleashed into the networks of the National Aeronautics and Space Administration and the US Department of Energy in 1989 to protest the launch of a shuttle carrying radioactive plutonium.

By the mid-1990s, denial of service (DoS) attacks had been added to the hacktivist’s toolbox, usually taking the form of message or traffic floods. In 1994, journalist Joshua Quittner lost access to his e-mail after thousands of messages slamming “capitalistic pig” corporations swamped his inbox, and a group calling itself “The Zippies” flooded e-mail accounts in the United Kingdom with traffic to protest a bill that would have outlawed outdoor dance festivals. Then in 1995, an international group called Strano Network organized a one-hour “Net’strike” against French government websites to protest nuclear and social policies. At the designated time, participants visited the target websites and hit the “reload” button over and over in an attempt to tie up traffic to the sites.

Her conclusion comes as no surprise:

Hacktivism, including state-sponsored or conducted hacktivism, is likely to become an increasingly common method for voicing dissent and taking direct action against adversaries. It offers an easy and inexpensive means to make a statement and inflict harm without seriously risking prosecution under criminal law or a response under international law. Hacking gives non-state actors an attractive alternative to street protests and state actors an appealing substitute for armed attacks. It has become not only a popular means of activism, but also an instrument of national power that is challenging international relations and international law.

Posted on September 21, 2015 at 6:34 AM

The Changing Economics of Surveillance

Cory Doctorow examines the changing economics of surveillance and what it means:

The Stasi employed one snitch for every 50 or 60 people it watched. We can’t be sure of the size of the entire Five Eyes global surveillance workforce, but there are only about 1.4 million Americans with Top Secret clearance, and many of them don’t work at or for the NSA, which means that the number is smaller than that (the other Five Eyes states have much smaller workforces than the US). This million-ish person workforce keeps six or seven billion people under surveillance—a ratio approaching 1:10,000. What’s more, the US has only (“only”!) quadrupled its surveillance budget since the end of the Cold War: tooling up to give the spies their toys wasn’t all that expensive, compared to the number of lives that gear lets them pry into.

IT has been responsible for a 2-3 order of magnitude productivity gain in surveillance efficiency. The Stasi used an army to surveil a nation; the NSA uses a battalion to surveil a planet.
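The quoted figures are easy to sanity-check. Below is a minimal back-of-the-envelope sketch, using the article's own rough numbers (one Stasi informer per ~50–60 people; roughly a million-person Five Eyes workforce; six to seven billion people under watch); the specific values are estimates from the quote, not precise data.

```python
# Back-of-the-envelope check of the surveillance ratios quoted above.
# All figures are the article's rough estimates, not measured data.

stasi_ratio = 1 / 55              # ~one informer per 50-60 East Germans watched
five_eyes_staff = 1.0e6           # upper-bound guess at the surveillance workforce
world_population = 6.5e9          # "six or seven billion people under surveillance"

# Modern coverage: roughly one watcher per several thousand people.
modern_ratio = five_eyes_staff / world_population

# Productivity gain: how many more people each watcher covers today.
gain = stasi_ratio / modern_ratio

print(f"modern ratio: about 1:{world_population / five_eyes_staff:,.0f}")
print(f"productivity gain: roughly {gain:,.0f}x")
```

With these inputs the gain comes out at a bit over a hundredfold, i.e. about two orders of magnitude, consistent with the "2–3 order of magnitude" claim (the third order depends on how conservatively you count the workforce).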

I am reminded of this paper on the changing economics of surveillance.

Posted on March 12, 2015 at 6:22 AM

1971 FBI Burglary

Interesting story:

…burglars took a lock pick and a crowbar and broke into a Federal Bureau of Investigation office in a suburb of Philadelphia, making off with nearly every document inside.

They were never caught, and the stolen documents that they mailed anonymously to newspaper reporters were the first trickle of what would become a flood of revelations about extensive spying and dirty-tricks operations by the F.B.I. against dissident groups.

Video article. And the book.

Interesting precursor to Edward Snowden.

Posted on January 10, 2014 at 6:45 AM

World War II Anecdote about Trust and Security

This is an interesting story from World War II about trust:

Jones notes that the Germans doubted their system because they knew the British could radio false orders to the German bombers with no trouble. As Jones recalls, “In fact we did not do this, but it seemed such an easy countermeasure that the German crews thought that we might, and they therefore began to be suspicious about the instructions that they received.”

The implications of this are perhaps obvious but worth stating nonetheless: a lack of trust can exist even if an adversary fails to exploit a weakness in the system. More importantly, this doubt can become a shadow adversary. According to Jones, “…it was not long before the crews found substance to their theory [that is, their doubt].” In support of this, he offers the anecdote of a German pilot who, returning to base after wandering off course, grumbled that “the British had given him a false order.”

I think about this all the time with respect to our IT systems and the NSA. Even though we don’t know which companies the NSA has compromised—or by what means—knowing that they could have compromised any of them is enough to make us mistrustful of all of them. This is going to make it hard for large companies like Google and Microsoft to get back the trust they lost. Even if they succeed in limiting government surveillance. Even if they succeed in improving their own internal security. The best they’ll be able to say is: “We have secured ourselves from the NSA, except for the parts that we either don’t know about or can’t talk about.”

Posted on December 13, 2013 at 11:20 AM

