Entries Tagged "academic papers"

Page 61 of 86

Top Secret America on the Post-9/11 Cycle of Fear and Funding

I’m reading Top Secret America: The Rise of the New American Security State, by Dana Priest and William M. Arkin. Both work for The Washington Post. The book talks about the rise of the security-industrial complex in post-9/11 America. This short quote is from Chapter 3:

Such dread was a large part of the post-9/11 decade. A culture of fear had created a culture of spending to control it, which, in turn, had led to a belief that the government had to be able to stop every single plot before it took place, regardless of whether it involved one network of twenty terrorists or one single deranged person. This expectation propelled more spending and even more zero-defect expectations. There were tens of thousands of unsolved murders in the United States by 2010, but few newspapers ever blared this across their front pages or even tried to investigate how their police departments had failed to solve them all over the years. But when it came to terrorism, newspapers and other media outlets amplified each mistake, which amplified the threat, which amplified the fear, which prompted more spending, and on and on and on.

It’s a really good book so far. I recommend it.

EDITED TO ADD (7/13): The project’s website has a lot of interesting information as well.

Posted on June 27, 2012 at 6:35 AM

Resilience

There was a conference on resilience (highlights here, and complete videos here) earlier this year. Here’s an interview with professor Sander van der Leeuw on the topic. Although he never mentions security, it’s all about security.

Any system, whether it’s the financial system, the environmental system, or something else, is always subject to all kinds of pressures. If it can withstand those pressures without really changing its behavior, then it’s robust. When a system can’t withstand them anymore but can deal with them by integrating some changes so the pressures fall off and it can keep going, then it’s resilient. If it comes to the point where the only choices are to make fundamental structural changes or to cease existence, then it becomes vulnerable.

And:

I’ve worked a lot on the end of the Roman Empire. Let’s go back to sometime before the end. The Roman Empire expands all around the Mediterranean and becomes very, very big. It can do that because wherever it goes, it finds and then takes away existing treasure that has been accumulated over the centuries before. That treasure pays for the army, it pays for the administration, it pays for everything. But there’s a certain moment, beginning in the third century, when there is no more treasure to be had. The empire has already taken in all of the civilized world. At that point, to maintain its administration and military and feed its poor, it must depend basically on the annual yield of agriculture, or the actual product of solar energy. At the same time, the empire becomes less attractive because it has less to offer, because it has less extra energy. So now it has to deal with all kinds of unrest, and ultimately, the energy that it has available for its administration is no longer sufficient to maintain the empire. So between the third century and the fifth century, the empire has to make changes. That is the period when it adapts its behavior to all kinds of pressures. That is the resilience period. At the end of that period, when it is no longer able to maintain that, it quickly becomes vulnerable and falls apart.

And here’s sort of a counter-argument, that resilience in national security is overrated:

But it can go wrong. Rebuilding a community that sits in a flood zone shows plenty of resilience but less wisdom. American Idol contestants who have no singing ability but compete year after year are resilient—and delusional. Winston Churchill once joked that success is the ability to go from failure to failure without losing your enthusiasm. But there is a fine line between perseverance and stupidity. Sometimes it is better to give up and pursue a different course than continuing down the same failing path in the face of adversity.

The potential problems are particularly acute in foreign affairs, where effective resilience requires a tireless effort to adapt to changes in the threat environment. In the world of national security, bad things don’t just happen. Thinking, scheming people cause them. Allies and adversaries are constantly devising new ways to serve their own interests and gain advantage. Each player’s move generates countermoves, unintended consequences, and unforeseen ripple effects. Forging an alliance with one insurgent group alienates another. Hardening some terrorist targets leaves others more vulnerable. Supporting today’s freedom fighters could be arming tomorrow’s enemies. Effective resilience in this realm is not just bouncing back and trying again. It is bouncing back, closing the weaknesses that got you there in the first place, and trying things differently the next time. Adaptation is key. A country’s resilience hinges on being able to adapt to continuously changing threats in the world.

Honestly, this essay doesn’t make much sense to me. Yes, resilience can be done badly. Yes, relying solely on resilience can be suboptimal. But that doesn’t make resilience bad, or even overrated.

EDITED TO ADD (7/14): Paper on resilience and control systems.

Posted on June 25, 2012 at 11:17 AM

Far-Fetched Scams Separate the Gullible from Everyone Else

Interesting conclusion by Cormac Herley, in this paper: “Why Do Nigerian Scammers Say They are From Nigeria?”

Abstract: False positives cause many promising detection technologies to be unworkable in practice. Attackers, we show, face this problem too. In deciding who to attack, true positives are targets successfully attacked, while false positives are those that are attacked but yield nothing. This allows us to view the attacker’s problem as a binary classification. The most profitable strategy requires accurately distinguishing viable from non-viable users, and balancing the relative costs of true and false positives. We show that as victim density decreases, the fraction of viable users that can be profitably attacked drops dramatically. For example, a 10x reduction in density can produce a 1000x reduction in the number of victims found. At very low victim densities the attacker faces a seemingly intractable Catch-22: unless he can distinguish viable from non-viable users with great accuracy the attacker cannot find enough victims to be profitable. However, only by finding large numbers of victims can he learn how to accurately distinguish the two.

Finally, this approach suggests an answer to the question in the title. Far-fetched tales of West African riches strike most as comical. Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.
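Herley’s argument is easy to see in a toy signal-detection model. This is my own illustrative sketch, not the paper’s actual numbers: I assume viable users score around 2 and non-viable users around 0 (standard normal noise) on some “viability signal,” and that the attacker attacks everyone above a threshold, earning a fixed gain per true victim and paying a fixed cost per false positive.

```python
# Toy model of the attacker as a binary classifier (illustrative
# assumptions: viable scores ~ N(2, 1), non-viable ~ N(0, 1),
# gain 1.0 per victim, cost 0.05 per false positive).
import math

def q(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def victims_per_user(d, mu=2.0, gain=1.0, cost=0.05):
    """Victims found per candidate user at the profit-maximizing threshold."""
    best_profit, best_victims = 0.0, 0.0   # attacking nobody earns 0
    for i in range(400):
        t = -2.0 + i * 0.02               # scan thresholds from -2 to 6
        tp = d * q(t - mu)                # viable users attacked (victims)
        fp = (1 - d) * q(t)               # non-viable users attacked
        profit = gain * tp - cost * fp
        if profit > best_profit:
            best_profit, best_victims = profit, tp
    return best_victims

high = victims_per_user(0.01)    # 1% of users are viable
low = victims_per_user(0.001)    # 0.1% of users are viable
print(high / low)  # a 10x drop in density cuts victims far more than 10x
```

The lower the density, the further the attacker must raise his threshold to stay profitable, so the fraction of viable users he can reach collapses super-linearly — which is exactly why an email that repels everyone but the most gullible is a rational filter.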

Posted on June 21, 2012 at 1:03 PM

Teaching the Security Mindset

In 2008, I wrote about the security mindset and how difficult it is to teach. Two professors teaching a cyberwarfare class gave an exam where they expected their students to cheat:

Our variation of the Kobayashi Maru utilized a deliberately unfair exam—write the first 100 digits of pi (3.14159…) from memory—and took place in the pilot offering of a governmental cyber warfare course. The topic of the test itself was somewhat arbitrary; we only sought a scenario that would be too challenging to meet through traditional studying. By design, students were given little advance warning for the exam. Insurrection immediately followed. Why were we giving them such an unfair exam? What conceivable purpose would it serve? Now that we had their attention, we informed the class that we had no expectation that they would actually memorize the digits of pi; we expected them to cheat. How they chose to cheat was entirely up to the student. Collaborative cheating was also encouraged, but importantly, students would fail the exam if caught.

Excerpt:

Students took diverse approaches to cheating, and of the 20 students in the course, none were caught. One student used his Mandarin Chinese skills to hide the answers. Another built a small PowerPoint presentation consisting of three slides (all black slide, digits of pi slide, all black slide). The idea being that the student could flip to the answer when the proctor wasn’t looking and easily flip forwards or backward to a blank screen to hide the answer. Several students chose to hide answers on a slip of paper under the keyboards on their desks. One student hand wrote the answers on a blank sheet of paper (in advance) and simply turned it in, exploiting the fact that we didn’t pass out a formal exam sheet. Another just memorized the first ten digits of pi and randomly filled in the rest, assuming the instructors would be too lazy to check every digit. His assumption was correct.

Read the whole paper. This is the conclusion:

Teach yourself and your students to cheat. We’ve always been taught to color inside the lines, stick to the rules, and never, ever, cheat. In seeking cyber security, we must drop that mindset. It is difficult to defeat a creative and determined adversary who must find only a single flaw among myriad defensive measures to be successful. We must not tie our hands, and our intellects, at the same time. If we truly wish to create the best possible information security professionals, being able to think like an adversary is an essential skill. Cheating exercises provide long term remembrance, teach students how to effectively evaluate a system, and motivate them to think imaginatively. Cheating will challenge students’ assumptions about security and the trust models they envision. Some will find the process uncomfortable. That is OK and by design. For it is only by learning the thought processes of our adversaries that we can hope to unleash the creative thinking needed to build the best secure systems, become effective at red teaming and penetration testing, defend against attacks, and conduct ethical hacking activities.

Here’s a Boing Boing post, including a video of a presentation about the exercise.

Posted on June 13, 2012 at 12:08 PM

Changing Surveillance Techniques for Changed Communications Technologies

New paper by Peter P. Swire—“From Real-Time Intercepts to Stored Records: Why Encryption Drives the Government to Seek Access to the Cloud”:

Abstract: This paper explains how changing technology, especially the rising adoption of encryption, is shifting law enforcement and national security lawful access to far greater emphasis on stored records, notably records stored in the cloud. The major and growing reliance on surveillance access to stored records results from the following changes:

(1) Encryption. Adoption of strong encryption is becoming much more common for data and voice communications, via virtual private networks, encrypted webmail, SSL web sessions, and encrypted Voice over IP voice communications.

(2) Declining effectiveness of traditional wiretaps. Traditional wiretap techniques at the ISP or local telephone network increasingly encounter these encrypted communications, blocking the effectiveness of the traditional techniques.

(3) New importance of the cloud. Government access to communications thus increasingly relies on a new and limited set of methods, notably featuring access to stored records in the cloud.

(4) The “haves” and “have-nots.” The first three changes create a new division between the “haves” and “have-nots” when it comes to government access to communications. The “have-nots” become increasingly dependent, for access to communications, on cooperation from the “have” jurisdictions.

Part 1 of the paper describes the changing technology of wiretaps and government access. Part 2 documents the growing adoption of strong encryption in a wide and growing range of settings of interest to government agencies. Part 3 explains how these technological trends create a major shift from real-time intercepts to stored records, especially in the cloud.

Posted on June 11, 2012 at 6:36 AM

Backdoor Found (Maybe) in Chinese-Made Military Silicon Chips

We all knew this was possible, but researchers have found the exploit in the wild:

Claims were made by the intelligence agencies around the world, from MI5, NSA and IARPA, that silicon chips could be infected. We developed breakthrough silicon chip scanning technology to investigate these claims. We chose an American military chip that is highly secure with sophisticated encryption standard, manufactured in China. Our aim was to perform advanced code breaking and to see if there were any unexpected features on the chip. We scanned the silicon chip in an affordable time and found a previously unknown backdoor inserted by the manufacturer. This backdoor has a key, which we were able to extract. If you use this key you can disable the chip or reprogram it at will, even if locked by the user with their own key. This particular chip is prevalent in many systems from weapons, nuclear power plants to public transport. In other words, this backdoor access could be turned into an advanced Stuxnet weapon to attack potentially millions of systems. The scale and range of possible attacks has huge implications for National Security and public infrastructure.

Here’s the draft paper:

Abstract. This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips. The backdoor was found to exist on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), a technique pioneered by our sponsor, we were able to extract the secret key to activate the backdoor. This way an attacker can disable all the security on the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact it can be easily compromised or it will have to be physically replaced after a redesign of the silicon itself.

The chip in question was designed in the U.S. by a U.S. company, but manufactured in China. News stories. Comment threads.

One researcher maintains that this is not malicious:

Backdoors are a common problem in software. About 20% of home routers have a backdoor in them, and 50% of industrial control computers have a backdoor. The cause of these backdoors isn’t malicious, but a byproduct of software complexity. Systems need to be debugged before being shipped to customers. Therefore, the software contains debuggers. Often, programmers forget to disable the debugger backdoors before shipping. This problem is notoriously bad for all embedded operating systems (VxWorks, QNX, WinCE, etc.).

[…]

It could just be part of the original JTAG building-block. Actel didn’t design their own, but instead purchased the JTAG design and placed it on their chips. They are not aware of precisely all the functionality in that JTAG block, or how it might interact with the rest of the system.

But I’m betting that Microsemi/Actel know about the functionality, but thought of it as a debug feature, rather than a backdoor.

It’s remotely possible that the Chinese manufacturer added the functionality, but highly improbable. It’s prohibitively difficult to change a chip design to add functionality of this complexity. On the other hand, it’s easy for a manufacturer to flip bits. Consider that the functionality is part of the design, but that Actel intended to disable it by flipping a bit turning it off. A manufacturer could easily flip a bit and turn it back on again. In other words, it’s extraordinarily difficult to add complex new functionality, but they may get lucky and be able to make small tweaks to accomplish their goals.

EDITED TO ADD (5/29): Two more articles.

EDITED TO ADD (6/8): Three more articles.

EDITED TO ADD (6/10): A response from the chip manufacturer.

The researchers’ assertion is that with the discovery of a security key, a hacker can gain access to a privileged internal test facility typically reserved for initial factory testing and failure analysis. Microsemi verifies that the internal test facility is disabled for all shipped devices. The internal test mode can only be entered in a customer-programmed device when the customer supplies their passcode, thus preventing unauthorized access by Microsemi or anyone else. In addition, Microsemi’s customers who are concerned about the possibility of a hacker using DPA have the ability to program their FPGAs with its highest level of security settings. This security setting will disable the use of any type of passcode to gain access to all device configuration, including the internal test facility.

A response from the researchers.

In order to gain access to the backdoor and other features a special key is required. This key has very robust DPA protection, in fact, one of the best silicon-level protections we have ever encountered. With our breakthrough PEA technique we extracted the key in one day and we found that the key is the same in all ProASIC3, Igloo, Fusion and SmartFusion FPGAs. Customers have an option to program their chosen passcode to increase the security; however, Actel/Microsemi does not tell its customers that a special fuse must be programmed in order to get the backdoor protected with both the passcode and backdoor keys. At the same time, the passcode key can be extracted with our PEA technique which is public and covered in our patent so everyone can independently verify our claims. That means that given physical access to the device an attacker can extract all the embedded IP within hours.

There is an option for the highest level of security settings – Permanent Lock. However, if the AES reprogramming option is left, it still exposes the device to IP stealing. If not, the Permanent Lock itself is vulnerable to fault attacks and can be disabled, opening up the path to the backdoor access as before, but without the need for any passcode.

Posted on May 29, 2012 at 2:07 PM

Privacy Concerns Around "Social Reading"

Interesting paper: “The Perils of Social Reading,” by Neil M. Richards, from the Georgetown Law Journal.

Abstract: Our law currently treats records of our reading habits under two contradictory rules—rules mandating confidentiality, and rules permitting disclosure. Recently, the rise of the social Internet has created more of these records and more pressures on when and how they should be shared. Companies like Facebook, in collaboration with many newspapers, have ushered in the era of “social reading,” in which what we read may be “frictionlessly shared” with our friends and acquaintances. Disclosure and sharing are on the rise.

This Article sounds a cautionary note about social reading and frictionless sharing. Social reading can be good, but the ways in which we set up the defaults for sharing matter a great deal. Our reader records implicate our intellectual privacy—the protection of reading from surveillance and interference so that we can read freely, widely, and without inhibition. I argue that the choices we make about how to share have real consequences, and that “frictionless sharing” is not frictionless, nor is it really sharing. Although sharing is important, the sharing of our reading habits is special. Such sharing should be conscious and only occur after meaningful notice.

The stakes in this debate are immense. We are quite literally rewiring the public and private spheres for a new century. Choices we make now about the boundaries between our individual and social selves, between consumers and companies, between citizens and the state, will have unforeseeable ramifications for the societies our children and grandchildren inherit. We should make choices that preserve our intellectual privacy, not destroy it. This Article suggests practical ways to do just that.

Posted on May 23, 2012 at 7:25 AM

Alan Turing Cryptanalysis Papers

GCHQ, the UK government’s communications headquarters, has released two new—well, 70 years old, but new to us—cryptanalysis documents by Alan Turing.

The papers, one entitled “The Applications of Probability to Crypt,” and the other entitled “Paper on the Statistics of Repetitions,” discuss mathematical approaches to code breaking.

[…]

According to the GCHQ mathematician, who identified himself only as Richard, the papers detailed using “mathematical analysis to try and determine which are the more likely settings so that they can be tried as quickly as possible.”

The papers don’t seem to be online yet, but here’s their National Archives data.

EDITED TO ADD (5/12): The papers are available for download at GBP 3.50 each.

Posted on April 23, 2012 at 6:18 AM

Outliers in Intelligence Analysis

From the CIA journal Studies in Intelligence: “Capturing the Potential of Outlier Ideas in the Intelligence Community.”

In war you will generally find that the enemy has at any time three courses of action open to him. Of those three, he will invariably choose the fourth.

—Helmuth von Moltke

With that quip, von Moltke may have launched a spirited debate within his intelligence staff. The modern version of the debate can be said to exist in the cottage industry that has been built on the examination and explanation of intelligence failures, surprises, omissions, and shortcomings. The contributions of notable scholars to the discussion span multiple analytic generations, and each expresses points with equal measures of regret, fervor, and hope. Their diagnoses and their prescriptions are sadly similar, however, suggesting that the lessons of the past are lost on each succeeding generation of analysts and managers or that the processes and culture of intelligence analysis are incapable of evolution. It is with the same regret, fervor, and hope that we offer our own observations on avoiding intelligence omissions and surprise. Our intent is to explore the ingrained bias against outliers, the potential utility of outliers, and strategies for deliberately considering them.

Posted on April 17, 2012 at 6:15 AM

How Information Warfare Changes Warfare

Really interesting paper on the moral and ethical implications of cyberwar, and the use of information technology in war (drones, for example):

“Information Warfare: A Philosophical Perspective,” by Mariarosaria Taddeo, Philosophy and Technology, 2012.

Abstract: This paper focuses on Information Warfare—the warfare characterised by the use of information and communication technologies. This is a fast growing phenomenon, which poses a number of issues ranging from the military use of such technologies to its political and ethical implications. The paper presents a conceptual analysis of this phenomenon with the goal of investigating its nature. Such an analysis is deemed to be necessary in order to lay the groundwork for future investigations into this topic, addressing the ethical problems engendered by this kind of warfare. The conceptual analysis is developed in three parts. First, it delineates the relation between Information Warfare and the Information revolution. It then focuses attention on the effects that the diffusion of this phenomenon has on the concepts of war. On the basis of this analysis, a definition of Information Warfare is provided as a phenomenon not necessarily sanguinary and violent, and rather transversal concerning the environment in which it is waged, the way it is waged and the ontological and social status of its agents. The paper concludes by taking into consideration the Just War Theory and the problems arising from its application to the case of Information Warfare.

Here’s an interview with the author.

Posted on April 16, 2012 at 5:55 AM

