Blog: September 2011 Archives

Insecure Chrome Extensions

An analysis of extensions to the Chrome browser shows that about a quarter of them are insecure:

We reviewed 100 Chrome extensions and found that 27 of the 100 extensions leak all of their privileges to a web or WiFi attacker. Bugs in extensions put users at risk by leaking private information (like passwords and history) to web and WiFi attackers. Web sites may be evil or contain malicious content from users or advertisers. Attackers on public WiFi networks (like in coffee shops and airports) can change all HTTP content.

Posted on September 29, 2011 at 7:07 AM

Tor Arms Race

Iran blocks Tor, and Tor releases a workaround on the same day.

How did the filter work technically? Tor tries to make its traffic look like a web browser talking to an https web server, but if you look carefully enough you can tell some differences. In this case, the characteristic of Tor’s SSL handshake they looked at was the expiry time for our SSL session certificates: we rotate the session certificates every two hours, whereas normal SSL certificates you get from a certificate authority typically last a year or more. The fix was to simply write a larger expiration time on the certificates, so our certs have more plausible expiry times.
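To make that fingerprint concrete, here is a rough sketch of how a censor’s DPI gear might flag handshakes by certificate lifetime. This is my own illustration, not Tor’s or the filter’s code, and it assumes the third-party Python cryptography package:

    import socket, ssl
    from cryptography import x509

    def looks_short_lived(host, port=443, max_days=30):
        """Flag servers whose certificates expire unusually soon."""
        ctx = ssl.create_default_context()
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE      # inspect the cert, don't validate it
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der)
        lifetime = cert.not_valid_after - cert.not_valid_before
        return lifetime.days < max_days      # Tor's old session certs lived about two hours

    # A CA-issued certificate typically lasts a year or more, so this prints False:
    print(looks_short_lived("www.example.com"))

Tor’s fix, writing more plausible expiry times into its certificates, defeats exactly this kind of check.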

Posted on September 26, 2011 at 6:41 AM

Friday Squid Blogging: Sex Life of Deep-Sea Squid

There’s evidence of indiscriminate fertilization in deep-sea squid. They mate with any other squid they encounter, male or female.

This unusual behaviour, they said, may be explained by the fact the squid is boosting its chances of successfully passing on its genes in the challenging environment it lives in.

In the Royal Society paper the team writes: “In the deep, dark habitat where O. deletron lives, potential mates are few and far between.

“We suggest that same-sex mating behaviour by O. deletron is part of a reproductive strategy that maximises success by inducing males to indiscriminately and swiftly inseminate every [squid] that they encounter.”

Basically, they can’t tell males from females in the dark waters, so it just makes sense to mate with everybody.

The press is reporting this as homosexuality or bisexuality, but it’s not. It’s indiscriminate fertilization. PZ Myers explains.

Posted on September 23, 2011 at 4:28 PM

Man-in-the-Middle Attack Against SSL 3.0/TLS 1.0

It’s the Browser Exploit Against SSL/TLS Tool, or BEAST:

The tool is based on a blockwise-adaptive chosen-plaintext attack, a man-in-the-middle approach that injects segments of plain text sent by the target’s browser into the encrypted request stream to determine the shared key. The code can be injected into the user’s browser through JavaScript associated with a malicious advertisement distributed through a Web ad service or an IFRAME in a linkjacked site, ad, or other scripted elements on a webpage.

Using the known text blocks, BEAST can then use information collected to decrypt the target’s AES-encrypted requests, including encrypted cookies, and then hijack the no-longer secure connection. That decryption happens slowly, however; BEAST currently needs sessions of at least a half-hour to break cookies using keys over 1,000 characters long.

The attack, according to Duong, is capable of intercepting sessions with PayPal and other services that still use TLS 1.0, which would be most secure sites, since follow-on versions of TLS aren’t yet supported in most browsers or Web server implementations.

While Rizzo and Duong believe BEAST is the first attack against SSL 3.0 that decrypts HTTPS requests, the vulnerability that BEAST exploits is well-known; BT chief security technology officer Bruce Schneier and UC Berkeley’s David Wagner pointed out in a 1999 analysis of SSL 3.0 that “SSL will provide a lot of known plain-text to the eavesdropper, but there seems to be no better alternative.” And TLS’s vulnerability to man-in-the-middle attacks was made public in 2009. The IETF’s TLS Working Group published a fix for the problem, but the fix is unsupported by SSL.
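The underlying weakness is that TLS 1.0 uses the last ciphertext block of one record as the CBC initialization vector for the next, so the IV is predictable. Here is a minimal toy sketch of why that matters; it is my own illustration using the Python cryptography package, not the BEAST tool itself:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    iv = os.urandom(16)                   # IV of the very first record

    def encrypt_record(block):
        """Encrypt one 16-byte record, chaining IVs the way TLS 1.0 does."""
        global iv
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ct = enc.update(block) + enc.finalize()
        iv = ct[-16:]                     # next record's IV is predictable
        return ct

    secret = b"Cookie=SECRET!!!"          # the block the attacker wants to learn
    iv_used = iv                          # attacker saw this IV on the wire
    ct_secret = encrypt_record(secret)

    # The attacker guesses the secret and injects a crafted block; if the
    # guess is right, its ciphertext matches the one observed earlier.
    guess = b"Cookie=SECRET!!!"
    crafted = bytes(a ^ b ^ c for a, b, c in zip(guess, iv_used, iv))
    print(encrypt_record(crafted) == ct_secret)   # True only if guess == secret

BEAST automates this guess-and-check, aligning the unknown data so only one byte per block has to be guessed at a time, which is why it needs a long-lived session to recover a cookie.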

Another article.

EDITED TO ADD: Good analysis.

Posted on September 23, 2011 at 1:37 PM

Three Emerging Cyber Threats

On Monday, I participated in a panel at the Information Systems Forum in Berlin. The moderator asked us what the top three emerging threats were in cyberspace. I went last, and decided to focus on the top three threats that are not criminal:

  1. The Rise of Big Data. By this I mean industries that trade on our data. These include traditional credit bureaus and data brokers, but also data-collection companies like Facebook and Google. They’re collecting more and more data about everyone, often without their knowledge and explicit consent, and selling it far and wide: to both other corporate users and to government. Big data is becoming a powerful industry, resisting any calls to regulate its behavior.
  2. Ill-Conceived Regulations from Law Enforcement. We’re seeing increasing calls to regulate cyberspace in the mistaken belief that this will fight crime. I’m thinking about data retention laws, Internet kill switches, and calls to eliminate anonymity. None of these will work, and they’ll all make us less safe.
  3. The Cyberwar Arms Race. I’m not worried about cyberwar, but I am worried about the proliferation of cyber weapons. Arms races are fundamentally destabilizing, especially when their development can be so easily hidden. I worry about cyberweapons being triggered by accident, cyberweapons getting into the wrong hands and being triggered on purpose, and the inability to reliably trace a cyberweapon leading to increased distrust. Plus, arms races are expensive.

That’s my list, and they all have the potential to be more dangerous than cybercriminals.

Posted on September 23, 2011 at 6:53 AM

An Interesting Software Liability Proposal

This proposal is worth thinking about.

Clause 1. If you deliver software with complete and buildable source code and a license that allows disabling any functionality or code by the licensee, then your liability is limited to a refund.

This clause addresses how to avoid liability: license your users to inspect and chop off any and all bits of your software they do not trust or do not want to run, and make it practical for them to do so.

The word disabling is chosen very carefully. This clause grants no permission to change or modify how the program works, only to disable the parts of it that the licensee does not want. There is also no requirement that the licensee actually look at the source code, only that it was received.

All other copyrights are still yours to control, and your license can contain any language and restriction you care to include, leaving the situation unchanged with respect to hardware locking, confidentiality, secrets, software piracy, magic numbers, etc. Free and open source software is obviously covered by this clause, and it does not change its legal situation in any way.

Clause 2. In any other case, you are liable for whatever damage your software causes when used normally.

If you do not want to accept the information sharing in Clause 1, you would fall under Clause 2 and have to live with normal product liability, just as manufacturers of cars, blenders, chainsaws, and hot coffee do.

Posted on September 23, 2011 at 5:22 AM

U.S.-Australia Cyberwar Treaty

The long-standing ANZUS military treaty now includes cyberspace attacks:

According to Reuters, the decision was made in discussions between the two countries this week. The extension of the treaty would mean that a cyber-attack on either country would be considered an attack on both.

Exactly what this means in practice is less clear: practically every government with a connection to the Internet is subject to pretty much constant attack, and both Australia and America regularly accuse China and North Korea of playing host to many such attacks (China just as regularly denies any government involvement in Internet-borne attacks).

According to Reuters, it’s the first time any non-NATO defense pact has extended to the Internet. US Defence Secretary Leon Panetta is quoted as saying “cyber is the battlefield of the future.”

Posted on September 22, 2011 at 7:09 AM

Shifting Risk Instead of Reducing Risk

Risks of teen driving:

For more than a decade, California and other states have kept their newest teen drivers on a tight leash, restricting the hours when they can get behind the wheel and whom they can bring along as passengers. Public officials were confident that their get-tough policies were saving lives.

Now, though, a nationwide analysis of crash data suggests that the restrictions may have backfired: While the number of fatal crashes among 16- and 17-year-old drivers has fallen, deadly accidents among 18-to-19-year-olds have risen by an almost equal amount. In effect, experts say, the programs that dole out driving privileges in stages, however well-intentioned, have merely shifted the ranks of inexperienced drivers from younger to older teens.

Posted on September 21, 2011 at 6:58 AM

Complex Electronic Banking Fraud in Malaysia

The interesting thing about this attack is how it abuses a variety of different security systems.

Investigations revealed that the syndicate members had managed to retrieve personal particulars including the usernames, passwords from an online banking kiosk at a bank in Petaling Jaya and even obtained the transaction authorisation code (TAC) which is sent out by the bank to the registered handphones of online banking users to execute cash transfers from their victims’ accounts.

Federal CCID director, Commissioner Datuk Syed Ismail Syed Azizan told a press conference today that the syndicate had skimmed the personal online details of those who had used the kiosk by secretly attaching a thumbdrive with spy software which downloaded and stored the usernames and passwords when the bank customers logged into their online accounts.

He said the syndicate members would discreetly remove the thumbdrive and later download the confidential information onto their computer, from where they logged on to user accounts to find out the registered handphone numbers of the bank customers.

Then, using fake MyKad, police report or authorisation letters from the target customers, the crooks would report the handphones lost and applied for new SIM cards from the unsuspecting telecommunications companies.

“This new tactic is a combination of phishing and hijacking SIM cards. Obviously when a new SIM card is issued, the one used by the victim will be cancelled and this will raise their suspicions,” Syed Ismail said.

“To counter this, a syndicate member on the pretext of being a telco staff, will call up their victims a day ahead to inform them that they will face interruptions in their mobilephone services for about two hours.

It is during these two hours that the syndicate would get the new SIM card and obtain the TAC numbers with which they can transfer all available cash in the victims’ accounts to another account of an accomplice. The biggest single loss was RM50,000,” he said.

MyKad is the Malaysian national ID card.

The criminals use a fake card to get a new cell phone SIM, which they then use to authenticate a fraudulent bank transfer made with stolen credentials.

Posted on September 20, 2011 at 6:36 AM

The Effectiveness of Plagiarism Detection Software

As you’d expect, it’s not very good:

But this measure [Turnitin] captures only the most flagrant form of plagiarism, where passages are copied from one document and pasted unchanged into another. Just as shoplifters slip the goods they steal under coats or into pocketbooks, most plagiarists tinker with the passages they copy before claiming them as their own. In other words, they cloak their thefts by scrambling the passages and right-clicking on words to find synonyms. This isn’t writing; it is copying, cloaking and pasting; and it’s plagiarism.

Kerry Segrave is a right-clicker, changing “cellar of store” to “basement of shop.” Similarly, he changes goods to items, articles to goods, accomplice to confederate, neighborhood to area, and women to females. He is also a scrambler, changing “accidentally fallen” to “fallen accidentally;” “only with” to “with only;” and, “Leon and Klein,” to “Klein and Leon.” And, he scrambles phrases within sentences; in other words, the phrases of his sentences are sometimes scrambled.

[…]

Turnitin offers another product called WriteCheck that allows students to “check [their] work against the same database as Turnitin.” I signed up and submitted the early pages of Shoplifting. WriteCheck matched many of Shoplifting’s phrases to those of the New York Times articles in its library of student papers. Remember, I submitted them as a student paper to help Turnitin find them; now WriteCheck has them too! WriteCheck warned me that “a significant amount of this paper is unoriginal” and advised me to revise it. After a few hours of right-clicking and scrambling, I resubmitted it and WriteCheck said it was okay, being cleansed of easily recognizable plagiarism.

Turnitin is playing both sides of the fence, helping instructors identify plagiarists while helping plagiarists avoid detection. It is akin to selling security systems to stores while allowing shoplifters to test whether putting tagged goods into bags lined with aluminum thwarts the detectors.

Posted on September 19, 2011 at 6:35 AM

Identifying Speakers in Encrypted Voice Communication

I’ve already written about how it is possible to detect words and phrases in encrypted VoIP calls. Turns out it’s possible to detect speakers as well:

Abstract: Most of the voice over IP (VoIP) traffic is encrypted prior to its transmission over the Internet. This makes the identity tracing of perpetrators during forensic investigations a challenging task since conventional speaker recognition techniques are limited to unencrypted speech communications. In this paper, we propose techniques for speaker identification and verification from encrypted VoIP conversations. Our experimental results show that the proposed techniques can correctly identify the actual speaker for 70-75% of the time among a group of 10 potential suspects. We also achieve more than 10 fold improvement over random guessing in identifying a perpetrator in a group of 20 potential suspects. An equal error rate of 17% in case of speaker verification on the CSLU speaker recognition corpus is achieved.

Posted on September 16, 2011 at 12:31 PM

Domain-in-the-Middle Attacks

It’s an easy attack. Register a domain that’s like your target except for a typo. So it would be countrpane.com instead of counterpane.com, or mailcounterpane.com instead of mail.counterpane.com. Then, when someone mistypes an e-mail address to someone at that company and you receive it, just forward it on as if nothing happened.

These are called “doppelganger domains.”
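Enumerating the missing-dot candidates for your own domains is trivial; here is a hypothetical sketch in Python (the domain names are just the examples mentioned in the articles, and this is not the researchers’ code):

    subdomains = ["mail.counterpane.com", "na.yahoo.com", "ks.cisco.com"]

    def doppelgangers(fqdn):
        """Drop one internal dot at a time: mail.counterpane.com -> mailcounterpane.com."""
        labels = fqdn.split(".")
        for i in range(len(labels) - 2):              # never touch the final TLD dot
            yield ".".join(labels[: i + 1]) + ".".join(labels[i + 1 :])

    for name in subdomains:
        for candidate in doppelgangers(name):
            print(candidate)     # mailcounterpane.com, nayahoo.com, kscisco.com

A company could run something like this over its subdomain list and register, or at least monitor, whatever comes out.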

To test the vulnerability, the researchers set up 30 doppelganger accounts for various firms and found that the accounts attracted 120,000 e-mails in the six-month testing period.

The e-mails they collected included one that listed the full configuration details for the external Cisco routers of a large IT consulting firm, along with passwords for accessing the devices. Another e-mail going to a company outside the U.S. that manages motorway toll systems provided information for obtaining full VPN access into the system that supports the road tollways. The e-mail included information about the VPN software, usernames, and passwords.

They’re already being used to spy on companies:

Some of the companies whose doppelganger domains have already been taken by entities in China included Cisco, Dell, HP, IBM, Intel, Yahoo and Manpower. For example, someone whose registration data suggests he’s in China registered kscisco.com, a doppelganger for ks.cisco.com. Another user who appeared to be in China registered nayahoo.com, a variant of the legitimate na.yahoo.com (a subdomain for Yahoo in Namibia).

Kim said that out of the 30 doppelganger domains they set up, only one company noticed when they registered the domain and came after them threatening a lawsuit unless they released ownership of it, which they did.

He also said that out of the 120,000 e-mails that people had mistakenly sent to their doppelganger domains, only two senders indicated they were aware of the mistake. One of the senders sent a follow-up e-mail with a question mark in it, perhaps to see if it would bounce back. The other user sent out an e-mail query to the same address with a question asking where the e-mail had landed.

Defenses are few:

Companies can mitigate the issue by buying up any doppelganger domains that are still available for their company. But in the case of domains that may already have been purchased by outsiders, Kim recommends that companies configure their networks to block DNS and internal e-mails sent by employees that might get incorrectly addressed to the doppelganger domains. This won’t prevent someone from intercepting e-mail that outsiders send to the doppelganger domains, but at least it will cut down on the amount of e-mail the intruders might grab.

I suppose you can buy up the most common typos, but there will always be ones you didn’t think about—especially if you use a lot of subdomains.

Posted on September 16, 2011 at 5:22 AM

Sharing Security Information and the Prisoner's Dilemma

New paper: Dengpan Liu, Yonghua Ji, and Vijay Mookerjee (2011), “Knowledge Sharing and Investment Decisions in Information Security,” Decision Support Systems, in press.

Abstract: We study the relationship between decisions made by two similar firms pertaining to knowledge sharing and investment in information security. The analysis shows that the nature of information assets possessed by the two firms, either complementary or substitutable, plays a crucial role in influencing these decisions. In the complementary case, we show that the firms have a natural incentive to share security knowledge and no external influence to induce sharing is needed. However, the investment levels chosen in equilibrium are lower than optimal, an aberration that can be corrected using coordination mechanisms that reward the firms for increasing their investment levels. In the substitutable case, the firms fall into a Prisoners’ Dilemma trap where they do not share security knowledge in equilibrium, despite the fact that it is beneficial for both of them to do so. Here, the beneficial role of a social planner to encourage the firms to share is indicated. However, even when the firms share in accordance to the recommendations of a social planner, the level of investment chosen by the firms is sub-optimal. The firms either enter into an “arms race” where they over-invest or reenact the under-investment behavior found in the complementary case. Once again, this sub-optimal behavior can be corrected using incentive mechanisms that penalize for over-investment and reward for increasing the investment level in regions of under-investment. The proposed coordination schemes, with some modifications, achieve the socially optimal outcome even when the firms are risk-averse. Implications for information security vendors, firms, and social planner are discussed.
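The payoffs below are invented purely to illustrate the substitutable-assets trap the abstract describes, not taken from the paper: hoarding knowledge is each firm’s best response no matter what the other does, even though mutual sharing would leave both better off.

    # 2x2 illustration: utilities are (row firm, column firm); numbers are made up.
    payoff = {
        ("share", "share"): (3, 3),
        ("share", "hoard"): (0, 4),
        ("hoard", "share"): (4, 0),
        ("hoard", "hoard"): (1, 1),
    }

    def best_response(opponent_action):
        return max(["share", "hoard"],
                   key=lambda a: payoff[(a, opponent_action)][0])

    for other in ("share", "hoard"):
        print("if the other firm plays", other, "-> best response:", best_response(other))
    # Both lines print "hoard": (hoard, hoard) is the equilibrium, even though
    # (share, share) pays each firm more.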

Posted on September 15, 2011 at 12:45 PM

A Status Report: "Liars and Outliers"

It’s been a long hard year, but the book is almost finished. It’s certainly the most difficult book I’ve ever written, mostly because I’ve had to learn academic fields I don’t have a lot of experience in. But the book is finally coming together as a coherent whole, and I am optimistic that the results will prove to be worth the effort.

Table of contents:

1. Introduction
2. A Natural History of Security
3. The Evolution of Cooperation
4. A Social History of Security
5. Societal Dilemmas
6. Societal Security
7. Moral Societal Security
8. Reputational Societal Security
9. Institutional Societal Security
10. Technological Societal Security
11. Competing Interests
12. Organizations and Societal Dilemmas
13. Corporations and Societal Dilemmas
14. Institutions and Societal Dilemmas
15. Understanding Societal Security Failures
16. Societal Security and the Information Age
17. The Future of Societal Security

The old title, “The Dishonest Minority,” has been completely expunged from the book. The phrase appears nowhere in the text—its only existence is in old blog posts about the book.

Lastly, I want to apologize to all my readers for the scant pickings on my blog and in Crypto-Gram. So much of my attention is going into writing my book that I don’t have time for much else. I promise to write more essays and blog posts once the book is finished. That’s likely to be the December issue of Crypto-Gram. Thank you for your patience.

The manuscript is due in 45 days; publication is still scheduled for mid-February. Right now it’s 88,000 words long, with another 30,000 words in notes and references.

Posted on September 15, 2011 at 6:52 AM

Risk Tolerance and Culture

This is an interesting study on cultural differences in risk tolerance.

The Cultures of Risk Tolerance

Abstract: This study explores the links between culture and risk tolerance, based on surveys conducted in 23 countries. Altogether, more than 4,000 individuals participated in the surveys. Risk tolerance is associated with culture. Risk tolerance is relatively low in countries where uncertainty avoidance is relatively high and in countries which are relatively individualistic. Risk tolerance is also relatively low in countries which are relatively egalitarian and harmonious. And risk tolerance is relatively high in countries where trust is relatively high. Culture is also associated with risk tolerance indirectly, through the association between culture and income-per-capita. People in countries with relatively high income-per-capita tend to be relatively individualistic, egalitarian, and trusting. Risk tolerance is relatively high in countries with relatively low income-per-capita.

Posted on September 14, 2011 at 2:02 PM

TSA Administrator John Pistole on the Future of Airport Security

There’s a lot here that’s worth watching. He talks about expanding behavioral detection. He talks about less screening for “trusted travelers.”

So, what do the next 10 years hold for transportation security? I believe it begins with TSA’s continued movement toward developing and implementing a more risk-based security system, a phrase you may have heard the last few months. When I talk about risk-based, intelligence-driven security it’s important to note that this is not about a specific program per se, or a limited initiative being evaluated at a handful of airports.

On the contrary, risk-based security is much more comprehensive. It means moving further away from what may have seemed like a one-size-fits-all approach to security. It means focusing our agency’s resources on those we know the least about, and using intelligence in better ways to inform the screening process.

[…]

Another aspect of our risk-based, intelligence-driven security system is the trusted traveler proof-of-concept that will begin this fall. As part of this proof-of-concept, we are looking at how to expedite the screening process for travelers we know and trust the most, and travelers who are willing to voluntarily share more information with us before they travel. Doing so will then allow our officers to more effectively prioritize screening and focus our resources on those passengers we know the least about and those of course on watch lists.

[…]

We’re also working with airlines already testing a known-crewmember concept, and we are evaluating changes to the security screening process for children 12-and-under. Both of these concepts reflect the principles of risk-based security, considering that airline pilots are among our country’s most trusted travelers and the preponderance of intelligence indicates that children 12-and-under pose little risk to aviation security.

Finally, we are also evaluating the value of expanding TSA’s behavior detection program, to help our officers identify people exhibiting signs that may indicate a potential threat. This reflects an expansion of the agency’s existing SPOT program, which was developed by adapting global best practices. This effort also includes additional, specialized training for our organization’s Behavior Detection Officers and is currently being tested at Boston’s Logan International airport, where the SPOT program was first introduced.

Posted on September 14, 2011 at 6:55 AM

Human Pattern-Matching Failures in Airport Screening

I’ve written about this before: the human brain just isn’t suited to finding rare anomalies in a screening situation.

The Role of the Human Operator in Image-Based Airport Security Technologies

Abstract: Heightened international concerns relating to security and identity management have led to an increased interest in security applications, such as face recognition and baggage and passenger screening at airports. A common feature of many of these technologies is that a human operator is presented with an image and asked to decide whether the passenger or baggage corresponds to a person or item of interest. The human operator is a critical component in the performance of the system and it is of considerable interest to not only better understand the performance of human operators on such tasks, but to also design systems with a human operator in mind. This paper discusses a number of human factors issues which will have an impact on human operator performance in the operational environment, as well as highlighting the variables which must be considered when evaluating the performance of these technologies in scenario or operational trials based on Defence Science and Technology Organisation’s experience in such testing.

Posted on September 13, 2011 at 1:46 PM

More 9/11 Retrospectives

Joseph Stiglitz on the price of 9/11.

How 9/11 changed surveillance.

New scientific research as a result of 9/11.

A good controversial piece.

The day we lost our privacy and power.

The probability of another 9/11-magnitude terrorist attack.

To justify the current U.S. spending on homeland security—not including our various official and unofficial wars—we’d have to foil 1,667 Times Square-style plots per year.

“Let’s Cancel 9/11.”

I didn’t write anything to commemorate the 9/11 anniversary. I couldn’t think of anything to say that I haven’t said a gazillion times already.

Anything else worth reading? Post links here.

EDITED TO ADD (9/14): “How to Beat Terrorism: Refuse to Be Terrorized” from Wired.

“Ten Things I Want My Children To Learn from 9/11.”

The creator of the TSA says it should be dismantled and privatized.

Pat Buchanan on Bush after 9/11.

9/11: Was There an Alternative? by Noam Chomsky.

Comments from Al-Jazeera.

The Onion’s comment.

Posted on September 12, 2011 at 1:27 PM

New Lows in Secret Questions

I’ve already written about secret questions, the easier-to-guess low-security backup password that sites want you to have in case you forget your harder-to-remember higher-security password. Here’s a new one, courtesy of the National Archives: “What is your preferred internet password?” I have been told that Priceline has the same one, which implies that this is some third-party login service or toolkit.

Posted on September 8, 2011 at 6:14 AM

The Legality of Government Critical Infrastructure Monitoring

Mason Rice, Robert Miller, and Sujeet Shenoi (2011), “May the US Government Monitor Private Critical Infrastructure Assets to Combat Foreign Cyberspace Threats?” International Journal of Critical Infrastructure Protection, 4 (April 2011): 3–13.

Abstract: The government “owns” the entire US airspace–it can install radar systems, enforce no-fly zones and interdict hostile aircraft. Since the critical infrastructure and the associated cyberspace are just as vital to national security, could the US government protect major assets–including privately-owned assets–by positioning sensors and defensive systems? This paper discusses the legal issues related to the government’s deployment of sensors in privately owned assets to gain broad situational awareness of foreign threats. This paper does not necessarily advocate pervasive government monitoring of the critical infrastructure; rather, it attempts to analyze the legal principles that would permit or preclude various forms of monitoring.

Posted on September 7, 2011 at 2:32 PM

Optimizing Airport Security

New research: Adrian J. Lee and Sheldon H. Jacobson (2011), “The Impact of Aviation Checkpoint Queues on Optimizing Security Screening Effectiveness,” Reliability Engineering & System Safety, 96 (August): 900–911.

Abstract: Passenger screening at aviation security checkpoints is a critical component in protecting airports and aircraft from terrorist threats. Recent developments in screening device technology have increased the ability to detect these threats; however, the average amount of time it takes to screen a passenger still remains a concern. This paper models the queueing process for a multi-level airport checkpoint security system, where multiple security classes are formed through subsets of specialized screening devices. An optimal static assignment policy is obtained which minimizes the steady-state expected amount of time a passenger spends in the security system. Then, an optimal dynamic assignment policy is obtained through a transient analysis that balances the expected number of true alarms with the expected amount of time a passenger spends in the security system. Performance of a two-class system is compared to that of a selective security system containing primary and secondary levels of screening. The key contribution is that the resulting optimal assignment policies increase security and passenger throughput by efficiently and effectively utilizing available screening resources.
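As a toy illustration of the trade-off such an assignment policy weighs, here is the plain M/M/1 waiting-time formula applied to a two-lane checkpoint. This is not the paper’s model, and the rates are made up:

    def mm1_time_in_system(arrival_rate, service_rate):
        """Steady-state expected time in an M/M/1 queue: 1 / (mu - lambda)."""
        assert arrival_rate < service_rate, "queue must be stable"
        return 1.0 / (service_rate - arrival_rate)

    total_arrivals = 8.0                     # passengers per minute
    for trusted_share in (0.3, 0.5, 0.7):    # fraction routed to the fast lane
        fast = mm1_time_in_system(trusted_share * total_arrivals, 10.0)
        slow = mm1_time_in_system((1 - trusted_share) * total_arrivals, 6.0)
        average = trusted_share * fast + (1 - trusted_share) * slow
        print("trusted share", trusted_share, "-> average minutes in system", round(average, 2))

The paper’s contribution is choosing that split, statically and then dynamically, while also accounting for detection rates rather than waiting time alone.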

Posted on September 6, 2011 at 3:29 PM

Friday Squid Blogging: SQUIDS Game

It’s coming to the iPhone and iPad, then to other platforms:

In SQUIDS, players will command a small army of stretchy, springy sea creatures to protect an idyllic underwater kingdom from a sinister emerging threat. An infectious black ooze is spreading through the lush seascape, turning ordinary crustaceans into menacing monsters. Now a plucky team of Squids, each with unique personalities, skills, and ability-boosting attire, must defend their homeland and overturn the evil forces that jeopardize their aquatic utopia.

More:

…which they describe as Angry Birds meets Worms, with RPG elements. “For the universe, Audrey and I share a passion for cephalopods of all sorts, and that was a perfect match with the controls I had in mind,” Thoa said.

As before, use the comments to this post to write about and discuss security stories that don’t have their own post.

Posted on September 2, 2011 at 4:44 PM

The Efficacy of Post-9/11 Counterterrorism

This is an interesting article. The authors argue that the whole war-on-terror nonsense is useless—that’s not new—but that the security establishment knows it doesn’t work and abandoned many of the draconian security measures years ago, long before Obama became president. All that’s left of the war on terror is political, as lawmakers fund unwanted projects in an effort to be tough on crime.

I wish it were true, but I don’t buy it. The war on terror is an enormous cash cow, and law enforcement is spending the money as fast as it can get it. It’s also a great stalking horse for increases in police powers, and I see no signs of agencies like the FBI or the TSA not grabbing all the power they can.

The second half of the article is better. The authors argue that openness, not secrecy, improves security:

The worst mistakes and abuses of the War on Terror were possible, in no small part, because national security is still practiced more as a craft than a science. Lacking rigorous evaluations of its practices, the national security establishment was particularly vulnerable to the panic, grandiosity, and overreach that colored policymaking in the wake of 9/11.

To avoid making those sorts of mistakes again, it is essential that we reimagine national security as an object of scientific inquiry. Over the last four centuries, virtually every other aspect of statecraft—from the economy to social policy to even domestic law enforcement—has been opened up to engagement with and evaluation by civil society. The practice of national security is long overdue for a similar transformation.

Maintaining the nation’s security of course will continue to require some degree of secrecy. But there is little reason to think that appropriate secrecy is inconsistent with a fact-based culture of robust and multiplicative inquiry. Indeed, to whatever partial extent that culture already exists within the national security establishment, it has led the move away from many of the counterproductive security measures established after 9/11.

Yet, in the ten years that Congress has been debating issues like coercive interrogation, ethnic profiling, and military tribunals, the House and Senate Intelligence committees, which have all the proper security clearances to evaluate such questions, have never established any formal process to consistently evaluate and improve the effectiveness of U.S. counterterrorism measures.

Establishing proper oversight and evaluation of the efficacy of our security practices will not come easily, for the security craft guards its claims to privileged knowledge jealously. But as long as the practice of security remains hidden behind a veil of classified documents and accepted wisdoms handed down from generation to generation of security agents, our national security apparatus will never become fully modern.

Here’s the report the article was based on.

Posted on September 2, 2011 at 1:34 PM

A Professional ATM Theft

Fidelity National Information Services Inc. (FIS) lost $13M to an ATM theft earlier this year:

KrebsOnSecurity recently discovered previously undisclosed details of the successful escapade. According to sources close to the investigation, cyber thieves broke into the FIS network and targeted the Sunrise platform’s “open-loop” prepaid debit cards. The balances on these prepaid cards aren’t stored on the cards themselves; rather, the card numbers correspond to records in a central database, where the balances are recorded. Some prepaid cards cannot be used once their balance has been exhausted, but the prepaid cards used in this attack can be replenished by adding funds. Prepaid cards usually limit the amounts that cardholders can withdraw from a cash machine within a 24 hour period.

Apparently, the crooks were able to drastically increase or eliminate the withdrawal limits for 22 prepaid cards that they had obtained. The fraudsters then cloned the prepaid cards, and distributed them to co-conspirators in several major cities across Europe, Russia and Ukraine.

Sources say the thieves waited until the close of business in the United States on Saturday, March 5, 2011, to launch their attack. Working into Sunday evening, conspirators in Greece, Russia, Spain, Sweden, Ukraine and the United Kingdom used the cloned cards to withdraw cash from dozens of ATMs. Armed with unauthorized access to FIS’s card platform, the crooks were able to reload the cards remotely when the cash withdrawals brought their balances close to zero.

This reminds me of the RBS WorldPay theft from a couple of years ago.

Posted on September 2, 2011 at 6:38 AM

Unredacted U.S. Diplomatic WikiLeaks Cables Published

It looks as if the entire mass of U.S. diplomatic cables that WikiLeaks had is available online somewhere. How this came about is a good illustration of how security can go wrong in ways you don’t expect.

Near as I can tell, this is what happened:

  1. In order to send the Guardian the cables, WikiLeaks encrypted them and put them on its website at a hidden URL.
  2. WikiLeaks sent the Guardian the URL.
  3. WikiLeaks sent the Guardian the encryption key.
  4. The Guardian downloaded and decrypted the file.
  5. WikiLeaks removed the file from their server.
  6. Somehow, the encrypted file ended up on BitTorrent. Perhaps someone found the hidden URL, downloaded the file, and then uploaded it to BitTorrent. Perhaps it is the “insurance file.” I don’t know.
  7. The Guardian published a book about WikiLeaks. Thinking the decryption key had no value, it published the key in the book.
  8. A reader used the key from the book to decrypt the archive from BitTorrent, and published the decrypted version: all the U.S. diplomatic cables in unredacted form.

Memo to the Guardian: Publishing encryption keys is almost always a bad idea. Memo to WikiLeaks: Using the same key for the Guardian and for the insurance file—if that’s what you did—was a bad idea.
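For anyone unclear on why the published passphrase was fatal: once the encrypted archive is circulating, the passphrase is the only secret left, and every copy of the ciphertext decrypts with it. Here is a minimal sketch of passphrase-based file encryption; WikiLeaks reportedly used OpenPGP symmetric encryption, whereas this illustration uses the Python cryptography package and a placeholder passphrase:

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
        """Derive a symmetric key from a passphrase with PBKDF2."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                         iterations=200_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase))

    salt = os.urandom(16)                          # stored alongside the file
    passphrase = b"ACollectionOf..._PresentDay#"   # placeholder, not the real one
    ciphertext = Fernet(key_from_passphrase(passphrase, salt)).encrypt(b"the cables")

    # Anyone who later obtains the ciphertext -- hidden URL, BitTorrent,
    # insurance file -- needs only the passphrase to recover the plaintext:
    print(Fernet(key_from_passphrase(passphrase, salt)).decrypt(ciphertext))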

EDITED TO ADD (9/1): From pp 138-9 of WikiLeaks:

Assange wrote down on a scrap of paper: ACollectionOfHistorySince_1966_ToThe_PresentDay#. “That’s the password,” he said. “But you have to add one extra word when you type it in. You have to put in the word ‘Diplomatic’ before the word ‘History’. Can you remember that?”

I think we can all agree that that’s a secure encryption key.

EDITED TO ADD (9/1): WikiLeaks says that the Guardian file and the insurance file are not encrypted with the same key. Which brings us back to the question: how did the encrypted Guardian file get loose?

EDITED TO ADD (9/1): Spiegel has the detailed story.

Posted on September 1, 2011 at 12:56 PM

Forged Google Certificate

There’s been a forged Google certificate out in the wild for the past month and a half. Whoever has it—evidence points to the Iranian government—can, if they’re in the right place, launch man-in-the-middle attacks against Gmail users and read their mail. This isn’t Google’s mistake; the certificate was issued by a Dutch CA that has nothing to do with Google.

This attack illustrates one of the many security problems with SSL: there are too many single points of trust.
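Here is a rough sketch of that single-point-of-trust problem, using only the Python standard library: the client checks that some CA in its root store vouches for the certificate, not that the right one did, so any one compromised CA can mint a certificate the browser accepts.

    import socket, ssl

    host = "mail.google.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            print("issuer:", dict(pair[0] for pair in cert["issuer"]))
            # The handshake above verified a chain to *some* trusted root;
            # nothing pinned the issuer to the CA Google actually uses.

In fact, it was Chrome’s built-in pinning of Google’s keys, not ordinary chain validation, that caught the forged certificate in the wild.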

EDITED TO ADD (9/1): It seems that 200 forged certificates were generated, not just for Google.

EDITED TO ADD (9/14): More news.

Posted on September 1, 2011 at 5:46 AM
