March 2013 Archives

Friday Squid Blogging: Bomb Discovered in Squid at Market

Really:

An unexploded bomb was found inside a squid when the fish was slaughtered at a fish market in Guangdong province.

Oddly enough, this doesn't seem to be the work of terrorists:

The stall owner, who has been selling fish for 10 years, told the newspaper the 1-meter-long squid might have mistaken the bomb for food.

Clearly there's much to this story that remains unreported.

More news articles.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on March 29, 2013 at 4:19 PM

The Dangers of Surveillance

Interesting article, "The Dangers of Surveillance," by Neil M. Richards, Harvard Law Review, 2013. From the abstract:

....We need a better account of the dangers of surveillance.

This article offers such an account. Drawing on law, history, literature, and the work of scholars in the emerging interdisciplinary field of "surveillance studies," I explain what those harms are and why they matter. At the level of theory, I explain when surveillance is particularly dangerous, and when it is not. Surveillance is harmful because it can chill the exercise of our civil liberties, especially our intellectual privacy. It also gives the watcher power over the watched, creating the risk of a variety of other harms, such as discrimination, coercion, and the threat of selective enforcement, where critics of the government can be prosecuted or blackmailed for wrongdoing unrelated to the purpose of the surveillance.

At a practical level, I propose a set of four principles that should guide the future development of surveillance law, allowing for a more appropriate balance between the costs and benefits of government surveillance. First, we must recognize that surveillance transcends the public-private divide. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers. Second, we must recognize that secret surveillance is illegitimate, and prohibit the creation of any domestic surveillance programs whose existence is secret. Third, we should recognize that total surveillance is illegitimate and reject the idea that it is acceptable for the government to record all Internet activity without authorization. Fourth, we must recognize that surveillance is harmful. Surveillance menaces intellectual privacy and increases the risk of blackmail, coercion, and discrimination; accordingly, we must recognize surveillance as a harm in constitutional standing doctrine.

EDITED TO ADD (4/12): Reply to the article.

Posted on March 29, 2013 at 12:25 PM

New RC4 Attack

This is a really clever attack on the RC4 encryption algorithm as used in TLS.

We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.

The attack is very specialized:

The attack is a multi-session attack, which means that we require a target plaintext to be repeatedly sent in the same position in the plaintext stream in multiple TLS sessions. The attack currently only targets the first 256 bytes of the plaintext stream in sessions. Since the first 36 bytes of plaintext are formed from an unpredictable Finished message when SHA-1 is the selected hashing algorithm in the TLS Record Protocol, these first 36 bytes cannot be recovered. This means that the attack can recover 220 bytes of TLS-encrypted plaintext.

The number of sessions needed to reliably recover these plaintext bytes is around 2^30, but already with only 2^24 sessions, certain bytes can be recovered reliably.

Is this a big deal? Yes and no. The attack requires the identical plaintext to be repeatedly encrypted. Normally, this would make for an impractical attack in the real world, but HTTP messages often have stylized headers that are identical across a conversation -- for example, cookies. On the other hand, those are the only bits that can be decrypted. Currently, this attack is pretty raw and unoptimized -- so it's likely to become faster and better.
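
The "statistical flaws" are easy to see for yourself. Below is a minimal sketch (assuming Python 3, standard library only, and my own toy RC4 implementation) that demonstrates the best-known single-byte bias: RC4's second keystream byte is 0x00 about twice as often as it should be. The actual TLS attack aggregates many such biases across the first 256 keystream positions over millions of sessions; this is only an illustration of the underlying weakness, not the attack.

```python
# Rough illustration of an RC4 keystream bias (Mantin-Shamir: the second
# output byte is 0x00 with probability ~1/128 instead of 1/256).
# Not the TLS attack itself -- just the kind of flaw it builds on.
import os
from collections import Counter

def rc4_keystream(key, n):
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

TRIALS = 50_000
second_bytes = Counter(rc4_keystream(os.urandom(16), 2)[1] for _ in range(TRIALS))
print("expected count per value:", TRIALS // 256)    # ~195 if unbiased
print("observed count for 0x00: ", second_bytes[0])  # roughly twice that
```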

There's no reason to panic here. But let's start to move away from RC4 to something like AES.

There are lots of press articles on the attack.

Posted on March 29, 2013 at 6:59 AM

Unwitting Drug Smugglers

This is a story about a physicist who got taken in by an imaginary Internet girlfriend and ended up being arrested in Argentina for drug smuggling. Readers of this blog will see it coming, of course, but it's still a good read.

I don't know whether the professor knew what he was doing -- it's pretty clear that the reporter believes he's guilty. What's more interesting to me is that there is a drug smuggling industry that relies on recruiting mules off the Internet by pretending to be romantically inclined pretty women. Could that possibly be a useful enough recruiting strategy?

EDITED TO ADD (4/12): Here's a similar story from New Zealand, with the sexes swapped.

Posted on March 28, 2013 at 8:36 AM

Security Awareness Training

Should companies spend money on security awareness training for their employees? It's a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry's focus on training serves to obscure greater failings in security design.

In order to understand my argument, it's useful to look at training's successes and failures. One area where it doesn't work very well is health. We are forever trying to train people to have healthier lifestyles: eat better, exercise more, whatever. And people are forever ignoring the lessons. One basic reason is psychological: we just aren't very good at trading off immediate gratification for long-term benefit. A healthier you is an abstract eventuality; sitting in front of the television all afternoon with a McDonald's Super Monster Meal sounds really good right now. Similarly, computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they're a lot of bother right now and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy; no one reads through new privacy policies; it's much easier to just click "OK" and start chatting with your friends. In short: security is never salient.

Another reason health training works poorly is that it's hard to link behaviors with benefits. We can train anyone -- even laboratory rats -- with a simple reward mechanism: push the button, get a food pellet. But with health, the connection is more abstract. If you're unhealthy, what caused it? It might have been something you did or didn't do years ago, it might have been one of the dozen things you have been doing and not doing for months, or it might have been the genes you were born with. Computer security is a lot like this, too.

Training laypeople in pharmacology also isn't very effective. We expect people to make all sorts of medical decisions at the drugstore, and they're not very good at it. Turns out that it's hard to teach expertise. We can't expect every mother to have the knowledge of a doctor or pharmacist or RN, and we certainly can't expect her to become an expert when most of the advice she's exposed to comes from manufacturers' advertising. In computer security, too, a lot of advice comes from companies with products and services to sell.

One area of health that is a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple. And aside from certain sub-Saharan countries, we have taught people a new model of their health, and have dramatically changed their behavior. This is important: most lay medical expertise stems from folk models of health. Similarly, people have folk models of computer security. Maybe they're right and maybe they're wrong, but they're how people organize their thinking. This points to a possible way that computer security training can succeed. We should stop trying to teach expertise, and pick a few simple metaphors of security and train people to make decisions using those metaphors.

On the other hand, we still have trouble teaching people to wash their hands -- even though it's easy, fairly effective, and simple to explain. Notice the difference, though. The risks of catching HIV are huge, and the cause of the security failure is obvious. The risks of not washing your hands are low, and it's not easy to tie the resultant disease to a particular not-washing decision. Computer security is more like hand washing than HIV.

Another area where training works is driving. We trained, either through formal courses or one-on-one tutoring, and passed a government test, to be allowed to drive a car. One reason that works is because driving is a near-term, really cool, obtainable goal. Another reason is even though the technology of driving has changed dramatically over the past century, that complexity has been largely hidden behind a fairly static interface. You might have learned to drive thirty years ago, but that knowledge is still relevant today. On the other hand, password advice from ten years ago isn't relevant today. Can I bank from my browser? Are PDFs safe? Are untrusted networks okay? Is JavaScript good or bad? Are my photos more secure in the cloud or on my own hard drive? The 'interface' we use to interact with computers and the Internet changes all the time, along with best practices for computer security. This makes training a lot harder.

Food safety is my final example. We have a bunch of simple rules -- cooking temperatures for meat, expiration dates on refrigerated goods, the three-second rule for food being dropped on the floor -- that are mostly right, but often ignored. If we can't get people to follow these rules, what hope do we have for computer security training?

To those who think that training users in security is a good idea, I want to ask: "Have you ever met an actual user?" They're not experts, and we can't expect them to become experts. The threats change constantly, the likelihood of failure is low, and there is enough complexity that it's hard for people to understand how to connect their behavior to eventual outcomes. So they turn to folk remedies that, while simple, don't really address the threats.

Even if we could invent an effective computer security training program, there's one last problem. HIV prevention training works because affecting what the average person does is valuable. Even if only half the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, one-fifth still get it wrong and the bad guys still get in. As long as we build systems that are vulnerable to the worst case, raising the average case won't make them more secure.

The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won't let users choose lousy passwords and don't care what links a user clicks on. We should be designing systems that conform to their folk beliefs of security, rather than forcing them to learn new ones. Microsoft has a great rule about system messages that require the user to make a decision. They should be NEAT: necessary, explained, actionable, and tested. That's how we should be designing security interfaces. And we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.

If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.

This essay originally appeared on DarkReading.com.

There is lots of commentary on this one.

EDITED TO ADD (4/4): Another commentary.

EDITED TO ADD (4/8): more commentary.

EDITED TO ADD (4/23): Another opinion.

Posted on March 27, 2013 at 6:47 AM

The NSA's Cryptolog

The NSA has published declassified versions of its Cryptolog newsletter. All the issues from Aug 1974 through Summer 1997 are on the web, although there are some pretty heavy redactions in places. (Here's a link to the documents on a non-government site, in case they disappear.)

I haven't even begun to go through these yet. If you find anything good, please post it in comments.

Posted on March 26, 2013 at 2:15 PM

Identifying People from Mobile Phone Location Data

Turns out that it's pretty easy:

Researchers at the Massachusetts Institute of Technology (MIT) and the Catholic University of Louvain studied 15 months' worth of anonymised mobile phone records for 1.5 million individuals.

They found from the "mobility traces" - the evident paths of each mobile phone - that only four locations and times were enough to identify a particular user.

"In the 1930s, it was shown that you need 12 points to uniquely identify and characterise a fingerprint," said the study's lead author Yves-Alexandre de Montjoye of MIT.

"What we did here is the exact same thing but with mobility traces. The way we move and the behaviour is so unique that four points are enough to identify 95% of people," he told BBC News.

Here's the study.
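
To get a feel for why so few points suffice, here's a toy simulation -- entirely synthetic, uniformly random traces, so it's an intuition pump rather than the study's methodology. Even with thousands of users and only a few hundred places, two or three known (hour, place) points already single out almost everyone; real traces are more structured, but the combinatorics are similarly unforgiving.

```python
# Toy version of the uniqueness question: how many (hour, location) points
# does it take to pin one user down among many? All data here is made up.
import random

random.seed(1)
USERS, HOURS, CELLS = 5_000, 24 * 7, 500   # one week, 500 hypothetical towers

# Each user's trace: which cell they were in at each hour of the week.
traces = [[random.randrange(CELLS) for _ in range(HOURS)] for _ in range(USERS)]

def fraction_unique(k, samples=100):
    """Fraction of sampled users uniquely identified by k known points."""
    unique = 0
    for _ in range(samples):
        target = random.randrange(USERS)
        hours = random.sample(range(HOURS), k)
        points = [(h, traces[target][h]) for h in hours]
        matches = sum(all(t[h] == c for h, c in points) for t in traces)
        unique += (matches == 1)   # only the target matches all k points
    return unique / samples

for k in range(1, 5):
    print(f"{k} point(s): {fraction_unique(k):.0%} of sampled users unique")
```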

EFF maintains a good page on the issues surrounding location privacy.

Posted on March 26, 2013 at 6:38 AM

Our Internet Surveillance State

I'm going to start with three data points.

One: Some of the Chinese military hackers who were implicated in a broad set of attacks against the U.S. government and corporations were identified because they accessed Facebook from the same network infrastructure they used to carry out their attacks.

Two: Hector Monsegur, one of the leaders of the LulzSec hacker movement, was identified and arrested last year by the FBI. Although he practiced good computer security and used an anonymous relay service to protect his identity, he slipped up.

And three: Paula Broadwell, who had an affair with CIA director David Petraeus, similarly took extensive precautions to hide her identity. She never logged in to her anonymous e-mail service from her home network. Instead, she used hotel and other public networks when she e-mailed him. The FBI correlated hotel registration data from several different hotels -- and hers was the common name.

The Internet is a surveillance state. Whether we admit it to ourselves or not, and whether we like it or not, we're being tracked all the time. Google tracks us, both on its pages and on other pages it has access to. Facebook does the same; it even tracks non-Facebook users. Apple tracks us on our iPhones and iPads. One reporter used a tool called Collusion to track who was tracking him; 105 companies tracked his Internet use during one 36-hour period.

Increasingly, what we do on the Internet is being combined with other data about us. Unmasking Broadwell's identity involved correlating her Internet activity with her hotel stays. Everything we do now involves computers, and computers produce data as a natural by-product. Everything is now being saved and correlated, and many big-data companies make money by building up intimate profiles of our lives from a variety of sources.

Facebook, for example, correlates your online behavior with your purchasing habits offline. And there's more. There's location data from your cell phone, there's a record of your movements from closed-circuit TVs.

This is ubiquitous surveillance: All of us being watched, all the time, and that data being stored forever. This is what a surveillance state looks like, and it's efficient beyond the wildest dreams of George Orwell.

Sure, we can take measures to prevent this. We can limit what we search on Google from our iPhones, and instead use computer web browsers that allow us to delete cookies. We can use an alias on Facebook. We can turn our cell phones off and spend cash. But increasingly, none of it matters.

There are simply too many ways to be tracked. The Internet, e-mail, cell phones, web browsers, social networking sites, search engines: these have become necessities, and it's fanciful to expect people to simply refuse to use them just because they don't like the spying, especially since the full extent of such spying is deliberately hidden from us and there are few alternatives being marketed by companies that don't spy.

This isn't something the free market can fix. We consumers have no choice in the matter. All the major companies that provide us with Internet services are interested in tracking us. Visit a website and it will almost certainly know who you are; there are lots of ways to be tracked without cookies. Cell phone companies routinely undo the web's privacy protection. One experiment at Carnegie Mellon took real-time videos of students on campus and was able to identify one-third of them by comparing their photos with publicly available tagged Facebook photos.

Maintaining privacy on the Internet is nearly impossible. If you forget even once to enable your protections, or click on the wrong link, or type the wrong thing, you've permanently attached your name to whatever anonymous service you're using. Monsegur slipped up once, and the FBI got him. If the director of the CIA can't maintain his privacy on the Internet, we've got no hope.

In today's world, governments and corporations are working together to keep things that way. Governments are happy to use the data corporations collect -- occasionally demanding that they collect more and save it longer -- to spy on us. And corporations are happy to buy data from governments. Together the powerful spy on the powerless, and they're not going to give up their positions of power, despite what the people want.

Fixing this requires strong government will, but governments are just as punch-drunk on data as the corporations. Slap-on-the-wrist fines notwithstanding, no one is agitating for better privacy laws.

So, we're done. Welcome to a world where Google knows exactly what sort of porn you all like, and more about your interests than your spouse does. Welcome to a world where your cell phone company knows exactly where you are all the time. Welcome to the end of private conversations, because increasingly your conversations are conducted by e-mail, text, or social networking sites.

And welcome to a world where all of this, and everything else that you do or is done on a computer, is saved, correlated, studied, passed around from company to company without your knowledge or consent; and where the government accesses it at will without a warrant.

Welcome to an Internet without privacy, and we've ended up here with hardly a fight.

This essay previously appeared on CNN.com, where it got 23,000 Facebook likes and 2,500 tweets -- by far the most widely distributed essay I've ever written.

Commentary.

EDITED TO ADD (3/26): More commentary.

EDITED TO ADD (3/28): This Communist commentary seems to be mostly semantic drivel, but parts of it are interesting. The author doesn't seem to have a problem with State surveillance, but he thinks the incentives that cause businesses to use the same tools should be revisited. This seems just as wrong-headed as the Libertarians who have no problem with corporations using surveillance tools, but don't want governments to use them.

EDITED TO ADD (5/28): This essay has been translated into Polish.

Posted on March 25, 2013 at 6:28 AM

Friday Squid Blogging: Giant Squid Genetics

Despite looking very different from each other and being distributed across the world's oceans, all giant squid are the same species. There's also not a lot of genetic diversity.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

EDITED TO ADD (3/25): More news stories.

Posted on March 22, 2013 at 4:12 PM

Changes to the Blog

I have made a few changes to my blog that I'd like to talk about.

The first is the various buttons associated with each post: a Facebook Like button, a Retweet button, and so on. These buttons are ubiquitous on the Internet now. We publishers like them because they make it easier for our readers to share our content. I especially like them because I can obsessively watch the totals to see how my writings are spreading out across the Internet.

The problem is that these buttons use images, scripts, and/or iframes hosted on the social media site's own servers. This is partly for webmasters' convenience; it makes adoption as easy as copy-and-pasting a few lines of code. But it also gives Facebook, Twitter, Google, and so on a way to track you -- even if you don't click on the button. Remember that: if you see sharing buttons on a webpage, that page is almost certainly being tracked by social media sites or a service like AddThis. Or both.
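
If you want to see the mechanism for yourself, here's a small sketch (Python 3, standard library only; the URL is a placeholder for whatever page you want to inspect) that lists the third-party hosts a page pulls scripts, images, and iframes from. Every one of those hosts sees your IP address, your browser details, and the page you were reading, whether or not you click anything. It only parses static HTML, so it will miss trackers injected by JavaScript.

```python
# List third-party hosts whose resources a page embeds -- each of them is
# in a position to log the visit. Static-HTML sketch; scripts can add more.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ThirdPartyFinder(HTMLParser):
    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src") or ""
            host = urlparse(src).netloc
            if host and not host.endswith(self.first_party):
                self.hosts.add(host)

url = "https://www.example.com/"   # placeholder: any page you want to check
html = urlopen(url).read().decode("utf-8", errors="replace")
finder = ThirdPartyFinder(urlparse(url).netloc)
finder.feed(html)
for host in sorted(finder.hosts):
    print(host)
```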

What I'm using instead is SocialSharePrivacy, which was created by the German website Heise Online and adapted by Mathias Panzenböck. The page shows a grayed-out mockup of a sharing button. You click once to activate it, then a second time to share the page. If you don't click, nothing is loaded from the social media site, so it can't track your visit. If you don't care about the privacy issues, you can click on the Settings icon and enable the sharing buttons permanently.

It's not a perfect solution -- two clicks instead of one -- but it's much more privacy-friendly.

(If you're thinking of doing something similar on your own site, another option to consider is shareNice. ShareNice can be copied to your own webserver; but if you prefer, you can use their hosted version, which makes it as easy to install as AddThis. The difference is that shareNice doesn't set cookies or even log IP addresses -- though you'll have to trust them on the logging part. The problem is that it can't display the aggregate totals.)

The second change is the search function. I changed the site's search engine from Google to DuckDuckGo, which doesn't even store IP addresses. Again, you have to trust them on that, but I'm inclined to.

The third change is to the feed. Starting now, if you click the feed icon in the right-hand column of my blog, you'll be subscribing to a feed that's hosted locally on schneier.com, instead of one produced by Google's Feedburner service. Again, this reduces the amount of data Google collects about you. Over the next couple of days, I will transition existing subscribers off of Feedburner, but since some of you are subscribed directly to a Feedburner URL, I recommend resubscribing to the new link to be sure. And if by chance you have trouble with the new feed, this legacy link will always point to the Feedburner version.

Fighting against the massive amount of surveillance data collected about us as we surf the Internet is hard, and possibly even fruitless. But I think it's important to try.

Posted on March 22, 2013 at 3:46 PM

FBI Secretly Spying on Cloud Computer Users

Both Google and Microsoft have admitted it. Presumably every other major cloud service provider is getting these National Security Letters as well.

If you've been following along, you know that a U.S. District Court recently ruled National Security Letters unconstitutional. Not that this changes anything yet.

Posted on March 22, 2013 at 7:10 AM

Text Message Retention Policies

The FBI wants cell phone carriers to store SMS messages for a long time, enabling them to conduct surveillance backwards in time. Nothing new there -- data retention laws are being debated in many countries around the world -- but this was something I did not know:

Wireless providers' current SMS retention policies vary. An internal Justice Department document (PDF) that the ACLU obtained through the Freedom of Information Act shows that, as of 2010, AT&T, T-Mobile, and Sprint did not store the contents of text messages. Verizon did for up to five days, a change from its earlier no-logs-at-all position, and Virgin Mobile kept them for 90 days. The carriers generally kept metadata such as the phone numbers associated with the text for 90 days to 18 months; AT&T was an outlier, keeping it for as long as seven years.

An e-mail message from a detective in the Baltimore County Police Department, leaked by Antisec and reproduced in a 2011 Wired article, says that Verizon keeps "text message content on their servers for 3-5 days." And: "Sprint stores their text message content going back 12 days and Nextel content for 7 days. AT&T/Cingular do not preserve content at all. Us Cellular: 3-5 days Boost Mobile LLC: 7 days"

That second set of data is from 2009.

Leaks seem to be the primary way we learn how our privacy is being violated these days -- we need more of them.

EDITED TO ADD (4/12): Discussion of Canadian policy.

Posted on March 21, 2013 at 1:17 PM

When Technology Overtakes Security

A core, not side, effect of technology is its ability to magnify power and multiply force -- for both attackers and defenders. One side creates ceramic handguns, laser-guided missiles, and new-identity theft techniques, while the other side creates anti-missile defense systems, fingerprint databases, and automatic facial recognition systems.

The problem is that it's not balanced: Attackers generally benefit from new security technologies before defenders do. They have a first-mover advantage. They're more nimble and adaptable than defensive institutions like police forces. They're not limited by bureaucracy, laws, or ethics. They can evolve faster. And entropy is on their side -- it's easier to destroy something than it is to prevent, defend against, or recover from that destruction.

For the most part, though, society still wins. The bad guys simply can't do enough damage to destroy the underlying social system. The question for us is: can society still maintain security as technology becomes more advanced?

I don't think it can.

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious...and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

As the destructive power of individual actors and fringe groups increases, so do the calls for -- and society's acceptance of -- increased security.

Traditional security largely works "after the fact". We tend not to ban or restrict the objects that can do harm; instead, we punish the people who do harm with objects. There are exceptions, of course, but they're exactly that: exceptions. This system works as long as society can tolerate the destructive effects of those objects (for example, allowing people to own baseball bats and arresting them after they use them in a riot is only viable if society can tolerate the potential for riots).

When that isn't enough, we resort to "before-the-fact" security measures. These come in two basic varieties: general surveillance of people in an effort to stop them before they do damage, and specific interdictions in an effort to stop people from using those technologies to do damage.

But these measures work better at keeping dangerous technologies out of the hands of amateurs than at keeping them out of the hands of professionals.

And in the global interconnected world we live in, they're not anywhere close to foolproof. Still, a climate of fear causes governments to try. Lots of technologies are already restricted: entire classes of drugs, entire classes of munitions, explosive materials, biological agents. There are age restrictions on vehicles and training restrictions on complex systems like aircraft. We're already almost entirely living in a surveillance state, though we don't realize it or won't admit it to ourselves. This will only get worse as technology advances... today's Ph.D. theses are tomorrow's high-school science-fair projects.

Increasingly, broad prohibitions on technologies, constant ubiquitous surveillance, and Minority Report-like preemptive security will become the norm. We can debate the effectiveness of various security measures in different circumstances. But the problem isn't that these security measures won't work -- even as they shred our freedoms and liberties -- it's that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We'll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn't kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep us safe; then, someone goes off and destroys us anyway?
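
The arithmetic behind that claim is straightforward: if each of N actors independently has some tiny probability p of causing a catastrophe, the chance that at least one of them does is 1 - (1 - p)^N, which climbs toward 1 as N grows. A quick sketch with entirely made-up numbers:

```python
# P(at least one of N independent actors causes catastrophe) = 1 - (1-p)^N.
# p here is purely illustrative, not an estimate of anything real.
p = 1e-8
for n in (1_000, 1_000_000, 1_000_000_000, 7_000_000_000):
    print(f"N = {n:>13,}: P(at least one) = {1 - (1 - p) ** n:.5f}")
```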

If security won't work in the end, what is the solution?

Resilience -- building systems able to survive unexpected and devastating attacks -- is the best answer we have right now. We need to recognize that large-scale attacks will happen, that society can survive more than we give it credit for, and that we can design systems to survive these sorts of attacks. Calling terrorism an existential threat is ridiculous in a country where more people die each month in car crashes than died in the 9/11 terrorist attacks.

If the U.S. can survive the destruction of an entire city -- witness New Orleans after Hurricane Katrina or even New York after Sandy -- we need to start acting like it, and planning for it. Still, it's hard to see how resilience buys us anything but additional time. Technology will continue to advance, and right now we don't know how to adapt any defenses -- including resilience -- fast enough.

We need a more flexible and rationally reactive approach to these problems and new regimes of trust for our information-interconnected world. We're going to have to figure this out if we want to survive, and I'm not sure how many decades we have left.

This essay originally appeared on Wired.com.

Commentary.

Posted on March 21, 2013 at 7:02 AM

Lessons From the FBI's Insider Threat Program

This article is worth reading. One bit:

For a time the FBI put its back into coming up with predictive analytics to help predict insider behavior prior to malicious activity. Rather than coming up with a powerful tool to stop criminals before they did damage, the FBI ended up with a system that was statistically worse than random at ferreting out bad behavior. Compared to the predictive capabilities of Punxsutawney Phil, the groundhog of Groundhog Day, that system did a worse job of predicting malicious insider activity, Reidy says.

"We would have done better hiring Punxsutawney Phil and waving him in front of someone and saying, 'Is this an insider or not an insider?'" he says.

Rather than getting wrapped up in prediction or detection, he believes organizations should start first with deterrence.
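
The article doesn't say much about why prediction fares so badly, but one standard reason -- my gloss, not Reidy's -- is the base-rate problem: genuinely malicious insiders are so rare that even a fairly accurate detector buries its true hits under false alarms. A rough calculation with hypothetical numbers:

```python
# Hypothetical numbers: 10 bad actors among 100,000 employees, a detector
# that flags 90% of them and wrongly flags 1% of everyone else.
employees  = 100_000
bad_actors = 10
hit_rate   = 0.90   # P(flagged | malicious)
false_rate = 0.01   # P(flagged | benign)

true_alerts  = bad_actors * hit_rate
false_alerts = (employees - bad_actors) * false_rate
precision = true_alerts / (true_alerts + false_alerts)
print(f"alerts per sweep: {true_alerts + false_alerts:.0f}")       # ~1009
print(f"chance a given alert is a real insider: {precision:.2%}")  # ~0.89%
```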

Posted on March 20, 2013 at 11:51 AM

FinSpy

Twenty-five countries are using the FinSpy surveillance software package (also called FinFisher) to spy on their own citizens:

The list of countries with servers running FinSpy is now Australia, Bahrain, Bangladesh, Britain, Brunei, Canada, the Czech Republic, Estonia, Ethiopia, Germany, India, Indonesia, Japan, Latvia, Malaysia, Mexico, Mongolia, Netherlands, Qatar, Serbia, Singapore, Turkmenistan, the United Arab Emirates, the United States and Vietnam.

It's sold by the British company Gamma Group.

Older news.

EDITED TO ADD (3/20): The report.

EDITED TO ADD (4/12): Some more links.

Posted on March 19, 2013 at 1:34 PM

A 1962 Speculative Essay on Computers and Intelligence

From the CIA archives: Orrin Clotworthy, "Some Far-out Thoughts on Computers," Studies in Intelligence v. 6 (1962).

EDITED TO ADD (4/12): A transcript of the original, scanned article.

Posted on March 18, 2013 at 1:00 PM

Stuxnet is Much Older than We Thought

Symantec has found evidence of Stuxnet variants from way back in 2005. That's much older than the 2009 creation date we originally thought it had. More here and here.

What's impressive is how advanced the cyberattack capabilities of the U.S. and/or Israel were back then.

Posted on March 15, 2013 at 5:46 AM

On Secrecy

Interesting law paper: "The Implausibility of Secrecy," by Mark Fenster.

Abstract: Government secrecy frequently fails. Despite the executive branch's obsessive hoarding of certain kinds of documents and its constitutional authority to do so, recent high-profile events -- among them the WikiLeaks episode, the Obama administration's celebrated leak prosecutions, and the widespread disclosure by high-level officials of flattering confidential information to sympathetic reporters -- undercut the image of a state that can classify and control its information. The effort to control government information requires human, bureaucratic, technological, and textual mechanisms that regularly founder or collapse in an administrative state, sometimes immediately and sometimes after an interval. Leaks, mistakes, open sources -- each of these constitutes a path out of the government's informational clutches. As a result, permanent, long-lasting secrecy of any sort and to any degree is costly and difficult to accomplish.

This article argues that information control is an implausible goal. It critiques some of the foundational assumptions of constitutional and statutory laws that seek to regulate information flows, in the process countering and complicating the extensive literature on secrecy, transparency, and leaks that rest on those assumptions. By focusing on the functional issues relating to government information and broadening its study beyond the much-examined phenomenon of leaks, the article catalogs and then illustrates in a series of case studies the formal and informal means by which information flows out of the state. These informal means play an especially important role in limiting both the ability of state actors to keep secrets and the extent to which formal legal doctrines can control the flow of government information. The same bureaucracy and legal regime that keep open government laws from creating a transparent state also keep the executive branch from creating a perfect informational dam. The article draws several implications from this descriptive, functional argument for legal reform and for the study of administrative and constitutional law.

Posted on March 14, 2013 at 12:19 PM

Nationalism on the Internet

For technology that was supposed to ignore borders, bring the world closer together, and sidestep the influence of national governments, the Internet is fostering an awful lot of nationalism right now. We've started to see increased concern about the country of origin of IT products and services; U.S. companies are worried about hardware from China; European companies are worried about cloud services in the U.S.; no one is sure whether to trust hardware and software from Israel; Russia and China might each be building their own operating systems out of concern about using foreign ones.

I see this as an effect of all the cyberwar saber-rattling that's going on right now. The major nations of the world are in the early years of a cyberwar arms race, and we're all being hurt by the collateral damage.

A commentator on Al Jazeera makes a similar point.

Our nationalist worries have recently been fueled by a media frenzy surrounding attacks from China. These attacks aren't new -- cyber-security experts have been writing about them for at least a decade, and the popular media reported about similar attacks in 2009 and again in 2010 -- and the current allegations aren't even very different than what came before. This isn't to say that the Chinese attacks aren't serious. The country's espionage campaign is sophisticated, and ongoing. And because they're in the news, people are understandably worried about them.

But it's not just China. International espionage works in both directions, and I'm sure we are giving just as good as we're getting. China is certainly worried about the U.S. Cyber Command's recent announcement that it was expanding from 900 people to almost 5,000, and the NSA's massive new data center in Utah. The U.S. even admits that it can spy on non-U.S. citizens freely.

The fact is that governments and militaries have discovered the Internet; everyone is spying on everyone else, and countries are ratcheting up offensive actions against other countries.

At the same time, many nations are demanding more control over the Internet within their own borders. They reserve the right to spy and censor, and to limit the ability of others to do the same. This idea is now being called the "cyber sovereignty movement," and gained traction at the International Telecommunications Union meeting last December in Dubai. One analyst called that meeting the "Internet Yalta," where the Internet split between liberal-democratic and authoritarian countries. I don't think he's exaggerating.

Not that this is new, either. Remember 2010, when the governments of the UAE, Saudi Arabia, and India demanded that RIM give them the ability to spy on BlackBerry PDAs within their borders? Or last year, when Syria used the Internet to surveil its dissidents? Information technology is a surprisingly powerful tool for oppression: not just surveillance, but censorship and propaganda as well. And countries are getting better at using that tool.

But remember: none of this is cyberwar. It's all espionage, something that's been going on between countries ever since countries were invented. What moves public opinion is less the facts and more the rhetoric, and the rhetoric of war is what we're hearing.

The result of all this saber-rattling is a severe loss of trust, not just amongst nation-states but between people and nation-states. We know we're nothing more than pawns in this game, and we figure we'll be better off sticking with our own country.

Unfortunately, both the reality and the rhetoric play right into the hands of the military and corporate interests that are behind the cyberwar arms race in the first place. There is an enormous amount of power at stake here: not only power within governments and militaries, but power and profit amongst the corporations that supply the tools and infrastructure for cyber-attack and cyber-defense. The more we believe we are "at war" and believe the jingoistic rhetoric, the more willing we are to give up our privacy, freedoms, and control over how the Internet is run.

Arms races are fueled by two things: ignorance and fear. We don't know the capabilities of the other side, and we fear that they are more capable than we are. So we spend more, just in case. The other side, of course, does the same. That spending will result in more cyber weapons for attack and more cyber-surveillance for defense. It will result in more government control over the protocols of the Internet, and less free-market innovation over the same. At its worst, we might be about to enter an information-age Cold War: one with more than two "superpowers." Aside from this being a bad future for the Internet, this is inherently destabilizing. It's just too easy for this amount of antagonistic power and advanced weaponry to get used: for a mistaken attribution to be reacted to with a counterattack, for a misunderstanding to become a cause for offensive action, or for a minor skirmish to escalate into a full-fledged cyberwar.

Nationalism is rife on the Internet, and it's getting worse. We need to damp down the rhetoric and -- more importantly -- stop believing the propaganda from those who profit from this Internet nationalism. Those who are beating the drums of cyberwar don't have the best interests of society, or the Internet, at heart.

This essay previously appeared at Technology Review.

Posted on March 14, 2013 at 6:11 AM

Security Theater on the Wells Fargo Website

Click on the "Establishing secure connection" link at the top of this page. It's a Wells Fargo page that displays a progress bar with a bunch of security phrases -- "Establishing Secure Connection," "Sending credentials," "Building Secure Environment," and so on -- and closes after a few seconds. It's complete security theater; it doesn't actually do anything but make account holders feel better.
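
For what it's worth, the whole "feature" could be reproduced in a few lines. This is a hypothetical sketch, not Wells Fargo's code, but it captures everything the page does from a security standpoint, which is nothing:

```python
# Security theater in miniature: print reassuring phrases on a timer, then
# exit. No keys are exchanged and no credentials are sent; any real TLS
# handshake happens (or doesn't) entirely independently of this animation.
import time

for phrase in ("Establishing Secure Connection",
               "Sending credentials",
               "Building Secure Environment"):
    print(phrase + "...")
    time.sleep(1)   # cosmetic delay only
```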

Posted on March 13, 2013 at 1:30 PM

Cisco IP Phone Hack

Nice work:

All current Cisco IP phones, including the ones seen on desks in the White House and aboard Air Force One, have a vulnerability that allows hackers to take complete control of the devices.

Posted on March 12, 2013 at 1:43 PM

"The Logic of Surveillance"

Interesting essay:

Surveillance is part of the system of control. "The more surveillance, the more control" is the majority belief amongst the ruling elites. Automated surveillance requires fewer "watchers", and since the watchers cannot watch all the surveillance, long term storage increases the ability to find some "crime" anyone is guilty of.

[...]

This is one of the biggest problems the current elites face: they want the smallest enforcer class possible, so as to spend surplus on other things. The enforcer class is also insular, primarily concerned with itself (see Dorner) and is paid in large part by practical immunity to many laws and a license to abuse ordinary people. Not being driven primarily by justice or a desire to serve the public and with a code of honor which appears to largely center around self-protection and fraternity within the enforcer class, the enforcers' reliability is in question: they are blunt tools and their fear for themselves makes them remarkably inefficient.

Surveillance expands the reach of the enforcer class and thus of the elites. Every camera, drone and so on reduces the number of eyes needed on the ground. The Stasi had millions of informers; surveillance reduces that requirement and the cost of the enforcer class.

Posted on March 12, 2013 at 6:45 AM

Dead Drop from the 1870s

Hats:

De Blowitz was staying at the Kaiserhof. Each day his confederate went there for lunch and dinner. The two never acknowledged one another, but they hung their hats on neighboring pegs. At the end of the meal the confederate departed with de Blowitz's hat, and de Blowitz innocently took the confederate's. The communications were hidden in the hat's lining.

Posted on March 11, 2013 at 12:58 PM

Is Software Security a Waste of Money?

I worry that comments about the value of software security made at the RSA Conference last week will be taken out of context. John Viega did not say that software security wasn't important. He said:

For large software companies or major corporations such as banks or health care firms with large custom software bases, investing in software security can prove to be valuable and provide a measurable return on investment, but that's probably not the case for smaller enterprises, said John Viega, executive vice president of products, strategy and services at SilverSky and an authority on software security. Viega, who formerly worked on product security at McAfee and as a consultant at Cigital, said that when he was at McAfee he could not find a return on investment for software security.

I agree with that. For small companies, it's not worth worrying much about software security. But for large software companies, it's vital.

Posted on March 11, 2013 at 6:12 AM

Ross Anderson's Security Engineering Online

The second edition of Ross Anderson's fantastic book, Security Engineering, is now free online. Required reading for any security engineer.

Posted on March 8, 2013 at 12:08 PM

Oxford University Blocks Google Docs

Google Docs is being used for phishing. Oxford University felt that it had to block the service because Google isn't responding to takedown requests quickly enough.

Think about this in light of my essay on feudal security. Oxford University has to trust that Google will act in its best interest, and has no other option if it doesn't.

Posted on March 8, 2013 at 6:23 AM

How the FBI Intercepts Cell Phone Data

Good article on "Stingrays," which the FBI uses to monitor cell phone data. Basically, they trick the phone into joining a fake network. And, since cell phones inherently trust the network -- as opposed to computers which inherently do not trust the Internet -- it's easy to track people and collect data. There are lots of questions about whether or not it is illegal for the FBI to do this without a warrant. We know that the FBI has been doing this for almost twenty years, and that they know that they're on shaky legal ground.

The latest release, amounting to some 300 selectively redacted pages, not only suggests that sophisticated cellphone spy gear has been widely deployed since the mid-'90s. It reveals that the FBI conducted training sessions on cell tracking techniques in 2007 and around the same time was operating an internal "secret" website with the purpose of sharing information and interactive media about "effective tools" for surveillance. There are also some previously classified emails between FBI agents that show the feds joking about using the spy gear. "Are you smart enough to turn the knobs by yourself?" one agent asks a colleague.

Of course, if a policeman actually has your phone, he can suck pretty much everything out of it -- again, without a warrant.

Using a single "data extraction session" they were able to pull:

  • call activity
  • phone book directory information
  • stored voicemails and text messages
  • photos and videos
  • apps
  • eight different passwords
  • 659 geolocation points, including 227 cell towers and 403 WiFi networks with which the cell phone had previously connected.

Posted on March 7, 2013 at 1:39 PM

The NSA's Ragtime Surveillance Program and the Need for Leaks

A new book reveals details about the NSA's Ragtime surveillance program:

A book published earlier this month, "Deep State: Inside the Government Secrecy Industry," contains revelations about the NSA's snooping efforts, based on information gleaned from NSA sources. According to a detailed summary by Shane Harris at the Washingtonian yesterday, the book discloses that a codename for a controversial NSA surveillance program is "Ragtime" -- and that as many as 50 companies have apparently participated, by providing data as part of a domestic collection initiative.

Deep State, which was authored by Marc Ambinder and D.B. Grady, also offers insight into how the NSA deems individuals a potential threat. The agency uses an automated data-mining process based on "a computerized analysis that assigns probability scores to each potential target," as Harris puts it in his summary. The domestic version of the program, dubbed "Ragtime-P," can process as many as 50 different data sets at one time, focusing on international communications from or to the United States. Intercepted metadata, such as email headers showing "to" and "from" fields, is stored in a database called "Marina," where it generally stays for five years.

About three dozen NSA officials have access to Ragtime's intercepted data on domestic counter-terrorism, the book claims, though outside the agency some 1000 people "are privy to the full details of the program." Internally, the NSA apparently only employs four or five individuals as "compliance staff" to make sure the snooping is falling in line with laws and regulations. Another section of the Ragtime program, "Ragtime-A," is said to involve U.S.-based interception of foreign counterterrorism data, while "Ragtime-B" collects data from foreign governments that transits through the U.S., and "Ragtime-C" monitors counter proliferation activity.

The whole article is interesting, as is the detailed summary, but I thought this comment was particularly important:

The fact that NSA keeps applying separate codenames to programs that inevitably are closely intertwined is an important clue to what's really going on. The government wants to pretend they are discrete surveillance programs in order to conceal, especially from Congressional oversight, how monstrous they are in sum. So they'll give a separate briefing on Trailblazer or what have you, and for an hour everybody in the room acts as if the whole thing is carefully circumscribed and under control. And then if somebody ever finds out about another program (say 'Moonraker' or what have you), then they go ahead and offer a similarly reassuring briefing on that. And nobody in Congress has to acknowledge that the Total Information Awareness Program that was exposed and met with howls of protest...actually wasn't shut down at all, just went back under the radar after being renamed (and renamed and renamed).

He's right. The real threat isn't any one particular secret program, it's all of them put together. And by dividing up the programs into different code names, the big picture remains secret and we only ever get glimpses of it.

We need whistleblowers. Much of the information we have about the NSA's and the Justice Department's plans and capabilities -- think Echelon, Total Information Awareness, and the post-9/11 telephone eavesdropping program -- is over a decade old.

Frank Rieger of the Chaos Computer Club got it right in 2006:

We also need to know how the intelligence agencies work today. It is of highest priority to learn how the "we rather use backdoors than waste time cracking your keys"-methods work in practice on a large scale and what backdoors have been intentionally built into or left inside our systems....

Of course, the risk of publishing this kind of knowledge is high, especially for those on the dark side. So we need to build structures that can lessen the risk. We need anonymous submission systems for documents, methods to clean out eventual document fingerprinting (both on paper and electronic). And, of course, we need to develop means to identify the inevitable disinformation that will also be fed through these channels to confuse us.

Unfortunately, the Obama Administration's mistreatment of Bradley Manning and its aggressive prosecution of other whistleblowers have probably succeeded in scaring off any copycats. Yochai Benkler writes:

The prosecution will likely not accept Manning's guilty plea to lesser offenses as the final word. When the case goes to trial in June, they will try to prove that Manning is guilty of a raft of more serious offenses. Most aggressive and novel among these harsher offenses is the charge that by giving classified materials to WikiLeaks Manning was guilty of "aiding the enemy." That's when the judge will have to decide whether handing over classified materials to ProPublica or the New York Times, knowing that Al Qaeda can read these news outlets online, is indeed enough to constitute the capital offense of "aiding the enemy."

Aiding the enemy is a broad and vague offense. In the past, it was used in hard-core cases where somebody handed over information about troop movements directly to someone the collaborator believed to be "the enemy," to American POWs collaborating with North Korean captors, or to a German American citizen who was part of a German sabotage team during WWII. But the language of the statute is broad. It prohibits not only actually aiding the enemy, giving intelligence, or protecting the enemy, but also the broader crime of communicating -- directly or indirectly -- with the enemy without authorization. That's the prosecution's theory here: Manning knew that the materials would be made public, and he knew that Al Qaeda or its affiliates could read the publications in which the materials would be published. Therefore, the prosecution argues, by giving the materials to WikiLeaks, Manning was "indirectly" communicating with the enemy. Under this theory, there is no need to show that the defendant wanted or intended to aid the enemy. The prosecution must show only that he communicated the potentially harmful information, knowing that the enemy could read the publications to which he leaked the materials. This would be true whether Al Qaeda searched the WikiLeaks database or the New York Times'....

This theory is unprecedented in modern American history.

[...]

If Bradley Manning is convicted of aiding the enemy, the introduction of a capital offense into the mix would dramatically elevate the threat to whistleblowers. The consequences for the ability of the press to perform its critical watchdog function in the national security arena will be dire. And then there is the principle of the thing. However technically defensible on the language of the statute, and however well-intentioned the individual prosecutors in this case may be, we have to look at ourselves in the mirror of this case and ask: Are we the America of Japanese Internment and Joseph McCarthy, or are we the America of Ida Tarbell and the Pentagon Papers? What kind of country makes communicating with the press for publication to the American public a death-eligible offense?

A country that's much less free and much less secure.

Posted on March 6, 2013 at 1:24 PM

Al Qaeda Document on Avoiding Drone Strikes

Interesting:

3 – Spreading the reflective pieces of glass on a car or on the roof of the building.

4 – Placing a group of skilled snipers to hunt the drone, especially the reconnaissance ones because they fly low, about six kilometers or less.

5 – Jamming of and confusing of electronic communication using the ordinary water-lifting dynamo fitted with a 30-meter copper pole.

6 – Jamming of and confusing of electronic communication using old equipment and keeping them 24-hour running because of their strong frequencies and it is possible using simple ideas of deception of equipment to attract the electronic waves devices similar to that used by the Yugoslav army when they used the microwave (oven) in attracting and confusing the NATO missiles fitted with electromagnetic searching devices.

Posted on March 6, 2013 at 6:50 AM

Marketing at the RSA Conference

Marcus Ranum has an interesting screed on "booth babes" in the RSA Conference exhibition hall:

I'm not making a moral argument about sexism in our industry or the objectification of women. I could (and probably should) but it's easier to just point out the obvious: the only customers that will be impressed by anyone's ability to hire pretty models to work their booth aren't going to be the ones signing the big purchase orders. And, it's possible that they're thinking your sales team are going to be a bunch of testosterone-laden assholes who'd be better off selling used tires. If some company wants to appeal to the consumer that's going to jump at the T&A maybe they should relocate up the street to O'Farrell where they can include a happy ending with their product demo.

Mike Rothman on the same topic.

EDITED TO ADD (3/11): Winn Schwartau makes a similar point.

Posted on March 5, 2013 at 1:58 PM43 Comments

Technologies of Surveillance

It's a new day for the New York Police Department, with technology increasingly informing the way cops do their jobs. With innovation come new possibilities, but also new concerns.

For one, the NYPD is testing a new type of security apparatus that uses terahertz radiation to detect guns under clothing from a distance. As Police Commissioner Ray Kelly explained to the Daily News back in January, "If something is obstructing the flow of that radiation -- a weapon, for example -- the device will highlight that object."

Ignore, for a moment, the glaring constitutional concerns, which make the stop-and-frisk debate pale in comparison: virtual strip-searching, evasion of probable cause, potential racial profiling. Organizations like the American Civil Liberties Union are all over those, even though their opposition probably won't make a difference. We're scared of both terrorism and crime, even as the risks decrease; and when we're scared, we're willing to give up all sorts of freedoms to assuage our fears. Often, the courts go along.

A more pressing question is the effectiveness of technologies that are supposed to make us safer. These include the NYPD's Domain Awareness System, developed by Microsoft, which aims to integrate massive quantities of data to alert cops when a crime may be taking place. Other innovations are surely in the pipeline, all promising to make the city safer. But are we being sold a bill of goods?

For example, press reports make the gun-detection machine look good. We see images from the camera that pretty clearly show a gun outlined under someone's clothing. From that, we can imagine how this technology can spot gun-toting criminals as they enter government buildings or terrorize neighborhoods. Given the right inputs, we naturally construct these stories in our heads. The technology seems like a good idea, we conclude.

The reality is that we reach these conclusions much in the same way we decide that, say, drinking Mountain Dew makes you look cool. These are, after all, the products of for-profit companies, pushed by vendors looking to make sales. As such, they're marketed no less aggressively than soda pop and deodorant. Those images of criminals with concealed weapons were carefully created both to demonstrate maximum effectiveness and to push our fear buttons. These companies deliberately craft stories of their effectiveness, both through advertising and through placement on television and in movies, where police are often shown using high-powered tools to catch high-value targets with minimum complication.

The truth is that many of these technologies are nowhere near as reliable as claimed. They end up costing us gazillions of dollars and open the door for significant abuse. Of course, the vendors hope that by the time we realize this, they're too embedded in our security culture to be removed.

The current poster child for this sort of morass is the airport full-body scanner. Rushed into airports after the underwear bomber Umar Farouk Abdulmutallab nearly blew up a Northwest Airlines flight in 2009, they made us feel better, even though they don't work very well and, ironically, wouldn't have caught Abdulmutallab with his underwear bomb. Both the Transportation Security Administration and vendors repeatedly lied about their effectiveness, whether they stored images, and how safe they were. In January, finally, backscatter X-ray scanners were removed from airports because the company that made them couldn't sufficiently blur the images so that they no longer showed travelers naked. Now, only millimeter-wave full-body scanners remain.

Another example is closed-circuit television (CCTV) cameras. These have been marketed as a technological solution to both crime and understaffed police and security organizations. London, for example, is rife with them, and New York has plenty of its own. To many, it seems apparent that they make us safer, despite cries of Big Brother. The problem is that in study after study, researchers have concluded that they don't.

Counterterrorist data mining and fusion centers: nowhere near as useful as those selling the technologies claimed. It's the same with DNA testing and fingerprint technologies: both are far less accurate than most people believe. Even torture has been oversold as a security system -- this time by a government instead of a company -- despite decades of evidence that it doesn't work and makes us all less safe.

It's not that these technologies are totally useless. It's that they're expensive, and none of them is a panacea. Maybe there's a use for a terahertz radar, and maybe the benefits of the technology are worth the costs. But we should not forget that there's a profit motive at work, too.


An edited version of this essay, without links, appeared in the New York Daily News.

EDITED TO ADD (2/13): IBM's version of this sort of massive-data policing system is being tested in Rio de Janeiro.

Posted on March 5, 2013 at 6:28 AM33 Comments

New Internet Porn Scam

I hadn't heard of this one before. In New Zealand, people viewing adult websites -- it's unclear whether these are honeypot sites, or malware that notices the site being viewed -- get a pop-up message claiming it's from the NZ Police and demanding payment of an instant fine for viewing illegal pornography.

EDITED TO ADD (2/12): There's a Japanese variant of this called "one-click fraud."

Posted on March 4, 2013 at 2:04 PM18 Comments

Getting Security Incentives Right

One of the problems with motivating proper security behavior within an organization is that the incentives are all wrong. It doesn't matter how much management tells employees that security is important; employees know when it really isn't -- when getting the job done cheaply and on schedule is much more important.

It seems to me that his co-workers understand the risks better than he does. They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren't serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That's what the company rewards, and that's what the company actually wants.

"Fire someone who breaks security procedure, quickly and publicly," I suggested to the presenter. "That'll increase security awareness faster than any of your posters or lectures or newsletters." If the risks are real, people will get it.

Similarly, there's supposedly an old Chinese proverb that goes "hang one, warn a thousand." Or, to put it another way, we're really good at risk management. And there's John Byng, whose execution gave rise to the Voltaire quote (originally in French): "in this country, it is good to kill an admiral from time to time, in order to encourage the others."

I thought of all this when I read about the new security procedures surrounding the upcoming papal election:

According to the order, which the Vatican made available in English on Monday afternoon, those few who are allowed into the secret vote to act as aides will be required to take an oath of secrecy.

"I will observe absolute and perpetual secrecy with all who are not part of the College of Cardinal electors concerning all matters directly or indirectly related to the ballots cast and their scrutiny for the election of the Supreme Pontiff," the oath reads.

"I declare that I take this oath fully aware that an infraction thereof will make me subject to the penalty of excommunication 'latae sententiae', which is reserved to the Apostolic See," it continues.

Excommunication is like being fired, only it lasts for eternity.

I'm not optimistic about the College of Cardinals being able to maintain absolute secrecy during the election, because electronic devices have become so small, and electronic communications so ubiquitous. Unless someone wins on one of the first ballots -- a 2/3 majority is required to elect the next pope, so if the various factions entrench they could be at it for a while -- there are going to be leaks. Perhaps accidental, perhaps strategic: these cardinals are fallible men, after all.

Posted on March 4, 2013 at 6:38 AM42 Comments

Phishing Has Gotten Very Good

This isn't phishing; it's not even spear phishing. It's laser-guided precision phishing:

One of the leaked diplomatic cables referred to one attack via email on US officials who were on a trip in Copenhagen to debate issues surrounding climate change.

"The message had the subject line 'China and Climate Change' and was spoofed to appear as if it were from a legitimate international economics columnist at the National Journal."

The cable continued: "In addition, the body of the email contained comments designed to appeal to the recipients as it was specifically aligned with their job function."

[...]

One example which demonstrates the group's approach is that of Coca-Cola, which was later revealed in media reports to have been the victim of a hack.

And not just any hack, it was a hack which industry experts said may have derailed an acquisition effort to the tune of $2.4bn (£1.5bn).

The US giant was looking into taking over China Huiyuan Juice Group, China's largest soft drinks company -- but a hack, believed to be by the Comment Group, left Coca-Cola exposed.

How was it done? Bloomberg reported that one executive -- deputy president of Coca-Cola's Pacific Group, Paul Etchells -- opened an email he thought was from the company's chief executive.

In it, a link which when clicked downloaded malware onto Mr Etchells' machine. Once inside, hackers were able to snoop about the company's activity for over a month.

Also, a new technique:

"It is known as waterholing," he explained. "Which basically involves trying to second guess where the employees of the business might actually go on the web.

"If you can compromise a website they're likely to go to, hide some malware on there, then whether someone goes to that site, that malware will then install on that person's system."

These sites could be anything from the website of an employee's child's school -- or even a page showing league tables for the corporate five-a-side football team.

I wrote this over a decade ago: "Only amateurs attack machines; professionals target people." And the professionals are getting better and better.

This is the problem. Against a sufficiently skilled, funded, and motivated adversary, no network is secure. Period. Attack is much easier than defense, and the reason we've been doing so well for so long is that most attackers are content to attack the most insecure networks and leave the rest alone.

It's a matter of motive. To a criminal, all files of credit card numbers are equally good, so your security depends in part on how much better or worse you are than those around you. If the attacker wants you specifically -- as in the examples above -- relative security is irrelevant. What matters is whether or not your security is better than the attackers' skill. And so often it's not.

I am reminded of this great quote from former NSA Information Assurance Director Brian Snow: "Your cyber systems continue to function and serve you not due to the expertise of your security staff but solely due to the sufferance of your opponents."

Actually, that whole essay is worth reading. It says much of what I've been saying, but it's nice to read someone else say it.

One of the often unspoken truths of security is that large areas of it are currently unsolved problems. We don't know how to write large applications securely yet. We don't know how to secure entire organizations with reasonable, cost-effective measures yet. The honest answer to almost any security question is: "it's complicated!" But there is no shortage of gung-ho salesmen in expensive suits peddling their security wares, and no shortage of clients willing to throw money at the problem (because doing something must be better than doing nothing, right?).

Wrong. Peddling hard in the wrong direction doesn't help just because you want it to.

For a long time, anti-virus vendors sold the idea that using their tools would keep users safe. Some pointed out that anti-virus software could be described as "necessary but not sufficient" at best, and horribly ineffective snake oil at worst, but AV vendors have big PR budgets and customers need to feel like they are doing something. Examining the AV industry is a good proxy for the security industry in general. Good arguments can be made for the industry, and indulging it certainly seems safer than not, but the truth is that none of the solutions on offer from the AV industry give us any hope against a determined targeted attack. While the AV companies all gave talks around the world dissecting the recent publicly discovered attacks like Stuxnet or Flame, most glossed over the simple fact that none of them discovered the virus till after it had done its work. Finally, after many repeated public spankings, this truth is beginning to emerge, and even die-hards like the charismatic chief research officer of anti-virus firm F-Secure (Mikko Hypponen) have to concede their utility (or lack thereof). In a recent post he wrote: "What this means is that all of us had missed detecting this malware for two years, or more. That's a spectacular failure for our company, and for the antivirus industry in general... This story does not end with Flame. It's highly likely there are other similar attacks already underway that we haven't detected yet. Put simply, attacks like these work... Flame was a failure for the anti-virus industry. We really should have been able to do better. But we didn't. We were out of our league, in our own game."

Posted on March 1, 2013 at 5:05 AM48 Comments
