Schneier on Security
A blog covering security and security technology.
November 2013 Archives
This squid-like worm -- Teuthidodrilus samae -- is new to science.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Stuxnet is not really one weapon, but two. The vast majority of the attention has been paid to Stuxnet's smaller and simpler attack routine -- the one that changes the speeds of the rotors in a centrifuge, which is used to enrich uranium. But the second and "forgotten" routine is about an order of magnitude more complex and stealthy. It qualifies as a nightmare for those who understand industrial control system security. And strangely, this more sophisticated attack came first. The simpler, more familiar routine followed only years later -- and was discovered in comparatively short order.
Stuxnet also provided a useful blueprint to future attackers by highlighting the royal road to infiltration of hard targets. Rather than trying to infiltrate directly by crawling through 15 firewalls, three data diodes, and an intrusion detection system, the attackers acted indirectly by infecting soft targets with legitimate access to ground zero: contractors. However seriously these contractors took their cybersecurity, it certainly was not on par with the protections at the Natanz fuel-enrichment facility. Getting the malware on the contractors' mobile devices and USB sticks proved good enough, as sooner or later they physically carried those on-site and connected them to Natanz's most critical systems, unchallenged by any guards.
Safeplug is an easy-to-use Tor appliance. I like that it can also act as a Tor exit node.
EDITED TO ADD: I know nothing about this appliance, nor do I endorse it. In fact, I would like it to be independently audited before we start trusting it. But it's a fascinating proof-of-concept of encapsulating security so that normal Internet users can use it.
This is a long article about the FBI's Data Intercept Technology Unit (DITU), which is basically its own internal NSA.
It carries out its own signals intelligence operations and is trying to collect huge amounts of email and Internet data from U.S. companies -- an operation that the NSA once conducted, was reprimanded for, and says it abandoned.
There is an enormous amount of information in the article, which exposes yet another piece of the vast US government surveillance infrastructure. It's good to read that "at least two" companies are fighting at least a part of this. Any legislation aimed at restoring security and trust in US Internet companies needs to address the whole problem, and not just a piece of it.
This story should get more publicity than it has.
Google recently announced that it would start including individual users' names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached -- without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.
These changes come on the heels of Google's move to explore replacing tracking cookies with something that users have even less control over. Microsoft is doing something similar by developing its own tracking technology.
It shouldn't come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before.
If these features don't sound particularly beneficial to you, it's because you're not the customer of any of these companies. You're the product, and you're being improved for their actual customers: their advertisers.
This is nothing new. For years, these sites and others have systematically improved their "product" by reducing user privacy. This excellent infographic, for example, illustrates how Facebook has done so over the years.
The "Do Not Track" law serves as a sterling example of how bad things are. When it was proposed, it was supposed to give users the right to demand that Internet companies not track them. Internet companies fought hard against the law, and when it was passed, they fought to ensure that it didn't have any benefit to users. Right now, complying is entirely voluntary, meaning that no Internet company has to follow the law. If a company does, because it wants the PR benefit of seeming to take user privacy seriously, it can still track its users.
Really: if you tell a "Do Not Track"-enabled company that you don't want to be tracked, it will stop showing you personalized ads. But your activity will be tracked -- and your personal information collected, sold and used -- just like everyone else's. It's best to think of it as a "track me in secret" law.
Of course, people don't think of it that way. Most people aren't fully aware of how much of their data is collected by these sites. And, as the "Do Not Track" story illustrates, Internet companies are doing their best to keep it that way.
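Mechanically, "Do Not Track" is nothing more than an HTTP request header that a site may read or ignore at will. Here is a minimal sketch (the function and field names are mine, purely for illustration) of how a "compliant" site can honor the letter of the signal while still tracking the visitor:

```python
def handle_request(headers):
    """Illustrative only: how a site might (voluntarily) read the DNT header.

    Honoring "DNT: 1" is not enforced by any technical mechanism -- a
    server is free to log and profile the visitor regardless.
    """
    if headers.get("DNT") == "1":
        # A "compliant" site might skip personalized ads here...
        personalize_ads = False
    else:
        personalize_ads = True
    # ...but nothing stops it from recording the visit either way.
    log_visit = True
    return personalize_ads, log_visit
```

The asymmetry is the whole point: the header only governs what the user *sees* (personalized ads), not what the server *collects*.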
The result is a world where our most intimate personal details are collected and stored. I used to say that Google has a more intimate picture of what I'm thinking of than my wife does. But that's not far enough: Google has a more intimate picture than I do. The company knows exactly what I am thinking about, how much I am thinking about it, and when I stop thinking about it: all from my Google searches. And it remembers all of that forever.
As the Edward Snowden revelations continue to expose the full extent of the National Security Agency's eavesdropping on the Internet, it has become increasingly obvious how much of that has been enabled by the corporate world's existing eavesdropping on the Internet.
The public/private surveillance partnership is fraying, but it's largely alive and well. The NSA didn't build its eavesdropping system from scratch; it got itself a copy of what the corporate world was already collecting.
There are a lot of reasons why Internet surveillance is so prevalent and pervasive.
One, users like free things, and don't realize how much value they're giving away to get it. We know that "free" is a special price that confuses people's thinking.
Google's 2013 third quarter profits were nearly $3 billion; that profit is the difference between how much our privacy is worth and the cost of the services we receive in exchange for it.
Two, Internet companies deliberately make privacy not salient. When you log onto Facebook, you don't think about how much personal information you're revealing to the company; you're chatting with your friends. When you wake up in the morning, you don't think about how you're going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.
And three, the Internet's winner-takes-all market means that privacy-preserving alternatives have trouble getting off the ground. How many of you know that there is a Google alternative called DuckDuckGo that doesn't track you? Or that you can use cut-out sites to anonymize your Google queries? I have opted out of Facebook, and I know it affects my social life.
There are two types of changes that need to happen in order to fix this. First, there's the market change. We need to become actual customers of these sites so we can use purchasing power to force them to take our privacy seriously. But that's not enough. Because of the market failures surrounding privacy, a second change is needed. We need government regulations that protect our privacy by limiting what these sites can do with our data.
Surveillance is the business model of the Internet -- Al Gore recently called it a "stalker economy." All major websites run on advertising, and the more personal and targeted that advertising is, the more revenue the site gets for it. As long as we users remain the product, there is minimal incentive for these companies to provide any real privacy.
This essay previously appeared on CNN.com.
Neat photo. Video, too.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
I just did an AMA on Reddit.
Renesys is reporting that Internet traffic is being manipulatively rerouted, presumably for eavesdropping purposes. The attacks exploit flaws in the Border Gateway Protocol (BGP). Ars Technica has a good article explaining the details.
The odds that the NSA is not doing this sort of thing are basically zero, but I'm sure that their activities are going to be harder to discover.
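One way researchers like Renesys spot these reroutes is by watching for AS paths that suddenly include networks never seen on the route before. This toy heuristic (my own simplification, not Renesys's actual methodology) shows the idea:

```python
def path_anomaly(baseline_paths, observed_path):
    """Toy BGP monitoring heuristic: flag a route whose AS path
    contains an AS number never seen in any baseline path for the
    same prefix -- a crude sign of a possible man-in-the-middle
    reroute. Real detection also checks path geometry and latency.
    """
    known_ases = {asn for path in baseline_paths for asn in path}
    return [asn for asn in observed_path if asn not in known_ases]

# Baseline: routes to this prefix normally traverse ASes 701 or 3356.
baseline = [[64500, 701, 15169], [64500, 3356, 15169]]

# Observed: traffic suddenly detours through the unfamiliar AS 64999.
suspicious = path_anomaly(baseline, [64500, 64999, 701, 15169])
```

An empty result means nothing unusual; a non-empty one means traffic is transiting a network it never has before, which merits a closer look.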
The tips are more psychological than security-related.
EDITED TO ADD (12/13): A rebuttal and discussion.
Nicholas Weaver has a great essay explaining how the NSA's QUANTUM packet injection system works, what we know it does, what else it can possibly do, and how to defend against it. Remember that while QUANTUM is an NSA program, other countries engage in these sorts of attacks as well. By securing the Internet against QUANTUM, we protect ourselves against any government or criminal use of these sorts of techniques.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
My talk at the IETF Vancouver meeting on NSA and surveillance. I'm the first speaker after the administrivia.
The US government sets up secure tents for the president and other officials to deal with classified material while traveling abroad.
Even when Obama travels to allied nations, aides quickly set up the security tent -- which has opaque sides and noise-making devices inside -- in a room near his hotel suite. When the president needs to read a classified document or have a sensitive conversation, he ducks into the tent to shield himself from secret video cameras and listening devices.
The public/private surveillance partnership between the NSA and corporate data collectors is starting to fray. The reason is sunlight. The publicity resulting from the Snowden documents has made companies think twice before allowing the NSA access to their users' and customers' data.
Pre-Snowden, there was no downside to cooperating with the NSA. If the NSA asked you for copies of all your Internet traffic, or to put backdoors into your security software, you could assume that your cooperation would forever remain secret. To be fair, not every corporation cooperated willingly. Some fought in court. But it seems that a lot of them, telcos and backbone providers especially, were happy to give the NSA unfettered access to everything. Post-Snowden, this is changing. Now that many companies' cooperation has become public, they're facing a PR backlash from customers and users who are upset that their data is flowing to the NSA. And this is costing those companies business.
How much is unclear. In July, right after the PRISM revelations, the Cloud Security Alliance reported that US cloud companies could lose $35 billion over the next three years, mostly due to losses of foreign sales. Surely that number has increased as outrage over NSA spying continues to build in Europe and elsewhere. There is no similar report for software sales, although I have attended private meetings where several large US software companies complained about the loss of foreign sales. On the hardware side, IBM is losing business in China. The US telecom companies are also suffering: AT&T is losing business worldwide.
This is the new reality. The rules of secrecy are different, and companies have to assume that their responses to NSA data demands will become public. This means there is now a significant cost to cooperating, and a corresponding benefit to fighting.
Over the past few months, more companies have woken up to the fact that the NSA is basically treating them as adversaries, and are responding as such. In mid-October, it became public that the NSA was collecting e-mail address books and buddy lists from Internet users logging into different service providers. Yahoo, which didn't encrypt those user connections by default, allowed the NSA to collect much more of its data than Google, which did. That same day, Yahoo announced that it would implement SSL encryption by default for all of its users. Two weeks later, when it became public that the NSA was collecting data on Google users by eavesdropping on the company's trunk connections between its data centers, Google announced that it would encrypt those connections.
We recently learned that Yahoo fought a government order to turn over data. Lavabit fought its order as well. Apple is now tweaking the government. And we think better of those companies because of it.
Now Lavabit, which closed down its e-mail service rather than comply with the NSA's request for the master keys that would compromise all of its customers, has teamed with Silent Circle to develop a secure e-mail standard that is resistant to these kinds of tactics.
The Snowden documents made it clear how much the NSA relies on corporations to eavesdrop on the Internet. The NSA didn't build a massive Internet eavesdropping system from scratch. It noticed that the corporate world was already eavesdropping on every Internet user -- surveillance is the business model of the Internet, after all -- and simply got copies for itself.
Now, that secret ecosystem is breaking down. Supreme Court Justice Louis Brandeis wrote about transparency, saying "Sunlight is said to be the best of disinfectants." In this case, it seems to be working.
These developments will only help security. Remember that while Edward Snowden has given us a window into the NSA's activities, these sorts of tactics are probably also used by other intelligence services around the world. And today's secret NSA programs become tomorrow's PhD theses, and the next day's criminal hacker tools. It's impossible to build an Internet where the good guys can eavesdrop, and the bad guys cannot. We have a choice between an Internet that is vulnerable to all attackers, or an Internet that is safe from all attackers. And a safe and secure Internet is in everyone's best interests, including the US's.
This essay previously appeared on TheAtlantic.com.
I think this is a good move on Microsoft's part:
Microsoft is recommending that customers and CAs stop using SHA-1 for cryptographic applications, including use in SSL/TLS and code signing. Microsoft Security Advisory 2880823 has been released along with the policy announcement that Microsoft will stop recognizing the validity of SHA-1-based certificates after 2016.
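The migration target is the SHA-2 family, typically SHA-256. The practical difference is easy to see with Python's standard hashlib module (a generic illustration, not anything specific to Microsoft's advisory):

```python
import hashlib

msg = b"hello"

# SHA-1 produces a 160-bit digest; by 2013 collision attacks were
# considered feasible in principle, hence the deprecation.
sha1_digest = hashlib.sha1(msg).hexdigest()

# SHA-256, from the SHA-2 family, is the usual migration target.
sha256_digest = hashlib.sha256(msg).hexdigest()

assert len(sha1_digest) == 40    # 160 bits rendered as hex
assert len(sha256_digest) == 64  # 256 bits rendered as hex
```

The hard part of the migration isn't the code change; it's the long tail of deployed certificates and clients that still expect SHA-1, which is why the cutoff was set years out.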
Der Spiegel is reporting that the GCHQ used QUANTUMINSERT to direct users to fake LinkedIn and Slashdot pages run by -- this code name is not in the article -- FOXACID servers. There's not a lot technically new in the article, but we do get some information about popularity and jargon.
According to other secret documents, Quantum is an extremely sophisticated exploitation tool developed by the NSA and comes in various versions. The Quantum Insert method used with Belgacom is especially popular among British and US spies. It was also used by GCHQ to infiltrate the computer network of OPEC's Vienna headquarters.
Slashdot has reacted to the story.
This article argues that online gambling is a strategic national threat because terrorists could use it to launder money.
The Harper demonstration showed the technology and techniques that terror and crime organizations could use to operate untraceable money laundering built on a highly liquid legalized online poker industry -- just the environment that will result from the spread of poker online.
I'm impressed with the massive fear resonating in this essay.
This talk by Dan Geer explains the NSA mindset of "collect everything":
I previously worked for a data protection company. Our product was, and I believe still is, the most thorough on the market. By "thorough" I mean the dictionary definition, "careful about doing something in an accurate and exact way." To this end, installing our product instrumented every system call on the target machine. Data did not and could not move in any sense of the word "move" without detection. Every data operation was caught and monitored. It was total surveillance data protection. Its customers were companies that don't accept half-measures. What made this product stick out was that very thoroughness, but here is the point: Unless you fully instrument your data handling, it is not possible for you to say what did not happen. With total surveillance, and total surveillance alone, it is possible to treat the absence of evidence as the evidence of absence. Only when you know everything that *did* happen with your data can you say what did *not* happen with your data.
The whole essay is well worth reading.
There's a story that Edward Snowden successfully socially engineered other NSA employees into giving him their passwords.
In the Information Age, it's easier than ever to steal and publish data. Corporations and governments have to adjust to their secrets being exposed, regularly.
When massive amounts of government documents are leaked, journalists sift through them to determine which pieces of information are newsworthy, and confer with government agencies over what needs to be redacted.
Managing this reality is going to require that governments actively engage with members of the press who receive leaked secrets, helping them secure those secrets -- even while being unable to prevent them from publishing. It might seem abhorrent to help those who are seeking to bring your secrets to light, but it's the best way to ensure that the things that truly need to be secret remain secret, even as everything else becomes public.
The WikiLeaks cables serve as an excellent example of how a government should not deal with massive leaks of classified information.
WikiLeaks has said it asked US authorities for help in determining what should be redacted before publication of documents, although some government officials have challenged that statement. WikiLeaks' media partners did redact many documents, but eventually all 250,000 unredacted cables were released to the world as a result of a mistake.
Fast-forward to today, and we have an even bigger trove of classified documents. What Edward Snowden took -- "exfiltrated" is the National Security Agency term -- dwarfs the State Department cables, and contains considerably more important secrets. But again, the US government is doing nothing to prevent a massive data dump.
The government engages with the press on individual stories. The Guardian, the Washington Post, and the New York Times are all redacting the original Snowden documents based on discussions with the government. This isn't new. The US press regularly consults with the government before publishing something that might be damaging. In 2006, the New York Times consulted with both the NSA and the Bush administration before publishing Mark Klein's whistle-blowing about the NSA's eavesdropping on AT&T trunk circuits. In all these cases, the goal is to minimize actual harm to US security while ensuring the press can still report stories in the public interest, even if the government doesn't want it to.
In today's world of reduced secrecy, whistleblowing as civil disobedience, and massive document exfiltrations, negotiations over individual stories aren't enough. The government needs to develop a protocol to actively help news organizations expose their secrets safely and responsibly.
Here's what should have happened as soon as Snowden's whistle-blowing became public. The government should have told the reporters and publications with the classified documents something like this: "OK, you have them. We know that we can't undo the leak. But please let us help. Let us help you secure the documents as you write your stories, and securely dispose of the documents when you're done."
The people who have access to the Snowden documents say they don't want them to be made public in their raw form or to get in the hands of rival governments. But accidents happen, and reporters are not trained in military secrecy practices.
Copies of some of the Snowden documents are being circulated to journalists and others. With each copy, each person, each day, there's a greater chance that, once again, someone will make a mistake and some -- or all -- of the raw documents will appear on the Internet. A formal system of working with whistle-blowers could prevent that.
I'm sure the suggestion sounds odious to a government that is actively engaging in a war on whistle-blowers, and that views Snowden as a criminal and the reporters writing these stories as "helping the terrorists." But it makes sense. Harvard law professor Jonathan Zittrain compares this to plea bargaining.
The police regularly negotiate lenient sentences or probation for confessed criminals in order to convict more important criminals. They make deals with all sorts of unsavory people, giving them benefits they don't deserve, because the result is a greater good.
In the Snowden case, an agreement would safeguard the most important of NSA's secrets from other nations' intelligence agencies. It would help ensure that the truly secret information not be exposed. It would protect US interests.
Why would reporters agree to this? Two reasons. One, they actually do want these documents secured while they look for stories to publish. And two, it would be a public demonstration of that desire.
Why wouldn't the government just collect all the documents under the pretense of securing them and then delete them? For the same reason they don't renege on plea bargains: No one would trust them next time. And, of course, because smart reporters will probably keep encrypted backups under their own control.
We're nowhere near the point where this system could be put into practice, but it's worth thinking about how it could work. The government would need to establish a semi-independent group, called, say, a Leak Management unit, which could act as an intermediary. Since it would be isolated from the agencies that were the source of the leak, its officials would be less vested and -- this is important -- less angry over the leak. Over time, it would build a reputation, develop protocols that reporters could rely on. Leaks will be more common in the future, but they'll still be rare. Expecting each agency to develop expertise in this process is unrealistic.
If there were sufficient trust between the press and the government, this could work. And everyone would benefit.
This essay previously appeared on CNN.com.
I like this idea of giving each individual login attempt a risk score, based on the characteristics of the attempt:
The risk score estimates the risk associated with a log-in attempt based on a user's typical log-in and usage profile, taking into account their device and geographic location, the system they're trying to access, the time of day they typically log in, their device's IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.
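A scoring system like this is essentially a weighted sum of deviations from the user's profile. The following sketch is entirely hypothetical -- the field names, weights, and thresholds are invented for illustration, not taken from any real product:

```python
def login_risk_score(attempt, profile):
    """Hypothetical risk scoring: each deviation from the user's
    typical login profile adds a (made-up) weight to the score."""
    score = 0
    if attempt["device_id"] not in profile["known_devices"]:
        score += 30
    if attempt["country"] != profile["usual_country"]:
        score += 25
    if attempt["ip"] not in profile["known_ips"]:
        score += 15
    # Outside the user's usual login window (e.g. 8:00-18:00)?
    start, end = profile["usual_hours"]
    if not (start <= attempt["hour"] <= end):
        score += 20
    return score

profile = {
    "known_devices": {"laptop-1"},
    "usual_country": "US",
    "known_ips": {"203.0.113.7"},
    "usual_hours": (8, 18),
}

# Familiar laptop, office hours, known IP: low risk.
low = login_risk_score(
    {"device_id": "laptop-1", "country": "US",
     "ip": "203.0.113.7", "hour": 10}, profile)

# Unknown tablet at night from abroad: elevated risk.
high = login_risk_score(
    {"device_id": "tablet-9", "country": "ID",
     "ip": "198.51.100.4", "hour": 2}, profile)
```

A real system would use continuous features (typing cadence, geovelocity) and learned weights rather than hand-picked constants, but the shape is the same: the score gates whether to allow, challenge, or block the attempt.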
The wings of the Goniurellia tridens fruit fly have images of an ant on them, to deceive predators: "When threatened, the fly flashes its wings to give the appearance of ants walking back and forth. The predator gets confused and the fly zips off."
Click on the link to see the photo.
This is well-written and very good.
This is interesting reading, but I'm left wanting more. What are the lessons here? How can we do this better next time? Clearly we won't be able to anticipate bombings; even Israel can't do that. We have to get better at responding.
Several years after 9/11, I conducted training with a military bomb unit charged with guarding Washington, DC. Our final exam was a nightmare scenario -- a homemade nuke at the Super Bowl. Our job was to defuse it while the fans were still in the stands, there being no way to quickly and safely clear out 80,000 people. That scenario made two fundamental assumptions that are no longer valid: that there would be one large device and that we would find it before it detonated.
This New York Times story on the NSA is very good, and contains lots of little tidbits of new information gleaned from the Snowden documents.
The agency's Dishfire database -- nothing happens without a code word at the N.S.A. -- stores years of text messages from around the world, just in case. Its Tracfin collection accumulates gigabytes of credit card purchases. The fellow pretending to send a text message at an Internet cafe in Jordan may be using an N.S.A. technique code-named Polarbreeze to tap into nearby computers. The Russian businessman who is socially active on the web might just become food for Snacks, the acronym-mad agency's Social Network Analysis Collaboration Knowledge Services, which figures out the personnel hierarchies of organizations from texts.
EDITED TO ADD (11/5): This Guardian story is related. It looks like both the New York Times and the Guardian wrote separate stories about the same source material.
EDITED TO ADD (11/5): New York Times reporter Scott Shane gave a 20-minute interview on Democracy Now on the NSA and his reporting.
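Inferring hierarchy from message traffic, as the "Snacks" excerpt describes, is standard social network analysis. This toy sketch (my own illustration, with invented names; it has nothing to do with the NSA's actual methods) ranks people by how many distinct contacts message them, a crude proxy for seniority:

```python
def infer_hierarchy(messages):
    """Rank people by the number of distinct senders who message
    them. In many organizations, communication converges on the
    people at the top, so high inbound diversity hints at rank."""
    inbound = {}
    for sender, recipient in messages:
        inbound.setdefault(recipient, set()).add(sender)
    return sorted(inbound, key=lambda p: len(inbound[p]), reverse=True)

# Three different people all message "boss"; only one messages "ben".
msgs = [("ana", "boss"), ("ben", "boss"), ("cal", "boss"), ("ana", "ben")]
ranking = infer_hierarchy(msgs)
```

Real systems use far richer signals (reply latency, who initiates, centrality measures over the whole graph), but even metadata this thin exposes structure, which is the point of the excerpt.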
Good story of badBIOS, a really nasty piece of malware. The weirdest part is how it uses ultrasonic sound to jump air gaps.
Ruiu said he arrived at the theory about badBIOS's high-frequency networking capability after observing encrypted data packets being sent to and from an infected machine that had no obvious network connection with -- but was in close proximity to -- another badBIOS-infected computer. The packets were transmitted even when one of the machines had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine's power cord to rule out the possibility it was receiving signals over the electrical connection. Even then, forensic tools showed the packets continued to flow to the air-gapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the air-gapped machine, the packets suddenly stopped.
I'm not sure what to make of this. When I first read it, I thought it was a hoax. But enough others are taking it seriously that I think it's a real story. I don't know whether the facts are real, and I haven't seen anything about what this malware actually does.
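For what it's worth, the physics is at least plausible: ordinary speakers and microphones can handle near-ultrasonic frequencies that most adults can't hear. A covert acoustic channel would amount to modulating bits onto such tones -- for instance, simple binary frequency-shift keying, sketched below (my own illustration of the general technique, not a reconstruction of badBIOS):

```python
import math

SAMPLE_RATE = 44100
F0, F1 = 18000, 19000   # near-ultrasonic: inaudible to most adults,
                        # but within range of commodity speakers/mics
BIT_SECONDS = 0.05      # 20 bits/sec -- slow, but enough for C&C

def fsk_modulate(bits):
    """Binary frequency-shift keying: emit one tone per bit. A real
    covert channel would also need synchronization, framing, and
    error correction on the receiving microphone's side."""
    samples = []
    for bit in bits:
        freq = F1 if bit else F0
        for n in range(int(SAMPLE_RATE * BIT_SECONDS)):
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples

tone = fsk_modulate([1, 0, 1])
```

Whether badBIOS actually did anything like this is exactly what's in dispute; the sketch only shows that the claimed channel isn't physically absurd.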
EDITED TO ADD (11/14): A claimed debunking
Make your own 8-foot giant squid pillow.
Under a top secret program initiated by the Bush Administration after the Sept. 11 attacks, the [name of agency (FBI, CIA, NSA, etc.)] have been gathering a vast database of [type of records] involving United States citizens.
We've changed administrations -- we've changed political parties -- but nothing has changed.
In Spring Semester, I'm running a reading group -- which seems to be a formal variant of a study group -- at Harvard Law School on "Security, Power, and the Internet." I would like a good mix of people, so non-law students and non-Harvard students are both welcome to sign up.
This article talks about applications in retail, but the possibilities are endless.
Every smartphone these days comes equipped with a WiFi card. When the card is on and looking for networks to join, it's detectable by local routers. In your home, the router connects to your device, and then, voilà, you have the Internet on your phone. But in a retail environment, other in-store equipment can pick up your WiFi card, learn your device's unique ID number and use it to keep tabs on that device over time as you move through the store.
Basically, the system is using the MAC address to identify individual devices. Another article on the system is here.
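Once the MAC address is in hand, tracking is trivial: the address is a stable, globally unique key. This sketch (names and salt are invented for illustration; it is not any vendor's actual system) shows how a store could link sightings of the same phone, even behind the "we hash the address" privacy fig leaf:

```python
import hashlib

def device_key(mac_address, store_salt):
    """Turn a MAC address from a WiFi probe request into a persistent
    per-store identifier. Salted hashing obscures the raw address,
    but the key is still stable -- so the device can be followed
    from zone to zone and visit to visit."""
    raw = (store_salt + mac_address.lower()).encode()
    return hashlib.sha256(raw).hexdigest()[:16]

sightings = {}  # device key -> list of (timestamp, zone)

def record_sighting(mac, when, zone, salt="store-42"):
    key = device_key(mac, salt)
    sightings.setdefault(key, []).append((when, zone))
    return key

# The same phone seen twice yields the same key, linking the visits.
k1 = record_sighting("AA:BB:CC:DD:EE:FF", "09:01", "entrance")
k2 = record_sighting("aa:bb:cc:dd:ee:ff", "09:14", "electronics")
```

This is why later mobile operating systems began randomizing MAC addresses in probe requests; in 2013, the address your phone broadcast was the real one.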
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.