Blog: November 2013 Archives

More on Stuxnet

Ralph Langner has written the definitive analysis of Stuxnet: a short, popular version and a long, technical version.

Stuxnet is not really one weapon, but two. The vast majority of the attention has been paid to Stuxnet’s smaller and simpler attack routine—the one that changes the speeds of the rotors in a centrifuge, which is used to enrich uranium. But the second and “forgotten” routine is about an order of magnitude more complex and stealthy. It qualifies as a nightmare for those who understand industrial control system security. And strangely, this more sophisticated attack came first. The simpler, more familiar routine followed only years later—and was discovered in comparatively short order.

Also:

Stuxnet also provided a useful blueprint to future attackers by highlighting the royal road to infiltration of hard targets. Rather than trying to infiltrate directly by crawling through 15 firewalls, three data diodes, and an intrusion detection system, the attackers acted indirectly by infecting soft targets with legitimate access to ground zero: contractors. However seriously these contractors took their cybersecurity, it certainly was not on par with the protections at the Natanz fuel-enrichment facility. Getting the malware on the contractors’ mobile devices and USB sticks proved good enough, as sooner or later they physically carried those on-site and connected them to Natanz’s most critical systems, unchallenged by any guards.

Any follow-up attacker will explore this infiltration method when thinking about hitting hard targets. The sober reality is that at a global scale, pretty much every single industrial or military facility that uses industrial control systems at some scale is dependent on its network of contractors, many of which are very good at narrowly defined engineering tasks, but lousy at cybersecurity. While experts in industrial control system security had discussed the insider threat for many years, insiders who unwittingly helped deploy a cyberweapon had been completely off the radar. Until Stuxnet.

And while Stuxnet was clearly the work of a nation-state—requiring vast resources and considerable intelligence—future attacks on industrial control and other so-called “cyber-physical” systems may not be. Stuxnet was particularly costly because of the attackers’ self-imposed constraints. Damage was to be disguised as reliability problems. I estimate that well over 50 percent of Stuxnet’s development cost went into efforts to hide the attack, with the bulk of that cost dedicated to the overpressure attack, which represents the ultimate in disguise—at the cost of having to build a fully functional mockup IR-1 centrifuge cascade operating with real uranium hexafluoride. Stuxnet-inspired attackers will not necessarily place the same emphasis on disguise; they may want victims to know that they are under cyberattack and perhaps even want to publicly claim credit for it.

Related: earlier this month, Eugene Kaspersky said that Stuxnet also damaged a Russian nuclear power station and the International Space Station.

Posted on November 29, 2013 at 6:18 AM

Surveillance as a Business Model

Google recently announced that it would start including individual users’ names and photos in some ads. This means that if you rate some product positively, your friends may see ads for that product with your name and photo attached—without your knowledge or consent. Meanwhile, Facebook is eliminating a feature that allowed people to retain some portions of their anonymity on its website.

These changes come on the heels of Google’s move to explore replacing tracking cookies with something that users have even less control over. Microsoft is doing something similar by developing its own tracking technology.

More generally, lots of companies are evading the “Do Not Track” rules, which were meant to give users a say in whether companies track them. It turns out that the whole “Do Not Track” legislation has been a sham.

It shouldn’t come as a surprise that big technology companies are tracking us on the Internet even more aggressively than before.

If these features don’t sound particularly beneficial to you, it’s because you’re not the customer of any of these companies. You’re the product, and you’re being improved for their actual customers: their advertisers.

This is nothing new. For years, these sites and others have systematically improved their “product” by reducing user privacy. This excellent infographic, for example, illustrates how Facebook has done so over the years.

The “Do Not Track” law serves as a sterling example of how bad things are. When it was proposed, it was supposed to give users the right to demand that Internet companies not track them. Internet companies fought hard against the law, and when it was passed, they fought to ensure that it didn’t have any benefit to users. Right now, complying is entirely voluntary, meaning that no Internet company has to follow the law. If a company does, because it wants the PR benefit of seeming to take user privacy seriously, it can still track its users.

Really: if you tell a “Do Not Track”-enabled company that you don’t want to be tracked, it will stop showing you personalized ads. But your activity will be tracked—and your personal information collected, sold and used—just like everyone else’s. It’s best to think of it as a “track me in secret” law.
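
Mechanically, there is almost nothing to “Do Not Track”: a compliant browser adds a single advisory header to each request, and everything after that is up to the server. A minimal sketch in Python (the URL is a placeholder):

```python
import requests

# A browser with "Do Not Track" enabled adds this one header to every
# request. The header is the entire mechanism; honoring it is voluntary.
response = requests.get(
    "https://example.com/",        # placeholder URL
    headers={"DNT": "1"},          # 1 = "please do not track me"
)

# Nothing in the protocol forces the server to change its behavior: it can
# log the request, set tracking cookies, and sell the data anyway.
print(response.status_code)
print(response.headers.get("Set-Cookie"))  # tracking cookies may still arrive
```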

Of course, people don’t think of it that way. Most people aren’t fully aware of how much of their data is collected by these sites. And, as the “Do Not Track” story illustrates, Internet companies are doing their best to keep it that way.

The result is a world where our most intimate personal details are collected and stored. I used to say that Google has a more intimate picture of what I’m thinking about than my wife does. But that doesn’t go far enough: Google has a more intimate picture than I do. The company knows exactly what I am thinking about, how much I am thinking about it, and when I stop thinking about it: all from my Google searches. And it remembers all of that forever.

As the Edward Snowden revelations continue to expose the full extent of the National Security Agency’s eavesdropping on the Internet, it has become increasingly obvious how much of that has been enabled by the corporate world’s existing eavesdropping on the Internet.

The public/private surveillance partnership is fraying, but it’s largely alive and well. The NSA didn’t build its eavesdropping system from scratch; it got itself a copy of what the corporate world was already collecting.

There are a lot of reasons why Internet surveillance is so prevalent and pervasive.

One, users like free things, and don’t realize how much value they’re giving away to get them. We know that “free” is a special price that confuses people’s thinking.

Google’s 2013 third quarter profits were nearly $3 billion; that profit is the difference between how much our privacy is worth and the cost of the services we receive in exchange for it.

Two, Internet companies deliberately make privacy not salient. When you log onto Facebook, you don’t think about how much personal information you’re revealing to the company; you’re chatting with your friends. When you wake up in the morning, you don’t think about how you’re going to allow a bunch of companies to track you throughout the day; you just put your cell phone in your pocket.

And three, the Internet’s winner-takes-all market means that privacy-preserving alternatives have trouble getting off the ground. How many of you know that there is a Google alternative called DuckDuckGo that doesn’t track you? Or that you can use cut-out sites to anonymize your Google queries? I have opted out of Facebook, and I know it affects my social life.

There are two types of changes that need to happen in order to fix this. First, there’s the market change. We need to become actual customers of these sites so we can use purchasing power to force them to take our privacy seriously. But that’s not enough. Because of the market failures surrounding privacy, a second change is needed. We need government regulations that protect our privacy by limiting what these sites can do with our data.

Surveillance is the business model of the Internet—Al Gore recently called it a “stalker economy.” All major websites run on advertising, and the more personal and targeted that advertising is, the more revenue the site gets for it. As long as we users remain the product, there is minimal incentive for these companies to provide any real privacy.

This essay previously appeared on CNN.com.

Posted on November 25, 2013 at 6:53 AM

Rerouting Internet Traffic by Attacking BGP

Renesys is reporting that Internet traffic is being manipulatively rerouted, presumably for eavesdropping purposes. The attacks exploit flaws in the Border Gateway Protocol (BGP). Ars Technica has a good article explaining the details.
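
The underlying weakness is that BGP routers largely believe whatever routes their peers announce, so an attacker can announce someone else’s address space, or a more-specific slice of it, and attract that traffic. As a rough illustration of the kind of check a route monitor can run (this is a toy, not Renesys’s actual method; the prefixes and AS numbers are invented):

```python
import ipaddress

# Hypothetical ground truth: prefixes and the autonomous systems (ASes)
# expected to originate them.
EXPECTED_ORIGINS = {
    ipaddress.ip_network("203.0.113.0/24"): 64500,
    ipaddress.ip_network("198.51.100.0/24"): 64501,
}

def is_suspicious(announced_prefix: str, origin_as: int) -> bool:
    """Flag announcements that cover a known prefix but come from the wrong AS."""
    announced = ipaddress.ip_network(announced_prefix)
    for known, expected_as in EXPECTED_ORIGINS.items():
        # An overlapping announcement (including a more-specific hijack)
        # from an unexpected origin is a red flag worth investigating.
        if announced.overlaps(known) and origin_as != expected_as:
            return True
    return False

print(is_suspicious("203.0.113.0/25", 64666))  # True: more-specific, wrong origin
print(is_suspicious("203.0.113.0/24", 64500))  # False: legitimate announcement
```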

The odds that the NSA is not doing this sort of thing are basically zero, but I’m sure that its activities are going to be harder to discover.

Posted on November 21, 2013 at 1:42 PM

Explaining and Speculating About QUANTUM

Nicholas Weaver has a great essay explaining how the NSA’s QUANTUM packet injection system works, what we know it does, what else it can possibly do, and how to defend against it. Remember that while QUANTUM is an NSA program, other countries engage in these sorts of attacks as well. By securing the Internet against QUANTUM, we protect ourselves against any government or criminal use of these sorts of techniques.
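
For a sense of the mechanics: packet injection of this kind is a race. An on-path monitor that sees a client’s unencrypted request knows the TCP sequence numbers, so it can forge a response that appears to come from the real server, and it wins if its packet arrives first. A conceptual scapy sketch of the generic technique (not NSA code; all addresses, ports, and numbers are hypothetical):

```python
from scapy.all import IP, TCP, Raw, send

def inject_response(client_ip, server_ip, client_port, seq, ack, payload):
    """Forge a TCP segment that spoofs the server's address.

    The seq/ack values must match what the client expects; an on-path
    monitor knows them because it observed the client's request.
    """
    pkt = (
        IP(src=server_ip, dst=client_ip)        # spoofed source: the real server
        / TCP(sport=80, dport=client_port,
              seq=seq, ack=ack, flags="PA")     # mid-stream data segment
        / Raw(load=payload)                     # e.g., a redirect to an attack server
    )
    send(pkt, verbose=False)

# The defense is end-to-end encryption: an injected packet can spoof addresses
# and sequence numbers, but it cannot forge the TLS session, so the client
# rejects it. That is what "securing the Internet against QUANTUM" means.
```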

Posted on November 18, 2013 at 7:35 AM

Security Tents

The US government sets up secure tents for the president and other officials to deal with classified material while traveling abroad.

Even when Obama travels to allied nations, aides quickly set up the security tent—which has opaque sides and noise-making devices inside—in a room near his hotel suite. When the president needs to read a classified document or have a sensitive conversation, he ducks into the tent to shield himself from secret video cameras and listening devices.

[…]

Following a several-hundred-page classified manual, the rooms are lined with foil and soundproofed. An interior location, preferably with no windows, is recommended.

Posted on November 15, 2013 at 6:28 AM

A Fraying of the Public/Private Surveillance Partnership

The public/private surveillance partnership between the NSA and corporate data collectors is starting to fray. The reason is sunlight. The publicity resulting from the Snowden documents has made companies think twice before allowing the NSA access to their users’ and customers’ data.

Pre-Snowden, there was no downside to cooperating with the NSA. If the NSA asked you for copies of all your Internet traffic, or to put backdoors into your security software, you could assume that your cooperation would forever remain secret. To be fair, not every corporation cooperated willingly. Some fought in court. But it seems that a lot of them, telcos and backbone providers especially, were happy to give the NSA unfettered access to everything. Post-Snowden, this is changing. Now that many companies’ cooperation has become public, they’re facing a PR backlash from customers and users who are upset that their data is flowing to the NSA. And this is costing those companies business.

How much is unclear. In July, right after the PRISM revelations, the Cloud Security Alliance reported that US cloud companies could lose $35 billion over the next three years, mostly due to losses of foreign sales. Surely that number has increased as outrage over NSA spying continues to build in Europe and elsewhere. There is no similar report for software sales, although I have attended private meetings where several large US software companies complained about the loss of foreign sales. On the hardware side, IBM is losing business in China. The US telecom companies are also suffering: AT&T is losing business worldwide.

This is the new reality. The rules of secrecy are different, and companies have to assume that their responses to NSA data demands will become public. This means there is now a significant cost to cooperating, and a corresponding benefit to fighting.

Over the past few months, more companies have woken up to the fact that the NSA is basically treating them as adversaries, and are responding as such. In mid-October, it became public that the NSA was collecting e-mail address books and buddy lists from Internet users logging into different service providers. Yahoo, which didn’t encrypt those user connections by default, allowed the NSA to collect much more of its data than Google, which did. That same day, Yahoo announced that it would implement SSL encryption by default for all of its users. Two weeks later, when it became public that the NSA was collecting data on Google users by eavesdropping on the company’s trunk connections between its data centers, Google announced that it would encrypt those connections.

We recently learned that Yahoo fought a government order to turn over data. Lavabit fought its order as well. Apple is now tweaking the government. And we think better of those companies because of it.

Now Lavabit, which closed down its e-mail service rather than comply with the NSA’s request for the master keys that would compromise all of its customers, has teamed with Silent Circle to develop a secure e-mail standard that is resistant to these kinds of tactics.

The Snowden documents made it clear how much the NSA relies on corporations to eavesdrop on the Internet. The NSA didn’t build a massive Internet eavesdropping system from scratch. It noticed that the corporate world was already eavesdropping on every Internet user—surveillance is the business model of the Internet, after all—and simply got copies for itself.

Now, that secret ecosystem is breaking down. Supreme Court Justice Louis Brandeis wrote about transparency, saying “Sunlight is said to be the best of disinfectants.” In this case, it seems to be working.

These developments will only help security. Remember that while Edward Snowden has given us a window into the NSA’s activities, these sorts of tactics are probably also used by other intelligence services around the world. And today’s secret NSA programs become tomorrow’s PhD theses, and the next day’s criminal hacker tools. It’s impossible to build an Internet where the good guys can eavesdrop, and the bad guys cannot. We have a choice between an Internet that is vulnerable to all attackers, or an Internet that is safe from all attackers. And a safe and secure Internet is in everyone’s best interests, including the US’s.

This essay previously appeared on TheAtlantic.com.

Posted on November 14, 2013 at 6:21 AM

Microsoft Retiring SHA-1 in 2016

I think this is a good move on Microsoft’s part:

Microsoft is recommending that customers and CAs stop using SHA-1 for cryptographic applications, including use in SSL/TLS and code signing. Microsoft Security Advisory 2880823 has been released along with the policy announcement that Microsoft will stop recognizing the validity of SHA-1 based certificates after 2016.

More news.

SHA-1 isn’t broken yet in a practical sense, but the algorithm is barely hanging on and attacks will only get worse. Migrating away from SHA-1 is the smart thing to do.
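
If you want to check whether a certificate you operate is affected, the signature hash algorithm is recorded in the certificate itself. A quick sketch using the pyca/cryptography library (the file path is a placeholder):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes

# Load a PEM-encoded certificate from disk (placeholder path).
with open("server-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

if isinstance(cert.signature_hash_algorithm, hashes.SHA1):
    print("Certificate is signed with SHA-1; replace it before the 2016 cutoff.")
else:
    print(f"Signature hash algorithm: {cert.signature_hash_algorithm.name}")
```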

Posted on November 13, 2013 at 2:17 PM

Another QUANTUMINSERT Attack Example

Der Spiegel is reporting that GCHQ used QUANTUMINSERT to direct users to fake LinkedIn and Slashdot pages run by—this code name is not in the article—FOXACID servers. There’s not a lot technically new in the article, but we do get some information about popularity and jargon.

According to other secret documents, Quantum is an extremely sophisticated exploitation tool developed by the NSA and comes in various versions. The Quantum Insert method used with Belgacom is especially popular among British and US spies. It was also used by GCHQ to infiltrate the computer network of OPEC’s Vienna headquarters.

The injection attempts are known internally as “shots,” and they have apparently been relatively successful, especially the LinkedIn version. “For LinkedIn the success rate per shot is looking to be greater than 50 percent,” states a 2012 document.

Slashdot has reacted to the story.

I wrote about QUANTUMINSERT, and the whole infection process, here. We have a list of “implants” that the NSA uses to “exfiltrate” information here.

Posted on November 13, 2013 at 6:46 AM

Bizarre Online Gambling Movie-Plot Threat

This article argues that online gambling is a strategic national threat because terrorists could use it to launder money.

The Harper demonstration showed the technology and techniques that terror and crime organizations could use to operate untraceable money laundering built on a highly liquid legalized online poker industry—just the environment that will result from the spread of poker online.

[…]

A single poker game takes just a few hours to transfer $5 million as was recently demonstrated—legally—by American player Brian Hastings with his Swedish competitor half a world away. An established al-Qaida poker network could extract from the United States enough untraceable money in six days to fund an operation like the 9/11 attack on the World Trade Center.

I’m impressed with the massive fear resonating in this essay.

Posted on November 12, 2013 at 6:35 AM

Dan Geer Explains the Government Surveillance Mentality

This talk by Dan Geer explains the NSA mindset of “collect everything”:

I previously worked for a data protection company. Our product was, and I believe still is, the most thorough on the market. By “thorough” I mean the dictionary definition, “careful about doing something in an accurate and exact way.” To this end, installing our product instrumented every system call on the target machine. Data did not and could not move in any sense of the word “move” without detection. Every data operation was caught and monitored. It was total surveillance data protection. Its customers were companies that don’t accept half-measures. What made this product stick out was that very thoroughness, but here is the point: Unless you fully instrument your data handling, it is not possible for you to say what did not happen. With total surveillance, and total surveillance alone, it is possible to treat the absence of evidence as the evidence of absence. Only when you know everything that *did* happen with your data can you say what did *not* happen with your data.

The alternative to total surveillance of data handling is to answer more narrow questions, questions like “Can the user steal data with a USB stick?” or “Does this outbound e-mail have a Social Security Number in it?” Answering direct questions is exactly what a defensive mindset says you must do, and that is “never make the same mistake twice.” In other words, if someone has lost data because of misuse of some facility on the computer, then you either disable that facility or you wrap it in some kind of perimeter. Lather, rinse, and repeat. This extends all the way to such trivial matters as timer-based screen locking.

The difficulty with the defensive mindset is that it leaves in place the fundamental strategic asymmetry of cybersecurity, namely that while the workfactor for the offender is the price of finding a new method of attack, the workfactor for the defender is the cumulative cost of forever defending against all attack methods yet discovered. Over time, the curve for the cost of finding a new attack and the curve for the cost of defending against all attacks to date cross. Once those curves cross, the offender never has to worry about being out of the money. I believe that that crossing occurred some time ago.

The total surveillance strategy is, to my mind, an offensive strategy used for defensive purposes. It says “I don’t know what the opposition is going to try, so everything is forbidden unless we know it is good.” In that sense, it is like whitelisting applications. Taking either the application whitelisting or the total data surveillance approach is saying “That which is not permitted is forbidden.”

[…]

We all know the truism, that knowledge is power. We all know that there is a subtle yet important distinction between information and knowledge. We all know that a negative declaration like “X did not happen” can only be proven true if you have the enumeration of *everything* that did happen and can show that X is not in it. We all know that when a President says “Never again” he is asking for the kind of outcome for which proving a negative, lots of negatives, is categorically essential. Proving a negative requires omniscience. Omniscience requires god-like powers.

The whole essay is well worth reading.
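
Geer’s cost-curve argument is easy to see in a toy model. The numbers below are invented purely for illustration: the offender pays a roughly constant price per new attack method, while the defender pays the running total of every countermeasure deployed so far.

```python
# Toy model of the workfactor asymmetry. All costs are invented.
COST_PER_NEW_ATTACK = 10   # offender: price of finding one new method
COST_PER_DEFENSE = 3       # defender: upkeep for one countermeasure

for n_methods in (1, 3, 5, 10, 20):
    offender_cost = COST_PER_NEW_ATTACK           # flat, round after round
    defender_cost = COST_PER_DEFENSE * n_methods  # cumulative, always growing
    print(f"methods={n_methods:2d}  "
          f"offender={offender_cost:3d}  defender={defender_cost:3d}")

# The curves cross once n_methods exceeds COST_PER_NEW_ATTACK / COST_PER_DEFENSE
# (here, at the fourth method), and they never un-cross: from then on, defense
# is permanently the more expensive side of the game.
```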

Posted on November 11, 2013 at 6:21 AM

Why the Government Should Help Leakers

In the Information Age, it’s easier than ever to steal and publish data. Corporations and governments have to adjust to their secrets being exposed, regularly.

When massive amounts of government documents are leaked, journalists sift through them to determine which pieces of information are newsworthy, and confer with government agencies over what needs to be redacted.

Managing this reality is going to require that governments actively engage with members of the press who receive leaked secrets, helping them secure those secrets—even while being unable to prevent them from publishing. It might seem abhorrent to help those who are seeking to bring your secrets to light, but it’s the best way to ensure that the things that truly need to be secret remain secret, even as everything else becomes public.

The WikiLeaks cables serve as an excellent example of how a government should not deal with massive leaks of classified information.

WikiLeaks has said it asked US authorities for help in determining what should be redacted before publication of documents, although some government officials have challenged that statement. WikiLeaks’ media partners did redact many documents, but eventually all 250,000 unredacted cables were released to the world as a result of a mistake.

The damage was nowhere near as serious as government officials initially claimed, but it was avoidable.

Fast-forward to today, and we have an even bigger trove of classified documents. What Edward Snowden took—”exfiltrated” is the National Security Agency term—dwarfs the State Department cables, and contains considerably more important secrets. But again, the US government is doing nothing to prevent a massive data dump.

The government engages with the press on individual stories. The Guardian, the Washington Post, and the New York Times are all redacting the original Snowden documents based on discussions with the government. This isn’t new. The US press regularly consults with the government before publishing something that might be damaging. In 2006, the New York Times consulted with both the NSA and the Bush administration before publishing Mark Klein’s whistle-blowing about the NSA’s eavesdropping on AT&T trunk circuits. In all these cases, the goal is to minimize actual harm to US security while ensuring the press can still report stories in the public interest, even if the government doesn’t want it to.

In today’s world of reduced secrecy, whistleblowing as civil disobedience, and massive document exfiltrations, negotiations over individual stories aren’t enough. The government needs to develop a protocol to actively help news organizations expose their secrets safely and responsibly.

Here’s what should have happened as soon as Snowden’s whistle-blowing became public. The government should have told the reporters and publications with the classified documents something like this: “OK, you have them. We know that we can’t undo the leak. But please let us help. Let us help you secure the documents as you write your stories, and securely dispose of the documents when you’re done.”

The people who have access to the Snowden documents say they don’t want them to be made public in their raw form or to get in the hands of rival governments. But accidents happen, and reporters are not trained in military secrecy practices.

Copies of some of the Snowden documents are being circulated to journalists and others. With each copy, each person, each day, there’s a greater chance that, once again, someone will make a mistake and some—or all—of the raw documents will appear on the Internet. A formal system of working with whistle-blowers could prevent that.

I’m sure the suggestion sounds odious to a government that is actively engaging in a war on whistle-blowers, and that views Snowden as a criminal and the reporters writing these stories as “helping the terrorists.” But it makes sense. Harvard law professor Jonathan Zittrain compares this to plea bargaining.

The police regularly negotiate lenient sentences or probation for confessed criminals in order to convict more important criminals. They make deals with all sorts of unsavory people, giving them benefits they don’t deserve, because the result is a greater good.

In the Snowden case, an agreement would safeguard the most important of NSA’s secrets from other nations’ intelligence agencies. It would help ensure that the truly secret information not be exposed. It would protect US interests.

Why would reporters agree to this? Two reasons. One, they actually do want these documents secured while they look for stories to publish. And two, it would be a public demonstration of that desire.

Why wouldn’t the government just collect all the documents under the pretense of securing them and then delete them? For the same reason they don’t renege on plea bargains: No one would trust them next time. And, of course, because smart reporters will probably keep encrypted backups under their own control.

We’re nowhere near the point where this system could be put into practice, but it’s worth thinking about how it could work. The government would need to establish a semi-independent group, called, say, a Leak Management unit, which could act as an intermediary. Since it would be isolated from the agencies that were the source of the leak, its officials would be less vested and—this is important—less angry over the leak. Over time, it would build a reputation and develop protocols that reporters could rely on. Leaks will be more common in the future, but they’ll still be rare. Expecting each agency to develop expertise in this process is unrealistic.

If there were sufficient trust between the press and the government, this could work. And everyone would benefit.

This essay previously appeared on CNN.com.

Posted on November 8, 2013 at 6:58 AM

Risk-Based Authentication

I like this idea of giving each individual login attempt a risk score, based on the characteristics of the attempt:

The risk score estimates the risk associated with a log-in attempt based on a user’s typical log-in and usage profile, taking into account their device and geographic location, the system they’re trying to access, the time of day they typically log in, their device’s IP address, and even their typing speed. An employee logging into a CRM system using the same laptop, at roughly the same time of day, from the same location and IP address will have a low risk score. By contrast, an attempt to access a finance system from a tablet at night in Bali could potentially yield an elevated risk score.

Risk thresholds for individual systems are established based on the sensitivity of the information they store and the impact if the system were breached. Systems housing confidential financial data, for example, will have a low risk threshold.

If the risk score for a user’s access attempt exceeds the system’s risk threshold, authentication controls are automatically elevated, and the user may be required to provide a higher level of authentication, such as a PIN or token. If the risk score is too high, it may be rejected outright.
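
As a concrete illustration of the scheme described above, here is a minimal sketch. The factors, weights, and thresholds are all invented, and a real product would learn the user’s profile statistically rather than hard-code it:

```python
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    device_id: str
    country: str
    ip_prefix: str   # e.g., the first three octets of the source address
    hour: int        # local hour of the attempt, 0-23

# The user's typical login profile (hypothetical learned values).
PROFILE = LoginAttempt(device_id="laptop-42", country="US",
                       ip_prefix="198.51.100", hour=9)

def risk_score(attempt: LoginAttempt, profile: LoginAttempt) -> int:
    """Add penalty points for each way the attempt deviates from the profile."""
    score = 0
    if attempt.device_id != profile.device_id:
        score += 30   # unfamiliar device
    if attempt.country != profile.country:
        score += 40   # unusual geography
    if attempt.ip_prefix != profile.ip_prefix:
        score += 20   # new network
    if abs(attempt.hour - profile.hour) > 4:
        score += 10   # odd time of day
    return score

# Per-system risk thresholds: the more sensitive the system, the lower the
# threshold (invented values).
THRESHOLDS = {"crm": 60, "finance": 25}

def decide(system: str, attempt: LoginAttempt) -> str:
    score = risk_score(attempt, PROFILE)
    if score <= THRESHOLDS[system]:
        return "allow"
    if score <= THRESHOLDS[system] + 40:
        return "step up: require a PIN or token"  # elevated authentication
    return "reject"

# The tablet-at-night-in-Bali example from the quote:
bali = LoginAttempt(device_id="tablet-7", country="ID",
                    ip_prefix="203.0.113", hour=2)
print(decide("finance", bali))  # rejected outright with these invented weights
```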

Posted on November 7, 2013 at 7:06 AM

The Story of the Bomb Squad at the Boston Marathon

This is interesting reading, but I’m left wanting more. What are the lessons here? How can we do this better next time? Clearly we won’t be able to anticipate bombings; even Israel can’t do that. We have to get better at responding.

Several years after 9/11, I conducted training with a military bomb unit charged with guarding Washington, DC. Our final exam was a nightmare scenario—a homemade nuke at the Super Bowl. Our job was to defuse it while the fans were still in the stands, there being no way to quickly and safely clear out 80,000 people. That scenario made two fundamental assumptions that are no longer valid: that there would be one large device and that we would find it before it detonated.

Boston showed that there’s another threat, one that looks a lot different. “We used to train for one box in a doorway. We went into a slower and less aggressive mode, meticulous, surgical. Now we’re transitioning to a high-speed attack, more maneuverable gear, no bomb suit until the situation has stabilized,” Gutzmer says. “We’re not looking for one bomber who places a device and leaves. We’re looking for an active bomber with multiple bombs, and we need to attack fast.”

A post-Boston final exam will soon look a lot different. Instead of a nuke at the Super Bowl, how about this: Six small bombs have already detonated, and now your job is to find seven more—among thousands of bags—while the bomber hides among a crowd of the fleeing, responding, wounded, and dead. Meanwhile the entire city overwhelms your backup with false alarms. Welcome to the new era of bomb work.

Posted on November 5, 2013 at 6:53 AM

More NSA Revelations

This New York Times story on the NSA is very good, and contains lots of little tidbits of new information gleaned from the Snowden documents.

The agency’s Dishfire database—nothing happens without a code word at the N.S.A.—stores years of text messages from around the world, just in case. Its Tracfin collection accumulates gigabytes of credit card purchases. The fellow pretending to send a text message at an Internet cafe in Jordan may be using an N.S.A. technique code-named Polarbreeze to tap into nearby computers. The Russian businessman who is socially active on the web might just become food for Snacks, the acronym-mad agency’s Social Network Analysis Collaboration Knowledge Services, which figures out the personnel hierarchies of organizations from texts.

EDITED TO ADD (11/5): This Guardian story is related. It looks like both the New York Times and the Guardian wrote separate stories about the same source material.

EDITED TO ADD (11/5): New York Times reporter Scott Shane gave a 20-minute interview on Democracy Now on the NSA and his reporting.

Posted on November 4, 2013 at 1:39 PM

badBIOS

Good story of badBIOS, a really nasty piece of malware. The weirdest part is how it uses ultrasonic sound to jump air gaps.

Ruiu said he arrived at the theory about badBIOS’s high-frequency networking capability after observing encrypted data packets being sent to and from an infected machine that had no obvious network connection with—but was in close proximity to—another badBIOS-infected computer. The packets were transmitted even when one of the machines had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine’s power cord to rule out the possibility it was receiving signals over the electrical connection. Even then, forensic tools showed the packets continued to flow over the airgapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the airgapped machine, the packets suddenly stopped.

With the speakers and mic intact, Ruiu said, the isolated computer seemed to be using the high-frequency connection to maintain the integrity of the badBIOS infection as he worked to dismantle software components the malware relied on.

“The airgapped machine is acting like it’s connected to the Internet,” he said. “Most of the problems we were having is we were slightly disabling bits of the components of the system. It would not let us disable some things. Things kept getting fixed automatically as soon as we tried to break them. It was weird.”

I’m not sure what to make of this. When I first read it, I thought it was a hoax. But enough others are taking it seriously that I think it’s a real story. I don’t know whether the facts are real, and I haven’t seen anything about what this malware actually does.
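
Whatever the truth of badBIOS, the audio channel itself is physically plausible: ordinary sound hardware can emit and record tones around 18-20 kHz that most adults cannot hear. A proof-of-concept sketch of such a channel using simple two-tone FSK (this has nothing to do with badBIOS’s actual code, which has never been published; it only shows the channel can exist):

```python
import numpy as np

SAMPLE_RATE = 48000            # common sound-card rate, enough for ~20 kHz tones
FREQ_0, FREQ_1 = 18000, 19000  # near-ultrasonic: inaudible to most adults
BIT_DURATION = 0.05            # 50 ms per bit: a slow but real channel

def modulate(bits: str) -> np.ndarray:
    """Turn a bit string into a waveform, one tone per bit."""
    t = np.arange(int(SAMPLE_RATE * BIT_DURATION)) / SAMPLE_RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (FREQ_1 if b == "1" else FREQ_0) * t) for b in bits])

def demodulate(signal: np.ndarray) -> str:
    """Recover bits by checking which tone dominates each 50 ms window."""
    n = int(SAMPLE_RATE * BIT_DURATION)
    freqs = np.fft.rfftfreq(n, 1 / SAMPLE_RATE)
    bits = []
    for i in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[i:i + n]))
        e0 = spectrum[np.argmin(np.abs(freqs - FREQ_0))]
        e1 = spectrum[np.argmin(np.abs(freqs - FREQ_1))]
        bits.append("1" if e1 > e0 else "0")
    return "".join(bits)

# Loopback test; over a real air gap you would play this through a speaker,
# record it with a microphone, and add error correction.
print(demodulate(modulate("10110001")))  # -> 10110001
```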

Other discussions.

EDITED TO ADD: More discussions.

EDITED TO ADD (11/14): A claimed debunking.

Posted on November 4, 2013 at 6:15 AM

A Template for Reporting Government Surveillance News Stories

This is from 2006—I blogged it here—but it’s even more true today.

Under a top secret program initiated by the Bush Administration after the Sept. 11 attacks, the [name of agency (FBI, CIA, NSA, etc.)] have been gathering a vast database of [type of records] involving United States citizens.

“This program is a vital tool in the fight against terrorism,” [Bush Administration official] said. “Without it, we would be dangerously unsafe, and the terrorists would have probably killed you and every other American citizen.” The Bush Administration stated that the revelation of this program has severely compromised national security.

We’ve changed administrations—we’ve changed political parties—but nothing has changed.

Posted on November 1, 2013 at 2:26 PM

Close-In Surveillance Using Your Phone's Wi-Fi

This article talks about applications in retail, but the possibilities are endless.

Every smartphone these days comes equipped with a WiFi card. When the card is on and looking for networks to join, it’s detectable by local routers. In your home, the router connects to your device, and then, voilà, you have the Internet on your phone. But in a retail environment, other in-store equipment can pick up your WiFi card, learn your device’s unique ID number and use it to keep tabs on that device over time as you move through the store.

This gives offline companies the power to get incredibly specific data about how their customers behave. You could say it’s the physical version of what Web-based vendors have spent millions of dollars trying to perfect: the science of behavioral tracking.

Basically, the system is using the MAC address to identify individual devices. Another article on the system is here.
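
The collection side is easy to prototype. A phone looking for networks broadcasts probe request frames with its MAC address in the clear, and any nearby radio in monitor mode can log them. A minimal scapy sketch (it needs root privileges and a card in monitor mode; the interface name is an assumption):

```python
from datetime import datetime
from scapy.all import sniff, Dot11ProbeReq

def log_probe(pkt):
    # Probe requests are sent whenever a phone scans for networks to join.
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2  # the transmitting device's MAC address
        print(f"{datetime.now().isoformat()}  seen: {mac}")

# Each capture is one (time, place, device-ID) record; a store with several
# sensors can join these records into a movement track per MAC address.
sniff(iface="wlan0mon", prn=log_probe, store=False)
```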

Posted on November 1, 2013 at 6:32 AM
