Blog: August 2010 Archives

Eavesdropping on Smart Homes with Distributed Wireless Sensors

“Protecting your daily in-home activity information from a wireless snooping attack,” by Vijay Srinivasan, John Stankovic, and Kamin Whitehouse:

Abstract: In this paper, we first present a new privacy leak in residential wireless ubiquitous computing systems, and then we propose guidelines for designing future systems to prevent this problem. We show that we can observe private activities in the home such as cooking, showering, toileting, and sleeping by eavesdropping on the wireless transmissions of sensors in a home, even when all of the transmissions are encrypted. We call this the Fingerprint and Timing-based Snooping (FATS) attack. This attack can already be carried out on millions of homes today, and may become more important as ubiquitous computing environments such as smart homes and assisted living facilities become more prevalent. In this paper, we demonstrate and evaluate the FATS attack on eight different homes containing wireless sensors. We also propose and evaluate a set of privacy preserving design guidelines for future wireless ubiquitous systems and show how these guidelines can be used in a hybrid fashion to prevent against the FATS attack with low implementation costs.

The group was able to infer surprisingly detailed activity information about the residents, including when they were home or away, when they were awake or sleeping, and when they were performing activities such as showering or cooking. They were able to infer all this without any knowledge of the location, semantics, or source identifier of the wireless sensors, while assuming perfect encryption of the data and source identifiers.
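The attack needs only traffic metadata: each sensor radio is distinguishable by its transmission fingerprint, and the timing of its encrypted packets tracks physical activity. Below is a minimal sketch of the inference step in Python; the captured events and fingerprints are hypothetical stand-ins for a real RF front end.

    from collections import defaultdict

    # Hypothetical passive capture: the eavesdropper never sees payloads,
    # only that some radio transmitted at some time. The fingerprint stands
    # in for the RF characteristics that distinguish transmitters.
    capture = [
        (7 * 3600 + 12, "radio-A"),   # 07:00:12
        (7 * 3600 + 15, "radio-B"),   # fires right after radio-A
        (7 * 3600 + 40, "radio-A"),
        (2 * 3600 + 5,  "radio-C"),   # a lone transmission at 02:00
    ]

    # Step 1: cluster transmissions by device fingerprint.
    by_device = defaultdict(list)
    for ts, fp in capture:
        by_device[fp].append(ts)

    # Step 2: devices that repeatedly fire within a short window are likely
    # co-located (say, a motion sensor and a light switch in the kitchen).
    def co_occur(a, b, window=60):
        return any(abs(t1 - t2) <= window
                   for t1 in by_device[a] for t2 in by_device[b])

    # Step 3: time-of-day patterns label the clusters: activity at 07:00
    # suggests the kitchen at breakfast; a brief burst at 02:00 suggests a
    # nighttime bathroom visit.
    for fp, times in sorted(by_device.items()):
        hours = sorted({(t // 3600) % 24 for t in times})
        print(fp, "active during hours", hours)
    print("A and B co-located?", co_occur("radio-A", "radio-B"))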

Posted on August 31, 2010 at 12:39 PM

High School Teacher Assigns Movie-Plot Threat Contest Problem

In Australia:

A high school teacher who assigned her class to plan a terrorist attack that would kill as many innocent people as possible had no intent to promote terrorism, the school principal said yesterday.

The Year-10 students at Kalgoorlie-Boulder Community High School were asked to pretend they were terrorists making a political statement by releasing a chemical or biological agent on “an unsuspecting Australian community”.

The task included choosing the best time to attack and explaining their choice of victims and what effects the attack would have on a human body.

“Your goal is to kill the MOST innocent civilians,” the assignment read.

Principal Terry Martino said he withdrew the assignment for the class on contemporary conflict and terrorism as soon as he heard of it. He said the teacher was “relatively inexperienced” and it was a “well-intentioned but misguided attempt to engage the students”.

Sounds like me:

It is in this spirit I announce the (possibly First) Movie-Plot Threat Contest. Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.

Your goal: cause terror. Make the American people notice. Inflict lasting damage on the U.S. economy. Change the political landscape, or the culture. The more grandiose the goal, the better.

Assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.

For the record, 1) I have no interest in promoting terrorism—I’m not even sure how I could promote terrorism without actually engaging in terrorism, 2) I’m pretty experienced, and 3) my movie-plot threat contests are not misguided. You can’t understand security defense without also understanding attack.

Australian police are claiming the assignment was illegal, so Australians who enter my movie-plot threat contests should think twice. Also anyone writing a thriller novel about terrorism, perhaps.

An AFP spokeswoman said it was an offence to collect or make documents preparing for or assisting a terrorist attack.

It was also illegal to be “reckless as to whether these documents may assist or prepare for a terrorist attack”.

Posted on August 31, 2010 at 6:42 AM

Misidentification and the Court System

Chilling:

How do most wrongful convictions come about?

The primary cause is mistaken identification. Actually, I wouldn’t call it mistaken identification; I’d call it misidentification, because you often find that there was some sort of misconduct by the police. In a lot of cases, the victim initially wasn’t so sure. And then the police say, “Oh, no, you got the right guy. In fact, we think he’s done two others that we just couldn’t get him for.” Or: “Yup, that’s who we thought it was all along, great call.”

It’s disturbing that misidentifications still play such a large role in wrongful convictions, given that we’ve known about the fallibility of eyewitness testimony for over a century.

In terms of empirical studies, that’s right. And 30 or 40 years ago, the Supreme Court acknowledged that eyewitness identification is problematic and can lead to wrongful convictions. The trouble is, it instructed lower courts to determine the validity of eyewitness testimony based on a lot of factors that are irrelevant, like the certainty of the witness. But the certainty you express [in court] a year and a half later has nothing to do with how certain you felt two days after the event when you picked the photograph out of the array or picked the guy out of the lineup. You become more certain over time; that’s just the way the mind works. With the passage of time, your story becomes your reality. You get wedded to your own version.

And the police participate in this. They show the victim the same picture again and again to prepare her for the trial. So at a certain point you’re no longer remembering the event; you’re just remembering this picture that you keep seeing.

Posted on August 30, 2010 at 12:05 PM

Security Theater on the Boston T

Since a fatal crash a few years ago, Boston T (the city’s subway) operators have been forbidden from using—or even having—cell phones while on the job. Passengers are encouraged to report violators. But sometimes T operators need to use their official radios on the job, and passengers can’t tell the difference. The solution: orange tape:

The solution? Goodbye, sober black; hello, bright orange, a hue so vivid that, MBTA officials hope, no one will mistake the radios for phones anymore. Workers at the agency’s car barns and garages are in the process of outfitting every handset in the fleet with strips of reflective tape emblazoned with T logos.

[…]

… a small but steady number of hot line tips have been found to be cases of drivers or operators communicating with dispatch by radio, according to video and operations-center call logs.

That is where the electric-orange tape should help, Davey said. Over the past two months, the tape has been applied to handheld radios on about 95 percent of the T’s 1,050 buses (each of which has one handset) and one-fourth of its nearly 210 double-ended Green Line trolleys, which have handsets at each end. The rest of the Green Line and the Orange, Blue, and Red line radios will follow.

Taisha O’Bryant, a Roxbury resident who serves as chairwoman of the T Riders Union, said she is more concerned with the frequency and reliability of bus service than the appearance of bus radios. But she said it is a good thing if a driver or operator can call dispatch in the event of a breakdown or service problem without worrying about appearing to talk on a cellphone, and she hailed the cellphone ban.

Of course, no T operator would ever think of putting bright orange tape on his cell phone. Because if he did that, the passengers would immediately know not to report him.

Posted on August 30, 2010 at 5:31 AM

Is the Whole Country an Airport Security Zone?

Full-body scanners in roving vans:

American Science & Engineering, a company based in Billerica, Massachusetts, has sold U.S. and foreign government agencies more than 500 backscatter x-ray scanners mounted in vans that can be driven past neighboring vehicles to see their contents, Joe Reiss, a vice president of marketing at the company, told me in an interview.

This should be no different than the Kyllo case, where the Supreme Court ruled that the police needed a warrant before they could use a thermal sensor on a building to search for marijuana growers.

Held: Where, as here, the Government uses a device that is not in general public use, to explore details of a private home that would previously have been unknowable without physical intrusion, the surveillance is a Fourth Amendment “search,” and is presumptively unreasonable without a warrant.

Posted on August 27, 2010 at 7:58 AM

Detecting Deception in Conference Calls

Research paper: Detecting Deceptive Discussions in Conference Calls, by David F. Larcker and Anastasia A. Zakolyukina.

Abstract: We estimate classification models of deceptive discussions during quarterly earnings conference calls. Using data on subsequent financial restatements (and a set of criteria to identify especially serious accounting problems), we label the Question and Answer section of each call as “truthful” or “deceptive”. Our models are developed with the word categories that have been shown by previous psychological and linguistic research to be related to deception. Using conservative statistical tests, we find that the out-of-sample performance of the models that are based on CEO or CFO narratives is significantly better than random by 4% – 6% (with 50% – 65% accuracy) and provides a significant improvement to a model based on discretionary accruals and traditional controls. We find that answers of deceptive executives have more references to general knowledge, fewer non-extreme positive emotions, and fewer references to shareholders value and value creation. In addition, deceptive CEOs use significantly fewer self-references, more third person plural and impersonal pronouns, more extreme positive emotions, fewer extreme negative emotions, and fewer certainty and hesitation words.
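Methodologically, this is ordinary supervised text classification, except the features are psycholinguistic word categories rather than raw words. Below is a toy sketch of that pipeline; it is not the authors’ model, and the category lists, labels, and transcripts are invented stand-ins (the real study uses LIWC-style category dictionaries and restatement data).

    from sklearn.linear_model import LogisticRegression

    # Stand-in word categories, loosely inspired by the feature families
    # the paper describes.
    CATEGORIES = {
        "self_reference": {"i", "me", "my", "mine"},
        "third_person_plural": {"they", "them", "their"},
        "extreme_positive": {"fantastic", "superb", "tremendous"},
        "hesitation": {"um", "well", "maybe", "perhaps"},
    }

    def featurize(answer):
        words = answer.lower().split()
        n = max(len(words), 1)
        # One feature per category: the relative frequency of its words.
        return [sum(w in cat for w in words) / n for cat in CATEGORIES.values()]

    # Toy labels: 1 = the call was later implicated in a serious restatement.
    texts = ["we delivered tremendous fantastic superb results",
             "i missed my forecast and i take responsibility for that"]
    X = [featurize(t) for t in texts]
    y = [1, 0]

    model = LogisticRegression().fit(X, y)
    print(model.predict([featurize("they achieved fantastic growth")]))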

Posted on August 26, 2010 at 6:15 AM

Social Steganography

From danah boyd:

Carmen is engaging in social steganography. She’s hiding information in plain sight, creating a message that can be read in one way by those who aren’t in the know and read differently by those who are. She’s communicating to different audiences simultaneously, relying on specific cultural awareness to provide the right interpretive lens. While she’s focused primarily on separating her mother from her friends, her message is also meaningless to broader audiences who have no idea that she had just broken up with her boyfriend.

Posted on August 25, 2010 at 6:20 AM

Skeletal Identification

And you thought fingerprints were intrusive.

The Wright State Research Institute is developing a ground-breaking system that would scan the skeletal structures of people at airports, sports stadiums, theme parks and other public places that could be vulnerable to terrorist attacks, child abductions or other crimes. The images would then quickly be matched with potential suspects using a database of previously scanned skeletons.

Because every country has a database of terrorist skeletons just waiting to be used.

Posted on August 24, 2010 at 6:56 AM

Malware Contributory Cause of Air Crash

This is a first, I think:

The airline’s central computer which registered technical problems on planes was infected by Trojans at the time of the fatal crash and this resulted in a failure to raise an alarm over multiple problems with the plane, according to Spanish daily El Pais (report here). The plane took off with flaps and slats retracted, something that should in any case have been picked up by the pilots during pre-flight checks or triggered an internal warning on the plane. Neither happened, with tragic consequences, according to a report by independent crash investigators.

More here.

I have long thought that the Blaster worm was a contributing cause of the 2003 blackout in the U.S. and Canada.

EDITED TO ADD (8/23): In the comments, many readers point out that there are a bunch of problems with the El Pais article this is all based on, and that we should wait for more information before drawing any conclusions.

EDITED TO ADD (8/25): Two rebuttals, both worth reading.

Posted on August 23, 2010 at 6:03 AM

Friday Squid Blogging: Flying Squid

Who knew?

“Hulse was shooting with burst mode on his camera, so I know exactly what the interval is between the frames and I can calculate velocity of squid flying through the air,” O’Dor says. “We now think there are dozens of species that do it. Squid are used to gliding in the water, so the same physiology probably allows them to maneuver and glide in the air. When you look at some of the pictures, it seems they are more or less using their fins as wings, and they are curling their arms in [a] shape that could easily be some kind of lifting surface.”

Posted on August 20, 2010 at 4:02 PM

Intel Buys McAfee

Intel buys McAfee.

It’s another example of a large non-security company buying a security company. I’ve been talking about this sort of thing for two and a half years:

It’s not consolidation as we’re used to. In the security industry, there are waves of consolidation, you know, big companies scoop up little companies and then there’s lots of consolidation. You’ve got Symantec and Network Associates that way. And then you have “best of breed” where a lot of little companies spring up doing one thing well and then you cobble together a suite yourself. What we’re going to see is consolidation of non-security companies buying security companies. So, remember, if security is going to no longer be an end-user component, companies that do things that are actually useful are going to need to provide security. So, we’re seeing Microsoft buying security companies, we’re seeing IBM Global Services buy security companies, my company was purchased by BT, another massive global outsourcer. So, that sort of consolidation we are seeing, it’s not consolidation of security; it’s really the absorption of security into more general IT products and services.

EDITED TO ADD (8/19): Here’s something else I wrote about the general trend, from 2007.

Posted on August 19, 2010 at 10:44 AM

"The Fear Tax"

Good essay by Seth Godin:

We pay the fear tax every time we spend time or money seeking reassurance. We pay it twice when the act of seeking that reassurance actually makes us more anxious, not less.

We pay the tax when we cover our butt instead of doing the right thing, and we pay the tax when we take away someone’s dignity because we’re afraid.

We should quantify the tax. The government should publish how much of our money they’re spending to create fear and then spending to (apparently) address fear. Corporations should add to their annual reports how much they spent just-in-case. Once we know how much it costs, we can figure out if it’s worth it.

Posted on August 18, 2010 at 3:48 PM

Hacking Cars Through Wireless Tire-Pressure Sensors

Still minor, but this kind of thing is only going to get worse:

The new research shows that other systems in the vehicle are similarly insecure. The tire pressure monitors are notable because they’re wireless, allowing attacks to be made from adjacent vehicles. The researchers used equipment costing $1,500, including radio sensors and special software, to eavesdrop on, and interfere with, two different tire pressure monitoring systems.

The pressure sensors contain unique IDs, so merely eavesdropping enabled the researchers to identify and track vehicles remotely. Beyond this, they could alter and forge the readings to cause warning lights on the dashboard to turn on, or even crash the ECU completely.

More:

Now, Ishtiaq Rouf at the USC and other researchers have found a vulnerability in the data transfer mechanisms between CANbus controllers and wireless tyre pressure monitoring sensors which allows misleading data to be injected into a vehicle’s system and allows remote recording of the movement profiles of a specific vehicle. The sensors, which are compulsory for new cars in the US (and probably soon in the EU), each communicate individually with the vehicle’s on-board electronics. Although a loss of pressure can also be detected via differences in the rotational speed of fully inflated and partially inflated tyres on the same axle, such indirect methods are now prohibited in the US.
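The tracking half of the attack needs nothing more than those static IDs: log every ID you hear along with where and when you heard it, and a repeated ID is the same car. A toy sketch follows, with a hypothetical packet feed standing in for the radio hardware (real receivers would demodulate the sensors’ broadcasts, typically at 315 or 433 MHz).

    import time
    from collections import defaultdict

    # sensor_id -> [(timestamp, observation_point), ...]
    sightings = defaultdict(list)

    def log_packet(sensor_id, location):
        """Record a passively sniffed TPMS broadcast. No key is needed:
        the sensor ID is transmitted in the clear."""
        sightings[sensor_id].append((time.time(), location))
        if len(sightings[sensor_id]) > 1:
            # The same IDs reappearing at different observation points
            # re-identify one specific vehicle.
            print(f"sensor {sensor_id} seen again at {location}")

    # Hypothetical feed for illustration.
    log_packet("0x1A2B3C4D", "garage entrance")
    log_packet("0x1A2B3C4D", "highway overpass")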

Paper here. This is a previous paper on automobile computer security.

EDITED TO ADD (8/25): This is a better article.

Posted on August 17, 2010 at 6:42 AM

Cloning Retail Gift Cards

Clever attack.

After researching how gift cards work, Zepeda purchased a magnetic card reader online, began stealing blank gift cards, on display for purchase, from Fred Meyer and scanning them with his reader. He would then return some of the scanned cards to the store and wait for a computer program to alert him when the cards were activated and loaded with money.

Using a magnetic card writer, Zepeda then rewrote the magnetic strip of one of the leftover stolen gift cards with the activated card’s information, thus creating a cloned card.

Posted on August 13, 2010 at 7:36 AM

Security Analysis of Smudges on Smart Phone Touch Screens

“Smudge Attacks on Smartphone Touch Screens”:

Abstract: Touch screens are an increasingly common feature on personal computing devices, especially smartphones, where size and user interface advantages accrue from consolidating multiple hardware components (keyboard, number pad, etc.) into a single software definable user interface. Oily residues, or smudges, on the touch screen surface, are one side effect of touches from which frequently used patterns such as a graphical password might be inferred.

In this paper we examine the feasibility of such smudge attacks on touch screens for smartphones, and focus our analysis on the Android password pattern. We first investigate the conditions (e.g., lighting and camera orientation) under which smudges are easily extracted. In the vast majority of settings, partial or complete patterns are easily retrieved. We also emulate usage situations that interfere with pattern identification, and show that pattern smudges continue to be recognizable. Finally, we provide a preliminary analysis of applying the information learned in a smudge attack to guessing an Android password pattern.

Reminds me of similar attacks on alarm and lock keypads.
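The reason a smudge is so damaging is the collapse of the search space. Android’s full pattern space is roughly 389,000 valid patterns; if the smudge reveals which nodes were touched but not their order, the attacker is left with just the orderings of those nodes. A back-of-the-envelope sketch (this ignores Android’s rule that a stroke crossing an unvisited node must include it, so it mildly overcounts):

    from itertools import permutations

    # Grid positions numbered 0-8. Suppose the smudge shows these five
    # nodes were touched, in an unknown order.
    smudged_nodes = [0, 1, 2, 4, 6]

    candidates = list(permutations(smudged_nodes))
    print(len(candidates))  # 120 orderings, versus ~389,000 total patterns

    # Stroke directionality (start and end points often leave visibly
    # heavier or lighter smudges) prunes this further in practice.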

Posted on August 12, 2010 at 6:48 AM

Late Teens and Facebook Privacy

“Facebook Privacy Settings: Who Cares?” by danah boyd and Eszter Hargittai.

Abstract: With over 500 million users, the decisions that Facebook makes about its privacy settings have the potential to influence many people. While its changes in this domain have often prompted privacy advocates and news media to critique the company, Facebook has continued to attract more users to its service. This raises a question about whether or not Facebook’s changes in privacy approaches matter and, if so, to whom. This paper examines the attitudes and practices of a cohort of 18- and 19-year-olds surveyed in 2009 and again in 2010 about Facebook’s privacy settings. Our results challenge widespread assumptions that youth do not care about and are not engaged with navigating privacy. We find that, while not universal, modifications to privacy settings have increased during a year in which Facebook’s approach to privacy was hotly contested. We also find that both frequency and type of Facebook use as well as Internet skill are correlated with making modifications to privacy settings. In contrast, we observe few gender differences in how young adults approach their Facebook privacy settings, which is notable given that gender differences exist in so many other domains online. We discuss the possible reasons for our findings and their implications.

Posted on August 11, 2010 at 6:00 AM

Apple JailBreakMe Vulnerability

Good information from Mikko Hyppönen.

Q: What is this all about?
A: It’s about a site called jailbreakme.com that enables you to Jailbreak your iPhones and iPads just by visiting the site.

Q: So what’s the problem?
A: The problem is that the site uses a zero-day vulnerability to execute code on the device.

Q: How does the vulnerability work?
A: Actually, it’s two vulnerabilities. First one uses a corrupted font embedded in a PDF file to execute code and the second one uses a vulnerability in the kernel to escalate the code execution to unsandboxed root.

Q: How difficult was it to create this exploit?
A: Very difficult.

Q: How difficult would it be for someone else to modify the exploit now that it’s out?
A: Quite easy.

Here’s the JailBreakMe blog.

EDITED TO ADD (8/14): Apple has released a patch. It doesn’t help people with old model iPhones and iPod Touches, or work for people who’ve jailbroken their phones.

EDITED TO ADD (8/15): More info.

Posted on August 10, 2010 at 12:12 PM

A Revised Taxonomy of Social Networking Data

Lately I’ve been reading about user security and privacy—control, really—on social networking sites. The issues are hard and the solutions harder, but I’m seeing a lot of confusion in even forming the questions. Social networking sites deal with several different types of user data, and it’s essential to separate them.

Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again—revised—at an OECD workshop on the role of Internet intermediaries in June.

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it—another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.
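One way to keep the types straight is to treat the taxonomy as a data model in which each type records who created the data and who controls it afterward. A sketch, with the creator and controller fields taken from the definitions above:

    from dataclasses import dataclass

    @dataclass
    class SocialData:
        name: str
        created_by: str     # who produced the data
        controlled_by: str  # who can edit or delete it after posting

    TAXONOMY = [
        SocialData("service",    "user",         "site"),
        SocialData("disclosed",  "user",         "user"),
        SocialData("entrusted",  "user",         "another user"),
        SocialData("incidental", "another user", "another user"),
        SocialData("behavioral", "site",         "site"),
        SocialData("derived",    "site",         "site"),
    ]

    # The policy question the rest of this essay raises: wherever creator
    # and controller differ, whose rights should win?
    for t in TAXONOMY:
        print(f"{t.name:10}  created by {t.created_by:13} controlled by {t.controlled_by}")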

There are other ways to look at user data. Some of it you give to the social networking site in confidence, expecting the site to safeguard the data. Some of it you publish openly and others use it to find you. And some of it you share only within an enumerated circle of other users. At the receiving end, social networking sites can monetize all of it: generally by selling targeted advertising.

Different social networking sites give users different rights for each data type. Some are always private, some can be made private, and some are always public. Some can be edited or deleted—I know one site that allows entrusted data to be edited or deleted within a 24-hour period—and some cannot. Some can be viewed and some cannot.

It’s also clear that users should have different rights with respect to each data type. We should be allowed to export, change, and delete disclosed data, even if the social networking sites don’t want us to. It’s less clear what rights we have for entrusted data—and far less clear for incidental data. If you post pictures from a party with me in them, can I demand you remove those pictures—or at least blur out my face? (Go look up the conviction of three Google executives in an Italian court over a YouTube video.) And what about behavioral data? It’s frequently a critical part of a social networking site’s business model. We often don’t mind if a site uses it to target advertisements, but are less sanguine when it sells data to third parties.

As we continue our conversations about what sorts of fundamental rights people have with respect to their data, and more countries contemplate regulation on social networking sites and user data, it will be important to keep this taxonomy in mind. The sorts of things that would be suitable for one type of data might be completely unworkable and inappropriate for another.

This essay previously appeared in IEEE Security & Privacy.

Edited to add: this post has been translated into Portuguese.

Posted on August 10, 2010 at 6:51 AM

P ≠ NP?

There’s a new paper circulating that claims to prove that P ≠ NP. The paper has not been refereed, and I haven’t seen any independent verifications or refutations. Despite the fact that the paper is by a respected researcher—HP Labs’ Vinay Deolalikar—and not a crank, my bet is that the proof is flawed.

EDITED TO ADD (8/16): Proof seems to be seriously flawed.

EDITED TO ADD (9/11): Proof is wrong.

Posted on August 9, 2010 at 2:46 PM

Ant Warfare

Interesting:

According to Moffett, we might actually learn a thing or two from how ants wage war. For one, ant armies operate with precise organization despite a lack of central command. “We’re accustomed to being told what to do,” Moffett says. “I think there’s something to be said for fewer layers of control and oversight.”

Which, according to Moffett, is what can make human cyberwar and terrorist cells so effective. Battles waged on the web are often “downright ant-like,” with massive, networked groups engaging in strategic teamwork to rise up with little hierarchy. “Such ‘weak ties’—wide-ranging connections that take us beyond the tight-knit groups we interact with regularly—are likely of special importance in organizing both ants and people,” Moffett notes in his book.

Posted on August 9, 2010 at 7:12 AM

More Brain Scans to Detect Future Terrorists

Worked well in a test:

For the first time, the Northwestern researchers used the P300 testing in a mock terrorism scenario in which the subjects are planning, rather than perpetrating, a crime. The P300 brain waves were measured by electrodes attached to the scalp of the make-believe “persons of interest” in the lab.

The most intriguing part of the study in terms of real-world implications, Rosenfeld said, is that even when the researchers had no advance details about mock terrorism plans, the technology was still accurate in identifying critical concealed information.

“Without any prior knowledge of the planned crime in our mock terrorism scenarios, we were able to identify 10 out of 12 terrorists and, among them, 20 out of 30 crime-related details,” Rosenfeld said. “The test was 83 percent accurate in predicting concealed knowledge, suggesting that our complex protocol could identify future terrorist activity.”

Rosenfeld is a leading scholar in the study of P300 testing to reveal concealed information. Basically, electrodes are attached to the scalp to record P300 brain activity—or brief electrical patterns in the cortex—that occur, according to the research, when meaningful information is presented to a person with “guilty knowledge.”

More news stories.

The base rate of terrorism makes this test useless, but the technology will only get better.
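That base-rate point is worth making explicit with Bayes’ theorem. Take the study’s 83 percent at face value, generously assume the test also clears innocents 83 percent of the time, and pick an illustrative prevalence of one actual plotter per 100,000 people screened (both of those figures are assumptions, purely for illustration):

    sensitivity = 0.83       # P(positive | terrorist), from the study
    specificity = 0.83       # assumed equal; the study doesn't establish it
    base_rate = 1 / 100_000  # assumed prevalence, purely illustrative

    p_positive = (sensitivity * base_rate
                  + (1 - specificity) * (1 - base_rate))
    p_terrorist_given_positive = sensitivity * base_rate / p_positive

    print(f"{p_terrorist_given_positive:.4%}")  # ~0.0049%
    # Roughly 20,000 innocent people flagged for every real plotter caught.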

Posted on August 6, 2010 at 5:36 AM

NSA and the National Cryptologic Museum

Most people might not be aware of it, but there’s a National Cryptologic Museum at Ft. Meade, at NSA Headquarters. It’s hard to know its exact relationship with the NSA. Is it part of the NSA, or is it a separate organization? Can the NSA reclassify things in its archives? David Kahn has given his papers to the museum; is that a good idea?

A “Memorandum of Understanding (MOU) between The National Security Agency (NSA) and the National Cryptologic Museum Foundation” was recently released. It’s pretty boring, really, but it sheds some light on the relationship between the museum and the agency.

Posted on August 5, 2010 at 6:36 AM

WikiLeaks Insurance File

Now this is an interesting development:

In the wake of strong U.S. government statements condemning WikiLeaks’ recent publishing of 77,000 Afghan War documents, the secret-spilling site has posted a mysterious encrypted file labeled “insurance.”

The huge file, posted on the Afghan War page at the WikiLeaks site, is 1.4 GB and is encrypted with AES256. The file’s size dwarfs the size of all the other files on the page combined. The file has also been posted on a torrent download site.

It’s either 1.4 Gig of embarrassing secret documents, or 1.4 Gig of random data bluffing. There’s no way to know.

If WikiLeaks wanted to prove that their “insurance” was the real thing, they should have done this:

  1. Encrypt each document with a separate AES key.
  2. Have an outside party publicly choose one of the documents at random.
  3. Publish the decryption key for that document only.

That would be convincing.
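A minimal sketch of that scheme with per-document AES-GCM keys, using the Python cryptography package (key distribution and the public challenge mechanism are elided):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    documents = [b"document 0 ...", b"document 1 ...", b"document 2 ..."]

    # Step 1: encrypt every document under its own random 256-bit key.
    keys, blobs = [], []
    for doc in documents:
        key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        blobs.append((nonce, AESGCM(key).encrypt(nonce, doc, None)))
        keys.append(key)

    # Publish `blobs` as the insurance file; keep `keys` secret.

    # Steps 2-3: an outside party publicly names an index; publish only
    # that one key. Anyone can verify that blob decrypts to a real
    # document, while every other document stays sealed.
    challenged = 1
    nonce, ciphertext = blobs[challenged]
    print(AESGCM(keys[challenged]).decrypt(nonce, ciphertext, None))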

In any case, some of the details might be wrong. The file might not be encrypted with AES256. It might be Blowfish. It might have been produced with OpenSSL. It might be something else. Some more info here.

EDITED TO ADD (8/9): Weird Iranian paranoia:

An Iranian IT expert warned here on Wednesday that a mysterious download file posted by the WikiLeaks website, labeled as ‘Insurance’, is likely a spy software used for identifying the information centers of the United States’ foes.

“The mysterious file of the WikiLeaks might be a trap for intelligence gathering,” Hossein Mohammadi told FNA on Wednesday.

The expert added that the file will attract US opponents and Washington experts can identify their enemy centers by monitoring individuals’ or organizations’ tendency and enthusiasm for the file.

Posted on August 4, 2010 at 7:52 AM

UAE to Ban BlackBerrys

The United Arab Emirates—Dubai, etc.—is threatening to ban BlackBerrys because it can’t eavesdrop on them.

At the heart of the battle is access to the data transmitted by BlackBerrys. RIM processes the information through a handful of secure Network Operations Centers around the world, meaning that most governments can’t access the data easily on their own. The U.A.E. worries that because of jurisdictional issues, its courts couldn’t compel RIM to turn over secure data from its servers, which are outside the U.A.E. even in a national-security situation, a person familiar with the situation said.

This is a weird story for several reasons:

1. The UAE can’t eavesdrop on BlackBerry traffic because it is encrypted between RIM’s servers and the phones. That makes sense, but conventional e-mail services are no different. Gmail, for example, is encrypted between Google’s servers and the users’ computers. So are most other webmail services. Is the mobile nature of BlackBerrys really that different? Is it really not a problem that any smart phone can access webmail through an encrypted SSL tunnel?

2. This is an isolated move in a complicated negotiation between the UAE and RIM.

The U.A.E. ban, due to start Oct. 11, was the result of the “failure of ongoing attempts, dating back to 2007, to bring BlackBerry services in the U.A.E. in line with U.A.E. telecommunications regulations,” the country’s Telecommunications Regulatory Authority said Sunday. The ban doesn’t affect telephone and text-messaging services.

And:

The U.A.E. wanted RIM to locate servers in the country, where it had legal jurisdiction over them; RIM had offered access to the data of 3,000 clients instead, the person said.

There’s no reason to announce the ban over a month before it goes into effect, other than to prod RIM to respond in some way.

3. It’s not obvious who will blink first. RIM has about 500,000 users in the UAE. RIM doesn’t want to lose those subscribers, but the UAE doesn’t want to piss those people off, either. The UAE needs them to work and do business in their country, especially as real estate prices continue to collapse.

4. India, China, and Russia threatened to kick BlackBerrys out for this reason, but relented when RIM agreed to “address concerns,” which is code for “letting them eavesdrop.”

Most countries have negotiated agreements with RIM that enable their security agencies to monitor and decipher this traffic. For example, Russia’s two main mobile phone providers, MTS and Vimpelcom, began selling BlackBerrys after they agreed to provide access to the federal security service. “We resolved this question,” Vimpelcom says. “We provided access.”

The launch of BlackBerry service by China Mobile was delayed until RIM negotiated an agreement that enables China to monitor traffic.

Similarly, last week India lifted a threat to ban BlackBerry services after RIM agreed to address concerns.

[…]

Nevertheless, while RIM has declined to comment on the details of its arrangements with any government, it issued an opaque statement on Monday: “RIM respects both the regulatory requirements of government and the security and privacy needs of corporations and consumers.”

How did they do that? Did they put RIM servers in those countries, and allow the government access to the traffic? Did they pipe the raw traffic back to those countries from their servers elsewhere? Did they just promise to turn over any data when asked?

RIM makes a big deal about how secure its users’ data is, but I don’t know how much of that to believe:

RIM said the BlackBerry network was set up so that “no one, including RIM, could access” customer data, which is encrypted from the time it leaves the device. It added that RIM would “simply be unable to accommodate any request” for a key to decrypt the data, since the company doesn’t have the key.

The BlackBerry network is designed “to exclude the capability for RIM or any third party to read encrypted information under any circumstances,” RIM’s statement said. Moreover, the location of BlackBerry’s servers doesn’t matter, the company said, because the data on them can’t be deciphered without a decryption key.

Am I missing something here? RIM isn’t providing a file storage service, where user-encrypted data is stored on its servers. RIM is providing a communications service. While the data is encrypted between RIM’s servers and the BlackBerrys, it is RIM that does the encrypting and decrypting—so RIM has access to the plaintext.
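The underlying distinction is hop-by-hop versus end-to-end encryption: if a relay terminates one encrypted link and originates another, it necessarily holds the plaintext in between. A conceptual sketch with a toy XOR “cipher” (solely to show where the plaintext exists, not real cryptography):

    def toy_cipher(data: bytes, key: bytes) -> bytes:
        # XOR stand-in for a real cipher; encrypt and decrypt are the same op.
        stretched = (key * (len(data) // len(key) + 1))[:len(data)]
        return bytes(b ^ k for b, k in zip(data, stretched))

    device_key = b"shared-between-handset-and-relay"
    next_hop_key = b"shared-between-relay-and-next-hop"
    message = b"quarterly numbers attached"

    wire_leg1 = toy_cipher(message, device_key)              # handset -> relay

    # At the relay: to route or re-encrypt, it must first decrypt.
    plaintext_at_relay = toy_cipher(wire_leg1, device_key)   # relay sees this

    wire_leg2 = toy_cipher(plaintext_at_relay, next_hop_key) # relay -> onward
    assert plaintext_at_relay == message  # the relay held the plaintext

    # End-to-end would instead use a key the relay never has, so the relay
    # would only ever handle ciphertext.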

In any case, RIM has already demonstrated that it has the technical ability to address the UAE’s concerns. Like the apocryphal story about Churchill and Lady Astor, all that’s left is to agree on a price.

5. For the record, I have absolutely no idea what this quote of mine from the Reuters story really means:

“If you want to eavesdrop on your people, then you ban whatever they’re using,” said Bruce Schneier, chief security technology officer at BT. “The basic problem is there’s encryption between the BlackBerries and the servers. We find this issue all around about encryption.”

I hope I wasn’t that incoherent during the phone interview.

EDITED TO ADD (8/5): I might have gotten a do-over with Reuters. On a phone interview yesterday, I said: “RIM’s carefully worded statements about BlackBerry security are designed to make their customers feel better, while giving the company ample room to screw them.” Jonathan Zittrain picks apart one of those statements.

Posted on August 3, 2010 at 11:08 AM

Location-Based Quantum Encryption

Location-based encryption—a system by which only a recipient in a specific location can decrypt the message—fails because location can be spoofed. Now a group of researchers has solved the problem in a quantum cryptography setting:

The research group has recently shown that if one sends quantum bits—the quantum equivalent of a bit—instead of only classical bits, a secure protocol can be obtained such that the location of a device cannot be spoofed. This, in turn, leads to a key-exchange protocol based solely on location.

The core idea behind the protocol is the “no-cloning” principle of quantum mechanics. By making a device give the responses of random challenges to several verifiers, the protocol ensures that multiple colluding devices cannot falsely prove any location. This is because an adversarial device can either store the quantum state of the challenge or send it to a colluding adversary, but not both.
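Here is the classical loophole the quantum protocol closes, as a worked one-dimensional example. Verifiers V1 (at x = 0) and V2 (at x = d) each broadcast a random nonce at t = 0; a prover claiming the midpoint must return f(nonce1, nonce2) to each verifier by time d/c. Because classical bits can be copied, two colluders who forward the nonces to each other meet exactly the same deadlines (the positions below are illustrative):

    c = 1.0   # signal speed, in convenient units
    d = 10.0  # distance between verifiers V1 (x = 0) and V2 (x = d)
    p = d / 2 # claimed prover position: the midpoint

    # Honest prover: both nonces arrive at p/c, and the reply to V1 takes
    # another p/c, so V1 expects an answer no later than:
    honest_deadline_v1 = max(p, d - p) / c + p / c          # = d/c = 10.0

    # Colluders A1 (at x = a) and A2 (at x = b) straddle the claimed spot
    # and forward each other's nonces.
    a, b = 2.0, 8.0
    nonce1_at_a1 = a / c                                    # direct from V1
    nonce2_at_a1 = (d - b) / c + (b - a) / c                # relayed by A2
    a1_replies_to_v1 = max(nonce1_at_a1, nonce2_at_a1) + a / c

    print(honest_deadline_v1, a1_replies_to_v1)  # 10.0 10.0: indistinguishable
    # The V2 side is symmetric. With qubits, A1 and A2 cannot both keep and
    # forward the challenge state, so the copy-and-relay trick fails.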

Don’t expect this in a product anytime soon. Quantum cryptography is mostly theoretical and almost entirely laboratory-only. But as research, it’s great stuff. Paper here.

Posted on August 3, 2010 at 6:25 AM

Eavesdropping Smartphone Apps

Seems there are a lot of them. They do it for marketing purposes. Really, they seem to do it because the code base they use does it automatically, or just because they can. (Initial reports that an Android wallpaper app was malicious seem to have been an overstatement; they’re just incompetent, inadvertently collecting more data than necessary.)

Meanwhile, there’s now an Android rootkit available.

Posted on August 2, 2010 at 9:21 PM

Book Review: How Risky Is It, Really?

David Ropeik is a writer and consultant who specializes in risk perception and communication. His book, How Risky Is It, Really?: Why Our Fears Don’t Always Match the Facts, is a solid introduction to the biology, psychology, and sociology of risk. If you’re well-read on the topic already, you won’t find much you didn’t already know. But if this is a new topic for you, or if you want a well-organized guide to the current research on risk perception all in one place, this is pretty close to the perfect book.

Ropeik builds his model of human risk perception from the inside out. Chapter 1 is about fear, our largely subconscious reaction to risk. Chapter 2 discusses bounded rationality, the cognitive shortcuts that allow us to efficiently make risk trade-offs. Chapter 3 discusses some of the common cognitive biases we have that cause us to either overestimate or underestimate risk: trust, control, choice, natural vs. man-made, fairness, etc.—thirteen in all. Finally, Chapter 4 discusses the sociological aspects of risk perception: how our estimation of risk depends on that of the people around us.

The book is primarily about how we humans get risk wrong: how our perception of risk differs from the reality of risk. But Ropeik is careful not to use the word “wrong,” and repeatedly warns us not to do it. Risk perception is not right or wrong, he says; it simply is. I don’t agree with this. There is both a feeling and reality of risk and security, and when they differ, we make bad security trade-offs. If you think your risk of dying in a terrorist attack, or of your children being kidnapped, is higher than it really is, you’re going to make bad security trade-offs. Yes, security theater has its place, but we should try to make that place as small as we can.

In Chapter 5, Ropeik tries his hand at solutions to this problem: “closing the perception gap” is how he puts it; reducing the difference between the feeling of security and the reality is how I like to explain it. This is his weakest chapter, but it’s also a very hard problem. My writings along this line are similarly weak. Still, his ideas are worth reading and thinking about.

I don’t have any other complaints with the book. Ropeik nicely balances readability with scientific rigor, his examples are interesting and illustrative, and he is comprehensive without being boring. Extensive footnotes allow the reader to explore the actual research behind the generalities. Even though I didn’t learn much from reading it, I enjoyed the ride.

How Risky Is It, Really? is available in hardcover and for the Kindle. Presumably a paperback will come out in a year or so. Ropeik has a blog, although he doesn’t update it much.

Posted on August 2, 2010 at 6:38 AM
