Blog: April 2016 Archives
I’m writing a book on security in the highly connected Internet-of-Things world. Tentative title:
<blockquote><i>Click Here to Kill Everybody: Peril and Promise in a Hyper-Connected World</i></blockquote>
There are two underlying metaphors in the book. The first is what I have called the World-Sized Web, which is that combination of mobile, cloud, persistence, personalization, agents, cyber-physical systems, and the Internet of Things. The second is what I’m calling the “war of all against all,” which is the recognition that security policy is a series of “wars” between various interests, and that any policy decision in any one of the wars affects all the others. I am not wedded to either metaphor at this point.
This is the current table of contents, with three of the chapters broken out into sub-chapters:
- The World-Sized Web
- The Coming Threats
  - Privacy Threats
  - Availability and Integrity Threats
  - Threats from Software-Controlled Systems
  - Threats from Interconnected Systems
  - Threats from Automatic Algorithms
  - Threats from Autonomous Systems
  - Other Threats of New Technologies
  - Catastrophic Risk
- The Current Wars
  - The Copyright Wars
  - The US/EU Data Privacy Wars
  - The War for Control of the Internet
  - The War of Secrecy
- The Coming Wars
  - The War for Your Data
  - The War Against Your Computers
  - The War for Your Embedded Computers
  - The Militarization of the Internet
  - The Powerful vs. the Powerless
  - The Rights of the Individual vs. the Rights of Society
- The State of Security
- Near-Term Solutions
- Security for an Empowered World
That will change, of course. If the past is any guide, everything will change.
Questions: Am I missing any threats? Am I missing any wars?
Current schedule is for me to finish writing this book by the end of September, and have it published at the end of April 2017. I hope to have pre-publication copies available for sale at the RSA Conference next year. As with my previous book, Norton is the publisher.
So if you notice me blogging less this summer, this is why.
In Data and Goliath, I talk about the self-censorship that comes along with broad surveillance. This interesting research documents this phenomenon in Wikipedia: “Chilling Effects: Online Surveillance and Wikipedia Use,” by Jon Penney, Berkeley Technology Law Journal, 2016.
Abstract: This article discusses the results of the first empirical study providing evidence of regulatory “chilling effects” of Wikipedia users associated with online government surveillance. The study explores how traffic to Wikipedia articles on topics that raise privacy concerns for Wikipedia users decreased after the widespread publicity about NSA/PRISM surveillance revelations in June 2013. Using an interdisciplinary research design, the study tests the hypothesis, based on chilling effects theory, that traffic to privacy-sensitive Wikipedia articles reduced after the mass surveillance revelations. The Article finds not only a statistically significant immediate decline in traffic for these Wikipedia articles after June 2013, but also a change in the overall secular trend in the view count traffic, suggesting not only immediate but also long-term chilling effects resulting from the NSA/PRISM online surveillance revelations. These, and other results from the case study, not only offer compelling evidence for chilling effects associated with online surveillance, but also offer important insights about how we should understand such chilling effects and their scope, including how they interact with other dramatic or significant events (like war and conflict) and their broader implications for privacy, U.S. constitutional litigation, and the health of democratic society. This study is among the first to demonstrate—using either Wikipedia data or web traffic data more generally—how government surveillance and similar actions impact online activities, including access to information and knowledge online.
Kindle Unlimited is an all-you-can-read service. You pay one price and can read anything that’s in the program. Amazon pays authors out of a fixed pool, on the basis of how many people read their books. More interestingly, it pays by the page. An author makes more money if someone reads their book through to page 200 than if they give up at page 50, and even more if they make it through to the end. This makes sense; it doesn’t pay authors for books people download but don’t read, or abandon after the first few pages.
This payment structure requires surveillance, and the Kindle does watch people as they read. The problem is that the Kindle doesn’t know if the reader actually reads the book—only what page they’re on. So Kindle Unlimited records the furthest page the reader synched, and pays based on that.
This opens up the possibility for fraud. If an author can create a thousand-page book and trick the reader into jumping to page 1,000, he gets paid the maximum. Scam authors are doing this through a variety of tricks.
What’s interesting is that while Amazon is definitely concerned about this kind of fraud, it doesn’t affect its bottom line. The fixed payment pool doesn’t change; just who gets how much of it does.
EDITED TO ADD: John Scalzi comments.
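The fixed-pool mechanics described above can be sketched in a few lines. This is a hypothetical illustration of proportional pool-splitting, not Amazon’s actual formula; the function and variable names are mine:

```python
def payouts(pool, pages_read):
    """Split a fixed payment pool among authors in proportion to
    the pages of their books that readers got through.

    pool       -- total dollars Amazon pays out this period
    pages_read -- dict mapping author -> total pages read
    """
    total = sum(pages_read.values())
    return {author: pool * pages / total
            for author, pages in pages_read.items()}
```

Note why the fraud doesn’t cost Amazon anything: the pool is fixed, so a scammer’s inflated page count only shrinks everyone else’s share.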
In the study, sponsored in part by the Air Force Office of Scientific Research (AFOSR), the researchers recruited a group of 42 volunteers, most of them college students, and asked them to follow a brightly colored robot that had the words “Emergency Guide Robot” on its side. The robot led the study subjects to a conference room, where they were asked to complete a survey about robots and read an unrelated magazine article. The subjects were not told the true nature of the research project.
In some cases, the robot—which was controlled by a hidden researcher—led the volunteers into the wrong room and traveled around in a circle twice before entering the conference room. For several test subjects, the robot stopped moving, and an experimenter told the subjects that the robot had broken down. Once the subjects were in the conference room with the door closed, the hallway through which the participants had entered the building was filled with artificial smoke, which set off a smoke alarm.
When the test subjects opened the conference room door, they saw the smoke—and the robot, which was then brightly lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway—marked with exit signs—that had been used to enter the building.
“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, that people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”
The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.
Our notions of trust depend on all sorts of cues that have nothing to do with actual trustworthiness. I would be interested in seeing where the robot fits in the continuum of authority figures. Is it trusted more or less than a man in a hazmat suit? A woman in a business suit? An obviously panicky student? How do different-looking robots fare?
Last week, there was a big news story about the BlackBerry encryption key. The news was that all BlackBerry devices share a global encryption key, and that the Canadian RCMP has a copy of it. Stupid design, certainly, but it’s not news. As the Register points out, this has been repeatedly reported on since 2010.
And note that this only holds for individual users. If your organization uses a BlackBerry Enterprise Server (BES), you have your own unique key.
If doping weren’t enough, cyclists are cheating in races by hiding tiny motors in their bicycles. There are many detection techniques:
For its report, Stade 2 positioned a thermal imaging camera along the route of the Strade Bianche, an Italian professional men’s race in March held mostly on unpaved roads and featuring many steep climbs. The rear hub of one bicycle glowed with almost the same vivid orange-yellow thermal imprint of the riders’ legs. Engineers and antidoping experts interviewed by the TV program said the pattern could be explained only by heat generated by a motor. The rider was not named by the program and could not be identified from the thermal image.
Cycling’s equivalents of the Zapruder film are online videos that show unusual patterns of bike changes that precede or follow exceptional bursts of speed by riders. Other videos analyze riders’ hand movements for signs of switching on motors. Still other online analysts pore over crashes, looking for bikes on which the cranks keep turning after separation from the rider.
Unlike the thermal images, however, the videos have only implied that a motor was present.
In a statement, the cycling union, which commonly goes by its French initials, U.C.I., said it had tested and rejected thermal imaging.
“The U.C.I. has been testing for technological fraud for many years, and with the objective of increasing the efficiency of these tests, we have been trialling new methods of detection over the last year,” the governing body said. “We have looked at thermal imaging, X-ray and ultrasonic testing, but by far the most cost-effective, reliable and accurate method has proved to be magnetic resonance testing using software we have created in partnership with a company of specialist developers.”
NYU professor Helen Nissenbaum gave an excellent lecture at Brown University last month, where she rebutted those who think that we should not regulate data collection, only data use: something she calls “big data exceptionalism.” Basically, this is the idea that collecting the “haystack” isn’t the problem; it’s what is done with it that is. (I discuss this same topic in Data and Goliath, on pages 197-9.)
In her talk, she makes a very strong argument that the problem is one of domination. Contemporary political philosopher Philip Pettit has written extensively about a republican conception of liberty. He defines domination as the extent to which one person has the ability to interfere with the affairs of another.
Under this framework, the problem with wholesale data collection is not that it is used to curtail your freedom; the problem is that the collector has the power to curtail your freedom. Whether they use it or not, the fact that they have that power over us is itself a harm.
Last year, we learned about a backdoor in Juniper firewalls, one that seems to have been added into the code base.
There’s now some good research: “A Systematic Analysis of the Juniper Dual EC Incident,” by Stephen Checkoway, Shaanan Cohney, Christina Garman, Matthew Green, Nadia Heninger, Jacob Maskiewicz, Eric Rescorla, Hovav Shacham, and Ralf-Philipp Weinmann:
Abstract: In December 2015, Juniper Networks announced that unknown attackers had added unauthorized code to ScreenOS, the operating system for their NetScreen VPN routers. This code created two vulnerabilities: an authentication bypass that enabled remote administrative access, and a second vulnerability that allowed passive decryption of VPN traffic. Reverse engineering of ScreenOS binaries revealed that the first of these vulnerabilities was a conventional back door in the SSH password checker. The second is far more intriguing: a change to the Q parameter used by the Dual EC pseudorandom number generator. It is widely known that Dual EC has the unfortunate property that an attacker with the ability to choose Q can, from a small sample of the generator’s output, predict all future outputs. In a 2013 public statement, Juniper noted the use of Dual EC but claimed that ScreenOS included countermeasures that neutralized this form of attack.
In this work, we report the results of a thorough independent analysis of the ScreenOS randomness subsystem, as well as its interaction with the IKE VPN key establishment protocol. Due to apparent flaws in the code, Juniper’s countermeasures against a Dual EC attack are never executed. Moreover, by comparing sequential versions of ScreenOS, we identify a cluster of additional changes that were introduced concurrently with the inclusion of Dual EC in a single 2008 release. Taken as a whole, these changes render the ScreenOS system vulnerable to passive exploitation by an attacker who selects Q. We demonstrate this by installing our own parameters, and showing that it is possible to passively decrypt a single IKE handshake and its associated VPN traffic in isolation without observing any other network traffic.
We still don’t know who installed the back door.
There’s a new law in Kuwait mandating DNA testing for everyone: citizens, expatriates, and visitors. They promise that the program “does not include genealogical implications or affects personal freedoms and privacy.”
I assume that “visitors” includes tourists, so presumably the entry procedure at passport control will now include a cheek swab. And there is nothing preventing the Kuwaiti government from sharing that information with any other government.
Shortened URLs, produced by services like bit.ly and goo.gl, can be brute-forced. And searching random shortened URLs yields all sorts of secret documents. Plus, many of them can be edited, and can be infected with malware.
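Brute-forcing works because the token space is small relative to how fast it can be scanned. A rough sketch of the math, assuming a typical 62-character alphabet (the actual alphabets and token lengths vary by service):

```python
import itertools
import string

# Typical URL-shortener alphabet: a-z, A-Z, 0-9 (62 characters).
ALPHABET = string.ascii_letters + string.digits

def keyspace(length):
    """Number of possible shortened-URL tokens of a given length."""
    return len(ALPHABET) ** length

def enumerate_tokens(length):
    """Yield every token of the given length, in order.

    Only feasible for short lengths; this is exactly the enumeration
    a brute-force scanner would walk through.
    """
    for combo in itertools.product(ALPHABET, repeat=length):
        yield "".join(combo)
```

A 6-character token gives about 5.7 × 10^10 possibilities, which sounds large but is well within reach of a distributed scanner sampling at random, which is how the researchers found live documents.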
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Monday is Tax Day. Many of us are thinking about our taxes. Are they too high or too low? What’s our money being spent on? Do we have a government worth paying for? I’m not here to answer any of those questions—I’m here to give you something else to think about. In addition to sending the IRS your money, you’re also sending them your data.
It’s a lot of highly personal financial data, so it’s sensitive and important information.
Is that data secure?
The short answer is “no.” Every year, the GAO—Government Accountability Office—reviews IRS security and issues a report. The title of this year’s report kind of says it all: “IRS Needs to Further Improve Controls over Financial and Taxpayer Data.” The details are ugly: failures in identification and authentication of network users, failures to encrypt data, failures in audit and monitoring, and failures to patch vulnerabilities and update software.
To be fair, the GAO can sometimes be pedantic in its evaluations. And the 43 recommendations for the IRS to improve security aren’t being made public, so as not to advertise our vulnerabilities to the bad guys. But this is all pretty basic stuff, and it’s embarrassing.
More importantly, this lack of security is dangerous. We know that cybercriminals are using our financial information to commit fraud. Specifically, they’re using our personal tax information to file for tax refunds in our name to fraudulently collect the refunds.
We know that foreign governments are targeting U.S. government networks for personal information on U.S. citizens: Remember the OPM data theft that was made public last year in which a federal personnel database with records on 21.5 million people was stolen?
There have been some stories of hacks against IRS databases in the past. I think that the IRS has been hacked even more than is publicly reported, either because the government is keeping the attacks secret or because it doesn’t even realize it’s been attacked.
So what happens next?
If the past is any guide, not a lot. The GAO has been warning about problems with IRS security since it started writing these reports in 2007. In each report, the GAO has issued recommendations for the IRS to improve security. After each report, the IRS did a few of those things, but ignored most of the recommendations. In this year’s report, for example, the GAO complained that the IRS ignored 47 of its 70 recommendations from 2015. In its 2015 report, it complained that the IRS only mitigated 14 of the 69 weaknesses it identified in 2013. The 2012 report didn’t paint IRS security in any better light.
If I had to guess, I’d say the IRS’s security is this bad for the exact same reason that so much corporate network security is so bad: lack of budget. It’s not uncommon for companies to skimp on their security budget. The budget at the IRS has been cut 17% since 2010; I am certain IT security was not exempt from those cuts.
So we’re stuck. We have no choice but to give the IRS our data. The IRS isn’t doing a good job securing our data. Congress isn’t giving the IRS enough budget to do a good job securing our data. Last Tuesday, the Senate Finance Committee urged the IRS to improve its security. We all need to urge Congress to give it the money to do so.
Nothing is absolutely hacker-proof, but there are a lot of security improvements the IRS can make. If we have to give the IRS all our information—and we do—we deserve to have it taken care of properly.
This essay previously appeared on CNN.com.
Story of Julie Miller, who cheated in multiple triathlon races:
The difference between cheating in 1980 and cheating today is that it’s much harder to get away with now. What trips up contemporary cheaters, Empfield said, is their false assumption that the only thing they have to worry about is their timing chip, the device they wear that records their time at various points along a course.
But the use of additional technology—especially the ubiquitous course photos taken by spectators and professional photographers, which provide a wealth of information about athletes’ positions and times throughout a race—makes it difficult for people to cover their tracks after the fact.
“What these people don’t understand is that the photos contain so much data—they don’t know that this exists,” Empfield said of cheaters. “They think that if they hide in the bushes and re-emerge or take the chip off or whatever, they’re in the clear. But the problem is that people can now forensically recreate your race.”
Reminds me of this 2012 story about marathon cheating.
EDITED TO ADD (4/27): An update with proof of cheating.
The company Cellebrite is developing a portable forensics device that would determine if a smartphone user was using the phone at a particular time. The idea is to test phones of drivers after accidents:
Under the first-of-its-kind legislation proposed in New York, drivers involved in accidents would have to submit their phone to roadside testing from a textalyzer to determine whether the driver was using a mobile phone ahead of a crash. In a bid to get around the Fourth Amendment right to privacy, the textalyzer allegedly would keep conversations, contacts, numbers, photos, and application data private. It will solely say whether the phone was in use prior to a motor-vehicle mishap. Further analysis, which might require a warrant, could be necessary to determine whether such usage was via hands-free dashboard technology and to confirm the original finding.
This is interesting technology. To me, it feels no more intrusive than a breathalyzer, assuming that the textalyzer has all the privacy guards described above.
EDITED TO ADD (4/19): Good analysis and commentary.
Interesting article about how a former security director of the US Multi-State Lottery Association hacked the random-number generator in lottery software so he could predict the winning numbers.
For several years, Eddie Tipton, the former security director of the US Multi-State Lottery Association, installed software code that allowed him to predict winning numbers on specific days of the year, investigators allege. The random-number generators had been erased, but new forensic evidence has revealed how the hack was apparently done.
The number generator had apparently been hacked to produce predictable numbers on three days of the year, after the machine had gone through a security audit.
Note that last bit. The software would only produce the non-random results after the software security audit was completed, so the audit would never see the rigged behavior.
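We don’t know exactly what Tipton’s code did, but the general failure mode, a generator whose output is a deterministic function of the date, can be illustrated with a toy sketch. This is purely hypothetical code of my own, not the actual lottery software:

```python
import random
from datetime import date

def draw(day, count=6, lo=1, hi=49):
    """A deliberately weak lottery draw.

    Seeding the PRNG with the calendar date makes every 'random'
    result reproducible: anyone who knows the scheme can compute
    the winning numbers for any day in advance.
    """
    rng = random.Random(day.toordinal())  # seed derived from the date
    return sorted(rng.sample(range(lo, hi + 1), count))
```

An insider who knows the seeding scheme simply runs the same function ahead of time; a security audit that only inspects outputs on other days sees nothing wrong.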
They feel quaint today:
But in the spring of 1859, folks were concerned about another kind of hustle: A man who went by the name of A.V. Lamartine drifted from town to town in the Midwest pretending to attempt suicide.
He would walk into a hotel—according to newspaper accounts from Salem, Ore., to Richmond, Va., and other places—and appear depressed as he requested a room. Once settled in, he would ring a bell for assistance, and when someone arrived, Lamartine would point to an empty bottle on the table labeled “2 ounces of laudanum” and call for a clergyman.
People rushing to his bedside to help him would find a suicide note. The Good Samaritans would summon a doctor, administer emetics and nurse him as he recovered.
Somehow Lamartine knew his situation would engender medical and financial assistance from kind strangers in the 19th century. The scenarios ended this way, as one Brooklyn reporter explained: “He is restored with difficulty and sympathetic people raise a purse for him and he departs.”
Interesting research: Suphannee Sivakorn, Iasonas Polakis, and Angelos D. Keromytis, “I Am Robot: (Deep) Learning to Break Semantic Image CAPTCHAs”:
Abstract: Since their inception, captchas have been widely used for preventing fraudsters from performing illicit actions. Nevertheless, economic incentives have resulted in an arms race, where fraudsters develop automated solvers and, in turn, captcha services tweak their design to break the solvers. Recent work, however, presented a generic attack that can be applied to any text-based captcha scheme. Fittingly, Google recently unveiled the latest version of reCaptcha. The goal of their new system is twofold; to minimize the effort for legitimate users, while requiring tasks that are more challenging to computers than text recognition. ReCaptcha is driven by an “advanced risk analysis system” that evaluates requests and selects the difficulty of the captcha that will be returned. Users may be required to click in a checkbox, or solve a challenge by identifying images with similar content.
In this paper, we conduct a comprehensive study of reCaptcha, and explore how the risk analysis process is influenced by each aspect of the request. Through extensive experimentation, we identify flaws that allow adversaries to effortlessly influence the risk analysis, bypass restrictions, and deploy large-scale attacks. Subsequently, we design a novel low-cost attack that leverages deep learning technologies for the semantic annotation of images. Our system is extremely effective, automatically solving 70.78% of the image reCaptcha challenges, while requiring only 19 seconds per challenge. We also apply our attack to the Facebook image captcha and achieve an accuracy of 83.5%. Based on our experimental findings, we propose a series of safeguards and modifications for impacting the scalability and accuracy of our attacks. Overall, while our study focuses on reCaptcha, our findings have wide implications; as the semantic information conveyed via images is increasingly within the realm of automated reasoning, the future of captchas relies on the exploration of novel directions.
Khan was arrested in mid-July 2015. Undercover police officers posing as company managers arrived at his workplace and asked to check his driver and work records, according to the source. When they disputed where he was on a particular day, he got out his iPhone and showed them the record of his work.
The undercover officers asked to see his iPhone and Khan handed it over. After that, he was arrested. British police had 30 seconds to change the password settings to keep the phone open.
Reminds me about how the FBI arrested Ross William Ulbricht:
The agents had tailed him, waiting for the 29-year-old to open his computer and enter his passwords before swooping in.
That also works.
And, yes, I understand that none of this would have worked with the already dead Syed Farook and his iPhone.
As I expected when I announced this acquisition, I am staying on as the CTO of Resilient and something like Senior Advisor to IBM Security—we’re still working on the exact title. Everything I’ve seen so far indicates that this will be a good home for me. They know what they’re getting, and they’re still keeping me on. I have no intention of changing what I write about or speak about—or to whom.
For the company, this is still a great deal. The acquisition was big news at the RSA Conference a month ago, and we’ve gotten nothing but a positive response from analysts and a primarily positive response from customers.
CONIKS is a new, easy-to-use, transparent key-management system:
CONIKS is a key management system for end users capable of integration in end-to-end secure communication services. The main idea is that users should not have to worry about managing encryption keys when they want to communicate securely, but they also should not have to trust their secure communication service providers to act in their interest.
One of the main challenges to building usable end-to-end encrypted communication tools is key management. Services such as Apple’s iMessage have made encrypted communication available to the masses with an excellent user experience because Apple manages a directory of public keys in a centralized server on behalf of their users. But this also means users have to trust that Apple’s key server won’t be compromised or compelled by hackers or nation-state actors to insert spurious keys to intercept and manipulate users’ encrypted messages. The alternative, and more secure, approach is to have the service provider delegate key management to the users so they aren’t vulnerable to a compromised centralized key server. This is how Google’s End-To-End works right now. But decentralized key management means users must “manually” verify each other’s keys to be sure that the keys they see for one another are valid, a process that several studies have shown to be cumbersome and error-prone for the vast majority of users. So users must make the choice between strong security and great usability.
And this is CONIKS:
In CONIKS, communication service providers (e.g. Google, Apple) run centralized key servers so that users don’t have to worry about encryption keys, but the main difference is CONIKS key servers store the public keys in a tamper-evident directory that is publicly auditable yet privacy-preserving. On a regular basis, CONIKS key servers publish directory summaries, which allow users in the system to verify they are seeing consistent information. To achieve this transparent key management, CONIKS uses various cryptographic mechanisms that leave undeniable evidence if any malicious outsider or insider were to tamper with any key in the directory and present different parties different views of the directory. These consistency checks can be automated and built into the communication apps to minimize user involvement.
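The “tamper-evident directory” idea can be sketched with a toy Merkle tree. CONIKS’s real construction is a Merkle prefix tree with additional privacy protections, so this is only an illustration of why a published summary makes tampering detectable; all names here are mine:

```python
import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash directory entries pairwise up to a single root.

    Changing any entry changes the root, so a provider that publishes
    the root (the 'directory summary') cannot later swap in a spurious
    key without every auditor noticing the mismatch.
    """
    if not leaves:
        return _h(b"")
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A client that sees one root and an auditor that sees another have undeniable evidence of equivocation, which is the consistency check CONIKS automates inside the messaging app.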
WhatsApp is now end-to-end encrypted.
EDITED TO ADD (4/6): Another article.
This is good:
Threats constantly change, yet our political discourse suggests that our vulnerabilities are simply for lack of resources, commitment or competence. Sometimes, that is true. But mostly we are vulnerable because we choose to be; because we’ve accepted, at least implicitly, that some risk is tolerable. A state that could stop every suicide bomber wouldn’t be a free or, let’s face it, fun one.
We will simply never get to maximum defensive posture. Regardless of political affiliation, Americans wouldn’t tolerate the delay or intrusion of an urban mass-transit system that required bag checks and pat-downs. After the 2013 Boston Marathon bombing, many wondered how to make the race safe the next year. A heavier police presence helps, but the only truly safe way to host a marathon is to not have one at all. The risks we tolerate, then, are not necessarily bad bargains simply because an enemy can exploit them.
No matter what promises are made on the campaign trail, terrorism will never be vanquished. There is no ideology, no surveillance, no wall that will definitely stop some 24-year-old from becoming radicalized on the Web, gaining access to guns and shooting a soft target. When we don’t admit this to ourselves, we often swing between the extremes of putting our heads in the sand or losing them entirely.
I am reminded of my own 2006 “Refuse to be Terrorized” essay.
Squid-based research is yielding composites that are both strong and flexible.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
I have long discounted warrant canaries. A gag order is serious, and this sort of high-school trick won’t fool judges for a minute. But so far they seem to be working.
Now we have another question: now what? We have one piece of information, but not a very useful one. We know that national security letters (NSLs) can affect anywhere from a single user to millions of users. Which kind was this? We have no idea. Is Reddit fighting? We have no idea. How long will this go on? We don’t know that, either. When I think about what we can do to be useful here, I can’t think of anything.
Long and interesting article about a fixer who hacked multiple elections in Latin America. This isn’t election hacking as in manipulate the voting machines or the vote counting, but hacking and social-media dirty tricks leading up to the election.
EDITED TO ADD: April Fool’s joke, it seems. Fooled me, probably because I read too fast. The ending is definitely suspicious.
EDITED TO ADD: Not an April Fool’s joke. I have gotten this from Bloomberg News itself. They spent a lot of time on this story—it’s 100% real. And this follow-on story is also worth reading.
This is definitely an April Fool’s joke.