Entries Tagged "laws"


Presidential Cybersecurity and Pelotons

President Biden wants his Peloton in the White House. For those who have missed the hype, it’s an Internet-connected stationary bicycle. It has a screen, a camera, and a microphone. You can take live classes online, work out with your friends, or join the exercise social network. And all of that is a security risk, especially if you are the president of the United States.

Any computer brings with it the risk of hacking. This is true of our computers and phones, and it’s also true about all of the Internet-of-Things devices that are increasingly part of our lives. These large and small appliances, cars, medical devices, toys and—yes—exercise machines are all computers at their core, and they’re all just as vulnerable. Presidents face special risks when it comes to the IoT, but Biden has the NSA to help him handle them.

Not everyone is so lucky, and the rest of us need something more structural.

US presidents have long tussled with their security advisers over tech. The NSA often customizes devices, but that means eliminating features. In 2010, President Barack Obama complained that his presidential BlackBerry device was “no fun” because only ten people were allowed to contact him on it. In 2013, security prevented him from getting an iPhone. When he finally got an upgrade to his BlackBerry in 2016, he complained that his new “secure” phone couldn’t take pictures, send texts, or play music. His “hardened” iPad to read daily intelligence briefings was presumably similarly handicapped. We don’t know what the NSA did to these devices, but they certainly modified the software and physically removed the cameras and microphones—and possibly the wireless Internet connection.

President Donald Trump resisted efforts to secure his phones. We don’t know the details, only that they were regularly replaced, with the government effectively treating them as burner phones.

The risks are serious. We know that the Russians and the Chinese were eavesdropping on Trump’s phones. Hackers can remotely turn on microphones and cameras, listening in on conversations. They can grab copies of any documents on the device. They can also use those devices to further infiltrate government networks, maybe even jumping onto classified networks that the devices connect to. If the devices have physical capabilities, those can be hacked as well. In 2007, the wireless features of Vice President Richard B. Cheney’s pacemaker were disabled out of fears that it could be hacked to assassinate him. In 1999, the NSA banned Furbies from its offices, mistakenly believing that they could listen and learn.

Physically removing features and components works, but the results are increasingly unacceptable. The NSA could take Biden’s Peloton and rip out the camera, microphone, and Internet connection, and that would make it secure—but then it would just be a normal (albeit expensive) stationary bike. Maybe Biden wouldn’t accept that, and he’d demand that the NSA do even more work to customize and secure the Peloton part of the bicycle. Maybe Biden’s security agents could isolate his Peloton in a specially shielded room where it couldn’t infect other computers, and warn him not to discuss national security in its presence.

This might work, but it certainly doesn’t scale. As president, Biden can direct substantial resources to solving his cybersecurity problems. The real issue is what everyone else should do. The president of the United States is a singular espionage target, but so are members of his staff and other administration officials.

Members of Congress are targets, as are governors and mayors, police officers and judges, CEOs and directors of human rights organizations, nuclear power plant operators, and election officials. All of these people have smartphones, tablets, and laptops. Many have Internet-connected cars and appliances, vacuums, bikes, and doorbells. Every one of those devices is a potential security risk, and all of those people are potential national security targets. But none of those people will get their Internet-connected devices customized by the NSA.

That is the real cybersecurity issue. Internet connectivity brings with it features we like. In our cars, it means real-time navigation, entertainment options, automatic diagnostics, and more. In a Peloton, it means everything that makes it more than a stationary bike. In a pacemaker, it means continuous monitoring by your doctor—and possibly your life saved as a result. In an iPhone or iPad, it means…well, everything. We can search for older, non-networked versions of some of these devices, or the NSA can disable connectivity for the privileged few of us. But the result is the same: in Obama’s words, “no fun.”

And unconnected options are increasingly hard to find. In 2016, I tried to find a new car that didn’t come with Internet connectivity, but I had to give up: there were no options to omit that in the class of car I wanted. Similarly, it’s getting harder to find major appliances without a wireless connection. As the price of connectivity continues to drop, more and more things will only be available Internet-enabled.

Internet security is national security—not because the president is personally vulnerable but because we are all part of a single network. Depending on who we are and what we do, we will make different trade-offs between security and fun. But we all deserve better options.

Regulations that force manufacturers to provide better security for all of us are the only way to do that. We need minimum security standards for computers of all kinds. We need transparency laws that give all of us, from the president on down, sufficient information to make our own security trade-offs. And we need liability laws that hold companies accountable when they misrepresent the security of their products and services.

I’m not worried about Biden. He and his staff will figure out how to balance his exercise needs with the national security needs of the country. Sometimes the solutions are weirdly customized, such as the anti-eavesdropping tent that Obama used while traveling. I am much more worried about the political activists, journalists, human rights workers, and oppressed minorities around the world who don’t have the money or expertise to secure their technology, or the information that would give them the ability to make informed decisions on which technologies to choose.

This essay previously appeared in the Washington Post.

Posted on February 5, 2021 at 5:58 AM

Another California Data Privacy Law

The California Consumer Privacy Act is a lesson in missed opportunities. It was passed in haste, to stop a ballot initiative that would have been even more restrictive:

In September 2017, Alastair Mactaggart and Mary Ross proposed a statewide ballot initiative entitled the “California Consumer Privacy Act.” Ballot initiatives are a process under California law in which private citizens can propose legislation directly to voters, and pursuant to which such legislation can be enacted through voter approval without any action by the state legislature or the governor. While the proposed privacy initiative was initially met with significant opposition, particularly from large technology companies, some of that opposition faded in the wake of the Cambridge Analytica scandal and Mark Zuckerberg’s April 2018 testimony before Congress. By May 2018, the initiative appeared to have garnered sufficient support to appear on the November 2018 ballot. On June 21, 2018, the sponsors of the ballot initiative and state legislators then struck a deal: in exchange for withdrawing the initiative, the state legislature would pass an agreed version of the California Consumer Privacy Act. The initiative was withdrawn, and the state legislature passed (and the Governor signed) the CCPA on June 28, 2018.

Since then, it has been substantially amended—that is, watered down—at the request of various surveillance capitalism companies. Enforcement was supposed to start this year, but we haven’t seen much yet.

And we could have had that ballot initiative.

It looks like Alastair Mactaggart and others are back.

Advocacy group Californians for Consumer Privacy, which started the push for a state-wide data privacy law, announced this week that it has the signatures it needs to get version 2.0 of its privacy rules on the US state’s ballot in November, and submitted its proposal to Sacramento.

This time the goal is to tighten up the rules that its previous ballot measure managed to get into law, despite the determined efforts of internet giants like Google and Facebook to kill it. In return for the legislation being passed, that ballot measure was dropped. Now, it looks like the campaigners are taking their fight to a people’s vote after all.

[…]

The new proposal would add more rights, including restrictions on the use and sale of sensitive personal information, such as health and financial information, racial or ethnic origin, and precise geolocation. It would also triple existing fines for companies caught breaking the rules surrounding data on children (under 16s) and would require an opt-in to even collect such data.

The proposal would also give Californians the right to know when their information is used to make fundamental decisions about them, such as getting credit or employment offers. And it would require political organizations to divulge when they use similar data for campaigns.

And just to push the tech giants from fury into full-blown meltdown, the new ballot measure would require a majority vote in the legislature for any amendments to the law, effectively stripping their vast lobbying powers and cutting off the multitude of ways the measure and its enforcement can be watered down within the political process.

I don’t know why they accepted the compromise in the first place. It was obvious that the legislative process would be hijacked by the powerful tech companies. I support getting this onto the ballot this year.

EDITED TO ADD (5/17): It looks like this new ballot initiative isn’t going to be an improvement.

Posted on May 11, 2020 at 10:58 AM

Securing the Internet of Things through Class-Action Lawsuits

This law journal article discusses the role of class-action litigation to secure the Internet of Things.

Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It’s a lot to read, but it’s an interesting take on how to secure this otherwise disastrously insecure world.

And it was inspired by my book, Click Here to Kill Everybody.

EDITED TO ADD (3/13): Consumer Reports recently explored how prevalent arbitration (vs. lawsuits) has become in the USA.

Posted on February 27, 2020 at 6:03 AM

Policy vs. Technology

Sometime around 1993 or 1994, during the first Crypto Wars, I was part of a group of cryptography experts that went to Washington to advocate for strong encryption. Matt Blaze and Ron Rivest were with me; I don’t remember who else. We met with then Massachusetts Representative Ed Markey. (He didn’t become a senator until 2013.) Back then, he and Vermont Senator Patrick Leahy were the most knowledgeable on this issue and our biggest supporters against government backdoors. They still are.

Markey was against forcing encrypted phone providers to implement the NSA’s Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus.

Since then, I have become more immersed in policy discussions. I have spent more time with legislators, advised advocacy organizations like EFF and EPIC, and worked with policy-minded think tanks in the United States and around the world. I teach cybersecurity policy and technology at the Harvard Kennedy School of Government. My most recent two books, Data and Goliath—about surveillance—and Click Here to Kill Everybody—about IoT security—are really about the policy implications of technology.

Over that time, I have observed many other differences between technologists and policy makers—differences that we in cybersecurity need to understand if we are to translate our technological solutions into viable policy outcomes.

Technologists don’t try to consider all of the use cases of a given technology. We tend to build something for the uses we envision, and hope that others can figure out new and innovative ways to extend what we created. We love it when there is a new use for a technology that we never considered and that changes the world. And while we might be good at security around the use cases we envision, we are regularly blindsided when it comes to new uses or edge cases. (The authentication risks surrounding someone’s intimate partner are a good example.)

Policy doesn’t work that way; it’s specifically focused on use. It focuses on people and what they do. Policy makers can’t create policy around a piece of technology without understanding how it is used—how all of it is used.

Policy is often driven by exceptional events, like the FBI’s desire to break the encryption on the San Bernardino shooter’s iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It’s hard to imagine policy makers creating laws around VR systems, because they don’t yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.

As technologists, we iterate. It’s how we write software. It’s how we field products. We know we can’t get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn’t get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it’s slower.

Along similar lines, the two groups work in very different time frames. Engineers, conditioned by Moore’s law, have long thought of 18 months as the maximum time to roll out a new product, and now think in terms of continuous deployment of new features. As I said previously, policy makers tend to think in terms of multiple years to get a law or regulation in place, and then more years as the case law builds up around it so everyone knows what it really means. It’s like tortoises and hummingbirds.

Technology is inherently global. It is often developed with local sensibilities according to local laws, but it necessarily has global reach. Policy is always jurisdictional. This difference is causing all sorts of problems for the global cloud services we use every day. The providers are unable to operate their global systems in compliance with more than 200 different—and sometimes conflicting—national requirements. Policy makers are often unimpressed with claims of inability; laws are laws, they say, and if Facebook can translate its website into French for the French, it can also implement their national laws.

Technology and policy both use concepts of trust, but differently. Technologists tend to think of trust in terms of controls on behavior. We’re getting better—NIST’s recent work on trust is a good example—but we have a long way to go. For example, Google’s Trust and Safety Department does a lot of AI and ethics work largely focused on technological controls. Policy makers think of trust in more holistic societal terms: trust in institutions, trust as the ability not to worry about adverse outcomes, consumer confidence. This dichotomy explains how techies can claim bitcoin is trusted because of the strong cryptography, but policy makers can’t imagine calling a system trustworthy when you lose all your money if you forget your encryption key.

Policy is how society mediates how individuals interact with society. Technology has the potential to change how individuals interact with society. The conflict between these two causes considerable friction, as technologists want policy makers to get out of the way and not stifle innovation, and policy makers want technologists to stop moving fast and breaking so many things.

Finally, techies know that code is law—that the restrictions and limitations of a technology are more fundamental than any human-created legal anything. Policy makers know that law is law, and tech is just tech. We can see this in the tension between applying existing law to new technologies and creating new law specifically for those new technologies.

Yes, these are all generalizations and there are exceptions. It’s also not all either/or. Great technologists and policy makers can see the other perspectives. The best policy makers know that for all their work toward consensus, they won’t make progress by redefining pi as three. Thoughtful technologists look beyond the immediate user demands to the ways attackers might abuse their systems, and design against those adversaries as well. These aren’t two alien species engaging in first contact, but cohorts who can each learn and borrow tools from the other. Too often, though, neither party tries.

In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who’d ruled on them came to exactly opposite conclusions. The law students took this in stride; it’s the way the legal system works when it’s wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn’t a single correct answer.

But that’s not how law works, and that’s not how policy works. As the technologies we’re creating become more central to society, and as we in technology continue to move into the public sphere and become part of the increasingly important policy debates, it is essential that we learn these lessons. Gone are the days when we were creating purely technical systems and our work ended at the keyboard and screen. Now we’re building complex socio-technical systems that are literally creating a new world. And while it’s easy to dismiss policy makers as doing it wrong, it’s important to understand that they’re not. Policy making has been around a lot longer than the Internet or computers or any technology. And the essential challenges of this century will require both groups to work together.

This essay previously appeared in IEEE Security & Privacy.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on February 21, 2020 at 5:54 AM

Modern Mass Surveillance: Identify, Correlate, Discriminate

Communities across the United States are starting to ban facial recognition technologies. In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program before a new statewide law declaring it illegal came into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

These efforts are well-intentioned, but facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

In all cases, modern mass surveillance has three broad components: identification, correlation and discrimination. Let’s take them in turn.

Facial recognition is a technology that can be used to identify people without their knowledge or consent. It relies on the prevalence of cameras, which are becoming both more powerful and smaller, and machine learning technologies that can match the output of these cameras with images from a database of existing photos.
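To make that pipeline concrete, here is a minimal sketch of the matching step, assuming a hypothetical embed_face() function that turns a photo into a numeric vector and a pre-built gallery of enrolled embeddings. Nothing here is any vendor's actual system; the point is only that identification reduces to a nearest-neighbor search over vectors.

```python
# Minimal sketch of the face-matching step (not a real system).
# embed_face() is a hypothetical stand-in for a face-embedding model
# that maps a photo to a fixed-length numeric vector.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe_photo, gallery: dict, embed_face, threshold: float = 0.6):
    """Compare a camera frame against a database of enrolled embeddings.

    gallery maps a person's identifier to a precomputed embedding.
    Returns the best-matching identifier, or None if nothing clears
    the (arbitrary, illustrative) similarity threshold.
    """
    probe = embed_face(probe_photo)
    best_id, best_score = None, threshold
    for person_id, enrolled in gallery.items():
        score = cosine_similarity(probe, enrolled)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```

The same search works whether the vectors come from faces, gaits, or heartbeats; only the sensor on the front end changes.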

But that’s just one identification technology among many. People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses. Other things identify us as well: our phone numbers, our credit card numbers, the license plates on our cars. China, for example, uses multiple identification technologies to support its surveillance state.
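As a rough illustration of the MAC-address point, the sketch below passively logs the hardware addresses that nearby phones announce in WiFi probe requests. It assumes the scapy packet library and a wireless card already in monitor mode; the interface name wlan0mon is a placeholder, and modern phones randomize these addresses, which complicates but doesn't eliminate this kind of tracking.

```python
# Rough sketch: log the source MAC addresses of WiFi probe requests.
# Assumes a wireless card in monitor mode; "wlan0mon" is a placeholder name.
from scapy.all import sniff, Dot11ProbeReq

seen = set()

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2  # source hardware address of the probing device
        if mac and mac not in seen:
            seen.add(mac)
            print(f"device seen: {mac}")

# Requires root privileges and a monitor-mode interface.
sniff(iface="wlan0mon", prn=handle, store=False)
```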

Once we are identified, the data about who we are and what we are doing can be correlated with other data collected at other times. This might be movement data, which can be used to “follow” us as we move throughout our day. It can be purchasing data, Internet browsing data, or data about who we talk to via email or text. It might be data about our income, ethnicity, lifestyle, profession and interests. There is an entire industry of data brokers who make a living analyzing and augmenting data about who we are—using surveillance data collected by all sorts of companies and then sold without our knowledge or consent.
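What “correlation” means in practice is just joining separately collected datasets on a shared identifier. The sketch below, with made-up column names and a generic pandas join, is illustrative only and not anyone's actual pipeline; once records share a stable identifier, linking them is a one-line operation.

```python
# Illustrative only: linking two separately collected datasets on a
# shared identifier. Column names are made up for the example.
import pandas as pd

location_pings = pd.DataFrame({
    "device_id": ["a1", "a1", "b2"],
    "place": ["clinic", "grocery store", "office"],
})
purchases = pd.DataFrame({
    "device_id": ["a1", "b2"],
    "cardholder_name": ["Alice Example", "Bob Example"],
    "item": ["prenatal vitamins", "coffee"],
})

# One join attaches a real name, and everything known about it,
# to every previously "anonymous" location record.
profile = location_pings.merge(purchases, on="device_id")
print(profile)
```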

There is a huge—and almost entirely unregulated—data broker industry in the United States that trades on our information. This is how large Internet companies like Google and Facebook make their money. It’s not just that they know who we are, it’s that they correlate what they know about us to create profiles about who we are and what our interests are. This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The whole purpose of this process is for companies—and governments—to treat individuals differently. We are shown different ads on the Internet and receive different offers for credit cards. Smart billboards display different advertisements based on who we are. In the future, we might be treated differently when we walk into a store, just as we currently are when we visit websites.

The point is that it doesn’t matter which technology is used to identify people. That there currently is no comprehensive database of heartbeats or gaits doesn’t make the technologies that gather them any less effective. And most of the time, it doesn’t matter if identification isn’t tied to a real name. What’s important is that we can be consistently identified over time. We might be completely anonymous in a system that uses unique cookies to track us as we browse the Internet, but the same process of correlation and discrimination still occurs. It’s the same with faces; we can be tracked as we move around a store or shopping mall, even if that tracking isn’t tied to a specific name. And that anonymity is fragile: If we ever order something online with a credit card, or purchase something with a credit card in a store, then suddenly our real names are attached to what was anonymous tracking information.

Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.

Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law—passed in Vermont in 2018—that requires data brokers to register and explain in broad terms what kind of data they collect. The large Internet surveillance companies like Facebook and Google collect dossiers on us that are more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.

Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.

Today, facial recognition technologies are receiving the brunt of the tech backlash, but focusing on them misses the point. We need to have a serious conversation about all the technologies of identification, correlation and discrimination, and decide how much we as a society want to be spied on by governments and corporations—and what sorts of influence we want them to have over our lives.

This essay previously appeared in the New York Times.

EDITED TO ADD: Rereading this post-publication, I see that it comes off as overly critical of those who are doing activism in this space. Writing the piece, I wasn’t thinking about political tactics. I was thinking about the technologies that support surveillance capitalism, and law enforcement’s use of that corporate platform. Of course it makes sense to focus on face recognition in the short term. It’s something that’s easy to explain, viscerally creepy, and obviously actionable. It also makes sense to focus specifically on law enforcement’s use of the technology; there are clear civil and constitutional rights issues. The fact that law enforcement is so deeply involved in the technology’s marketing feels wrong. And the technology is currently being deployed in Hong Kong against political protesters. It’s why the issue has momentum, and why we’ve gotten the small wins we’ve had. (The EU is considering a five-year ban on face recognition technologies.) Those wins build momentum, which leads to more wins. I should have been kinder to those in the trenches.

If you want to help, sign the petition from Public Voice calling for a moratorium on facial recognition technology for mass surveillance. Or write to your US congressperson and demand similar action. There’s more information from EFF and EPIC.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on January 27, 2020 at 12:21 PM

How Privacy Laws Hurt Defendants

Rebecca Wexler has an interesting op-ed about an inadvertent harm that privacy laws can cause: while law enforcement can often access third-party data to aid in prosecution, the accused don’t have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may not have set out to make the criminal process even more unfair, but the unjust result is not surprising. When lawmakers propose privacy bills to protect sensitive information, law enforcement agencies lobby for exceptions so they can continue to access the information. Few lobby for the accused to have similar rights. Just as the privacy interests of poor, minority and heavily policed communities are often ignored in the lawmaking process, so too are the interests of criminal defendants, many from those same communities.

In criminal cases, both the prosecution and the accused have a right to subpoena evidence so that juries can hear both sides of the case. The new privacy bills need to ensure that law enforcement and defense investigators operate under the same rules when they subpoena digital data. If lawmakers believe otherwise, they should have to explain and justify that view.

For more detail, see her paper.

Posted on August 2, 2019 at 6:04 AM

The Importance of Protecting Cybersecurity Whistleblowers

Interesting essay arguing that we need better legislation to protect cybersecurity whistleblowers.

Congress should act to protect cybersecurity whistleblowers because information security has never been so important, or so challenging. In the wake of a barrage of shocking revelations about data breaches and companies’ mishandling of customer data, a bipartisan consensus has emerged in support of legislation to give consumers more control over their personal information, require companies to disclose how they collect and use consumer data, and impose penalties for data breaches and misuse of consumer data. The Federal Trade Commission (“FTC”) has been held out as the best agency to implement this new regulation. But for any such legislation to be effective, it must protect the courageous whistleblowers who risk their careers to expose data breaches and unauthorized use of consumers’ private data.

Whistleblowers strengthen regulatory regimes, and cybersecurity regulation would be no exception. Republican and Democratic leaders from the executive and legislative branches have extolled the virtues of whistleblowers. High-profile cases abound. Recently, Christopher Wylie exposed Cambridge Analytica’s misuse of Facebook user data to manipulate voters, including its apparent theft of data from 50 million Facebook users as part of a psychological profiling campaign. Though additional research is needed, the existing empirical data reinforces the consensus that whistleblowers help prevent, detect, and remedy misconduct. Therefore it is reasonable to conclude that protecting and incentivizing whistleblowers could help the government address the many complex challenges facing our nation’s information systems.

Posted on June 3, 2019 at 6:30 AM

Major Tech Companies Finally Endorse Federal Privacy Regulation

The major tech companies, scared that states like California might impose actual privacy regulations, have now decided that they can better lobby the federal government for much weaker national legislation that will preempt any stricter state measures.

I’m sure they’ll still do all they can to weaken the California law, but they know they’ll do better at the national level.

Posted on September 28, 2018 at 1:19 PM
