Entries Tagged "software liability"

New National Cybersecurity Strategy

Last week, the Biden administration released a new National Cybersecurity Strategy (summary here). There is lots of good commentary out there. It’s basically a smart strategy, but the hard parts are always the implementation details. It’s one thing to say that we need to secure our cloud infrastructure, and another to detail what that means technically, who pays for it, and who verifies that it’s been done.

One of the provisions getting the most attention is a move to shift liability to software vendors, something I’ve been advocating for since at least 2003.

Slashdot thread.

Posted on March 6, 2023 at 7:06 AM

Presidential Cybersecurity and Pelotons

President Biden wants his Peloton in the White House. For those who have missed the hype, it’s an Internet-connected stationary bicycle. It has a screen, a camera, and a microphone. You can take live classes online, work out with your friends, or join the exercise social network. And all of that is a security risk, especially if you are the president of the United States.

Any computer brings with it the risk of hacking. This is true of our computers and phones, and it’s also true about all of the Internet-of-Things devices that are increasingly part of our lives. These large and small appliances, cars, medical devices, toys and—yes—exercise machines are all computers at their core, and they’re all just as vulnerable. Presidents face special risks when it comes to the IoT, but Biden has the NSA to help him handle them.

Not everyone is so lucky, and the rest of us need something more structural.

US presidents have long tussled with their security advisers over tech. The NSA often customizes devices, but that means eliminating features. In 2010, President Barack Obama complained that his presidential BlackBerry device was “no fun” because only ten people were allowed to contact him on it. In 2013, security prevented him from getting an iPhone. When he finally got an upgrade to his BlackBerry in 2016, he complained that his new “secure” phone couldn’t take pictures, send texts, or play music. His “hardened” iPad to read daily intelligence briefings was presumably similarly handicapped. We don’t know what the NSA did to these devices, but they certainly modified the software and physically removed the cameras and microphones—and possibly the wireless Internet connection.

President Donald Trump resisted efforts to secure his phones. We don’t know the details, only that they were regularly replaced, with the government effectively treating them as burner phones.

The risks are serious. We know that the Russians and the Chinese were eavesdropping on Trump’s phones. Hackers can remotely turn on microphones and cameras, listening in on conversations. They can grab copies of any documents on the device. They can also use those devices to further infiltrate government networks, maybe even jumping onto classified networks that the devices connect to. If the devices have physical capabilities, those can be hacked as well. In 2007, the wireless features of Vice President Richard B. Cheney’s pacemaker were disabled out of fears that it could be hacked to assassinate him. In 1999, the NSA banned Furbies from its offices, mistakenly believing that they could listen and learn.

Physically removing features and components works, but the results are increasingly unacceptable. The NSA could take Biden’s Peloton and rip out the camera, microphone, and Internet connection, and that would make it secure—but then it would just be a normal (albeit expensive) stationary bike. Maybe Biden wouldn’t accept that, and he’d demand that the NSA do even more work to customize and secure the Peloton part of the bicycle. Maybe Biden’s security agents could isolate his Peloton in a specially shielded room where it couldn’t infect other computers, and warn him not to discuss national security in its presence.

This might work, but it certainly doesn’t scale. As president, Biden can direct substantial resources to solving his cybersecurity problems. The real issue is what everyone else should do. The president of the United States is a singular espionage target, but so are members of his staff and other administration officials.

Members of Congress are targets, as are governors and mayors, police officers and judges, CEOs and directors of human rights organizations, nuclear power plant operators, and election officials. All of these people have smartphones, tablets, and laptops. Many have Internet-connected cars and appliances, vacuums, bikes, and doorbells. Every one of those devices is a potential security risk, and all of those people are potential national security targets. But none of those people will get their Internet-connected devices customized by the NSA.

That is the real cybersecurity issue. Internet connectivity brings with it features we like. In our cars, it means real-time navigation, entertainment options, automatic diagnostics, and more. In a Peloton, it means everything that makes it more than a stationary bike. In a pacemaker, it means continuous monitoring by your doctor—and possibly your life saved as a result. In an iPhone or iPad, it means…well, everything. We can search for older, non-networked versions of some of these devices, or the NSA can disable connectivity for the privileged few of us. But the result is the same: in Obama’s words, “no fun.”

And unconnected options are increasingly hard to find. In 2016, I tried to find a new car that didn’t come with Internet connectivity, but I had to give up: nothing in the class of car I wanted came without it. Similarly, it’s getting harder to find major appliances without a wireless connection. As the price of connectivity continues to drop, more and more things will only be available Internet-enabled.

Internet security is national security—not because the president is personally vulnerable but because we are all part of a single network. Depending on who we are and what we do, we will make different trade-offs between security and fun. But we all deserve better options.

Regulations that force manufacturers to provide better security for all of us are the only way to do that. We need minimum security standards for computers of all kinds. We need transparency laws that give all of us, from the president on down, sufficient information to make our own security trade-offs. And we need liability laws that hold companies liable when they misrepresent the security of their products and services.

I’m not worried about Biden. He and his staff will figure out how to balance his exercise needs with the national security needs of the country. Sometimes the solutions are weirdly customized, such as the anti-eavesdropping tent that Obama used while traveling. I am much more worried about the political activists, journalists, human rights workers, and oppressed minorities around the world who don’t have the money or expertise to secure their technology, or the information that would give them the ability to make informed decisions on which technologies to choose.

This essay previously appeared in the Washington Post.

Posted on February 5, 2021 at 5:58 AM

Software Developers and Security

According to a survey: “68% of the security professionals surveyed believe it’s a programmer’s job to write secure code, but they also think less than half of developers can spot security holes.” And that’s a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. Roughly half of security professionals said they most often found bugs after code is merged in a test environment.

At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, “It’s a mess, no standardization, most of my work has never had a security scan.”

Another problem is that many companies don’t seem to take security seriously enough: nearly 44% of those surveyed reported that they’re not judged on their security vulnerabilities.
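The kinds of holes the survey is talking about are usually mundane and easy to miss in review. As a hedged illustration (this example is mine, not from the survey), here is a database query built by string formatting next to the parameterized version a security scan would flag; the table and column names are hypothetical.

    import sqlite3

    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        # Vulnerable: the username is pasted into the SQL text, so input like
        # "' OR '1'='1" changes the meaning of the query (SQL injection).
        cursor = conn.execute(
            "SELECT id, email FROM users WHERE name = '%s'" % username
        )
        return cursor.fetchall()

    def find_user_safe(conn: sqlite3.Connection, username: str):
        # Parameterized: the driver passes the value separately from the SQL,
        # so it can never be interpreted as query syntax.
        cursor = conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        )
        return cursor.fetchall()

Static analysis tools catch exactly this sort of pattern before code is merged, which is why “most of my work has never had a security scan” is the telling quote.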

Posted on July 25, 2019 at 6:17 AM

HP Shared ArcSight Source Code with Russians

Reuters is reporting that HP Enterprise gave the Russians a copy of the ArcSight source code.

The article highlights that ArcSight is used by the Pentagon to protect classified networks, but the security risks are much broader. Any weaknesses the Russians discover could be used against any ArcSight customer.

What is HP Enterprise thinking? Near as I can tell, they only gave it away because the Russians asked nicely.

Supply chain security is very difficult. The article says that Russia demands source code because it’s worried about supply chain security: “One reason Russia requests the reviews before allowing sales to government agencies and state-run companies is to ensure that U.S. intelligence services have not placed spy tools in the software.” That’s a reasonable thing to worry about, considering what we know about NSA’s interdiction of commercial hardware and software products. But how can Group A convince Group B of the integrity and security of hardware/software without putting itself at risk from Group B?

This is one of the areas where open-source software has a security edge. If everyone has access to the source code—and security doesn’t depend on its secrecy—then there’s no advantage in getting a copy. As long as companies rely on obscurity for their security, these sorts of attacks are possible and profitable.

I wonder what sorts of assurances HP Enterprise gave its customers that it would secure its source code, and whether any of those customers now have grounds for a negligence claim against HP Enterprise.

News articles.

EDITED TO ADD (10/5): Commentary.

Posted on October 4, 2017 at 8:08 AM

Regulation of the Internet of Things

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much a failure of market and policy as they were of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

First, the facts. Those websites went down because their domain name provider—a company named Dyn—was forced offline. We don’t know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers—possibly millions—of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam—or thermostat, or refrigerator—with nice features at a good price. Even after they were recruited into this botnet, they still work fine—you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They’ve demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They’ve hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers—that’s every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex—and they’re coming. We can’t afford to ignore these issues until it’s too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn’t matter—it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won’t get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Posted on November 10, 2016 at 6:06 AM

Volkswagen and Cheating Software

Portuguese translation by Ricardo R Hashimoto

For the past six years, Volkswagen has been cheating on the emissions testing for its diesel cars. The cars’ computers were able to detect when they were being tested, and temporarily alter how their engines worked so they looked much cleaner than they actually were. When they weren’t being tested, they belched out 40 times the permitted level of pollutants. Its CEO has resigned, and the company will face an expensive recall, enormous fines and worse.

Cheating on regulatory testing has a long history in corporate America. It happens regularly in automobile emissions control and elsewhere. What’s important in the VW case is that the cheating was preprogrammed into the algorithm that controlled cars’ emissions.

Computers allow people to cheat in ways that are new. Because the cheating is encapsulated in software, the malicious actions can happen at a far remove from the testing itself. Because the software is “smart” in ways that normal objects are not, the cheating can be subtler and harder to detect.
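To make the pattern concrete, here is a minimal, hypothetical sketch of what defeat-device logic amounts to (VW’s actual code has never been published, and every name and threshold below is invented): the controller infers from its inputs that a test is running, and switches to a cleaner operating mode only then.

    # Hypothetical sketch of defeat-device logic, for illustration only.
    # Lab emissions tests follow a fixed, scripted drive cycle, which software
    # can recognize and respond to differently than real-world driving.

    from dataclasses import dataclass

    @dataclass
    class SensorReadings:
        speed_kph: float           # vehicle speed from the wheel sensors
        steering_angle_deg: float  # steering input
        seconds_since_start: float

    def looks_like_a_test(r: SensorReadings) -> bool:
        # On a dynamometer the wheels turn but the steering never moves and
        # the speed trace follows a known script. These thresholds are made up.
        return (
            r.speed_kph > 0
            and abs(r.steering_angle_deg) < 1.0
            and r.seconds_since_start < 1800
        )

    def emissions_mode(r: SensorReadings) -> str:
        # The cheat: full emissions controls only while being tested.
        if looks_like_a_test(r):
            return "full-emissions-control"  # looks clean to the regulator
        return "performance"                 # emits far more on the road

The point is how small the difference is: a handful of lines buried in a large engine-control program, invisible from the outside unless you test the car under realistic driving conditions or read the code.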

We’ve already had examples of smartphone manufacturers cheating on processor benchmark testing: detecting when they’re being tested and artificially increasing their performance. We’re going to see this in other industries.

The Internet of Things is coming. Many industries are moving to add computers to their devices, and that will bring with it new opportunities for manufacturers to cheat. Light bulbs could fool regulators into appearing more energy efficient than they are. Temperature sensors could fool buyers into believing that food has been stored at safer temperatures than it has been. Voting machines could appear to work perfectly—except during the first Tuesday of November, when they undetectably switch a few percent of votes from one party’s candidates to another’s.

My worry is that some corporate executives won’t interpret the VW story as a cautionary tale involving just punishments for a bad mistake but will see it instead as a demonstration that you can get away with something like that for six years.

And they’ll cheat smarter. For all of VW’s brazenness, its cheating was obvious once people knew to look for it. Far cleverer would be to make the cheating look like an accident. Overall software quality is so bad that products ship with thousands of programming mistakes.

Most of them don’t affect normal operations, which is why your software generally works just fine. Some of them do, which is why your software occasionally fails, and needs constant updates. By making cheating software appear to be a programming mistake, the cheating looks like an accident. And, unfortunately, this type of deniable cheating is easier than people think.

Computer-security experts believe that intelligence agencies have been doing this sort of thing for years, both with the consent of the software developers and surreptitiously.

This problem won’t be solved through computer security as we normally think of it. Conventional computer security is designed to prevent outside hackers from breaking into your computers and networks. The car analogue would be security software that prevented an owner from tweaking his own engine to run faster but in the process emit more pollutants. What we need to contend with is a very different threat: malfeasance programmed in at the design stage.

We already know how to protect ourselves against corporate misbehavior. Ronald Reagan once said “trust, but verify” when speaking about the Soviet Union cheating on nuclear treaties. We need to be able to verify the software that controls our lives.

Software verification has two parts: transparency and oversight. Transparency means making the source code available for analysis. The need for this is obvious; it’s much easier to hide cheating software if a manufacturer can hide the code.

But transparency doesn’t magically reduce cheating or improve software quality, as anyone who uses open-source software knows. It’s only the first step. The code must be analyzed. And because software is so complicated, that analysis can’t be limited to a once-every-few-years government test. We need private analysis as well.

It was researchers at private labs in the United States and Germany that eventually outed Volkswagen. So transparency can’t just mean making the code available to government regulators and their representatives; it needs to mean making the code available to everyone.
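One concrete form that outside verification can take, assuming the vendor’s build process is reproducible, is checking that the binary actually shipped on a device was built from the published source. This is a sketch, not any regulator’s actual procedure, and the build command and file names are hypothetical.

    import hashlib
    import os
    import subprocess

    def sha256(path: str) -> str:
        # Hash a file in chunks so large firmware images don't exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def firmware_matches_source(source_dir: str, shipped_binary: str) -> bool:
        # Build the published source with the vendor's documented build command
        # (hypothetical here), then compare the result to the shipped binary.
        # This only works if the build is reproducible: same source, same output.
        subprocess.run(["make", "-C", source_dir, "firmware.bin"], check=True)
        built = os.path.join(source_dir, "firmware.bin")
        return sha256(built) == sha256(shipped_binary)

A match tells analysts that the public source really is the code under test; a mismatch doesn’t prove cheating, but it shows exactly where to start looking. Finding a defeat device still requires analyzing that source, which is why the code has to be available to more than just regulators.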

Both transparency and oversight are being threatened in the software world. Companies routinely fight making their code public and attempt to muzzle security researchers who find problems, citing the proprietary nature of the software. It’s a fair complaint, but the public interests of accuracy and safety need to trump business interests.

Proprietary software is increasingly being used in critical applications: voting machines, medical devices, breathalyzers, electric power distribution, systems that decide whether or not someone can board an airplane. We’re ceding more control of our lives to software and algorithms. Transparency is the only way to verify that they’re not cheating us.

There’s no shortage of corporate executives willing to lie and cheat their way to profits. We saw another example of this last week: Stewart Parnell, the former CEO of the now-defunct Peanut Corporation of America, was sentenced to 28 years in prison for knowingly shipping out salmonella-tainted products. That may seem excessive, but nine people died and many more fell ill as a result of his cheating.

Software will only make malfeasance like this easier to commit and harder to prove. Fewer people need to know about the conspiracy. It can be done in advance, nowhere near the testing time or site. And, if the software remains undetected for long enough, it could easily be the case that no one in the company remembers that it’s there.

We need better verification of the software that controls our lives, and that means more—and more public—transparency.

This essay previously appeared on CNN.com.

EDITED TO ADD: Three more essays.

EDITED TO ADD (10/8): A history of emissions-control cheating devices.

Posted on September 30, 2015 at 9:13 AM

An Interesting Software Liability Proposal

This proposal is worth thinking about.

Clause 1. If you deliver software with complete and buildable source code and a license that allows disabling any functionality or code by the licensee, then your liability is limited to a refund.

This clause addresses how to avoid liability: license your users to inspect and chop off any and all bits of your software they do not trust or do not want to run, and make it practical for them to do so.

The word disabling is chosen very carefully. This clause grants no permission to change or modify how the program works, only to disable the parts of it that the licensee does not want. There is also no requirement that the licensee actually look at the source code, only that it was received.

All other copyrights are still yours to control, and your license can contain any language and restriction you care to include, leaving the situation unchanged with respect to hardware locking, confidentiality, secrets, software piracy, magic numbers, etc. Free and open source software is obviously covered by this clause, and it does not change its legal situation in any way.

Clause 2. In any other case, you are liable for whatever damage your software causes when used normally.

If you do not want to accept the information sharing in Clause 1, you would fall under Clause 2 and have to live with normal product liability, just as manufacturers of cars, blenders, chainsaws, and hot coffee do.

Posted on September 23, 2011 at 5:22 AM

Ars Technica on Liabilities and Computer Security

Good article:

Halderman argued that secure software tends to come from companies that have a culture of taking security seriously. But it’s hard to mandate, or even to measure, “security consciousness” from outside a company. A regulatory agency can force a company to go through the motions of beefing up its security, but it’s not likely to be effective unless management’s heart is in it.

This is a key advantage of using liability as the centerpiece of security policy. By making companies financially responsible for the actual harms caused by security failures, lawsuits give management a strong motivation to take security seriously without requiring the government to directly measure and penalize security problems. Sony allegedly laid off security personnel ahead of this year’s attacks. Presumably it thought this would be a cost-saving move; a big class action lawsuit could ensure that other companies don’t repeat that mistake in future.

I’ve been talking about liabilities for about a decade now. Here are essays I’ve written in 2002, 2003, 2004, and 2006.

Posted on July 27, 2011 at 6:44 AM

Software Liabilities and Free Software

Whenever I write about software liabilities, many people ask about free and open-source software. If people who write free software, like Password Safe, are forced to assume liability, they simply won’t be able to, and free software will disappear.

Don’t worry, they won’t be.

The key to understanding this is that this sort of liability is contractual: it is part of a contract between buyer and seller, and with free software—or free anything—there’s no contract. Free software wouldn’t fall under a liability regime because the writer and the user have no business relationship; they are not seller and buyer. I would hope the courts would realize this without any prompting, but we could always pass a Good Samaritan-like law that would protect people who distribute free software. (The opposite would be an Attractive Nuisance-like law—that would be bad.)

There would be an industry of companies that provide liability protection for free software. If Red Hat, for example, sells a Linux distribution commercially, it would have to provide some liability protection. Yes, this would mean charging more for Linux; the extra would go toward insurance premiums. The same sort of insurance protection would be available to companies that use other free software packages.

The insurance industry is key to making this work. Luckily, it’s good at protecting people against liability. There’s no reason to think it won’t be able to do it here.

I’ve written more about liabilities and the insurance industry here.

Posted on July 28, 2008 at 2:42 PM
