Crypto-Gram

November 15, 2017

by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit <https://www.schneier.com/crypto-gram.html>.

You can read this issue on the web at <https://www.schneier.com/crypto-gram/archives/2017/…>. These same essays and news items appear in the “Schneier on Security” blog at <https://www.schneier.com/>, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

Me on the Equifax Breach
News
Daphne Caruana Galizia’s Murder and the Security of WhatsApp
New KRACK Attack Against Wi-Fi Encryption
Schneier News
Fraud Detection in Pokemon Go
IoT Cybersecurity: What’s Plan B?


Me on the Equifax Breach

Last week, I testified before the House Energy and Commerce Committee on the Equifax hack. A link to the video is at the bottom of this section. And you can read my written testimony below.

Testimony and Statement for the Record of Bruce Schneier
Fellow and Lecturer, Belfer Center for Science and International Affairs, Harvard Kennedy School
Fellow, Berkman Center for Internet and Society at Harvard Law School

Hearing on “Securing Consumers’ Credit Data in the Age of Digital Commerce”

Before the Subcommittee on Digital Commerce and Consumer Protection
Committee on Energy and Commerce
United States House of Representatives

1 November 2017
2125 Rayburn House Office Building
Washington, DC 20515

Mister Chairman and Members of the Committee, thank you for the opportunity to testify today concerning the security of credit data. My name is Bruce Schneier, and I am a security technologist. For over 30 years I have studied the technologies of security and privacy. I have authored 13 books on these subjects, including “Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World” (Norton, 2015). My popular newsletter “Crypto-Gram” and my blog “Schneier on Security” are read by over 250,000 people.

Additionally, I am a Fellow and Lecturer at the Harvard Kennedy School of Government—where I teach Internet security policy—and a Fellow at the Berkman Klein Center for Internet and Society at Harvard Law School. I am a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of the Electronic Privacy Information Center and VerifiedVoting.org. I am also a special advisor to IBM Security and the Chief Technology Officer of IBM Resilient.

I am here representing none of those organizations, and speak only for myself based on my own expertise and experience.

I have eleven main points:

1. The Equifax breach was a serious security breach that puts millions of Americans at risk.

Equifax reported that 145.5 million US customers, about 44% of the population, were impacted by the breach. (That’s the original 143 million plus the additional 2.5 million disclosed a month later.) The attackers got access to full names, Social Security numbers, birth dates, addresses, and driver’s license numbers.

This is exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, cell phone companies, and other businesses vulnerable to fraud. As a result, all 145.5 million US victims are at greater risk of identity theft, and will remain at risk for years to come. And those who suffer identity theft will have problems for months, if not years, as they work to clean up their names and credit ratings.

2. Equifax was solely at fault.

This was not a sophisticated attack. The security breach was a result of a vulnerability in the software for its websites: a program called Apache Struts. The particular vulnerability was fixed by Apache in a security patch that was made available on March 6, 2017. This was not a minor vulnerability; the computer press at the time called it “critical.” Within days, it was being used by attackers to break into web servers. Equifax was notified by Apache, US-CERT, and the Department of Homeland Security about the vulnerability, and was provided instructions to make the fix.

Two months later, Equifax had still failed to patch its systems. It eventually got around to it on July 29. The attackers used the vulnerability to access the company’s databases and steal consumer information on May 13, over two months after Equifax should have patched the vulnerability.

The company’s incident response after the breach was similarly damaging. It waited nearly six weeks before informing victims that their personal information had been stolen and they were at increased risk of identity theft. Equifax opened a website to aid customers, but the poor security around that—the site was at a domain separate from the Equifax domain—invited fraudulent imitators and even more damage to victims. At one point, official Equifax communications even directed people to one of those fraudulent sites.

This is not the first time Equifax failed to take computer security seriously. It confessed to another data leak in January 2017. In May 2016, one of its websites was hacked, resulting in 430,000 people having their personal information stolen. Also in 2016, a security researcher found and reported a basic security vulnerability in its main website. And in 2014, the company reported yet another security breach of consumer information. There are more.

3. There are thousands of data brokers with similarly intimate information, similarly at risk.

Equifax is more than a credit reporting agency. It’s a data broker. It collects information about all of us, analyzes it all, and then sells those insights. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about us—almost all of them companies you’ve never heard of and have no business relationship with.

The breadth and depth of information that data brokers have is astonishing. Data brokers collect and store billions of data elements covering nearly every US consumer. Just one of the data brokers studied holds information on more than 1.4 billion consumer transactions and 700 billion data elements, and another adds more than 3 billion new data points to its database each month.

These brokers collect demographic information: names, addresses, telephone numbers, e-mail addresses, gender, age, marital status, presence and ages of children in household, education level, profession, income level, political affiliation, cars driven, and information about homes and other property. They collect lists of things we’ve purchased, when we’ve purchased them, and how we paid for them. They keep track of deaths, divorces, and diseases in our families. They collect everything about what we do on the Internet.

4. These data brokers deliberately hide their actions, and make it difficult for consumers to learn about or control their data.

If there were a dozen people who stood behind us and took notes of everything we purchased, read, searched for, or said, we would be alarmed at the privacy invasion. But because these companies operate in secret, inside our browsers and financial transactions, we don’t see them and we don’t know they’re there.

Regarding Equifax, few consumers have any idea what the company knows about them, whom it sells that data to, or why. If consumers know of Equifax at all, it’s as a credit bureau, not as a data broker. Its website lists 57 different offerings for businesses: products for industries like automotive, education, health care, insurance, and restaurants.

In general, options to “opt-out” don’t work with data brokers. It’s a confusing process, and doesn’t result in your data being deleted. Data brokers will still collect data about consumers who opt out. It will still be in those companies’ databases, and will still be vulnerable. It just won’t be included individually when they sell data to their customers.

5. The existing regulatory structure is inadequate.

Right now, there is no way for consumers to protect themselves. Their data has been harvested and analyzed by these companies without their knowledge or consent. They cannot improve the security of their personal data, and have no control over how vulnerable it is. They only learn about data breaches when the companies announce them—which can be months after the breaches occur—and at that point the onus is on them to obtain credit monitoring services or credit freezes. And even those only protect consumers from some of the harms, and only those suffered after Equifax admitted to the breach.

Right now, the press is reporting “dozens” of lawsuits against Equifax from shareholders, consumers, and banks. Massachusetts has sued Equifax for violating state consumer protection and privacy laws. Other states may follow suit.

If any of these plaintiffs win in court, it will be a rare victory for victims of privacy breaches against the companies that have our personal information. Current law is too narrowly focused on people who have suffered financial losses directly traceable to a specific breach. Proving this is difficult. If you are the victim of identity theft in the next month, is it because of Equifax or does the blame belong to another of the thousands of companies who have your personal data? As long as one can’t prove it one way or the other, data brokers remain blameless and liability-free.

Additionally, much of this market in our personal data falls outside the protections of the Fair Credit Reporting Act. And in order for the Federal Trade Commission to levy a fine against Equifax, it needs to have a consent order and then a subsequent violation. Any fines will be limited to credit information, which is a small portion of the enormous amount of information these companies know about us. In reality, this is not an effective enforcement regime.

Although the FTC is investigating Equifax, it is unclear if it has a viable case.

6. The market cannot fix this because we are not the customers of data brokers.

The customers of these companies are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you’d be a profitable customer—everyone who wants to sell you something, even governments.

Markets work because buyers choose among sellers, and sellers compete for buyers. None of us are Equifax’s customers. None of us are the customers of any of these data brokers. We can’t refuse to do business with these companies. We can’t remove our data from their databases. With few limited exceptions, we can’t even see what data these companies have about us or correct any mistakes.

We are the product that these companies sell to their customers: those who want to use our personal information to understand us, categorize us, make decisions about us, and persuade us.

Worse, the financial markets reward bad security. Given the choice between increasing their cybersecurity budget by 5%, or saving that money and taking the chance, a rational CEO chooses to save the money. Wall Street rewards those whose balance sheets look good, not those who are secure. And if senior management gets unlucky and a public breach happens, they end up okay. Equifax’s CEO didn’t get his $5.2 million severance pay, but he did keep his $18.4 million pension. Any company that spends more on security than absolutely necessary is immediately penalized by shareholders when its profits decrease.

Even the negative PR that Equifax is currently suffering will fade. Unless we expect data brokers to put public interest ahead of profits, the security of this industry will never improve without government regulation.

7. We need effective regulation of data brokers.

In 2014, the Federal Trade Commission recommended that Congress require data brokers be more transparent and give consumers more control over their personal information. That report contains good suggestions on how to regulate this industry.

First, Congress should help plaintiffs in data breach cases by authorizing and funding empirical research on the harm individuals receive from these breaches.

Second, Congress should move forward legislative proposals that establish a nationwide “credit freeze”—which is better described as changing the default for disclosure from opt-out to opt-in—and free lifetime credit monitoring services. By this I do not mean giving consumers free credit-freeze options, a proposal by Senators Warren and Schatz, but that the default should be a credit freeze.

The credit card industry routinely notifies consumers when there are suspicious charges. It is obvious that credit reporting agencies should have a similar obligation to notify consumers when there is suspicious activity concerning their credit report.

On the technology side, more could be done to limit the amount of personal data companies are allowed to collect. Increasingly, privacy safeguards impose “data minimization” requirements to ensure that only the data that is actually needed is collected. On the other hand, Congress should not create a new national identifier to replace Social Security numbers. That would make the system of identification even more brittle. It would be better to reduce dependence on systems of identification and to create contextual identification where necessary.

Finally, Congress needs to give the Federal Trade Commission the authority to set minimum security standards for data brokers and to give consumers more control over their personal information. This is essential as long as consumers are these companies’ products and not their customers.

8. Resist complaints from the industry that this is “too hard.”

The credit bureaus and data brokers, and their lobbyists and trade-association representatives, will claim that many of these measures are too hard. They’re not telling you the truth.

Take one example: credit freezes. This is an effective security measure that protects consumers, but the process of getting one and of temporarily unfreezing credit is made deliberately onerous by the credit bureaus. Why isn’t there a smartphone app that alerts me when someone wants to access my credit rating, and lets me freeze and unfreeze my credit at the touch of the screen? Too hard? Today, you can have an app on your phone that does something similar if you try to log into a computer network, or if someone tries to use your credit card at a physical location different from where you are.

Moreover, any credit bureau or data broker operating in Europe is already obligated to follow the more rigorous EU privacy laws. The EU General Data Protection Regulation will come into force in May 2018, requiring even more security and privacy controls for companies collecting and storing the personal data of EU citizens. Those companies have already demonstrated that they can comply with those more stringent regulations.

Credit bureaus, and data brokers in general, are deliberately not implementing these 21st-century security solutions, because they want their services to be as easy and useful as possible for their actual customers: those who are buying your information. Similarly, companies that use this personal information to open accounts are not implementing more stringent security because they want their services to be as easy-to-use and convenient as possible.

9. This has foreign trade implications.

The Canadian Broadcasting Corporation reported that 100,000 Canadians had their data stolen in the Equifax breach. The British Broadcasting Corporation originally reported that 400,000 UK consumers were affected; Equifax has since revised that to 15.2 million.

Many American Internet companies have significant numbers of European users and customers, and rely on negotiated safe harbor agreements to legally collect and store personal data of EU citizens.

The European Union is in the middle of a massive regulatory shift in its privacy laws, and those agreements are coming under renewed scrutiny. Breaches such as Equifax give these European regulators a powerful argument that US privacy regulations are inadequate to protect their citizens’ data, and that they should require that data to remain in Europe. This could significantly harm American Internet companies.

10. This has national security implications.

Although it is still unknown who compromised the Equifax database, it could easily have been a foreign adversary that routinely attacks the servers of US companies and US federal agencies with the goal of exploiting security vulnerabilities and obtaining personal data.

When the Fair Credit Reporting Act was passed in 1970, the concern was that the credit bureaus might misuse our data. That is still a concern, but the world has changed since then. Credit bureaus and data brokers have far more intimate data about all of us. And it is valuable not only to companies wanting to advertise to us, but to foreign governments as well. In 2015, the Chinese breached the database of the Office of Personnel Management and stole the detailed security clearance information of 21 million Americans. North Korea routinely engages in cybercrime as a way to fund its other activities. In a world where foreign governments use cyber capabilities to attack US assets, requiring data brokers to limit collection of personal data, securely store the data they collect, and delete data about consumers when it is no longer needed is a matter of national security.

11. We need to do something about it.

Yes, this breach is a huge black eye and a temporary stock dip for Equifax—this month. Soon, another company will have suffered a massive data breach and few will remember Equifax’s problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

Unless Congress acts to protect consumer information in the digital age, these breaches will continue.

Thank you for the opportunity to testify today. I will be pleased to answer your questions.

Hearing:
https://energycommerce.house.gov/hearings/…

Video of the hearing:
https://www.youtube.com/watch?…

Number of Americans affected:
https://www.nytimes.com/2017/09/07/business/…
https://www.nytimes.com/2017/10/02/business/…

Apache security patch:
https://cwiki.apache.org/confluence/display/WW/S2-045

Reporting about the vulnerability:
https://arstechnica.com/information-technology/2017/…

Equifax’s delays in patching their system:
https://arstechnica.com/information-technology/2017/…
https://arstechnica.com/tech-policy/2017/09/…
https://arstechnica.com/information-technology/2017/…
https://arstechnica.com/information-technology/2017/…

The fake Equifax website:
https://www.nytimes.com/2017/09/20/business/…

Equifax’s history of security failures:
https://www.forbes.com/sites/thomasbrewster/2017/09/…

The FTC on data brokers:
https://www.ftc.gov/system/files/documents/reports/…
https://www.ftc.gov/news-events/press-releases/2014/…

Equifax’s non-credit-bureau business:
https://www.forbes.com/sites/forrester/2017/09/08/…
http://www.equifax.com/business/

Equifax lawsuits:
https://www.washingtonpost.com/news/business/wp/…
http://money.cnn.com/2017/09/12/news/…
http://money.cnn.com/2017/09/19/technology/…

FTC investigation:
https://www.washingtonpost.com/news/the-switch/wp/…

The move to replace social security numbers:
https://www.cnet.com/news/…

The Warren/Schatz proposal:
http://money.cnn.com/2017/09/15/pf/…

The EU’s more rigorous privacy rules:
http://adage.com/article/datadriven-marketing/…

Canadians affected:
http://www.cbc.ca/news/technology/…

British affected:
http://www.bbc.com/news/technology-41286638
https://www.equifax.co.uk/about-equifax/…


News

A security flaw in Infineon smart cards and TPMs allows an attacker to recover private keys from the public keys. Basically, the key generation algorithm sometimes creates public keys that are vulnerable to Coppersmith’s attack.
https://arstechnica.com/information-technology/2017/…
https://crocs.fi.muni.cz/public/papers/rsa_ccs17
This is the flaw in the Estonian national ID card we learned about last month.
https://www.schneier.com/blog/archives/2017/09/…
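
For the curious, here is a toy Python sketch of why these keys are even detectable. It assumes the structure the researchers describe, in which vulnerable primes have the form p = k*M + (65537^a mod M) for a fixed primorial M, and it uses a tiny primorial for illustration; the real fingerprint test uses far larger parameters.

    # Toy sketch of the fingerprint idea (not the real test). If both primes
    # have the special form, then N = p*q satisfies N mod M = 65537^(a+b) mod M,
    # so N mod M always lands in the small subgroup generated by 65537.
    SMALL_PRIMES = [2, 3, 5, 7, 11, 13]    # toy primorial; the real one is much larger
    M = 1
    for prime in SMALL_PRIMES:
        M *= prime

    def powers_of_65537(mod):
        """All residues reachable as 65537**i modulo mod."""
        seen, x = set(), 1
        while x not in seen:
            seen.add(x)
            x = (x * 65537) % mod
        return seen

    SUBGROUP = powers_of_65537(M)

    def looks_vulnerable(n):
        """Cheap fingerprint: does n mod M land in the 65537 subgroup?"""
        return (n % M) in SUBGROUP

    # A modulus built from two special-form residues passes the check.
    assert looks_vulnerable((65537**3 % M) * (65537**7 % M))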

The Norwegian Consumer Council has published a report detailing a series of security and privacy flaws in smart watches marketed to children.
https://fil.forbrukerradet.no/wp-content/uploads/…
https://www.forbrukerradet.no/side/…
http://www.bbc.com/news/technology-41652742
https://yro.slashdot.org/story/17/10/20/173204/…
This is the same group that found all those security and privacy vulnerabilities in smart dolls.
https://www.forbrukerradet.no/siste-nytt/…

Denuvo is probably the best digital rights management system used to protect computer games. Even so, it’s regularly cracked within a day.
https://arstechnica.com/gaming/2017/10/…
https://boingboing.net/2017/10/19/denuvo.html
https://games.slashdot.org/story/17/10/19/215233/…
Related: Vice has a good history of DRM:
https://motherboard.vice.com/en_us/article/evbgkn/…

Wondermark comic on security:
http://wondermark.com/c1351/

Hacking back is a terrible idea that just will not die. Josephine Wolff takes apart the new hacking back bill that was introduced in the House recently.
http://www.slate.com/articles/technology/…

The Reaper botnet is based on the Mirai code, but much more virulent. It’s already infected a million IoT devices.
https://www.wired.com/story/…
https://krebsonsecurity.com/2017/10/…

The Communications Security Establishment of Canada—basically, Canada’s version of the NSA—has released a suite of malware analysis tools:
http://www.cbc.ca/news/technology/…

Fascinating article about two psychologists who are studying interrogation techniques.
https://www.theguardian.com/news/2017/oct/13/…

Last month, Deputy Attorney General Rod Rosenstein gave a speech warning that a world with encryption is a world without law—or something like that.
https://www.lawfareblog.com/…
The EFF’s Kurt Opsahl takes it apart pretty thoroughly.
https://www.eff.org/deeplinks/2017/10/…
A couple of weeks later, FBI Director Christopher Wray said much the same thing.
https://apnews.com/04791dfbe30a4d3596e8d187b16d837e
The rhetoric is increasing. This is an idea that will not die.

Google has a new login service for high-risk users. It’s good, but unforgiving. It’s called Advanced Protection.
https://www.wired.com/story/…
https://landing.google.com/advancedprotection/

Almost 20 years ago, I wrote a paper that pointed to a potential flaw in the ANSI X9.17 RNG standard. Now, new research has found that the flaw exists in some implementations of the RNG standard.
https://duhkattack.com/
https://duhkattack.com/paper.pdf
https://duhkattack.com/#paper
https://blog.cryptographyengineering.com/2017/10/23/…
My original paper:
https://www.schneier.com/academic/paperfiles/…
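
To make the flaw concrete, here is a minimal Python sketch of an X9.17-style generator and of what an attacker who knows a hard-coded key can do. This is an illustration under simplifying assumptions: it uses AES via the pycryptodome library as the block cipher (the standard specified DES/3DES), and real implementations differ in detail.

    # Minimal sketch (assumes pycryptodome). The X9.17-style generator:
    #   I = E_K(timestamp);  output = E_K(I xor V);  next V = E_K(I xor output)
    # If K is baked into the firmware, one observed output plus a guessable
    # timestamp lets an attacker walk the state forward and predict everything.
    from Crypto.Cipher import AES

    def E(k, block):
        return AES.new(k, AES.MODE_ECB).encrypt(block)

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def x917_step(key, state, timestamp):
        i = E(key, timestamp)
        out = E(key, xor(i, state))
        return out, E(key, xor(i, out))   # (output, next state)

    HARDCODED_KEY = bytes.fromhex("00112233445566778899aabbccddeeff")  # the flaw
    state = bytes(range(16))                   # secret seed, unknown to the attacker
    t1, t2 = bytes(16), bytes(15) + b"\x01"    # timestamps, easily guessed

    out1, state = x917_step(HARDCODED_KEY, state, t1)
    out2, state = x917_step(HARDCODED_KEY, state, t2)

    # Attacker: knows the key, sees out1, guesses t1 and t2, and predicts out2.
    i1 = E(HARDCODED_KEY, t1)
    predicted_state = E(HARDCODED_KEY, xor(i1, out1))
    predicted_out2, _ = x917_step(HARDCODED_KEY, predicted_state, t2)
    assert predicted_out2 == out2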

Turns out that heart size doesn’t change throughout your adult life, and you can use low-level Doppler radar to scan the size—even at a distance—as a biometric.
http://www.buffalo.edu/news/releases/2017/09/034.html
https://sctracy.github.io/chensong.github.io/pdf/…

There’s a new criminal tactic involving hacking an e-mail account of a company that handles high-value transactions and diverting payments. We’re seeing it in real estate and in fine art. I’m sure it’s happening in other industries as well, probably even with business-to-business commerce.
https://www.washingtonpost.com/realestate/…
http://theartnewspaper.com/news/…

Facebook is piloting a project to fingerprint photos to prevent revenge porn. I’m not sure I like this. It doesn’t prevent revenge porn in general; it only prevents the same photos being uploaded to Facebook in particular. And it requires the person to send Facebook copies of all their intimate photos. Facebook claims they’ll only store the photos for a short time before deleting them, but still. Facebook employees will have to look at them.
https://www.theguardian.com/technology/2017/nov/07/…
https://arstechnica.com/tech-policy/2017/11/…
https://www.washingtonpost.com/news/morning-mix/wp/…
https://www.thedailybeast.com/…

Embedded in a story about infidelity and a mid-flight altercation, there’s an interesting security tidbit: “The woman had unlocked her husband’s phone using his thumb impression when he was sleeping…”
http://www.hindustantimes.com/india-news/…

New research in invisible inks, with a lot more chemistry than I understand.
https://cen.acs.org/articles/95/i44/…
https://www.nature.com/articles/s41467-017-01248-2

Google has some interesting data on login thefts.
https://security.googleblog.com/2017/11/…
https://research.google.com/pubs/pub46437.html

The New York Times just published a long article on the Shadow Brokers and their effects on NSA operations. Summary: it’s been an operational disaster, the NSA still doesn’t know who did it or how, and NSA morale has suffered considerably.
https://www.nytimes.com/2017/11/12/us/…
This is me on the Shadow Brokers from last May.
https://www.schneier.com/blog/archives/2017/05/…

Apple FaceID hacked. It only took a week.
https://www.wired.com/story/…
http://www.bkav.com/d/top-news/-/view_content/…


Daphne Caruana Galizia’s Murder and the Security of WhatsApp

Daphne Caruana Galizia was a Maltese journalist whose anti-corruption investigations exposed powerful people. She was murdered in October by a car bomb.

Galizia used WhatsApp to communicate securely with her sources. Now that she is dead, the Maltese police want to break into her phone or the app, and find out who those sources were.

One journalist reports:

Part of Daphne’s destroyed smart phone was elevated from the scene.

Investigators say that Caruana Galizia had not taken her laptop with her on that particular trip. If she had done so, the forensic experts would have found evidence on the ground.

Her mobile phone is also being examined, as can be seen from her WhatsApp profile, which has registered activity since the murder. But it is understood that the data is safe.

Sources close to the newsroom said that as part of the investigation her sim card has been cloned. This is done with the help of mobile service providers in similar cases. Asked if her WhatsApp messages or any other messages that were stored in her phone will be retrieved, the source said that since the messaging application is encrypted, the messages cannot be seen. Therefore it is unlikely that any data can be retrieved.

I am less optimistic than that reporter. The FBI is providing “specific assistance.” The article doesn’t explain that, but I would not be surprised if they were helping crack the phone.

It will be interesting to see if WhatsApp’s security survives this. My guess is that it depends on how much of the phone was recovered from the bombed car.

https://www.independent.co.uk/news/world/europe/…
https://daphnecaruanagalizia.com/2017/04/…

Source of that quote:
http://www.independent.com.mt/articles/2017-10-25/…

FBI providing “specific assistance”:
https://www.reuters.com/article/…

The court-appointed IT expert on the case has a criminal record in the UK for theft and forgery.
https://www.timesofmalta.com/mobile/articles/view/…


New KRACK Attack Against Wi-Fi Encryption

Mathy Vanhoef has just published a devastating attack against WPA2, the 14-year-old encryption protocol used by pretty much all Wi-Fi systems. It’s an interesting attack, where the attacker forces a client to reinstall an already-in-use key, causing the nonce, and therefore the keystream, to be reused. The authors call this attack KRACK, for Key Reinstallation Attacks.

This is yet another in a series of marketed attacks, with a cool name, a website, and a logo. The Q&A on the website answers a lot of questions about the attack and its implications.
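
To see why keystream reuse matters, here is a minimal Python sketch. It is a simplification, not the actual attack: it uses plain AES-CTR via the pycryptodome library as a stand-in for WPA2’s CCMP, whose exact nonce construction is omitted. The point is that once the same key and nonce are used for two packets, an attacker who knows or guesses one plaintext recovers the other without ever touching the key.

    # Minimal sketch (assumes pycryptodome): nonce reuse turns encryption
    # into a simple XOR puzzle. WPA2's CCMP differs in detail, but the
    # keystream-reuse consequence that KRACK forces is the same in spirit.
    from Crypto.Cipher import AES

    key   = bytes(16)   # stand-in session key
    nonce = bytes(8)    # the nonce reused after a key reinstallation

    def encrypt(plaintext):
        # Each call restarts the counter with the same key and nonce,
        # so every call produces the identical keystream.
        return AES.new(key, AES.MODE_CTR, nonce=nonce).encrypt(plaintext)

    p1 = b"attack at dawn!!"       # the secret packet
    p2 = b"GET / HTTP/1.1\r\n"     # a packet the attacker can guess
    c1, c2 = encrypt(p1), encrypt(p2)

    # Same keystream: c1 XOR c2 == p1 XOR p2, so knowing p2 reveals p1.
    recovered = bytes(a ^ b ^ c for a, b, c in zip(c1, c2, p2))
    assert recovered == p1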

https://www.krackattacks.com/
https://arstechnica.com/information-technology/2017/…
https://papers.mathyvanhoef.com/ccs2017.pdf
https://packetstormsecurity.com/files/download/…
More:
https://www.theguardian.com/technology/2017/oct/16/…
https://www.alexhudson.com/2017/10/15/…

Matthew Green has a blog post on what went wrong. The vulnerability is in the interaction between two protocols. At a meta level, he blames the opaque IEEE standards process:
https://blog.cryptographyengineering.com/2017/10/16/…

Nicholas Weaver explains why most people shouldn’t worry about this:
https://www.lawfareblog.com/dont-worry-about-krack


Schneier News

I’m speaking at the AI World conference in Boston on December 12.
https://aiworld.com/


Fraud Detection in Pokemon Go

I play Pokemon Go. (There, I’ve admitted it.) One of the interesting aspects of the game I’ve been watching is how the game’s publisher, Niantic, deals with cheaters.

There are three basic types of cheating in Pokemon Go. The first is botting, where a computer plays the game instead of a person. The second is spoofing, which is faking GPS to convince the game that you’re somewhere you’re not. These two cheats are often used together—and you see the results in the many high-level accounts for sale on the Internet. The third type of cheating is the use of third-party apps like trackers to get extra information about the game.

None of this would matter if everyone played independently. The only reason any player cares about whether other players are cheating is that there is a group aspect of the game: gym battling. Everyone’s enjoyment of that part of the game is affected by cheaters who can pretend to be where they’re not, especially if they have lots of powerful Pokemon that they collected effortlessly.

Niantic has been trying to deal with this problem since the game debuted, mostly by banning accounts when it detects cheating. Its initial strategy was basic—algorithmically detecting impossibly fast travel between physical locations or super-human amounts of playing, and then banning those accounts—with limited success. The limiting factor in all of this is false positives. While Niantic wants to stop cheating, it doesn’t want to block or limit any legitimate players. This makes it a very difficult problem, and contributes to the balance in the attacker/defender arms race.
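
As a rough illustration of the “impossibly fast travel” idea, here is a minimal Python sketch. The threshold and data format are made up for the example; Niantic’s actual detection logic is not public.

    # Hypothetical speed check: flag accounts whose consecutive GPS fixes
    # imply travel faster than any human plausibly could manage.
    from math import radians, sin, cos, asin, sqrt

    MAX_PLAUSIBLE_KMH = 900.0   # made-up cutoff, roughly airliner speed

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points, in kilometers."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371.0 * asin(sqrt(h))

    def looks_like_spoofing(fixes):
        """fixes: list of (unix_seconds, latitude, longitude), sorted by time."""
        for (t1, la1, lo1), (t2, la2, lo2) in zip(fixes, fixes[1:]):
            hours = max(t2 - t1, 1) / 3600.0
            if haversine_km(la1, lo1, la2, lo2) / hours > MAX_PLAUSIBLE_KMH:
                return True
        return False

    # New York to Tokyo in ten minutes should trip the check.
    assert looks_like_spoofing([(0, 40.71, -74.01), (600, 35.68, 139.69)])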

Recently, Niantic implemented two new anti-cheating measures. The first is machine learning to detect cheaters. About this, we know little. The second is to limit the functionality of cheating accounts rather than ban them outright, making it harder for cheaters to know when they’ve been discovered.

“This may very well be the beginning of Niantic’s machine learning approach to active bot countering,” user Dronpes writes on The Silph Road subreddit. “If the parameters for a shadowban are constantly adjusted server-side, as they can now easily be, then Niantic’s machine learning engineers can train their detection (classification) algorithms in ever-improving, ever more aggressive ways, and botters will constantly be forced to re-evaluate what factors may be triggering the detection.”

One of the expected future features in the game is trading. Creating a market for rare or powerful Pokemon would add a huge additional financial incentive to cheat. Unless Niantic can effectively prevent botting and spoofing, it’s unlikely to implement that feature.

Cheating detection in augmented- and virtual-reality games is going to be a constant problem as these games become more popular, especially if there are ways to monetize the results of cheating. This means that cheater detection will continue to be a critical component of these games’ success. Anything Niantic learns in Pokemon Go will be useful in whatever games come next.

Mystic, level 39—if you must know.

And, yes, I know the game works by tracking your location. I’m all right with that. As I repeatedly say, Internet privacy is all about trade-offs.

Botting in Pokemon Go:
https://arstechnica.co.uk/gaming/2016/08/…

Spoofing in Pokemon Go:
https://pokemongohub.net/post/featured/…

Pokemon Go accounts for sale:
https://www.playerauctions.com/pokemon-go-account/

Pokemon Go trackers:
https://kotaku.com/…

Banning Pokemon Go cheaters:
https://www.reddit.com/r/TheSilphRoad/comments/…
https://arstechnica.co.uk/gaming/2017/05/…
https://www.youtube.com/watch?v=uzaZclcyB88
https://pokemongohub.net/post/breaking-news/…


IoT Cybersecurity: What’s Plan B?

In August, four US Senators introduced a bill designed to improve Internet of Things (IoT) security. The IoT Cybersecurity Improvement Act of 2017 is a modest piece of legislation. It doesn’t regulate the IoT market. It doesn’t single out any industries for particular attention, or force any companies to do anything. It doesn’t even modify the liability laws for embedded software. Companies can continue to sell IoT devices with whatever lousy security they want.

What the bill does do is leverage the government’s buying power to nudge the market: any IoT product that the government buys must meet minimum security standards. It requires vendors to ensure that devices can not only be patched, but are patched in an authenticated and timely manner; don’t have unchangeable default passwords; and are free from known vulnerabilities. It’s about as low a security bar as you can set, and that it will considerably improve security speaks volumes about the current state of IoT security. (Full disclosure: I helped draft some of the bill’s security requirements.)

The bill would also modify the Computer Fraud and Abuse and the Digital Millennium Copyright Acts to allow security researchers to study the security of IoT devices purchased by the government. It’s a far narrower exemption than our industry needs. But it’s a good first step, which is probably the best thing you can say about this legislation.

However, it’s unlikely this first step will even be taken. I am writing this column in August, and have no doubt that the bill will have gone nowhere by the time you read it in October or later. If hearings are held, they won’t matter. The bill won’t have been voted on by any committee, and it won’t be on any legislative calendar. The odds of this bill becoming law are zero. And that’s not just because of current politics—I’d be equally pessimistic under the Obama administration.

But the situation is critical. The Internet is dangerous—and the IoT gives it not just eyes and ears, but also hands and feet. Security vulnerabilities, exploits, and attacks that once affected only bits and bytes now affect flesh and blood.

Markets, as we’ve repeatedly learned over the past century, are terrible mechanisms for improving the safety of products and services. It was true for automobile, food, restaurant, airplane, fire, and financial-instrument safety. The reasons are complicated, but basically, sellers don’t compete on safety features because buyers can’t efficiently differentiate products based on safety considerations. The race-to-the-bottom mechanism that markets use to minimize prices also minimizes quality. Without government intervention, the IoT remains dangerously insecure.

The US government has no appetite for intervention, so we won’t see serious safety and security regulations, a new federal agency, or better liability laws. We might have a better chance in the EU. Depending on how the General Data Protection Regulation on data privacy pans out, the EU might pass a similar security law in 5 years. No other country has a large enough market share to make a difference.

Sometimes we can opt out of the IoT, but that option is becoming increasingly rare. Last year, I tried and failed to purchase a new car without an Internet connection. In a few years, it’s going to be nearly impossible to not be multiply connected to the IoT. And our biggest IoT security risks will stem not from devices we have a market relationship with, but from everyone else’s cars, cameras, routers, drones, and so on.

We can try to shop our ideals and demand more security, but companies don’t compete on IoT safety—and we security experts aren’t a large enough market force to make a difference.

We need a Plan B, although I’m not sure what that is. Comment if you have any ideas.

This essay previously appeared in the September/October issue of “IEEE Security & Privacy.”


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <https://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Bruce Schneier is an internationally renowned security technologist, called a “security guru” by The Economist. He is the author of 13 books—including “Liars and Outliers: Enabling the Trust That Society Needs to Thrive”—as well as hundreds of articles, essays, and academic papers. His influential newsletter “Crypto-Gram” and his blog “Schneier on Security” are read by over 250,000 people. He has testified before Congress, is a frequent guest on television and radio, has served on several government committees, and is regularly quoted in the press. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard Law School, a program fellow at the New America Foundation’s Open Technology Institute, a board member of the Electronic Frontier Foundation, an Advisory Board Member of the Electronic Privacy Information Center, and CTO of IBM Resilient and Special Advisor to IBM Security. See <https://www.schneier.com>.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM Resilient.

Copyright (c) 2017 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.