Blog: February 2020 Archives

Friday Squid Blogging: Squid Eggs

Cool photo.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

EDITED TO ADD (3/4): I just deleted a slew of comments about COVID-19. I may reinstate some of them later; right now I want some time to think about what is relevant and what is not. Surely lots of things are relevant to this blog—fear, risk management, surveillance, containment measures—but most of the talk about the virus is not. I would like to suggest that those who wish to talk about the virus do so elsewhere, and that those who want to talk specifically about the security/risk implications continue to do so, politely and respectfully.

Posted on February 28, 2020 at 4:08 PM • 113 Comments

Humble Bundle's 2020 Cybersecurity Books

For years, Humble Bundle has been selling great books under a “pay what you can afford” model. This month, they’re featuring as many as nineteen cybersecurity books for as little as $1, including four of mine. These are digital copies, all DRM-free. Part of the money goes to support the EFF or Let’s Encrypt. (The default is 15%, and you can change that.) As an EFF board member, I know that we’ve received a substantial amount from this program in previous years.

Posted on February 28, 2020 at 1:53 PM • 7 Comments

Deep Learning to Find Malicious Email Attachments

Google presented its deep-learning system for identifying malicious email attachments:

At the RSA security conference in San Francisco on Tuesday, Google’s security and anti-abuse research lead Elie Bursztein will present findings on how the new deep-learning scanner for documents is faring against the 300 billion attachments it has to process each week. It’s challenging to tell the difference between legitimate documents in all their infinite variations and those that have specifically been manipulated to conceal something dangerous. Google says that 63 percent of the malicious documents it blocks each day are different than the ones its systems flagged the day before. But this is exactly the type of pattern-recognition problem where deep learning can be helpful.

[…]

The document analyzer looks for common red flags, probes files if they have components that may have been purposefully obfuscated, and does other checks like examining macros—the tool in Microsoft Word documents that chains commands together in a series and is often used in attacks. The volume of malicious documents that attackers send out varies widely day to day. Bursztein says that since its deployment, the document scanner has been particularly good at flagging suspicious documents sent in bursts by malicious botnets or through other mass distribution methods. He was also surprised to discover how effective the scanner is at analyzing Microsoft Excel documents, a complicated file format that can be difficult to assess.

This is the sort of thing that’s pretty well optimized for machine-learning techniques.
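
To make the framing concrete: Google has not published its model, but the general shape of such a scanner is a classifier over features extracted from document bytes. Here is a minimal, purely illustrative sketch in Python; the feature scheme, layer sizes, and toy training data are my own assumptions, not Google’s.

    # Illustrative sketch only -- not Google's scanner. A tiny feed-forward
    # network scores documents as malicious/benign from hashed byte-3-gram
    # counts. Feature scheme, sizes, and toy data are all assumptions.
    import zlib
    import numpy as np
    import tensorflow as tf

    NUM_FEATURES = 4096  # hashed 3-gram buckets (illustrative size)

    def featurize(doc_bytes: bytes) -> np.ndarray:
        """Hash overlapping byte 3-grams into a fixed-length count vector."""
        vec = np.zeros(NUM_FEATURES, dtype=np.float32)
        for i in range(len(doc_bytes) - 2):
            vec[zlib.crc32(doc_bytes[i:i + 3]) % NUM_FEATURES] += 1.0
        return np.log1p(vec)  # damp heavy-tailed counts

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(NUM_FEATURES,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(malicious)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")

    # Real training would use large labeled corpora of benign and malicious
    # documents; two toy examples stand in for that here.
    X = np.stack([featurize(b"%PDF-1.4 quarterly report, plain text body"),
                  featurize(b"%PDF-1.4 /OpenAction /JS eval(unescape(...))")])
    y = np.array([0.0, 1.0])
    model.fit(X, y, epochs=20, verbose=0)
    print(model.predict(X, verbose=0))  # scores near 1 indicate malicious

The appeal of this setup is exactly what the quote describes: when 63 percent of each day’s malicious documents are new, a model that generalizes from features beats a list of yesterday’s signatures.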

Posted on February 28, 2020 at 11:57 AM • 20 Comments

Securing the Internet of Things through Class-Action Lawsuits

This law journal article discusses the role of class-action litigation in securing the Internet of Things.

Basically, the article postulates that (1) market realities will produce insecure IoT devices, and (2) political failures will leave that industry unregulated. Result: insecure IoT. It proposes proactive class-action litigation against manufacturers of unsafe and unsecured IoT devices before those devices cause unnecessary injury or death. It’s a lot to read, but it’s an interesting take on how to secure this otherwise disastrously insecure world.

And it was inspired by my book, Click Here to Kill Everybody.

EDITED TO ADD (3/13): Consumer Reports recently explored how prevalent arbitration (vs. lawsuits) has become in the USA.

Posted on February 27, 2020 at 6:03 AM • 16 Comments

Firefox Enables DNS over HTTPS

This is good news:

Whenever you visit a website—even if it’s HTTPS enabled—the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS, or DoH, encrypts the request so that it can’t be intercepted or hijacked in order to send a user to a malicious site.

[…]

But the move is not without controversy. Last year, an internet industry group branded Mozilla an “internet villain” for pressing ahead with the security feature. The trade group claimed it would make it harder to spot terrorist materials and child abuse imagery. But even some in the security community are split, amid warnings that it could make incident response and malware detection more difficult.

The move to enable DoH by default will no doubt face resistance, but browser makers have argued it’s not a technology they have shied away from. Firefox became the first browser to implement DoH, with others, like Chrome, Edge, and Opera, quickly following suit.

I think DoH is a great idea, and long overdue.

Slashdot thread. Tech details here. And here’s a good summary of the criticisms.
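
To see what a DoH query actually looks like, Cloudflare’s public resolver exposes a JSON flavor of it. (Firefox itself speaks the binary application/dns-message wire format, but the idea is the same: the DNS query travels inside ordinary HTTPS.)

    # A DoH lookup against Cloudflare's public JSON endpoint. The DNS query
    # rides inside an encrypted HTTPS request instead of plaintext UDP.
    import requests

    resp = requests.get(
        "https://cloudflare-dns.com/dns-query",
        params={"name": "www.schneier.com", "type": "A"},
        headers={"accept": "application/dns-json"},
        timeout=10,
    )
    resp.raise_for_status()
    for answer in resp.json().get("Answer", []):
        print(answer["name"], answer["data"])

A network observer sees only a TLS connection to cloudflare-dns.com, not which name was resolved, which is precisely what both the privacy advocates like and the critics object to.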

Posted on February 25, 2020 at 9:15 AM • 61 Comments

Russia Is Trying to Tap Transatlantic Cables

The Times of London is reporting that Russian agents are in Ireland probing transatlantic communications cables.

Ireland is the landing point for undersea cables which carry internet traffic between America, Britain and Europe. The cables enable millions of people to communicate and allow financial transactions to take place seamlessly.

Garda and military sources believe the agents were sent by the GRU, the military intelligence branch of the Russian armed forces which was blamed for the nerve agent attack in Britain on Sergei Skripal, a former Russian intelligence officer.

This is nothing new. The NSA and GCHQ have been doing this for decades.

Boing Boing post.

Posted on February 24, 2020 at 6:27 AM • 31 Comments

Friday Squid Blogging: 13-foot Giant Squid Caught off New Zealand Coast

It’s probably a juvenile:

Researchers aboard the New Zealand-based National Institute of Water and Atmospheric Research Ltd (NIWA) research vessel Tangaroa were on an expedition to survey hoki, New Zealand’s most valuable commercial fish, in the Chatham Rise, an area of ocean floor to the east of New Zealand that makes up part of the “lost continent” of Zealandia.

At 7.30am on the morning of January 21, scientists were hauling up their trawler net from a depth of 442 meters (1,450 feet) when they were surprised to spot tentacles in amongst their catch. Large tentacles.

According to voyage leader and NIWA fisheries scientist Darren Stevens, who was on watch, it took six members of staff to lift the giant squid out of the net. Despite the squid being 4 meters long and weighing about 110 kilograms (240 pounds), Stevens said he thought the squid was “on the smallish side,” compared to other behemoths caught.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on February 21, 2020 at 4:19 PM • 181 Comments

Inrupt, Tim Berners-Lee's Solid, and Me

For decades, I have been talking about the importance of individual privacy. For almost as long, I have been using the metaphor of digital feudalism to describe how large companies have become central control points for our data. And for maybe half a decade, I have been talking about the world-sized robot that is the Internet of Things, and how digital security is now a matter of public safety. And most recently, I have been writing and speaking about how technologists need to get involved with public policy.

All of this is a long-winded way of saying that I have joined a company called Inrupt that is working to bring Tim Berners-Lee’s distributed data ownership model, Solid, into the mainstream. (I think of Inrupt basically as the Red Hat of Solid.) I joined the Inrupt team last summer as its Chief of Security Architecture, and have been in stealth mode until now.

The idea behind Solid is both simple and extraordinarily powerful. Your data lives in a pod that is controlled by you. Data generated by your things—your computer, your phone, your IoT whatever—is written to your pod. You authorize granular access to that pod to whoever you want for whatever reason you want. Your data is no longer in a bazillion places on the Internet, controlled by you-have-no-idea-who. It’s yours. If you want your insurance company to have access to your fitness data, you grant it through your pod. If you want your friends to have access to your vacation photos, you grant it through your pod. If you want your thermostat to share data with your air conditioner, you give both of them access through your pod.
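
Solid’s actual access control layer (Web Access Control) expresses those grants as documents stored alongside the pod’s resources. Setting the real API aside, the underlying model is simple enough to capture in a toy Python sketch; all the names here are hypothetical.

    # Toy model of the pod idea -- hypothetical names, not Solid's actual
    # API (Solid expresses grants as Web Access Control documents).
    from dataclasses import dataclass, field

    @dataclass
    class Pod:
        owner: str
        data: dict = field(default_factory=dict)    # resource -> value
        grants: dict = field(default_factory=dict)  # resource -> allowed agents

        def write(self, resource, value, agent):
            if agent != self.owner:
                raise PermissionError(f"{agent} cannot write to {self.owner}'s pod")
            self.data[resource] = value

        def grant(self, resource, agent):
            self.grants.setdefault(resource, set()).add(agent)

        def read(self, resource, agent):
            if agent == self.owner or agent in self.grants.get(resource, set()):
                return self.data[resource]
            raise PermissionError(f"{agent} has no grant for {resource}")

    pod = Pod(owner="alice")
    pod.write("/fitness/steps", 9241, agent="alice")
    pod.grant("/fitness/steps", "insurance.example")  # granular, per-resource
    print(pod.read("/fitness/steps", agent="insurance.example"))  # allowed
    try:
        pod.read("/fitness/steps", agent="adtech.example")
    except PermissionError as e:
        print(e)  # no grant, no data

The architectural point is that the grant lives with the data, in the pod, rather than in each company’s database.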

The ideal would be for this to be completely distributed. Everyone’s pod would be on a computer they own, running on their network. But that’s not how it’s likely to be in real life. Just as you can theoretically run your own email server but in reality you outsource it to Google or whoever, you are likely to outsource your pod to those same sets of companies. But maybe pods will come standard issue in home routers. Even if you do hand your pod over to some company, it’ll be like letting them host your domain name or manage your cell phone number. If you don’t like what they’re doing, you can always move your pod—just like you can take your cell phone number and move to a different carrier. This will give users a lot more power.

I believe this will fundamentally alter the balance of power in a world where everything is a computer, and everything is producing data about you. Either IoT companies are going to enter into individual data sharing agreements, or they’ll all use the same language and protocols. Solid has a very good chance of being that protocol. And security is critical to making all of this work. Just trying to grasp what sort of granular permissions are required, and how the authentication flows might work, is mind-altering. We’re stretching pretty much every Internet security protocol to its limits and beyond just setting this up.

Building a secure technical infrastructure is largely about policy, but there’s also a wave of technology that can shift things in one direction or the other. Solid is one of those technologies. It moves the Internet away from the overly centralized power of big corporations and governments and toward more rational distributions of power: greater liberty, better privacy, and more freedom for everyone.

I’ve worked with Inrupt’s CEO, John Bruce, at both of my previous companies: Counterpane and Resilient. It’s a little weird working for a start-up that is not a security company. (While security is essential to making Solid work, the technology is fundamentally about the functionality.) It’s also a little surreal working on a project conceived and spearheaded by Tim Berners-Lee. But at this point, I feel that I should only work on things that matter to society. So here I am.

Whatever happens next, it’s going to be a really fun ride.

EDITED TO ADD (2/23): News article. HackerNews thread.

EDITED TO ADD (2/25): More press coverage.

Posted on February 21, 2020 at 2:04 PM • 74 Comments

Policy vs. Technology

Sometime around 1993 or 1994, during the first Crypto Wars, I was part of a group of cryptography experts that went to Washington to advocate for strong encryption. Matt Blaze and Ron Rivest were with me; I don’t remember who else. We met with then-Massachusetts Representative Ed Markey. (He didn’t become a senator until 2013.) Back then, he and Vermont Senator Patrick Leahy were the most knowledgeable on this issue and our biggest supporters against government backdoors. They still are.

Markey was against forcing encrypted phone providers to implement the NSA’s Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus.

Since then, I have become more immersed in policy discussions. I have spent more time with legislators, advised advocacy organizations like EFF and EPIC, and worked with policy-minded think tanks in the United States and around the world. I teach cybersecurity policy and technology at the Harvard Kennedy School of Government. My most recent two books, Data and Goliath—about surveillance—and Click Here to Kill Everybody—about IoT security—are really about the policy implications of technology.

Over that time, I have observed many other differences between technologists and policy makers—differences that we in cybersecurity need to understand if we are to translate our technological solutions into viable policy outcomes.

Technologists don’t try to consider all of the use cases of a given technology. We tend to build something for the uses we envision, and hope that others can figure out new and innovative ways to extend what we created. We love it when there is a new use for a technology that we never considered and that changes the world. And while we might be good at security around the use cases we envision, we are regularly blindsided when it comes to new uses or edge cases. (Authentication risks surrounding someone’s intimate partner are a good example.)

Policy doesn’t work that way; it’s specifically focused on use. It focuses on people and what they do. Policy makers can’t create policy around a piece of technology without understanding how it is used—how all of it is used.

Policy is often driven by exceptional events, like the FBI’s desire to break the encryption on the San Bernardino shooter’s iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It’s hard to imagine policy makers creating laws around VR systems, because they don’t yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.

As technologists, we iterate. It’s how we write software. It’s how we field products. We know we can’t get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn’t get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it’s slower.

Along similar lines, the two groups work in very different time frames. Engineers, conditioned by Moore’s law, have long thought of 18 months as the maximum time to roll out a new product, and now think in terms of continuous deployment of new features. As I said previously, policy makers tend to think in terms of multiple years to get a law or regulation in place, and then more years as the case law builds up around it so everyone knows what it really means. It’s like tortoises and hummingbirds.

Technology is inherently global. It is often developed with local sensibilities according to local laws, but it necessarily has global reach. Policy is always jurisdictional. This difference is causing all sorts of problems for the global cloud services we use every day. The providers are unable to operate their global systems in compliance with more than 200 different—and sometimes conflicting—national requirements. Policy makers are often unimpressed with claims of inability; laws are laws, they say, and if Facebook can translate its website into French for the French, it can also implement their national laws.

Technology and policy both use concepts of trust, but differently. Technologists tend to think of trust in terms of controls on behavior. We’re getting better—NIST’s recent work on trust is a good example—but we have a long way to go. For example, Google’s Trust and Safety Department does a lot of AI and ethics work largely focused on technological controls. Policy makers think of trust in more holistic societal terms: trust in institutions, trust as the ability not to worry about adverse outcomes, consumer confidence. This dichotomy explains how techies can claim bitcoin is trusted because of the strong cryptography, but policy makers can’t imagine calling a system trustworthy when you lose all your money if you forget your encryption key.

Policy is how society mediates how individuals interact with society. Technology has the potential to change how individuals interact with society. The conflict between these two causes considerable friction, as technologists want policy makers to get out of the way and not stifle innovation, and policy makers want technologists to stop moving fast and breaking so many things.

Finally, techies know that code is law—that the restrictions and limitations of a technology are more fundamental than any human-created legal constraint. Policy makers know that law is law, and tech is just tech. We can see this in the tension between applying existing law to new technologies and creating new law specifically for those new technologies.

Yes, these are all generalizations and there are exceptions. It’s also not all either/or. Great technologists and policy makers can see the other perspectives. The best policy makers know that for all their work toward consensus, they won’t make progress by redefining pi as three. Thoughtful technologists look beyond the immediate user demands to the ways attackers might abuse their systems, and design against those adversaries as well. These aren’t two alien species engaging in first contact, but cohorts who can each learn and borrow tools from the other. Too often, though, neither party tries.

In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who’d ruled on them came to exactly opposite conclusions. The law students took this in stride; it’s the way the legal system works when it’s wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn’t a single correct answer.

But that’s not how law works, and that’s not how policy works. As the technologies we’re creating become more central to society, and as we in technology continue to move into the public sphere and become part of the increasingly important policy debates, it is essential that we learn these lessons. Gone are the days when we were creating purely technical systems and our work ended at the keyboard and screen. Now we’re building complex socio-technical systems that are literally creating a new world. And while it’s easy to dismiss policy makers as doing it wrong, it’s important to understand that they’re not. Policy making has been around a lot longer than the Internet or computers or any technology. And the essential challenges of this century will require both groups to work together.

This essay previously appeared in IEEE Security & Privacy.

EDITED TO ADD (3/16): This essay has been translated into Spanish.

Posted on February 21, 2020 at 5:54 AM • 22 Comments

Hacking McDonald's for Free Food

This hack was possible because the McDonald’s app didn’t authenticate the server, and just did whatever the server told it to do:

McDonald’s receipts in Germany end with a link to a survey page. Once you take the survey, you receive a coupon code for a free small beverage, redeemable within a month. One day, David happened to be checking out how the website’s coding was structured when he noticed that the information triggering the server to issue a new voucher was always the same. That meant he could build a programme replicating the code, as if someone was taking the survey again and again.

[…]

At the McDonald’s in East Berlin, David began the demonstration by setting up an internet hotspot with his smartphone. Lenny connected with a second phone and a laptop, then turned the laptop into a proxy server connected to both phones. He opened the McDonald’s app and entered a voucher code generated by David’s programme. The next step was ordering the food for a total of €17. The bill on the app was transmitted to the laptop, which set all prices to zero through a programme created by Lenny, and sent the information back to the app. After tapping “Complete and pay 0.00 euros”, we simply received our pick-up number. It had worked.

The flaw was fixed late last year.
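
The root failure is that the app treated whatever the “server” (here, Lenny’s laptop) said as authoritative. Proper TLS certificate validation or pinning is the transport-level fix; another layer, sketched below with Python’s standard library and an assumed shared key, is to authenticate the bill itself, so a proxy that rewrites prices invalidates it.

    # Sketch of one mitigation, assuming a key the attacker doesn't hold:
    # the server MACs each bill, so a man-in-the-middle that zeroes the
    # prices produces a bill the app rejects. (In a real deployment, key
    # management is the hard part; TLS with certificate validation or
    # pinning closes the same gap at the transport layer.)
    import hashlib
    import hmac
    import json

    SERVER_KEY = b"demo-key-not-for-production"

    def sign_bill(bill: dict) -> dict:
        payload = json.dumps(bill, sort_keys=True).encode()
        mac = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
        return {"bill": bill, "mac": mac}

    def verify_bill(signed: dict) -> dict:
        payload = json.dumps(signed["bill"], sort_keys=True).encode()
        expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signed["mac"]):
            raise ValueError("bill was tampered with in transit")
        return signed["bill"]

    signed = sign_bill({"items": [{"name": "menu", "price_eur": 17.00}]})
    signed["bill"]["items"][0]["price_eur"] = 0.00  # the proxy's edit
    try:
        verify_bill(signed)
    except ValueError as e:
        print(e)  # bill was tampered with in transit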

Posted on February 18, 2020 at 6:09 AM • 15 Comments

Voatz Internet Voting App Is Insecure

This paper describes the flaws in the Voatz Internet voting app: “The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz, the First Internet Voting Application Used in U.S. Federal Elections.”

Abstract: In the 2018 midterm elections, West Virginia became the first state in the U.S. to allow select voters to cast their ballot on a mobile phone via a proprietary app called “Voatz.” Although there is no public formal description of Voatz’s security model, the company claims that election security and integrity are maintained through the use of a permissioned blockchain, biometrics, a mixnet, and hardware-backed key storage modules on the user’s device. In this work, we present the first public security analysis of Voatz, based on a reverse engineering of their Android application and the minimal available documentation of the system. We performed a clean-room reimplementation of Voatz’s server and present an analysis of the election process as visible from the app itself.

We find that Voatz has vulnerabilities that allow different kinds of adversaries to alter, stop, or expose a user’s vote, including a side-channel attack in which a completely passive network adversary can potentially recover a user’s secret ballot. We additionally find that Voatz has a number of privacy issues stemming from their use of third party services for crucial app functionality. Our findings serve as a concrete illustration of the common wisdom against Internet voting, and of the importance of transparency to the legitimacy of elections.

News articles.

The company’s response is a perfect illustration of why companies without computer security expertise have no idea what they’re doing, and should not be trusted with any form of security.

EDITED TO ADD (3/11): The researchers respond to Voatz’s response.

Posted on February 17, 2020 at 6:35 AM • 31 Comments

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

  • I’ll be at RSA Conference 2020 in San Francisco. On Wednesday, February 26, at 2:50 PM, I’ll be part of a panel on “How to Reduce Supply Chain Risk: Lessons from Efforts to Block Huawei.” On Thursday, February 27, at 9:20 AM, I’m giving a keynote on “Hacking Society.”
  • I’m speaking at SecIT by Heise in Hannover, Germany on March 26, 2020.

The list is maintained on this page.

Posted on February 14, 2020 at 1:03 PM • 6 Comments

DNSSEC Keysigning Ceremony Postponed Because of Locked Safe

Interesting collision of real-world and Internet security:

The ceremony sees several trusted internet engineers (a minimum of three and up to seven) from across the world descend on one of two secure locations—one in El Segundo, California, just south of Los Angeles, and the other in Culpeper, Virginia—both in America, every three months.

Once in place, they run through a lengthy series of steps and checks to cryptographically sign the digital key pairs used to secure the internet’s root zone. (Here’s Cloudflare’s in-depth explanation, and IANA’s PDF step-by-step guide.)

[…]

Only specific named people are allowed to take part in the ceremony, and they have to pass through several layers of security—including doors that can only be opened through fingerprint and retinal scans—before getting in the room where the ceremony takes place.

Staff open up two safes, each roughly one-metre across. One contains a hardware security module that contains the private portion of the KSK. The module is activated, allowing the KSK private key to sign keys, using smart cards assigned to the ceremony participants. These credentials are stored in deposit boxes and tamper-proof bags in the second safe. Each step is checked by everyone else, and the event is livestreamed. Once the ceremony is complete—which takes a few hours—all the pieces are separated, sealed, and put back in the safes inside the secure facility, and everyone leaves.

But during what was apparently a check on the system on Tuesday night—the day before the ceremony planned for 1300 PST (2100 UTC) Wednesday—IANA staff discovered that they couldn’t open one of the two safes. One of the locking mechanisms wouldn’t retract and so the safe stayed stubbornly shut.

As soon as they discovered the problem, everyone involved, including those who had flown in for the occasion, were told that the ceremony was being postponed. Thanks to the complexity of the problem—a jammed safe with critical and sensitive equipment inside—they were told it wasn’t going to be possible to hold the ceremony on the back-up date of Thursday, either.
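
Stripped of the safes, smart cards, and DNS wire formats, the ceremony operationalizes a simple relationship: an offline root key (the KSK) endorses the operational keys that do the day-to-day signing, so resolvers only need to trust one well-guarded public key. A schematic sketch using Python’s cryptography library, with Ed25519 standing in for the actual DNSSEC algorithms:

    # Schematic of the KSK/ZSK trust chain -- Ed25519 standing in for the
    # actual DNSSEC algorithms and record formats, and ordinary variables
    # standing in for the HSM and smart cards.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ed25519

    ksk = ed25519.Ed25519PrivateKey.generate()  # lives locked in the HSM
    zsk = ed25519.Ed25519PrivateKey.generate()  # signs the zone day-to-day

    # The ceremony step: the KSK signs the ZSK's public key.
    zsk_pub = zsk.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    zsk_endorsement = ksk.sign(zsk_pub)

    # A resolver that trusts only the KSK's public key verifies the chain:
    ksk.public_key().verify(zsk_endorsement, zsk_pub)  # ZSK is legitimate

    record = b"www.example. IN A 93.184.216.34"
    record_sig = zsk.sign(record)
    zsk.public_key().verify(record_sig, record)        # record is legitimate
    print("chain of trust verified")

This is why a jammed safe matters: without access to the KSK’s private half, no new endorsements can be produced, and the existing signatures eventually expire.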

Posted on February 14, 2020 at 6:07 AM • 25 Comments

Companies that Scrape Your Email

Motherboard has a long article on apps—Edison, Slice, and Cleanfox—that spy on your email by scraping your inbox, and then sell that information to others:

Some of the companies listed in the J.P. Morgan document sell data sourced from “personal inboxes,” the document adds. A spokesperson for J.P. Morgan Research, the part of the company that created the document, told Motherboard that the research “is intended for institutional clients.”

That document describes Edison as providing “consumer purchase metrics including brand loyalty, wallet share, purchase preferences, etc.” The document adds that the “source” of the data is the “Edison Email App.”

[…]

A dataset obtained by Motherboard shows what some of the information pulled from free email app users’ inboxes looks like. A spreadsheet containing data from Rakuten’s Slice, an app that scrapes a user’s inbox so they can better track packages or get their money back once a product goes down in price, contains the item that an app user bought from a specific brand, what they paid, and a unique identification code for each buyer.

Posted on February 12, 2020 at 10:26 AM • 19 Comments

Crypto AG Was Owned by the CIA

The Swiss cryptography firm Crypto AG sold equipment to governments and militaries around the world for decades after World War II. It was owned by the CIA:

But what none of its customers ever knew was that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company’s devices so they could easily break the codes that countries used to send encrypted messages.

This isn’t really news. We have long known that Crypto AG was backdooring crypto equipment for the Americans. What is new is the formerly classified documents describing the details:

The decades-long arrangement, among the most closely guarded secrets of the Cold War, is laid bare in a classified, comprehensive CIA history of the operation obtained by The Washington Post and ZDF, a German public broadcaster, in a joint reporting project.

The account identifies the CIA officers who ran the program and the company executives entrusted to execute it. It traces the origin of the venture as well as the internal conflicts that nearly derailed it. It describes how the United States and its allies exploited other nations’ gullibility for years, taking their money and stealing their secrets.

The operation, known first by the code name “Thesaurus” and later “Rubicon,” ranks among the most audacious in CIA history.

EDITED TO ADD: More news articles. And a 1995 story on this. It’s not new news.

Posted on February 11, 2020 at 10:42 AM • 89 Comments

Apple's Tracking-Prevention Feature in Safari Has a Privacy Bug

Last month, engineers at Google published details of a very curious privacy bug in Apple’s Safari web browser. Apple’s Intelligent Tracking Prevention, a feature designed to reduce user tracking, has vulnerabilities that themselves allow user tracking. Some details:

ITP detects and blocks tracking on the web. When you visit a few websites that happen to load the same third-party resource, ITP detects the domain hosting the resource as a potential tracker and from then on sanitizes web requests to that domain to limit tracking. Tracker domains are added to Safari’s internal, on-device ITP list. When future third-party requests are made to a domain on the ITP list, Safari will modify them to remove some information it believes may allow tracking the user (such as cookies).

[…]

The details should come as a surprise to everyone because it turns out that ITP could effectively be used for:

  • information leaks: detecting websites visited by the user (web browsing history hijacking, stealing a list of visited sites)
  • tracking the user with ITP, making the mechanism function like a cookie
  • fingerprinting the user: in ways similar to the HSTS fingerprint, but perhaps a bit better

I am sure we all agree that we would not expect a privacy feature meant to protect from tracking to effectively enable tracking, or to accidentally allow any website out there to steal its visitors’ web browsing history. But web architecture is complex, and the consequence is that this is exactly the case.
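
Mechanically, the tracking attack works because the ITP list is per-user state that a website can both write (by making chosen domains look like cross-site trackers) and read back (requests to listed domains are visibly sanitized). Here is a toy simulation of that “ITP as a cookie” loop; it is my simplification of the attack class, not the researchers’ code.

    # Toy simulation of "ITP as a cookie" -- a simplification of the attack
    # class, not the researchers' actual exploit. Sixteen attacker domains
    # encode a 16-bit identifier in which of them get flagged as trackers.
    ATTACKER_DOMAINS = [f"bit{i}.tracker.example" for i in range(16)]

    class SimulatedITP:
        """Per-user, on-device state: domains classified as trackers."""
        def __init__(self):
            self.flagged = set()
        def observe_cross_site_load(self, domain):
            self.flagged.add(domain)          # enough loads => "tracker"
        def request_is_sanitized(self, domain):
            return domain in self.flagged     # observable to the website

    def write_user_id(itp, user_id):
        for i, domain in enumerate(ATTACKER_DOMAINS):
            if user_id >> i & 1:              # set bit i by pushing the
                itp.observe_cross_site_load(domain)  # domain onto the list

    def read_user_id(itp):
        return sum(itp.request_is_sanitized(d) << i
                   for i, d in enumerate(ATTACKER_DOMAINS))

    itp = SimulatedITP()           # one user's browser
    write_user_id(itp, 0xBEEF)     # first visit encodes the identifier
    print(hex(read_user_id(itp)))  # a later visit reads back 0xbeef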

Apple fixed this vulnerability in December, a month before Google published its findings.

If there’s any lesson here, it’s that privacy is hard—and that privacy engineering is even harder. It’s not that we shouldn’t try, but we should recognize that it’s easy to get it wrong.

Posted on February 10, 2020 at 6:06 AM • 5 Comments

Friday Squid Blogging: An MRI Scan of a Squid's Brain

This paper is filled with brain science that I do not understand (news article), but fails to answer what I consider to be the important question: how do you keep a live squid still for long enough to do an MRI scan on it?

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on February 7, 2020 at 4:11 PM • 129 Comments

Security in 2020: Revisited

Ten years ago, I wrote an essay: “Security in 2020.” Well, it’s finally 2020. I think I did pretty well. Here’s what I said back then:

There’s really no such thing as security in the abstract. Security can only be defined in relation to something else. You’re secure from something or against something. In the next 10 years, the traditional definition of IT security—that it protects you from hackers, criminals, and other bad guys—will undergo a radical shift. Instead of protecting you from the bad guys, it will increasingly protect businesses and their business models from you.

Ten years ago, the big conceptual change in IT security was deperimeterization. A wordlike grouping of 18 letters with both a prefix and a suffix, it has to be the ugliest word our industry invented. The concept, though—the dissolution of the strict boundaries between the internal and external network—was both real and important.

There’s more deperimeterization today than there ever was. Customer and partner access, guest access, outsourced e-mail, VPNs; to the extent there is an organizational network boundary, it’s so full of holes that it’s sometimes easier to pretend it isn’t there. The most important change, though, is conceptual. We used to think of a network as a fortress, with the good guys on the inside and the bad guys on the outside, and walls and gates and guards to ensure that only the good guys got inside. Modern networks are more like cities, dynamic and complex entities with many different boundaries within them. The access, authorization, and trust relationships are even more complicated.

Today, two other conceptual changes matter. The first is consumerization. Another ponderous invented word, it’s the idea that consumers get the cool new gadgets first, and demand to do their work on them. Employees already have their laptops configured just the way they like them, and they don’t want another one just for getting through the corporate VPN. They’re already reading their mail on their BlackBerrys or iPads. They already have a home computer, and it’s cooler than the standard issue IT department machine. Network administrators are increasingly losing control over clients.

This trend will only increase. Consumer devices will become trendier, cheaper, and more integrated; and younger people are already used to using their own stuff on their school networks. It’s a recapitulation of the PC revolution. The centralized computer center concept was shaken by people buying PCs to run VisiCalc; now it’s iPads and Android smartphones.

The second conceptual change comes from cloud computing: our increasing tendency to store our data elsewhere. Call it decentralization: our email, photos, books, music, and documents are stored somewhere, and accessible to us through our consumer devices. The younger you are, the more you expect to get your digital stuff on the closest screen available. This is an important trend, because it signals the end of the hardware and operating system battles we’ve all lived with. Windows vs. Mac doesn’t matter when all you need is a web browser. Computers become temporary; user backup becomes irrelevant. It’s all out there somewhere—and users are increasingly losing control over their data.

During the next 10 years, three new conceptual changes will emerge, two of which we can already see the beginnings of. The first I’ll call deconcentration. The general-purpose computer is dying and being replaced by special-purpose devices. Some of them, like the iPhone, seem general purpose but are strictly controlled by their providers. Others, like Internet-enabled game machines or digital cameras, are truly special purpose. In 10 years, most computers will be small, specialized, and ubiquitous.

Even on what are ostensibly general-purpose devices, we’re seeing more special-purpose applications. Sure, you could use the iPhone’s web browser to access the New York Times website, but it’s much easier to use the NYT’s special iPhone app. As computers become smaller and cheaper, this trend will only continue. It’ll be easier to use special-purpose hardware and software. And companies, wanting more control over their users’ experience, will push this trend.

The second is decustomerization—now I get to invent the really ugly words—the idea that we get more of our IT functionality without any business relationship. We’re all part of this trend: every search engine gives away its services in exchange for the ability to advertise. It’s not just Google and Bing; most webmail and social networking sites offer free basic service in exchange for advertising, possibly with premium services for money. Most websites, even useful ones that take the place of client software, are free; they are either run altruistically or to facilitate advertising.

Soon it will be hardware. In 1999, Internet startup FreePC tried to make money by giving away computers in exchange for the ability to monitor users’ surfing and purchasing habits. The company failed, but computers have only gotten cheaper since then. It won’t be long before giving away netbooks in exchange for advertising will be a viable business. Or giving away digital cameras. Already there are companies that give away long-distance minutes in exchange for advertising. Free cell phones aren’t far off. Of course, not all IT hardware will be free. Some of the new cool hardware will cost too much to be free, and there will always be a need for concentrated computing power close to the user—game systems are an obvious example—but those will be the exception. Where the hardware costs too much to just give away, however, we’ll see free or highly subsidized hardware in exchange for locked-in service; that’s already the way cell phones are sold.

This is important because it destroys what’s left of the normal business relationship between IT companies and their users. We’re not Google’s customers; we’re Google’s product that they sell to their customers. It’s a three-way relationship: us, the IT service provider, and the advertiser or data buyer. And as these noncustomer IT relationships proliferate, we’ll see more IT companies treating us as products. If I buy a Dell computer, then I’m obviously a Dell customer; but if I get a Dell computer for free in exchange for access to my life, it’s much less obvious whom I’m entering a business relationship with. Facebook’s continual ratcheting down of user privacy in order to satisfy its actual customers—the advertisers—and enhance its revenue is just a hint of what’s to come.

The third conceptual change I’ve termed depersonization: computing that removes the user, either partially or entirely. Expect to see more software agents: programs that do things on your behalf, such as prioritize your email based on your observed preferences or send you personalized sales announcements based on your past behavior. The “people who liked this also liked” feature on many retail websites is just the beginning. A website that alerts you if a plane ticket to your favorite destination drops below a certain price is simplistic but useful, and some sites already offer this functionality. Ten years won’t be enough time to solve the serious artificial intelligence problems required to fully realize intelligent agents, but the agents of that time will be both sophisticated and commonplace, and they’ll need less direct input from you.

Similarly, connecting objects to the Internet will soon be cheap enough to be viable. There’s already considerable research into Internet-enabled medical devices, smart power grids that communicate with smart phones, and networked automobiles. Nike sneakers can already communicate with your iPhone. Your phone already tells the network where you are. Internet-enabled appliances are already in limited use, but soon they will be the norm. Businesses will acquire smart HVAC units, smart elevators, and smart inventory systems. And, as short-range communications—like RFID and Bluetooth—become cheaper, everything becomes smart.

The “Internet of things” won’t need you to communicate. The smart appliances in your smart home will talk directly to the power company. Your smart car will talk to road sensors and, eventually, other cars. Your clothes will talk to your dry cleaner. Your phone will talk to vending machines; they already do in some countries. The ramifications of this are hard to imagine; it’s likely to be weirder and less orderly than the contemporary press describes it. But certainly smart objects will be talking about you, and you probably won’t have much control over what they’re saying.

One old trend: deperimeterization. Two current trends: consumerization and decentralization. Three future trends: deconcentration, decustomerization, and depersonization. That’s IT in 2020—it’s not under your control, it’s doing things without your knowledge and consent, and it’s not necessarily acting in your best interests. And this is how things will be when they’re working as they’re intended to work; I haven’t even started talking about the bad guys yet.

That’s because IT security in 2020 will be less about protecting you from traditional bad guys, and more about protecting corporate business models from you. Deperimeterization assumes everyone is untrusted until proven otherwise. Consumerization requires networks to assume all user devices are untrustworthy until proven otherwise. Decentralization and deconcentration won’t work if you’re able to hack the devices to run unauthorized software or access unauthorized data. Decustomerization won’t be viable unless you’re unable to bypass the ads, or whatever the vendor uses to monetize you. And depersonization requires the autonomous devices to be, well, autonomous.

In 2020—10 years from now—Moore’s Law predicts that computers will be 100 times more powerful. That’ll change things in ways we can’t know, but we do know that human nature never changes. Cory Doctorow rightly pointed out that all complex ecosystems have parasites. Society’s traditional parasites are criminals, but a broader definition makes more sense here. As we users lose control of those systems and IT providers gain control for their own purposes, the definition of “parasite” will shift. Whether they’re criminals trying to drain your bank account, movie watchers trying to bypass whatever copy protection studios are using to protect their profits, or Facebook users trying to use the service without giving up their privacy or being forced to watch ads, parasites will continue to try to take advantage of IT systems. They’ll exist, just as they always have existed, and—like today—security is going to have a hard time keeping up with them.

Welcome to the future. Companies will use technical security measures, backed up by legal security measures, to protect their business models. And unless you’re a model user, the parasite will be you.

My only real complaint with the essay is that I used “decentralization” in a nonstandard manner, and didn’t explain it well. I meant that our personal data will become decentralized; instead of it all being on our own computers, it will be on the computers of various cloud providers. But that causes a massive centralization of all of our data. I should have explicitly called out the risks of that.

Otherwise, I’m happy with what I wrote ten years ago.

Posted on February 7, 2020 at 12:50 PM • 20 Comments

New Ransomware Targets Industrial Control Systems

EKANS is a new ransomware that targets industrial control systems:

But EKANS also uses another trick to ratchet up the pain: It’s designed to terminate 64 different software processes on victim computers, including many that are specific to industrial control systems. That allows it to then encrypt the data that those control system programs interact with. While crude compared to other malware purpose-built for industrial sabotage, that targeting can nonetheless break the software used to monitor infrastructure, like an oil firm’s pipelines or a factory’s robots. That could have potentially dangerous consequences, like preventing staff from remotely monitoring or controlling the equipment’s operation.

EKANS is actually the second ransomware to hit industrial control systems. According to Dragos, another ransomware strain known as Megacortex that first appeared last spring included all of the same industrial control system process-killing features, and may in fact be a predecessor to EKANS developed by the same hackers. But because Megacortex also terminated hundreds of other processes, its industrial-control-system-targeted features went largely overlooked.

Speculation is that this is criminal in origin, and not the work of a government.

It’s also the first malware that is named after a Pokémon character.

Posted on February 7, 2020 at 9:42 AM • 13 Comments

A New Clue for the Kryptos Sculpture

Jim Sanborn, who designed the Kryptos sculpture in a CIA courtyard, has released another clue to the still-unsolved part 4. I think he’s getting tired of waiting.

Did we mention Mr. Sanborn is 74?

Holding on to one of the world’s most enticing secrets can be stressful. Some would-be codebreakers have appeared at his home.

Many felt they had solved the puzzle, and wanted to check with Mr. Sanborn. Sometimes forcefully. Sometimes, in person.

Elonka Dunin, a game developer and consultant who has created a rich page of background information on the sculpture and oversees the best known online community of thousands of Kryptos fans, said that some who contact her (sometimes also at home) are obsessive and appear to have tipped into mental illness. “I am always gentle to them and do my best to listen to them,” she said.

Mr. Sanborn has set up systems to allow people to check their proposed solutions without having to contact him directly. The most recent incarnation is an email-based process with a fee of $50 to submit a potential solution. He receives regular inquiries, so far none of them successful.

The ongoing process is exhausting, he said, adding “It’s not something I thought I would be doing 30 years on.”

Another news article.

EDITED TO ADD (2/13): Another article.

Posted on February 6, 2020 at 6:14 AM • 16 Comments

New Research on the Adtech Industry

The Norwegian Consumer Council has published an extensive report about how the adtech industry violates consumer privacy. At the same time, it is filing three legal complaints against six companies in this space. From a Twitter summary:

1. [thread] We are filing legal complaints against six companies based on our research, revealing systematic breaches to privacy, by shadowy #OutOfControl #adtech companies gathering & sharing heaps of personal data. https://forbrukerradet.no/out-of-control/#GDPR… #privacy

2. We observed how ten apps transmitted user data to at least 135 different third parties involved in advertising and/or behavioural profiling, exposing (yet again) a vast network of companies monetizing user data and using it for their own purposes.

3. Dating app @Grindr shared detailed user data with a large number of third parties. Data included the fact that you are using the app (clear indication of sexual orientation), IP address (personal data), Advertising ID, GPS location (very revealing), age, and gender.

From a news article:

The researchers also reported that the OkCupid app sent a user’s ethnicity and answers to personal profile questions—like “Have you used psychedelic drugs?”—to a firm that helps companies tailor marketing messages to users. The Times found that the OkCupid site had recently posted a list of more than 300 advertising and analytics “partners” with which it may share users’ information.

This is really good research exposing the inner workings of a very secretive industry.

Posted on February 4, 2020 at 6:21 AM • 20 Comments

Attacking Driverless Cars with Projected Images

Interesting research—“Phantom Attacks Against Advanced Driving Assistance Systems”:

Abstract: The absence of deployed vehicular communication systems, which prevents the advanced driving assistance systems (ADASs) and autopilots of semi/fully autonomous cars from validating their virtual perception regarding the physical environment surrounding the car with a third party, has been exploited in various attacks suggested by researchers. Since the application of these attacks comes with a cost (exposure of the attacker’s identity), the delicate exposure vs. application balance has held, and attacks of this kind have not yet been encountered in the wild. In this paper, we investigate a new perceptual challenge that causes the ADASs and autopilots of semi/fully autonomous cars to consider depthless objects (phantoms) as real. We show how attackers can exploit this perceptual challenge to apply phantom attacks and change the abovementioned balance, without the need to physically approach the attack scene, by projecting a phantom via a drone equipped with a portable projector or by presenting a phantom on a hacked digital billboard that faces the Internet and is located near roads. We show that the car industry has not considered this type of attack by demonstrating the attack on today’s most advanced ADAS and autopilot technologies: Mobileye 630 PRO and the Tesla Model X, HW 2.5; our experiments show that when presented with various phantoms, a car’s ADAS or autopilot considers the phantoms as real objects, causing these systems to trigger the brakes, steer into the lane of oncoming traffic, and issue notifications about fake road signs. In order to mitigate this attack, we present a model that analyzes a detected object’s context, surface, and reflected light, which is capable of detecting phantoms with 0.99 AUC. Finally, we explain why the deployment of vehicular communication systems might reduce attackers’ opportunities to apply phantom attacks but won’t eliminate them.

The paper will be presented at CyberTech at the end of the month.

Posted on February 3, 2020 at 6:24 AM • 22 Comments
