Upcoming Speaking Engagements
This is a current list of where and when I am scheduled to speak:
- I’m speaking at the AI Summit New York on December 6, 2023.
The list is maintained on this page.
Sad story of Tokelau, and how its top-level domain “became the unwitting host to the dark underworld by providing a never-ending supply of domain names that could be weaponized against internet users. Scammers began using .tk websites to do everything from harvesting passwords and payment information to displaying pop-up ads or delivering malware.”
Artificial intelligence will change so many aspects of society, largely in ways that we cannot conceive of yet. Democracy, and the systems of governance that surround it, will be no exception. In this short essay, I want to move beyond the “AI-generated disinformation” trope and speculate on some of the ways AI will change how democracy functions—in both large and small ways.
When I survey how artificial intelligence might upend different aspects of modern society, democracy included, I look at four different dimensions of change: speed, scale, scope, and sophistication. Look for places where changes in degree result in changes of kind. Those are where the societal upheavals will happen.
Some items on my list are still speculative, but none require science-fictional levels of technological advance. And we can see the first stages of many of them today. When reading about the successes and failures of AI systems, it’s important to differentiate between the fundamental limitations of AI as a technology and the practical limitations of AI systems in the fall of 2023. Advances are happening quickly, and the impossible is becoming routine. We don’t know how long this will continue, but my bet is on continued major technological advances in the coming years. Which means it’s going to be a wild ride.
So, here’s my list:
When I teach AI policy at HKS, I stress the importance of separating the specific AI chatbot technologies of November 2023 from AI’s technological possibilities in general. Some of the items on my list will soon be possible; others will remain fiction for many years. Similarly, our acceptance of these technologies will change. Items on the list that we would never accept today might feel routine in a few years. A judgeless courtroom seems crazy today, but so did a driverless car a few years ago. Don’t underestimate our ability to normalize new technologies. My bet is that we’re in for a wild ride.
This essay previously appeared on the Harvard Kennedy School Ash Center’s website.
Really interesting article.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Selling miniature replicas to unsuspecting shoppers:
Online marketplaces sell tiny pink cowboy hats. They also sell miniature pencil sharpeners, palm-size kitchen utensils, scaled-down books and camping chairs so small they evoke the Stonehenge scene in “This Is Spinal Tap.” Many of the minuscule objects aren’t clearly advertised.
[…]
But there is no doubt some online sellers deliberately trick customers into buying smaller and often cheaper-to-produce items, Witcher said. Common tactics include displaying products against a white background rather than in room sets or on models, or photographing items with a perspective that makes them appear bigger than they really are. Dimensions can be hidden deep in the product description, or not included at all.
In those instances, the duped consumer “may say, well, it’s only $1, $2, maybe $3—what’s the harm?” Witcher said. When the item arrives the shopper may be confused, amused or frustrated, but unlikely to complain or demand a refund.
“When you aggregate that to these companies who are selling hundreds of thousands, maybe millions of these items over time, that adds up to a nice chunk of change,” Witcher said. “It’s finding a loophole in how society works and making money off of it.”
Defrauding a lot of people out of a small amount each can be a very successful way of making money.
This is an excerpt from a longer paper. You can read the whole thing (complete with sidebars and illustrations) here.
Our message is simple: it is possible to get the best of both worlds. We can and should get the benefits of the cloud while taking security back into our own hands. Here we outline a strategy for doing that.
In the last few years, a slew of ideas old and new have converged to reveal a path out of this morass, but they haven’t been widely recognized, combined, or used. These ideas, which we’ll refer to in the aggregate as “decoupling,” allow us to rethink both security and privacy.
Here’s the gist. The less someone knows, the less they can put you and your data at risk. In security, this is known as the principle of least privilege. The decoupling principle applies that idea to cloud services by making sure systems know as little as possible while doing their jobs. It states that we gain security and privacy by separating private data that today is unnecessarily concentrated.
To unpack that a bit, consider the three primary modes for working with our data as we use cloud services: data in motion, data at rest, and data in use. We should decouple them all.
Our data is in motion as we exchange traffic with cloud services such as videoconferencing servers, remote file-storage systems, and other content-delivery networks. Our data at rest, while sometimes on individual devices, is usually stored or backed up in the cloud, governed by cloud provider services and policies. And many services use the cloud to do extensive processing on our data, sometimes without our consent or knowledge. Most services involve more than one of these modes.
To ensure that cloud services do not learn more than they should, and that a breach of one does not pose a fundamental threat to our data, we need two types of decoupling. The first is organizational decoupling: dividing private information among organizations such that none knows the totality of what is going on. The second is functional decoupling: splitting information among layers of software. Identifiers used to authenticate users, for example, should be kept separate from identifiers used to connect their devices to the network.
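To make the organizational side of this concrete, here is a minimal sketch in Python of 2-of-2 XOR secret sharing, splitting a piece of data between two noncolluding storage providers. It illustrates the principle only, not any system from the paper: each share on its own is indistinguishable from random noise, so neither provider learns anything about the data unless the two collude. A real deployment would also need integrity protection and key management.

    import secrets

    def split(data: bytes) -> tuple[bytes, bytes]:
        """Split data into two shares; each share alone reveals nothing."""
        share_a = secrets.token_bytes(len(data))  # a uniformly random pad
        share_b = bytes(p ^ d for p, d in zip(share_a, data))  # pad XOR data
        return share_a, share_b

    def combine(share_a: bytes, share_b: bytes) -> bytes:
        """Recover the original data by XORing the shares back together."""
        return bytes(x ^ y for x, y in zip(share_a, share_b))

    # Store share_a with one provider and share_b with another, independent
    # provider (both hypothetical here); fetch both to reconstruct.
    document = b"meeting notes: acquisition closes Friday"
    a, b = split(document)
    assert combine(a, b) == document

Functional decoupling works analogously within a single system: the layer that authenticates a user should hold a different identifier than the layer that connects the user’s device to the network, so no single layer can link the two.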
In designing decoupled systems, cloud providers should be considered potential threats, whether due to malice, negligence, or greed. To verify that decoupling has been done right, we can learn from how we think about encryption: you’ve encrypted properly if you’re comfortable sending your message with your adversary’s communications system. Similarly, you’ve decoupled properly if you’re comfortable using cloud services that have been split across a noncolluding group of adversaries.
This essay was written with Barath Raghavan, and previously appeared in IEEE Spectrum.
Gene Spafford wrote an essay reflecting on the Morris Worm of 1988—thirty-five years ago. His lessons from then are still applicable today.
The Flipper Zero is an incredibly versatile hacking device. Now it can be used to crash iPhones in its vicinity by sending them a never-ending stream of pop-ups.
These types of hacks have been possible for decades, but they required special equipment and a fair amount of expertise. The capabilities generally required expensive SDRs—short for software-defined radios—that, unlike traditional hardware-defined radios, use firmware and processors to digitally re-create radio signal transmissions and receptions. The $200 Flipper Zero isn’t an SDR in its own right, but as a software-controlled radio, it can do many of the same things at an affordable price and with a form factor that’s much more convenient than the previous generations of SDRs.
It’s not actually alive, but it twitches in response to soy sauce.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.