Upcoming Speaking Engagements
This is a current list of where and when I am scheduled to speak:
- I’m speaking (remotely) at the Sektor 3.0 Festival in Warsaw, Poland, May 21-22, 2025.
The list is maintained on this page.
Google has extended its Advanced Protection features to Android devices. It’s not for everybody, but something to be considered by high-risk users.
Wired article, behind a paywall.
The case is over:
A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.
I’m sure it’ll be appealed. Everything always is.
A Florida bill requiring encryption backdoors failed to pass.
The video is really amazing.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
A Chinese company has developed an AI-piloted submersible that can reach speeds “similar to a destroyer or a US Navy torpedo,” dive “up to 60 metres underwater,” and “remain static for more than a month, like the stealth capabilities of a nuclear submarine.” In case you’re worried about the military applications of this, you can relax because the company says that the submersible is “designated for civilian use” and can “launch research rockets.”
“Research rockets.” Sure.
Reporting on the rise of fake students enrolling in community college courses:
The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.
The article talks about the rise of this type of fraud, the difficulty of detecting it, and how it upends quite a bit of the class structure and learning community.
Slashdot thread.
Deepfakes are now mimicking heartbeats
In a nutshell
- Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats.
- The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges many existing detection tools, which may need significant redesigns to keep up with the evolving technology.
- To effectively identify high-quality deepfakes, researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy.
And the AI models will start mimicking that.
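The detection signal the research describes is essentially remote photoplethysmography: skin color fluctuates faintly with each heartbeat, and that fluctuation can be recovered from video. A minimal sketch of the idea, using a synthetic clip with an assumed 72 BPM pulse injected into the green channel (all numbers illustrative, not from any real detector):

```python
import numpy as np

def estimate_heart_rate(frames, fps):
    """Estimate pulse (BPM) from the mean green-channel intensity per frame."""
    signal = np.array([f[..., 1].mean() for f in frames])  # green channel varies most with blood flow
    signal = signal - signal.mean()                        # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to a plausible human pulse band: 0.7-4 Hz (42-240 BPM).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Synthetic 10-second "face video" at 30 fps with a 72 BPM pulse.
fps, seconds, bpm = 30, 10, 72
t = np.arange(fps * seconds) / fps
frames = [np.full((8, 8, 3), 128.0) for _ in t]
for frame, ts in zip(frames, t):
    frame[..., 1] += 0.5 * np.sin(2 * np.pi * (bpm / 60.0) * ts)

print(round(estimate_heart_rate(frames, fps)))  # recovers 72
```

The researchers' point is that a per-face heart rate like this now survives the deepfake pipeline, so a useful detector would instead compare how the pulse signal is distributed across facial regions.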
The small pyjama squid (Sepioloidea lineolata) produces toxic slime, “a rare example of a poisonous predatory mollusc.”
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while it’s still a nascent idea.
In 2019, I joined Inrupt, a company that is commercializing Tim Berners-Lee’s open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an “active wallet.” Now we’re calling it an “agentic wallet.”)
I talked about this a bit at the RSA Conference earlier this week, in my keynote talk about AI and trust. Any useful AI assistant is going to require a level of access—and therefore trust—that rivals what we currently give our email provider, social network, or smartphone.
This Active Wallet is an example of an AI assistant. It will combine personal information about you, transactional data that you are a party to, and general information about the world, and use all of that to answer questions, make predictions, and ultimately act on your behalf. We have demos of this running right now, at least in its early stages. Making it work is going to require an extraordinary amount of trust in the system, which in turn requires integrity. That’s why we’re building protections in from the beginning.
Visa is also thinking about this. It just announced a protocol that uses AI to help people make purchasing decisions.
I like Visa’s approach because it’s an AI-agnostic standard. I worry a lot about lock-in and monopolization of this space, so anything that lets people easily switch between AI models is good. And I like that Visa is working with Inrupt so that the data is decentralized as well. Here’s our announcement about its announcement:
This isn’t a new relationship—we’ve been working together for over two years. We’ve conducted a successful POC and now we’re standing up a sandbox inside Visa so merchants, financial institutions and LLM providers can test our Agentic Wallets alongside the rest of Visa’s suite of Intelligent Commerce APIs.
For that matter, we welcome any other company that wants to engage in the world of personal, consented Agentic Commerce to come work with us as well.
I joined Inrupt years ago because I thought that Solid could do for personal data what HTML did for published information. I liked that the protocol was an open standard, and that it distributed data instead of centralizing it. AI agents need decentralized data. “Wallet” is a good metaphor for personal data stores. I’m hoping this is another step towards adoption.