Google Stops Collecting Location Data from Maps
Google Maps now stores location data locally on your device, meaning that Google no longer has that data to turn over to the police.
Google Maps now stores location data locally on your device, meaning that Google no longer has that data to turn over to the police.
It’s squid parts from college dissections, so it’s not a volume operation.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Ben Rothke chose A Hacker’s Mind as “the best information security book of 2023.”
Interesting attack on a LLM:
In Writer, users can enter a ChatGPT-like session to edit or create their documents. In this chat session, the LLM can retrieve information from sources on the web to assist users in creation of their documents. We show that attackers can prepare websites that, when a user adds them as a source, manipulate the LLM into sending private information to the attacker or perform other malicious activities.
The data theft can include documents the user has uploaded, their chat history or potentially specific private information the chat model can convince the user to divulge at the attacker’s behest.
The Solntsepek group has taken credit for the attack. They’re linked to the Russian military, so it’s unclear whether the attack was government directed or freelance.
This is one of the most significant cyberattacks since Russia invaded in February 2022.
There’s a rumor flying around the Internet that OpenAI is training foundation models on your Dropbox documents.
Here’s CNBC. Here’s Boing Boing. Some articles are more nuanced, but there’s still a lot of confusion.
It seems not to be true. Dropbox isn’t sharing all of your documents with OpenAI. But here’s the problem: we don’t trust OpenAI. We don’t trust tech corporations. And—to be fair—corporations in general. We have no reason to.
Simon Willison nails it in a tweet:
“OpenAI are training on every piece of data they see, even when they say they aren’t” is the new “Facebook are showing you ads based on overhearing everything you say through your phone’s microphone.”
Willison expands this in a blog post, which I strongly recommend reading in its entirety. His point is that these companies have lost our trust:
Trust is really important. Companies lying about what they do with your privacy is a very serious allegation.
A society where big companies tell blatant lies about how they are handling our data—and get away with it without consequences—is a very unhealthy society.
A key role of government is to prevent this from happening. If OpenAI are training on data that they said they wouldn’t train on, or if Facebook are spying on us through our phone’s microphones, they should be hauled in front of regulators and/or sued into the ground.
If we believe that they are doing this without consequence, and have been getting away with it for years, our intolerance for corporate misbehavior becomes a victim as well. We risk letting companies get away with real misconduct because we incorrectly believed in conspiracy theories.
Privacy is important, and very easily misunderstood. People both overestimate and underestimate what companies are doing, and what’s possible. This isn’t helped by the fact that AI technology means the scope of what’s possible is changing at a rate that’s hard to appreciate even if you’re deeply aware of the space.
If we want to protect our privacy, we need to understand what’s going on. More importantly, we need to be able to trust companies to honestly and clearly explain what they are doing with our data.
On a personal level we risk losing out on useful tools. How many people cancelled their Dropbox accounts in the last 48 hours? How many more turned off that AI toggle, ruling out ever evaluating if those features were useful for them or not?
And while Dropbox is not sending your data to OpenAI today, it could do so tomorrow with a simple change of its terms of service. So could your bank, or credit card company, your phone company, or any other company that owns your data. Any of the tens of thousands of data brokers could be sending your data to train AI models right now, without your knowledge or consent. (At least, in the US. Hooray for the EU and GDPR.)
Or, as Thomas Claburn wrote:
“Your info won’t be harvested for training” is the new “Your private chatter won’t be used for ads.”
These foundation models want our data. The corporations that have our data want the money. It’s only a matter of time, unless we get serious government privacy regulation.
More unconstrained surveillance:
Lawmakers noted the pharmacies’ policies for releasing medical records in a letter dated Tuesday to the Department of Health and Human Services (HHS) Secretary Xavier Becerra. The letter—signed by Sen. Ron Wyden (D-Ore.), Rep. Pramila Jayapal (D-Wash.), and Rep. Sara Jacobs (D-Calif.)—said their investigation pulled information from briefings with eight big prescription drug suppliers.
They include the seven largest pharmacy chains in the country: CVS Health, Walgreens Boots Alliance, Cigna, Optum Rx, Walmart Stores, Inc., The Kroger Company, and Rite Aid Corporation. The lawmakers also spoke with Amazon Pharmacy.
All eight of the pharmacies said they do not require law enforcement to have a warrant prior to sharing private and sensitive medical records, which can include the prescription drugs a person used or uses and their medical conditions. Instead, all the pharmacies hand over such information with nothing more than a subpoena, which can be issued by government agencies and does not require review or approval by a judge.
Three pharmacies—CVS Health, The Kroger Company, and Rite Aid Corporation—told lawmakers they didn’t even require their pharmacy staff to consult legal professionals before responding to law enforcement requests at pharmacy counters. According to the lawmakers, CVS, Kroger, and Rite Aid said that “their pharmacy staff face extreme pressure to immediately respond to law enforcement demands and, as such, the companies instruct their staff to process those requests in store.”
The rest of the pharmacies—Amazon, Cigna, Optum Rx, Walmart, and Walgreens Boots Alliance—at least require that law enforcement requests be reviewed by legal professionals before pharmacists respond. But, only Amazon said it had a policy of notifying customers of law enforcement demands for pharmacy records unless there were legal prohibitions to doing so, such as a gag order.
The Molinière Underwater Sculpture Park has pieces that are colored in part with squid ink.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
In 2016, I wrote about an Internet that affected the world in a direct, physical manner. It was connected to your smartphone. It had sensors like cameras and thermostats. It had actuators: Drones, autonomous cars. And it had smarts in the middle, using sensor data to figure out what to do and then actually do it. This was the Internet of Things (IoT).
The classical definition of a robot is something that senses, thinks, and acts—that’s today’s Internet. We’ve been building a world-sized robot without even realizing it.
In 2023, we upgraded the “thinking” part with large-language models (LLMs) like GPT. ChatGPT both surprised and amazed the world with its ability to understand human language and generate credible, on-topic, humanlike responses. But what these are really good at is interacting with systems formerly designed for humans. Their accuracy will get better, and they will be used to replace actual humans.
In 2024, we’re going to start connecting those LLMs and other AI systems to both sensors and actuators. In other words, they will be connected to the larger world, through APIs. They will receive direct inputs from our environment, in all the forms I thought about in 2016. And they will increasingly control our environment, through IoT devices and beyond.
It will start small: Summarizing emails and writing limited responses. Arguing with customer service—on chat—for service changes and refunds. Making travel reservations.
But these AIs will interact with the physical world as well, first controlling robots and then having those robots as part of them. Your AI-driven thermostat will turn the heat and air conditioning on based also on who’s in what room, their preferences, and where they are likely to go next. It will negotiate with the power company for the cheapest rates by scheduling usage of high-energy appliances or car recharging.
This is the easy stuff. The real changes will happen when these AIs group together in a larger intelligence: A vast network of power generation and power consumption with each building just a node, like an ant colony or a human army.
Future industrial-control systems will include traditional factory robots, as well as AI systems to schedule their operation. It will automatically order supplies, as well as coordinate final product shipping. The AI will manage its own finances, interacting with other systems in the banking world. It will call on humans as needed: to repair individual subsystems or to do things too specialized for the robots.
Consider driverless cars. Individual vehicles have sensors, of course, but they also make use of sensors embedded in the roads and on poles. The real processing is done in the cloud, by a centralized system that is piloting all the vehicles. This allows individual cars to coordinate their movement for more efficiency: braking in synchronization, for example.
These are robots, but not the sort familiar from movies and television. We think of robots as discrete metal objects, with sensors and actuators on their surface, and processing logic inside. But our new robots are different. Their sensors and actuators are distributed in the environment. Their processing is somewhere else. They’re a network of individual units that become a robot only in aggregate.
This turns our notion of security on its head. If massive, decentralized AIs run everything, then who controls those AIs matters a lot. It’s as if all the executive assistants or lawyers in an industry worked for the same agency. An AI that is both trusted and trustworthy will become a critical requirement.
This future requires us to see ourselves less as individuals, and more as parts of larger systems. It’s AI as nature, as Gaia—everything as one system. It’s a future more aligned with the Buddhist philosophy of interconnectedness than Western ideas of individuality. (And also with science-fiction dystopias, like Skynet from the Terminator movies.) It will require a rethinking of much of our assumptions about governance and economy. That’s not going to happen soon, but in 2024 we will see the first steps along that path.
This essay previously appeared in Wired.
Sidebar photo of Bruce Schneier by Joe MacInnis.