<h2>Remotely Stopping Polish Trains</h2>
<a href="https://www.schneier.com/blog/archives/2023/08/remotely-stopping-polish-trains.html"><strong>[2023.08.28]</strong></a> Turns out that it’s <a href="https://www.wired.com/story/poland-train-radio-stop-attack/">easy</a> to broadcast radio commands that force Polish trains to stop:
<blockquote>…the saboteurs appear to have sent simple so-called “radio-stop” commands via radio frequency to the trains they targeted. Because the trains use a radio system that lacks encryption or authentication for those commands, Olejnik says, anyone with as little as $30 of off-the-shelf radio equipment can broadcast the command to a Polish train—sending a series of three acoustic tones at a 150.100 megahertz frequency—and trigger their emergency stop function.
“It is three tonal messages sent consecutively. Once the radio equipment receives it, the locomotive goes to a halt,” Olejnik says, pointing to a document outlining trains’ different technical standards in the European Union that <a href="https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:32006D0860">describes the “radio-stop” command</a> used in the Polish system. In fact, Olejnik says that the ability to send the command has been described in Polish radio and train forums and on YouTube for years. “Everybody could do this. Even teenagers trolling. The frequencies are known. The tones are known. The equipment is cheap.”</blockquote>
Even so, this is being described as a cyberattack.
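The underlying problem is that the command carries no authentication at all: anything that produces the right tones on the right frequency is obeyed. Here is a minimal sketch of what the difference looks like, with hypothetical tone frequencies, key material, and message format—none of this is the actual radio-stop protocol:
<pre>
# Minimal sketch contrasting an unauthenticated tone command with an
# HMAC-authenticated one. Tone frequencies, key, and message format are
# hypothetical -- this is not the actual radio-stop protocol.
import hashlib
import hmac
import time

EXPECTED_TONES = (1000.0, 1500.0, 2000.0)   # hypothetical tone frequencies, Hz
TOLERANCE_HZ = 20.0

def unauthenticated_stop(received_tones) -> bool:
    """Anyone who can transmit three tones on the right carrier triggers this."""
    return len(received_tones) == len(EXPECTED_TONES) and all(
        TOLERANCE_HZ > abs(rx - want)
        for rx, want in zip(received_tones, EXPECTED_TONES))

SHARED_KEY = b"per-train-provisioned-secret"   # hypothetical key material

def authenticated_stop(message: bytes, tag: bytes, max_age_s: int = 5) -> bool:
    """Requires a secret key and a fresh timestamp, so the command can be
    neither forged nor replayed by someone who merely records radio traffic."""
    try:
        timestamp = int(message.split(b"|")[1])   # e.g. b"STOP|1695000000"
    except (IndexError, ValueError):
        return False
    if abs(time.time() - timestamp) > max_age_s:
        return False                              # stale or replayed
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
</pre>
Key distribution across an entire national fleet of locomotives and base stations is the hard part, which is one reason simple analog systems like this rarely get authentication retrofitted.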
<h2>Identity Theft from 1965 Uncovered through Face Recognition</h2>
<a href="https://www.schneier.com/blog/archives/2023/08/identity-theft-from-1965-uncovered-through-face-recognition.html"><strong>[2023.08.29]</strong></a> Interesting <a href="https://apnews.com/article/maine-brothers-assumed-identity-facial-recognition-technology-cf99404df550dcff9d20042b1f91dad2">story</a>:
<blockquote>Napoleon Gonzalez, of Etna, assumed the identity of his brother in 1965, a quarter century after his sibling’s death as an infant, and used the stolen identity to obtain Social Security benefits under both identities, multiple passports and state identification cards, law enforcement officials said.
A new investigation was launched in 2020 after facial identification software indicated Gonzalez’s face was on two state identification cards.
The facial recognition technology is used by the Maine Bureau of Motor Vehicles to ensure no one obtains multiple credentials or credentials under someone else’s name, said Emily Cook, spokesperson for the secretary of state’s office.</blockquote>
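The matching step behind that kind of system is conceptually simple: reduce each ID photo to a numerical embedding with a face-recognition model, then flag pairs of credentials whose embeddings are unusually close. A minimal sketch—the embedding model is assumed to exist already, and the threshold is hypothetical:
<pre>
# Minimal sketch of a duplicate-credential check: flag pairs of ID photos
# whose face embeddings are unusually similar. Assumes an embedding model
# (e.g., a pretrained face-recognition network) has already reduced each
# photo to a fixed-length vector; names and threshold are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_duplicates(embeddings: dict, threshold: float = 0.9):
    """Return pairs of credential IDs whose photos likely show the same face."""
    ids = list(embeddings)
    flagged = []
    for i, id_a in enumerate(ids):
        for id_b in ids[i + 1:]:
            if cosine_similarity(embeddings[id_a], embeddings[id_b]) >= threshold:
                flagged.append((id_a, id_b))
    return flagged  # candidates for human review, not automatic proof of fraud
</pre>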
<h2>When Apps Go Rogue</h2>
<a href="https://www.schneier.com/blog/archives/2023/08/when-apps-go-rogue.html"><strong>[2023.08.30]</strong></a> Interesting <a href="https://gizmodo.com/how-nightowl-for-mac-added-a-botnet-1850740785">story</a> of an Apple Macintosh app that went rogue. Basically, it was a good app until one particular update…when it went bad.
<blockquote>With more official macOS features added in 2021 that enabled the “Night Shift” dark mode, the NightOwl app was left forlorn and forgotten on many older Macs. Few of those supposed tens of thousands of users likely noticed when the app they ran in the background of their older Macs was bought by another company, nor when earlier this year that company silently updated the dark mode app so that it hijacked their machines in order to send their IP data through a server network of affected computers, AKA a botnet.</blockquote>
This is not an unusual story. Sometimes the apps are sold. Sometimes they’re orphaned, and then taken over by someone else.
<h2>Own Your Own Government Surveillance Van</h2>
<a href="https://www.schneier.com/blog/archives/2023/08/own-your-own-government-surveillance-van.html"><strong>[2023.08.31]</strong></a> A used government surveillance van is <a href="https://www.theautopian.com/a-used-car-dealership-in-chicago-is-selling-an-actual-government-surveillance-van-for-26795/">for sale</a> in Chicago:
<blockquote>So how was this van turned into a mobile spying center? Well, let’s start with how it has more LCD monitors than a Counterstrike LAN party. They can be used to monitor any of six different video inputs including a videoscope camera. A videoscope and a borescope are very similar as they’re both cameras on the ends of optical fibers, so the same tech you’d use to inspect cylinder walls is also useful for surveillance. Kind of cool, right? Multiple Sony DVD-based video recorders store footage captured by cameras, audio recorders by high-end equipment brand Marantz capture sounds, and time and date generators sync gathered media up for accurate analysis. Circling back around to audio, this van features seven different audio inputs including a body wire channel.</blockquote>
Only $26,795, but you can probably negotiate them down.
<h2>Spyware Vendor Hacked</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/spyware-vendor-hacked.html"><strong>[2023.09.01]</strong></a> A Brazilian spyware app vendor was <a href="https://techcrunch.com/2023/08/26/brazil-webdetetive-spyware-deleted/">hacked</a> by activists:
<blockquote>In an undated note seen by TechCrunch, the unnamed hackers described how they found and exploited several security vulnerabilities that allowed them to compromise WebDetetive’s servers and access its user databases. By exploiting other flaws in the spyware maker’s web dashboard—used by abusers to access the stolen phone data of their victims—the hackers said they enumerated and downloaded every dashboard record, including every customer’s email address.
The hackers said that dashboard access also allowed them to delete victim devices from the spyware network altogether, effectively severing the connection at the server level to prevent the device from uploading new data. “Which we definitely did. Because we could. Because #fuckstalkerware,” the hackers wrote in the note.
The note was included in a cache containing more than 1.5 gigabytes of data scraped from the spyware’s web dashboard. That data included information about each customer, such as the IP address they logged in from and their purchase history. The data also listed every device that each customer had compromised, which version of the spyware the phone was running, and the types of data that the spyware was collecting from the victim’s phone.</blockquote>
<h2>Inconsistencies in the Common Vulnerability Scoring System (CVSS)</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/inconsistencies-in-the-common-vulnerability-scoring-system-cvss.html"><strong>[2023.09.05]</strong></a> Interesting <a href="https://arxiv.org/abs/2308.15259">research</a>:
<blockquote>Shedding Light on CVSS Scoring Inconsistencies: A User-Centric Study on Evaluating Widespread Security Vulnerabilities
<b>Abstract:</b> The Common Vulnerability Scoring System (CVSS) is a popular method for evaluating the severity of vulnerabilities in vulnerability management. In the evaluation process, a numeric score between 0 and 10 is calculated, 10 being the most severe (critical) value. The goal of CVSS is to provide comparable scores across different evaluators. However, previous works indicate that CVSS might not reach this goal: If a vulnerability is evaluated by several analysts, their scores often differ. This raises the following questions: Are CVSS evaluations consistent? Which factors influence CVSS assessments? We systematically investigate these questions in an online survey with 196 CVSS users. We show that specific CVSS metrics are inconsistently evaluated for widespread vulnerability types, including Top 3 vulnerabilities from the ”2022 CWE Top 25 Most Dangerous Software Weaknesses” list. In a follow-up survey with 59 participants, we found that for the same vulnerabilities from the main study, 68% of these users gave different severity ratings. Our study reveals that most evaluators are aware of the problematic aspects of CVSS, but they still see CVSS as a useful tool for vulnerability assessment. Finally, we discuss possible reasons for inconsistent evaluations and provide recommendations on improving the consistency of scoring.</blockquote>
Here’s a <a href="https://www.cs1.tf.fau.de/research/human-factors-in-security-and-privacy-group/consistency-of-cvss">summary</a> of the research.
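The score itself is deterministic arithmetic once the metric values are chosen; the inconsistency the study documents comes from analysts picking different metric values, not from the formula. Here is a minimal sketch of the CVSS v3.1 base-score calculation (scope-unchanged case only), using the metric weights from the public specification:
<pre>
# Minimal sketch of the CVSS v3.1 base-score arithmetic (scope UNCHANGED only).
# Metric weights are from the public FIRST specification; the point is that
# the score is deterministic once analysts agree on the metric values.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability impact

def roundup(x: float) -> float:
    """Round up to one decimal place, as the specification defines it."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a) -> float:
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    return 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))

# Example: AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8 (Critical).
print(base_score("N", "L", "N", "N", "H", "H", "H"))
</pre>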
<h2>Cryptocurrency Startup Loses Encryption Key for Electronic Wallet</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/cryptocurrency-startup-loses-encryption-key-for-electronic-wallet.html"><strong>[2023.09.06]</strong></a> The cryptocurrency fintech startup Prime Trust <a href="https://www.404media.co/crypto-startup-prime-trust-files-for-bankruptcy-after-losing-password-to-38-9-million-crypto-wallet/">lost</a> the encryption key to its hardware wallet—and the recovery key—and therefore $38.9 million. It is now in bankruptcy.
I can’t understand why anyone thinks these technologies are a good idea.
<h2>The Hacker Tool to Get Personal Data from Credit Bureaus</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/the-hacker-tool-to-get-personal-data-from-credit-bureaus.html"><strong>[2023.09.07]</strong></a> The new site 404 Media has a <a href="https://www.404media.co/the-secret-weapon-hackers-can-use-to-dox-nearly-anyone-in-america-for-15-tlo-usinfosearch-transunion/">good article</a> on how hackers are cheaply getting personal information from credit bureaus:
<blockquote>This is the result of a secret weapon criminals are selling access to online that appears to tap into an especially powerful set of data: the target’s credit header. This is personal information that the credit bureaus Experian, Equifax, and TransUnion have on most adults in America via their credit cards. Through a complex web of agreements and purchases, that data trickles down from the credit bureaus to other companies who offer it to debt collectors, insurance companies, and law enforcement.
A 404 Media investigation has found that criminals have managed to tap into that data supply chain, in some cases by stealing former law enforcement officer’s identities, and are selling unfettered access to their criminal cohorts online. The tool 404 Media tested has also been used to gather information on high profile targets such as Elon Musk, Joe Rogan, and even President Joe Biden, seemingly without restriction. 404 Media verified that although not always sensitive, at least some of that data is accurate.</blockquote>
<h2>LLMs and Tool Use</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/ai-tool-use.html"><strong>[2023.09.08]</strong></a> Last March, just two weeks after <a href="https://www.wired.com/story/gpt-4-openai-will-make-chatgpt-smarter-but-wont-fix-its-flaws/">GPT-4 was released</a>, researchers at Microsoft quietly <a href="https://arxiv.org/abs/2303.16434" data-offer-url="https://arxiv.org/abs/2303.16434">announced</a> a plan to compile millions of APIs—tools that can do everything from ordering a pizza to solving physics equations to controlling the TV in your living room—into a compendium that would be made accessible to large language models (LLMs). This was just one milestone in the race across industry and academia to find the <a href="https://arxiv.org/abs/2305.11554" data-offer-url="https://arxiv.org/abs/2305.11554">best</a> <a href="https://arxiv.org/abs/2303.09014" data-offer-url="https://arxiv.org/abs/2303.09014">ways</a> <a href="https://arxiv.org/abs/2304.08244" data-offer-url="https://arxiv.org/abs/2304.08244">to</a> <a href="https://arxiv.org/abs/2304.09842" data-offer-url="https://arxiv.org/abs/2304.09842">teach</a> <a href="https://arxiv.org/pdf/2307.16789.pdf" data-offer-url="https://arxiv.org/pdf/2307.16789.pdf">LLMs</a> how to manipulate tools, which would supercharge the potential of AI more than any of the impressive advancements we’ve seen to date.
The Microsoft project aims to teach AI how to use any and all digital tools in one fell swoop, a clever and efficient approach. Today, LLMs can do a pretty <a href="https://medium.com/vcrsoft/pizza-ordering-using-chatgpt-effe6e29cd58">good job</a> of recommending pizza toppings to you if you describe your dietary preferences and can draft <a href="https://medium.com/vcrsoft/chatgpt-simulation-of-pizza-ordering-bot-with-contingency-2nd-try-abae06ceaef0">dialog</a> that you could use when you call the restaurant. But most AI tools can’t place the order, not even online. In contrast, Google’s seven-year-old <a href="https://www.theverge.com/2021/4/14/22382754/google-duplex-web-assistant-online-food-orders-android">Assistant</a> tool can synthesize a voice on the telephone and fill out an online order form, but it can’t pick a restaurant or guess your order. By combining these capabilities, though, a tool-using AI could do it all. An LLM with access to your past conversations and tools like calorie calculators, a restaurant menu database, and your digital payment wallet could feasibly judge that you are trying to lose weight and want a low-calorie option, find the nearest restaurant with toppings you like, and place the delivery order. If it has access to your payment history, it could even guess at how generously you usually tip. If it has access to the sensors on your smartwatch or fitness tracker, it might be able to sense when your blood sugar is low and order the pie before you even realize you’re hungry.
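Mechanically, “tool use” is less exotic than it sounds: the model emits a structured request, ordinary code executes it, and the result is fed back into the conversation until the model produces a final answer. Here is a minimal sketch of that dispatch loop—the JSON format, tool names, and scripted call_llm() stand-in are hypothetical, not any particular vendor’s API:
<pre>
# Minimal sketch of LLM tool use: the model emits a structured "tool call,"
# ordinary code executes it, and the result goes back into the conversation.
# The JSON schema, tool names, and scripted call_llm() stand-in are hypothetical.
import json

def find_restaurants(cuisine: str, max_calories: int) -> list:
    # Stand-in for a real restaurant/menu API.
    return [{"name": "Example Pizzeria", "item": "veggie slice", "calories": 480}]

def place_order(restaurant: str, item: str) -> str:
    # Stand-in for a real ordering/payment API.
    return f"ordered '{item}' from {restaurant}"

TOOLS = {"find_restaurants": find_restaurants, "place_order": place_order}

_SCRIPT = iter([
    json.dumps({"tool": "find_restaurants",
                "args": {"cuisine": "pizza", "max_calories": 600}}),
    json.dumps({"tool": "place_order",
                "args": {"restaurant": "Example Pizzeria", "item": "veggie slice"}}),
    "Done: a low-calorie veggie slice is on its way.",
])

def call_llm(messages) -> str:
    # Hypothetical model call; scripted here so the sketch runs end to end.
    return next(_SCRIPT)

def run(user_request: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_steps):
        reply = call_llm(messages)       # model decides: answer, or call a tool
        try:
            call = json.loads(reply)     # e.g. {"tool": "place_order", "args": {...}}
        except json.JSONDecodeError:
            return reply                 # plain text means the model is done
        result = TOOLS[call["tool"]](**call["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "gave up"

print(run("Order me something low-calorie for dinner"))
</pre>
In a real system the model, not a script, chooses which tool to invoke at each step—which is exactly what makes the combinations of tools discussed below so hard to bound.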
Perhaps the most compelling potential applications of tool use are those that give AIs the ability to improve themselves. Suppose, for example, you asked a chatbot for help interpreting some facet of ancient Roman law that no one had thought to include examples of in the model’s original training. An LLM empowered to search academic databases and trigger its own training process could fine-tune its understanding of Roman law before answering. Access to specialized tools could even help a model like this better explain itself. While LLMs like GPT-4 already do a fairly good job of explaining their reasoning when asked, these explanations emerge from a “black box” and are vulnerable to errors and <a href="https://theconversation.com/both-humans-and-ai-hallucinate-but-not-in-the-same-way-205754">hallucinations</a>. But a tool-using LLM could <a href="https://openai.com/research/language-models-can-explain-neurons-in-language-models">dissect</a> its own internals, offering empirical assessments of its own reasoning and deterministic explanations of why it produced the answer it did.
If given access to tools for soliciting human feedback, a tool-using LLM could even generate specialized knowledge that isn’t yet captured on the web. It could post a question to Reddit or Quora or delegate a task to a human on Amazon’s Mechanical Turk. It could even seek out data about human preferences by doing survey research, either to provide an answer directly to you or to fine-tune its own training to be able to better answer questions in the future. Over time, tool-using AIs might start to look a lot like tool-using humans. An LLM can generate code much faster than any human programmer, so it can manipulate the systems and services of your computer with ease. It could also use your computer’s keyboard and cursor the way a person would, allowing it to use any program you do. And it could improve its own capabilities, using tools to ask questions, conduct research, and write code to incorporate into itself.
It’s easy to see how this kind of tool use comes with tremendous risks. Imagine an LLM being able to find someone’s phone number, call them and surreptitiously record their voice, guess what bank they use based on the largest providers in their area, impersonate them on a phone call with customer service to reset their password, and liquidate their account to make a donation to a political party. Each of these tasks invokes a simple tool—an Internet search, a voice synthesizer, a bank app—and the LLM scripts the sequence of actions using the tools.
We don’t yet know how successful any of these attempts will be. As remarkably fluent as LLMs are, they weren’t built specifically for the purpose of operating tools, and it remains to be seen how their early successes in tool use will translate to future use cases like the ones described here. As such, giving the current generative AI sudden access to millions of APIs—as Microsoft plans to—could be a little like letting a toddler loose in a weapons depot.
Companies like Microsoft should be particularly careful about granting AIs access to certain combinations of tools. Access to tools to look up information, make specialized calculations, and examine real-world sensors all carry a modicum of risk. The ability to transmit messages beyond the immediate user of the tool or to use APIs that manipulate physical objects like locks or machines carries much larger risks. Combining these categories of tools amplifies the risks of each.
The operators of the most advanced LLMs, such as OpenAI, should continue to proceed cautiously as they begin enabling tool use and should <a href="https://www.semafor.com/article/05/17/2023/openai-shut-down-dc-companys-pitch-to-apply-chatgpt-to-politics">restrict</a> uses of their products in sensitive domains such as politics, health care, banking, and defense. But it seems clear that these industry leaders have already largely lost their <a href="https://www.semianalysis.com/p/google-we-have-no-moat-and-neither">moat</a> around LLM technology—open source is catching up. <a href="https://about.fb.com/news/2023/07/llama-2/">Recognizing</a> this trend, Meta has taken an “If you can’t beat ’em, join ’em” approach and <a href="https://spectrum.ieee.org/open-source-llm-not-open">partially</a> embraced the role of providing open source LLM platforms.
On the policy front, national—and regional—AI prescriptions seem futile. <a href="https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence">Europe</a> is the only significant jurisdiction that has made meaningful progress on regulating the responsible use of AI, but it’s not entirely clear how regulators will <a href="https://www.brookings.edu/articles/key-enforcement-issues-of-the-ai-act-should-lead-eu-trilogue-debate/">enforce</a> it. And the US is playing catch-up and seems destined to be much more permissive in allowing even risks deemed “<a href="https://www.justsecurity.org/87344/the-perils-and-promise-of-ai-regulation/">unacceptable</a>” by the EU. Meanwhile, no government has invested in a “<a href="https://foreignpolicy.com/2023/06/12/ai-regulation-technology-us-china-eu-governance/">public option</a>” AI model that would offer an alternative to Big Tech that is more responsive and accountable to its citizens.
Regulators should consider what AIs are allowed to do autonomously, like whether they can be assigned property ownership or register a business. Perhaps more sensitive transactions should require a verified human in the loop, even at the cost of some added friction. Our legal system may be imperfect, but we largely know how to hold humans accountable for misdeeds; the trick is not to let them shunt their responsibilities to artificial third parties. We should continue pursuing AI-specific regulatory solutions while also recognizing that they are not sufficient on their own.
We must also prepare for the benign ways that tool-using AI might impact society. In the best-case scenario, such an LLM may rapidly accelerate a field like drug discovery, and the patent office and FDA should prepare for a dramatic increase in the number of legitimate drug candidates. We should reshape how we interact with our governments to take advantage of AI tools that give us all dramatically more potential to have our <a href="https://theconversation.com/ai-could-shore-up-democracy-heres-one-way-207278">voices</a> heard. And we should make sure that the economic benefits of superintelligent, labor-saving AI are <a href="https://jacobin.com/2023/03/ai-artificial-intelligence-art-chatgpt-jobs-capitalism">equitably</a> distributed.
We can debate whether LLMs are truly intelligent or conscious, or have agency, but AIs will become increasingly capable tool users either way. Some things are greater than the sum of their parts. An AI with the ability to manipulate and interact with even simple tools will become vastly more powerful than the tools themselves. Let’s be sure we’re ready for them.
This essay was written with Nathan Sanders, and <a href="https://www.wired.com/story/does-chatgpt-make-you-nervous-try-chatgpt-with-a-hammer/">previously appeared</a> on Wired.com.
<h2>On Robots Killing People</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/on-robots-killing-people.html"><strong>[2023.09.11]</strong></a> The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. <a href="https://news.google.com/newspapers?id=7KMyAAAAIBAJ&sjid=Bu8FAAAAIBAJ&pg=3301,87702&dq=flat-rock+williams+robot&hl=en">And so twenty-five-year-old Robert Williams</a> was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was <a href="https://www.wired.com/2010/01/0125robot-kills-worker/">reportedly</a> the first incident in which a robot killed a human; many more would follow.
At Kawasaki Heavy Industries in 1981, Kenji Urada died in similar <a href="https://www.theguardian.com/theguardian/2014/dec/09/robot-kills-factory-worker">circumstances</a>. A malfunctioning robot he went to inspect killed him when he obstructed its path, according to Gabriel Hallevy in his 2013 book, <a href="https://tertulia.com/book/when-robots-kill-artificial-intelligence-under-criminal-law-gabriel-hallevy/9781555538057?affiliate_id=atl-347"><em>When Robots Kill: Artificial Intelligence Under Criminal Law</em></a>. As Hallevy puts it, the robot simply determined that “the most efficient way to eliminate the threat was to push the worker into an adjacent machine.” From 1992 to 2017, workplace robots were responsible for <a href="https://pubmed.ncbi.nlm.nih.gov/36850052/">41 recorded deaths in the United States</a>—and that’s likely an <a href="https://read.dukeupress.edu/demography/article/59/2/607/294500/Death-by-Robots-Automation-and-Working-Age">underestimate</a>, especially when you consider knock-on effects from automation, such as job loss. A robotic anti-aircraft cannon <a href="https://www.wired.com/2007/10/robot-cannon-ki/">killed</a> nine South African soldiers in 2007 when a possible software failure led the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in killing <a href="https://www.dailymail.co.uk/news/article-6356917/Newcastle-patient-died-experts-helping-surgeons-use-robot-left-inquest-hears.html">Stephen Pettitt</a> during a routine operation that had occurred a few years earlier.
You get the picture. Robots—“intelligent” and not—have been killing people for decades. And the development of more advanced artificial intelligence has only increased the potential for machines to cause harm. Self-driving cars are <a href="https://www.theatlantic.com/technology/archive/2023/08/driverless-taxis-waymo-cruise/675161/">already</a> on American streets, and robotic “dogs” are being used by <a href="https://www.theverge.com/2023/4/11/23679297/nypd-robot-dog-spot-surveillance-boston-dynamics">law enforcement</a>. Computerized systems are being given the capabilities to <a href="https://www.wired.com/story/does-chatgpt-make-you-nervous-try-chatgpt-with-a-hammer/">use tools</a>, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent <a href="https://www.theatlantic.com/technology/archive/2023/03/open-ai-gpt4-chatbot-technology-power/673421/">program</a> when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.
Historically, major disasters have needed to occur to spur regulation—the types of disasters we would ideally foresee and avoid in today’s AI paradigm. The 1905 <a href="https://guides.loc.gov/chronicling-america-grover-shoe-factory-disaster/introduction">Grover Shoe Factory disaster</a> led to regulations governing the safe operation of steam boilers. At the time, companies claimed that large steam-automation machines were too complex to rush safety regulations. This, of course, led to overlooked safety flaws and escalating disasters. It wasn’t until the <a href="https://www.asme.org/about-asme/engineering-history">American Society of Mechanical Engineers</a> demanded risk analysis and transparency that dangers from these huge tanks of boiling water, once considered mystifying, were made easily understandable. The 1911 <a href="https://guides.loc.gov/chronicling-america-triangle-shirtwaist-factory-fire">Triangle Shirtwaist Factory fire</a> led to regulations on sprinkler systems and emergency exits. And the <a href="https://www.nist.gov/nist-time-capsule/nist-beneath-waves/nist-reveals-how-tiny-rivets-doomed-titanic-vessel">preventable</a> 1912 sinking of the Titanic resulted in new regulations on lifeboats, safety audits, and on-ship radios.
Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which required new developments in both law and technology. Starting with the Air Commerce Act of 1926, Congress recognized that the integration of aerospace tech into people’s lives and our economy demanded the highest scrutiny. Today, every airline crash is closely examined, motivating new technologies and procedures.
Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for many decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has been instrumental in developing and updating specific robot-safety standards since its founding in 1974. Those standards, with obscure names such as R15.06 and ISO 10218, emphasize inherent safe design, protective measures, and rigorous risk assessments for industrial robots.
But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot’s actions result in harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to proper expert scrutiny.
AI and robotics companies don’t want this to happen. OpenAI, for example, has reportedly fought to “water down” safety regulations and reduce AI-quality requirements. According to an <a href="https://time.com/6288245/openai-eu-lobbying-ai-act/">article</a> in <em>Time</em>, it lobbied European Union officials against classifying models like ChatGPT as “high risk” which would have brought “stringent legal requirements including transparency, traceability, and human oversight.” The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use—a logical twist akin to the Titanic owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a “general purpose” vessel that also could sail in warm waters where there were no icebergs and people could float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it <a href="https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/">has said</a> that “achieving our mission requires that we work to mitigate both current and longer-term risks,” and that it is working toward that goal by “collaborating with policymakers, researchers and users.”)
Large corporations have a tendency to develop computer technologies to self-servingly shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on corporations themselves, or that security baselines stifle innovation. We’ve heard it all before, and we should be extremely skeptical of such claims. Today’s AI-related robot deaths are no different from the robot accidents of the past. Those industrial robots malfunctioned, and human operators trying to assist were killed in unexpected ways. Since the first-known death resulting from the feature in <a href="https://www.thesun.co.uk/news/1787336/shocking-dashcam-footage-shows-horror-tesla-crash-that-killed-driver-while-car-was-on-autopilot/">January 2016</a>, Tesla’s Autopilot has been implicated in more than 40 deaths according to <a href="http://tesladeaths.com">official report estimates</a>. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by <a href="https://gothamist.com/news/nj-driver-says-his-tesla-got-confused-on-autopilot-and-crashed-into-signs">misreading road markings</a>, suddenly <a href="https://www.mlive.com/news/muskegon/2023/05/michigan-woman-hospitalized-after-self-driving-tesla-crashes-into-tree.html">veering into other cars or trees</a>, crashing into <a href="https://arstechnica.com/cars/2020/08/movie-watching-tesla-driver-charged-after-autopilot-hits-cop-car/">well-marked service vehicles</a>, or <a href="https://www.sacbee.com/news/local/article275584756.html">ignoring red lights</a>, <a href="https://www.mercurynews.com/2023/02/16/tesla-full-self-driving-can-let-cars-speed-run-stop-signs-and-cause-crashes-feds/">stop signs</a>, and <a href="https://jalopnik.com/not-amazing-tesla-using-fsd-blew-through-crosswalk-1850450480">crosswalks</a>. We’re concerned that AI-controlled robots already are moving beyond accidental killing in the name of efficiency and “deciding” to kill someone in order to achieve opaque and remotely controlled objectives.
As we move into a future where robots are becoming integral to our lives, we can’t forget that safety is a crucial part of innovation. True technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can enhance safety protocols, rectify design flaws, and prevent further unnecessary loss of life.
For example, the UK government already <a href="https://www.adalovelaceinstitute.org/report/regulating-ai-in-the-uk/">sets out</a> statements that safety matters. Lawmakers must reach further back in history to become more future-focused on what we must demand right now: modeling threats, calculating potential scenarios, enabling technical blueprints, and ensuring responsible engineering for building within parameters that protect society at large. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.
This essay was written with Davi Ottenheimer, and <a href="https://www.theatlantic.com/technology/archive/2023/09/robot-safety-standards-regulation-human-fatalities/675231/">previously appeared</a> on Atlantic.com.
<h2>Cars Have Terrible Data Privacy</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/cars-have-terrible-data-privacy.html"><strong>[2023.09.12]</strong></a> A new Mozilla Foundation <a href="https://foundation.mozilla.org/en/privacynotincluded/articles/its-official-cars-are-the-worst-product-category-we-have-ever-reviewed-for-privacy/">report</a> concludes that cars, all of them, have terrible data privacy.
<blockquote>All 25 car brands we researched earned our *Privacy Not Included warning label—making cars the official worst category of products for privacy that we have ever reviewed.</blockquote>
There are a lot of details in the report. They’re all bad.
BoingBoing <a href="https://boingboing.net/2023/09/06/new-vehicles-a-privacy-nightmare-where-you-consent-to-carmakers-collecting-data-on-behavioral-biological-even-sexual-activity.html">post</a>.
<h2>Zero-Click Exploit in iPhones</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/zero-click-exploit-in-iphones.html"><strong>[2023.09.13]</strong></a> Make sure you <a href="https://www.bleepingcomputer.com/news/security/apple-zero-click-imessage-exploit-used-to-infect-iphones-with-spyware/">update your iPhones</a>:
<blockquote>Citizen Lab says two zero-days fixed by Apple today in emergency security updates were actively abused as part of a zero-click exploit chain (dubbed BLASTPASS) to deploy NSO Group’s Pegasus commercial spyware onto fully patched iPhones.
The two bugs, <a href="https://www.bleepingcomputer.com/news/apple/apple-discloses-2-new-zero-days-exploited-to-attack-iphones-macs/">tracked as CVE-2023-41064 and CVE-2023-41061</a>, allowed the attackers to infect a fully-patched iPhone running iOS 16.6 and belonging to a Washington DC-based civil society organization via PassKit attachments containing malicious images.
“We refer to the exploit chain as BLASTPASS. The exploit chain was capable of compromising iPhones running the latest version of iOS (16.6) without any interaction from the victim,” Citizen Lab <a href="https://citizenlab.ca/2023/09/blastpass-nso-group-iphone-zero-click-zero-day-exploit-captured-in-the-wild/">said</a>.
“The exploit involved PassKit attachments containing malicious images sent from an attacker iMessage account to the victim.”</blockquote>
<h2>Fake Signal and Telegram Apps in the Google Play Store</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/fake-signal-and-telegram-apps-in-the-google-play-store.html"><strong>[2023.09.14]</strong></a> Google <a href="https://arstechnica.com/security/2023/08/google-removes-fake-signal-and-telegram-apps-hosted-on-play/">removed</a> fake Signal and Telegram apps from its Play store.
<blockquote>An app with the name Signal Plus Messenger was available on Play for nine months and had been downloaded from Play roughly 100 times before Google took it down last April after being tipped off by security firm ESET. It was also available in the Samsung app store and on signalplus[.]org, a dedicated website mimicking the official Signal.org. An app calling itself FlyGram, meanwhile, was created by the same threat actor and was available through the same three channels. Google removed it from Play in 2021. Both apps remain available in the Samsung store.
Both apps were built on open source code available from Signal and Telegram. Interwoven into that code was an espionage tool tracked as BadBazaar. The Trojan has been linked to a China-aligned hacking group tracked as GREF. BadBazaar has been used previously to target Uyghurs and other Turkic ethnic minorities. The FlyGram malware was also shared in a Uyghur Telegram group, further aligning it to previous targeting by the BadBazaar malware family.
Signal Plus could monitor sent and received messages and contacts if people connected their infected device to their legitimate Signal number, as is normal when someone first installs Signal on their device. Doing so caused the malicious app to send a host of private information to the attacker, including the device IMEI number, phone number, MAC address, operator details, location data, Wi-Fi information, emails for Google accounts, contact list, and a PIN used to transfer texts in the event one was set up by the user.</blockquote>
This kind of thing is really scary.
<h2>Upcoming Speaking Engagements</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/upcoming-speaking-engagements-31.html"><strong>[2023.09.14]</strong></a> This is a current list of where and when I am scheduled to speak:
<ul><li>I’m speaking at <a href="https://swampup.jfrog.com/">swampUP 2023</a> in San Jose, California, on September 13, 2023 at 11:35 AM PT.</li></ul>
The list is maintained on <a href="https://www.schneier.com/events/">this page</a>.
<h2>On Technologies for Automatic Facial Recognition</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/on-technologies-for-automatic-facial-recognition.html"><strong>[2023.09.15]</strong></a> Interesting <a href="https://dnyuz.com/2023/09/09/the-technology-facebook-and-google-didnt-dare-release/">article</a> on technologies that will automatically identify people:
<blockquote>With technology like that on Mr. Leyvand’s head, Facebook could prevent users from ever forgetting a colleague’s name, give a reminder at a cocktail party that an acquaintance had kids to ask about or help find someone at a crowded conference. However, six years later, the company now known as Meta has not released a version of that product and Mr. Leyvand has departed for Apple to work on its Vision Pro augmented reality glasses.</blockquote>
The technology is here. Maybe the implementation is still dorky, but that will change. The social implications will be enormous.
<h2>LLM Summary of My Book <em>Beyond Fear</em></h2>
<a href="https://www.schneier.com/blog/archives/2023/09/llm-summary-of-my-book-beyond-fear.html"><strong>[2023.09.15]</strong></a> Claude (Anthropic’s LLM) was given this prompt: Please summarize the themes and arguments of Bruce Schneier’s book <em>Beyond Fear</em>. I’m particularly interested in a taxonomy of his ethical arguments—please expand on that. Then lay out the most salient criticisms of the book.
<blockquote>Here’s a brief summary of the key themes and arguments made in security expert Bruce Schneier’s book <em>Beyond Fear: Thinking Sensibly About Security in an Uncertain World</em>:
<ul><li>Effective security requires realistic risk assessment, not fear-driven responses. We need to rationally determine actual threats versus perceived ones.
<li>Security trade-offs need to be carefully balanced against costs to other values like privacy, freedom, and social/economic impacts.
<li>Human psychology and biases lead to exaggerated fears that politicians/marketers can exploit, resulting in wasteful and theatrical “security theater.”
<li>Security is a process that requires ongoing flexibility, not total prevention. We can’t anticipate every threat but can build resilience.</ul>
<ul><li>Analyze risks empirically using evidence, statistics, and probability rather than intuition. Avoid underestimating everyday risks while overestimating novel or sensational ones.
<li>Reject the temptation to implement oppressive security restrictions after tragic but unlikely events. Focus only on proportional responses.
<li>Where possible, favor openness and transparency over aggressive data collection or restrictions which erode civil liberties.
<li>Seek security policies and technologies that enhance society broadly rather than benefit special interests.</ul>
Schneier argues for rational, ethical approaches to security aimed at genuine protection while avoiding violations of human rights in the name of safety.
Here are some of the key ethical arguments Bruce Schneier makes in <em>Beyond Fear</em> regarding security policy and practices:
<ul><li>Utilitarian Ethics – Security measures should aim to provide the greatest societal benefit and well-being for the most people, not just special interests.
<li>Individual Autonomy – Restrictions should be minimized to avoid excessive infringements on civil liberties and individual freedoms. Surveillance creep risks violating autonomy.
<li>Privacy Rights – Pervasive monitoring and data collection erode privacy rights and dignity. Citizens are entitled to some degree of opacity.
<li>Proportional Responses – Reactive policies should weigh costs versus statistically likely benefits. Draconian rules beyond proportionality are unethical.
<li>Justice and Fairness – Security should not create racial, religious or other profiling biases that punish groups rather than conduct.
<li>Accountability and Transparency – Ethical security requires checks against excessive power and oversight for responsibility. Secrecy enables abuses.
<li>Honest Representation – Officials and pundits have an ethical duty not to stoke disproportionate public fears. Security theater manufactures consent.
<li>Integrity and Character – Leaders should have the wisdom and courage to make difficult trade-offs rather than exploit fears for authority and control.</ul>
Overall Schneier argues security policies grounded in ethics rather than fear promote human well-being while recognizing inherent trade-offs. Prudence, transparency, and proportionality are essential.
Here are some of the most salient criticisms made against the arguments put forth in Bruce Schneier’s book <em>Beyond Fear</em>:
<ul><li>Downplays real threats – Some argue Schneier dangerously downplays serious risks like terrorism and cybercrime by calling fears overblown.
<li>Too theoretical – His risk analysis approach may rely too much on statistics versus real-world uncertainties that defy probabilistic reasoning.
<li>No easy answers – While citing misguided policies, critics say Schneier provides few actionable solutions or concrete policy guidance.
<li>Conflicting values – His balancing of security, privacy and liberty involves inherent trade-offs and tough choices he gives little direction on.
<li>Politically naive – Suggesting special interests drive policy may under-appreciate legitimate national security concerns of elected officials.
<li>Hindsight bias – Assessing past overreactions easier than predicting future threats that justify caution today.
<li>No accounting for psychology – Factoring human irrationality into policy is essential, not dismissed as “security theater.”
<li>Technical focus – Focuses narrowly on certain technologies while ignoring broader organizational and sociological dimensions.</ul>
In summary, critics argue Schneier’s risk analysis orientation is too clinical, abstract, and devoid of political and human realities to transform concrete policy in the ways he intends. His framework may therefore lack sufficient nuance or guidance.</blockquote>
It’s interesting. The summary is pretty accurate, and so are the criticisms. The book was a product of my thinking twenty years ago, and my own thinking has evolved to address those criticisms.
Of course, this only works with older books that the LLM has ingested, and probably works better with books that have been written about.
<h2>Using Hacked LastPass Keys to Steal Cryptocurrency</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/using-hacked-lastpass-keys-to-steal-cryptocurrency.html"><strong>[2023.09.18]</strong></a> Remember last November, when hackers <a href="https://www.theverge.com/2022/12/22/23523322/lastpass-data-breach-cloud-encrypted-password-vault-hackers">broke into</a> the network for LastPass—a password database—and stole password vaults with both encrypted and plaintext data for over 25 million users?
Well, they’re now <a href="https://krebsonsecurity.com/2023/09/experts-fear-crooks-are-cracking-keys-stolen-in-lastpass-breach/">using that data</a> to break into crypto wallets and drain them: $35 million and counting, all going into a single wallet.
That’s a really profitable hack. (It’s also bad opsec. The hackers need to move and launder all that money quickly.)
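The mechanics are worth spelling out: once the attackers hold a copy of an encrypted vault, guessing the master password is an offline exercise limited only by hardware, the key-derivation work factor, and the password’s strength. A rough sketch, using a generic PBKDF2 construction for illustration rather than LastPass’s exact key-derivation parameters or vault format:
<pre>
# Rough sketch of offline cracking against a stolen, encrypted password vault.
# Generic PBKDF2 construction for illustration -- not LastPass's exact
# key-derivation parameters or vault format.
import hashlib

def vault_key(master_password: str, salt: str, iterations: int) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               salt.encode(), iterations)

def crack(encrypted_vault: bytes, salt: str, iterations: int,
          guesses, decrypts_ok) -> str:
    """Try candidate master passwords until one yields a key that decrypts
    the vault; decrypts_ok() stands in for decrypt-and-sanity-check.
    Everything happens offline, so no server can throttle or lock out
    the attacker."""
    for guess in guesses:
        key = vault_key(guess, salt, iterations)
        if decrypts_ok(encrypted_vault, key):
            return guess
    return ""
</pre>
Every extra order of magnitude in the iteration count multiplies the attacker’s cost, and a long random master password pushes the search out of reach entirely; weak passwords and low work factors do the opposite.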
Look, I know that online password databases are more convenient. But they’re also risky. This is why my <a href="https://www.schneier.com/academic/passsafe/">Password Safe</a> is local only. (I know this sounds like a commercial, but Password Safe is not a commercial product.)
<h2>Detecting AI-Generated Text</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/detecting-ai-generated-text.html"><strong>[2023.09.19]</strong></a> There are no reliable ways to distinguish text written by a human from text written by an large language model. OpenAI <a href="https://help.openai.com/en/articles/8313351-how-can-educators-respond-to-students-presenting-ai-generated-content-as-their-own">writes</a>:
<blockquote>Do AI detectors work?
<ul><li>In short, no. While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.
<li>Additionally, ChatGPT has no “knowledge” of what content could be AI-generated. It will sometimes make up responses to questions like “did you write this [essay]?” or “could this have been written by AI?” These responses are random and have no basis in fact.
<li>To elaborate on our research into the shortcomings of detectors, one of our key findings was that these tools sometimes suggest that human-written content was generated by AI.
<ul><li>When we at OpenAI tried to train an AI-generated content detector, we found that it labeled human-written text like Shakespeare and the Declaration of Independence as AI-generated.
<li>There were also indications that it could disproportionately impact students who had learned or were learning English as a second language and students whose writing was particularly formulaic or concise.</ul>
<li>Even if these tools could accurately identify AI-generated content (which they cannot yet), students can make small edits to evade detection.</ul></blockquote>
There is some good research in <a href="https://www.technologyreview.com/2023/08/29/1078620/google-deepmind-has-launched-a-watermarking-tool-for-ai-generated-images/">watermarking</a> LLM-generated text, but the watermarks are not generally robust.
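The most-studied scheme nudges the model’s sampling toward a pseudorandom “green list” of tokens at each step; a detector that knows the key then checks whether a text contains statistically too many green tokens. A toy sketch of the detection side—whitespace tokenization and the parameters here are simplifications:
<pre>
# Toy sketch of green-list watermark detection: generation softly favors
# tokens from a pseudorandom "green list" keyed on the previous token;
# detection counts green tokens and computes a z-score. Whitespace
# tokenization and all parameters here are simplifications.
import hashlib
import math

GAMMA = 0.5   # fraction of the vocabulary that is "green" at each step
KEY = b"hypothetical-watermark-key"

def is_green(prev_token: str, token: str) -> bool:
    digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
    return GAMMA > digest[0] / 255

def detect(text: str) -> float:
    """Return a z-score: large positive values suggest watermarked text."""
    tokens = text.split()
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    green = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return (green - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))
</pre>
Paraphrasing or even light editing moves tokens off their green lists and washes the signal out, which is a big part of why these watermarks aren’t robust in practice.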
I don’t think the detectors are going to win this arms race.
<h2>On the Cybersecurity Jobs Shortage</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/on-the-cybersecurity-jobs-shortage.html"><strong>[2023.09.20]</strong></a> In April, Cybersecurity Ventures <a href="https://apnews.com/press-release/ein-presswire-newsmatics/technology-steve-morgan-ein-presswire-newsmatics-2c99c00b8673966bde5eca81f6535320">reported</a> on extreme cybersecurity job shortage:
<blockquote>Global cybersecurity job vacancies grew by 350 percent, from one million openings in 2013 to 3.5 million in 2021, according to <a href="https://cybersecurityventures.com">Cybersecurity Ventures</a>. The number of unfilled jobs leveled off in 2022, and remains at 3.5 million in 2023, with more than 750,000 of those positions in the U.S. Industry efforts to source new talent and tackle burnout continues, but we predict that the disparity between demand and supply will remain through at least 2025.</blockquote>
The numbers never made sense to me, and Ben Rothke has dug in and <a href="https://brothke.medium.com/is-there-really-an-information-security-jobs-crisis-a492665f6823?sk=9dfae4d5614a4ad4681bbfb8e58a99dc">explained the reality</a>:
<blockquote>…there is not a shortage of security generalists, middle managers, and people who claim to be competent CISOs. Nor is there a shortage of thought leaders, advisors, or self-proclaimed cyber subject matter experts. What there is a shortage of are computer scientists, developers, engineers, and information security professionals who can code, understand technical security architecture, product security and application security specialists, analysts with threat hunting and incident response skills. And this is nothing that can be fixed by a newbie taking a six-month information security boot camp.
Most entry-level roles tend to be quite specific, focused on one part of the profession, and are not generalist roles. For example, hiring managers will want a network security engineer with knowledge of networks or an identity management analyst with experience in identity systems. They are not looking for someone interested in security.
In fact, security roles are often not considered entry-level at all. Hiring managers assume you have some other background, usually technical before you are ready for an entry-level security job. Without those specific skills, it is difficult for a candidate to break into the profession. Job seekers learn that entry-level often means at least two to three years of work experience in a related field.</blockquote>
That makes a lot more sense, and matches what I experience.
<h2>New Revelations from the Snowden Documents</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/new-revelations-from-the-snowden-documents.html"><strong>[2023.09.21]</strong></a> Jake Appelbaum’s PhD thesis contains several <a href="https://www.computerweekly.com/news/366552520/New-revelations-from-the-Snowden-archive-surface">new</a> <a href="https://www.electrospaces.net/2023/09/some-new-snippets-from-snowden-documents.html">revelations</a> from the classified NSA documents provided to journalists by Edward Snowden. Nothing major, but a few more tidbits.
Kind of amazing that it all happened ten years ago. At this point, those documents are more historical than anything else.
And it’s unclear who has those archives anymore. According to Appelbaum, <i>The Intercept</i> destroyed their copy.
I recently <a href="https://www.schneier.com/blog/archives/2023/06/snowden-ten-years-later.html">published</a> an essay about my experiences ten years ago.
<h2>Signal Will Leave the UK Rather Than Add a Backdoor</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/signal-will-leave-the-uk-rather-than-add-a-backdoor.html"><strong>[2023.09.26]</strong></a> Totally expected, but still <a href="https://techcrunch.com/2023/09/21/meredith-whittaker-reaffirms-that-signal-would-leave-u-k-if-forced-by-privacy-bill/">good to hear</a>:
<blockquote>Onstage at <a href="https://techcrunch.com/events/tc-disrupt-2023/">TechCrunch Disrupt 2023</a>, Meredith Whittaker, the president of the Signal Foundation, which maintains the nonprofit Signal messaging app, reaffirmed that Signal would leave the U.K. if the country’s recently passed Online Safety Bill forced Signal to build “backdoors” into its end-to-end encryption.
“We would leave the U.K. or any jurisdiction if it came down to the choice between backdooring our encryption and betraying the people who count on us for privacy, or leaving,” Whittaker said. “And that’s never not true.”</blockquote>
<h2>Critical Vulnerability in libwebp Library</h2>
<a href="https://www.schneier.com/blog/archives/2023/09/critical-vulnerability-in-libwebp-library.html"><strong>[2023.09.27]</strong></a> Both <a href="https://arstechnica.com/gadgets/2023/09/apple-patches-clickless-0-day-image-processing-vulnerability-in-ios-macos/">Apple</a> and <a href="https://chromereleases.googleblog.com/2023/09/stable-channel-update-for-desktop_11.html">Google</a> have recently reported critical vulnerabilities in their systems—iOS and Chrome, respectively—that are ultimately the result of the <a href="https://arstechnica.com/security/2023/09/incomplete-disclosures-by-apple-and-google-create-huge-blindspot-for-0-day-hunters/">same vulnerability</a> in the libwebp library:
<blockquote>On Thursday, researchers from security firm Rezillion published evidence that they said made it “highly likely” both indeed stemmed from the same bug, specifically in libwebp, the code library that apps, operating systems, and other code libraries incorporate to process WebP images.
Rather than Apple, Google, and Citizen Lab coordinating and accurately reporting the common origin of the vulnerability, they chose to use a separate CVE designation, the researchers said. The researchers concluded that “millions of different applications” would remain vulnerable until they, too, incorporated the libwebp fix. That, in turn, they said, was preventing automated systems that developers use to track known vulnerabilities in their offerings from detecting a critical vulnerability that’s under active exploitation.</blockquote>