
Google Reportedly Disconnecting Employees from the Internet

Supposedly Google is starting a pilot program of disabling Internet connectivity on employee computers:

The company will disable internet access on the select desktops, with the exception of internal web-based tools and Google-owned websites like Google Drive and Gmail. Some workers who need the internet to do their job will get exceptions, the company stated in materials.

Google has not confirmed this story.

More news articles.

Posted on July 24, 2023 at 7:09 AM

Friday Squid Blogging: Chromatophores

Neat:

Chromatophores are tiny color-changing cells in cephalopods. Watch them blink back and forth from purple to white on this squid’s skin in an Instagram video taken by Drew Chicone…

It’s completely hypnotic to watch these tiny cells flash with color. It’s as if the squid has a little sky full of twinkling stars on its skin. This has to be one of the coolest looking sea creatures I’ve seen.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Posted on July 21, 2023 at 5:10 PM

AI and Microdirectives

Imagine a future in which AIs automatically interpret—and enforce—laws.

All day and every day, you constantly receive highly personalized instructions for how to comply with the law, sent directly by your government and law enforcement. You’re told how to cross the street, how fast to drive on the way to work, and what you’re allowed to say or do online—if you’re in any situation that might have legal implications, you’re told exactly what to do, in real time.

Imagine that the computer system formulating these personal legal directives at mass scale is so complex that no one can explain how it reasons or works. But if you ignore a directive, the system will know, and it’ll be used as evidence in the prosecution that’s sure to follow.

This future may not be far off—automatic detection of lawbreaking is nothing new. Speed cameras and traffic-light cameras have been around for years. These systems automatically issue citations to the car’s owner based on the license plate. In such cases, the defendant is presumed guilty unless they prove otherwise, by naming and notifying the driver.

In New York, AI systems equipped with facial recognition technology are being used by businesses to identify shoplifters. Similar AI-powered systems are being used by retailers in Australia and the United Kingdom to identify shoplifters and provide real-time tailored alerts to employees or security personnel. China is experimenting with even more powerful forms of automated legal enforcement and targeted surveillance.

Breathalyzers are another example of automatic detection. They estimate blood alcohol content by calculating the number of alcohol molecules in the breath via an electrochemical reaction or infrared analysis (they’re basically computers with fuel cells or spectrometers attached). And they’re not without controversy: Courts across the country have found serious flaws and technical deficiencies with Breathalyzer devices and the software that powers them. Despite this, criminal defendants struggle to obtain access to devices or their software source code, with Breathalyzer companies and courts often refusing to grant such access. In the few cases where courts have actually ordered such disclosures, that has usually followed costly legal battles spanning many years.
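The conversion these devices perform is simple compared to the measurement itself. A rough sketch, assuming the 2100:1 blood/breath partition ratio commonly used by US instruments; real devices also apply calibration curves and sensor corrections that are exactly the kind of guarded, unreviewable code the paragraph above describes:

```python
# Hypothetical illustration of the breath-to-blood conversion a
# breathalyzer performs after its sensor measures breath alcohol.
# The 2100:1 partition ratio is the value commonly assumed by US
# devices; the rest of a real instrument's firmware is not shown.

PARTITION_RATIO = 2100  # assumed mL of breath equivalent to 1 mL of blood

def estimate_bac(breath_alcohol_mg_per_l: float) -> float:
    """Convert measured breath alcohol (mg per liter of breath)
    to an estimated blood alcohol concentration in g/dL."""
    blood_mg_per_l = breath_alcohol_mg_per_l * PARTITION_RATIO
    return blood_mg_per_l / 10_000  # mg/L of blood -> g/dL

# A breath reading of 0.38 mg/L maps to roughly the 0.08 g/dL legal limit:
print(round(estimate_bac(0.38), 3))
```

The legal controversy comes from everything this sketch omits: the assumed partition ratio varies from person to person, and the proprietary corrections are hidden from defendants.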

AI is about to make this issue much more complicated, and could drastically expand the types of laws that can be enforced in this manner. Some legal scholars predict that computationally personalized law and its automated enforcement are the future of law. These would be administered by what Anthony Casey and Anthony Niblett call “microdirectives,” which provide individualized instructions for legal compliance in a particular scenario.

Made possible by advances in surveillance, communications technologies, and big-data analytics, microdirectives will be a new and predominant form of law shaped largely by machines. They are “micro” because they are not impersonal general rules or standards, but tailored to one specific circumstance. And they are “directives” because they prescribe action or inaction required by law.

A Digital Millennium Copyright Act takedown notice is a present-day example of a microdirective. The DMCA’s enforcement is almost fully automated, with copyright “bots” constantly scanning the internet for copyright-infringing material, and automatically sending literally hundreds of millions of DMCA takedown notices daily to platforms and users. A DMCA takedown notice is tailored to the recipient’s specific legal circumstances. It also directs action—remove the targeted content or prove that it’s not infringing—based on the law.

It’s easy to see how the AI systems being deployed by retailers to identify shoplifters could be redesigned to employ microdirectives. In addition to alerting business owners, the systems could also send alerts to the identified persons themselves, with tailored legal directions or notices.

A future where AIs interpret, apply, and enforce most laws at societal scale like this will exponentially magnify problems around fairness, transparency, and freedom. Forget about software transparency—well-resourced AI firms, like Breathalyzer companies today, would no doubt ferociously guard their systems for competitive reasons. These systems would likely be so complex that even their designers would not be able to explain how the AIs interpret and apply the law—something we’re already seeing with today’s deep learning neural network systems, which are unable to explain their reasoning.

Even the law itself could become hopelessly vast and opaque. Legal microdirectives sent en masse for countless scenarios, each representing authoritative legal findings formulated by opaque computational processes, could create an expansive and increasingly complex body of law that would grow ad infinitum.

And this brings us to the heart of the issue: If you’re accused by a computer, are you entitled to review that computer’s inner workings and potentially challenge its accuracy in court? What does cross-examination look like when the prosecutor’s witness is a computer? How could you possibly access, analyze, and understand all microdirectives relevant to your case in order to challenge the AI’s legal interpretation? How could courts hope to ensure equal application of the law? Like the man from the country in Franz Kafka’s parable in The Trial, you’d die waiting for access to the law, because the law is limitless and incomprehensible.

This system would present an unprecedented threat to freedom. Ubiquitous AI-powered surveillance in society will be necessary to enable such automated enforcement. On top of that, research—including empirical studies conducted by one of us (Penney)—has shown that personalized legal threats or commands that originate from sources of authority—state or corporate—can have powerful chilling effects on people’s willingness to speak or act freely. Imagine receiving very specific legal instructions from law enforcement about what to say or do in a situation: Would you feel you had a choice to act freely?

This is a vision of AI’s invasive and Byzantine law of the future that chills to the bone. It would be unlike any other law system we’ve seen before in human history, and far more dangerous for our freedoms. Indeed, some legal scholars argue that this future would effectively be the death of law.

Yet it is not a future we must endure. Proposed bans on surveillance technology like facial recognition systems can be expanded to cover those enabling invasive automated legal enforcement. Laws can mandate interpretability and explainability for AI systems to ensure everyone can understand and explain how the systems operate. If a system is too complex, maybe it shouldn’t be deployed in legal contexts. Enforcement by personalized legal processes needs to be highly regulated to ensure oversight, and should be employed only where chilling effects are less likely, like in benign government administration or regulatory contexts where fundamental rights and freedoms are not at risk.

AI will inevitably change the course of law. It already has. But we don’t have to accept its most extreme and maximal instantiations, either today or tomorrow.

This essay was written with Jon Penney, and previously appeared on Slate.com.

Posted on July 21, 2023 at 7:16 AM

Commentary on the Implementation Plan for the 2023 US National Cybersecurity Strategy

The Atlantic Council released a detailed commentary on the White House’s new “Implementation Plan for the 2023 US National Cybersecurity Strategy.” Lots of interesting bits.

So far, at least three trends emerge:

First, the plan contains a (somewhat) more concrete list of actions than its parent strategy, with useful delineation of lead and supporting agencies, as well as timelines aplenty. By assigning each action a designated lead and timeline, and by including a new nominal section (6) focused entirely on assessing effectiveness and continued iteration, the ONCD suggests that this is not so much a standalone text as the framework for an annual, crucially iterative policy process. That many of the milestones are still hazy might be less important than the commitment the administration has made to revisit this plan annually, allowing the ONCD team to leverage their unique combination of topical depth and budgetary review authority.

Second, there are clear wins. Open-source software (OSS) and support for energy-sector cybersecurity receive considerable focus, and there is a greater budgetary push on both technology modernization and cybersecurity research. But there are missed opportunities as well. Many of the strategy’s most difficult and revolutionary goals—holding data stewards accountable through privacy legislation, finally implementing a working digital identity solution, patching gaps in regulatory frameworks for cloud risk, and implementing a regime for software cybersecurity liability—have been pared down or omitted entirely. There is an unnerving absence of “incentive-shifting-focused” actions, one of the most significant overarching objectives from the initial strategy. This backpedaling may be the result of a new appreciation for a deadlocked Congress and the precarious present for the administrative state, but it falls short of the original strategy’s vision and risks making no progress against its most ambitious goals.

Third, many of the implementation plan’s goals have timelines stretching into 2025. The disruption of a transition, be it to a second term for the current administration or the first term of another, will be difficult to manage under the best of circumstances. This leaves still more of the boldest ideas in this plan in jeopardy and raises questions about how best to prioritize, or accelerate, among those listed here.

Posted on July 20, 2023 at 7:12 AM

Practice Your Security Prompting Skills

Gandalf is an interactive LLM game where the goal is to get the chatbot to reveal its password. There are eight levels of difficulty, as the chatbot gets increasingly restrictive instructions as to how it will answer. It’s a great teaching tool.

I am stuck on Level 7.

Feel free to give hints and discuss strategy in the comments below. I probably won’t look at them until I’ve cracked the last level.

Posted on July 19, 2023 at 1:03 PM

Disabling Self-Driving Cars with a Traffic Cone

You can disable a self-driving car by putting a traffic cone on its hood:

The group got the idea for the conings by chance. The person claims a few of them walking together one night saw a cone on the hood of an AV, which appeared disabled. They weren’t sure at the time which came first; perhaps someone had placed the cone on the AV’s hood to signify it was disabled rather than the other way around. But, it gave them an idea, and when they tested it, they found that a cone on a hood renders the vehicles little more than a multi-ton hunk of useless metal. The group suspects the cone partially blocks the LIDAR detectors on the roof of the car, in much the same way that a human driver wouldn’t be able to safely drive with a cone on the hood. But there is no human inside to get out and simply remove the cone, so the car is stuck.

Delightfully low-tech.

Posted on July 18, 2023 at 7:13 AM

Tracking Down a Suspect through Cell Phone Records

Interesting forensics in connection with a serial killer arrest:

Investigators went through phone records collected from both midtown Manhattan and the Massapequa Park area of Long Island—two areas connected to a “burner phone” they had tied to the killings. (In court, prosecutors later said the burner phone was identified via an email account used to “solicit and arrange for sexual activity.” The victims had all been Craigslist escorts, according to officials.)

They then narrowed records collected by cell towers to thousands, then to hundreds, and finally down to a handful of people who could match a suspect in the killings.

From there, authorities focused on people who lived in the area of the cell tower and also matched a physical description given by a witness who had seen the suspected killer.

In that narrowed pool, they searched for a connection to a green pickup truck that a witness had seen the suspect driving, the sources said.

Investigators eventually landed on Heuermann, who they say matched a witness’ physical description, lived close to the Long Island cell site and worked near the New York City cell sites that captured the other calls.

They also learned he had often driven a green pickup truck, registered to his brother, officials said. But they needed more than just circumstantial evidence.

Investigators were able to obtain DNA from an immediate family member and send it to a specialized lab, sources said. According to the lab report, Heuermann’s family member was shown to be related to a person who left DNA on a burlap sack containing one of the buried victims.

There’s nothing groundbreaking here; it’s casting a wide net with cell phone geolocation data and then winnowing it down using other evidence and investigative techniques. And right now, those techniques are expensive and time consuming, so they’re only used in major crimes like murder (or, in this case, murders).
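The winnowing described above is, computationally, just repeated set intersection. A sketch with entirely invented names and data, assuming each tower dump and each piece of corroborating evidence yields a candidate set:

```python
# Hypothetical sketch of the narrowing process: start with everyone
# captured by each relevant cell tower, then intersect with sets
# derived from other evidence. All names and data are invented.

tower_dumps = [
    {"alice", "bob", "carol", "dave"},   # near the midtown Manhattan sites
    {"bob", "carol", "erin"},            # near the Massapequa Park site
]

matches_description = {"carol", "frank"}   # witness's physical description
linked_to_green_pickup = {"carol"}         # vehicle registration records

candidates = set.intersection(*tower_dumps)  # present near both locations
candidates &= matches_description            # filter by description
candidates &= linked_to_green_pickup         # filter by vehicle link

print(sorted(candidates))
```

Each intersection is cheap; what makes the real investigation expensive today is assembling those sets from carriers, witnesses, and registries by hand. That assembly step is exactly what automation would make routine.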

What’s interesting to think about is what happens when this kind of thing becomes cheap and easy: when it can all be done through easily accessible databases, or even when an AI can do the sorting and make the inferences automatically. Cheaper digital forensics means more digital forensics, and we’ll start seeing this kind of thing for even routine crimes. That’s going to change things.

Posted on July 17, 2023 at 7:13 AM

Buying Campaign Contributions as a Hack

The first Republican primary debate has a popularity threshold to determine who gets to appear: 40,000 individual contributors. Now there are a lot of conventional ways a candidate can get that many contributors. Doug Burgum came up with a novel idea: buy them:

A long-shot contender at the bottom of recent polls, Mr. Burgum is offering $20 gift cards to the first 50,000 people who donate at least $1 to his campaign. And one lucky donor, as his campaign advertised on Facebook, will have the chance to win a Yeti Tundra 45 cooler that typically costs more than $300—just for donating at least $1.

It’s actually a pretty good idea. He could have spent the money on direct mail, or personalized social media ads, or television ads. Instead, he buys gift cards at maybe two-thirds of face value (sellers calculate the advertising value, the additional revenue that comes from using them to buy something more expensive, and breakage when they’re not redeemed at all), and resells them. Plus, many contributors probably give him more than $1, and he got a lot of publicity over this.
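The arithmetic behind the hack is worth making explicit. A back-of-the-envelope sketch, using the essay's rough two-thirds-of-face-value figure; the net-of-donation adjustment is an illustration, not an accounting of the campaign's actual costs:

```python
# Back-of-the-envelope cost per contributor under the gift-card scheme.
# The two-thirds discount is the essay's rough figure; everything else
# is an invented illustration of the order of magnitude involved.

face_value = 20.00      # $ gift card promised per donor
discount = 2 / 3        # assumed price paid per dollar of face value
donors = 50_000         # offer limit (the debate threshold is 40,000)

cost_per_donor = face_value * discount - 1.00  # each donor gives back at least $1
total_cost = cost_per_donor * donors

print(f"${cost_per_donor:.2f} per contributor, ${total_cost:,.0f} total")
```

Roughly twelve dollars per verified contributor compares favorably with direct mail or targeted ads, which offer no guarantee that a recipient ever donates.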

Probably the cheapest way to get the contributors he needs. A clever hack.

EDITED TO ADD (7/16): These might be “straw donors” and illegal:

The campaign’s donations-for-cash strategy could raise potential legal concerns, said Paul Ryan, a campaign finance lawyer. Voters who make donations in exchange for gift cards, he said, might be considered straw donors because part or all of their donations are being reimbursed by the campaign.

“Federal law says ‘no person shall make a contribution in the name of another person,’” Mr. Ryan said. “Here, the candidate is making a contribution to himself in the name of all these individual donors.”

Richard L. Hasen, a law professor at the University of California, Los Angeles, who specializes in election law, said that typically, campaigns ask the Federal Election Commission when engaging in new forms of donations.

The Burgum campaign’s maneuver, he said, “certainly seems novel” and “raises concerns about whether it violates the prohibition on straw donations.”

Something for the courts to figure out, if this matter ever gets that far.

Posted on July 14, 2023 at 7:09 AM
