Entries Tagged "impersonation"


Swatting as a Service

Motherboard is reporting on AI-generated voices being used for “swatting”:

In fact, Motherboard has found, this synthesized call and another against Hempstead High School were just one small part of a months-long, nationwide campaign of dozens, and potentially hundreds, of threats made by one swatter in particular who has weaponized computer-generated voices. Known as “Torswats” on the messaging app Telegram, the swatter has been calling in bomb and mass shooting threats against high schools and other locations across the country. Torswats’ connection to these wide-ranging swatting incidents has not been previously reported. The further automation of swatting techniques threatens to make an already dangerous harassment technique more prevalent.

Posted on April 17, 2023 at 7:15 AM

A Device to Turn Traffic Lights Green

Here’s a story about a hacker who reprogrammed a device called “Flipper Zero” to mimic Opticom transmitters—to turn traffic lights in his path green.

As mentioned earlier, the Flipper Zero has a built-in sub-GHz radio that lets the device receive data (or transmit it, with the right firmware in approved regions) on the same wireless frequencies as keyfobs and other devices. Most traffic preemption devices intended for emergency traffic redirection don’t actually transmit signals over RF. Instead, they use optical technology to beam infrared light from vehicles to static receivers mounted on traffic light poles.

Perhaps the most well-known branding for these types of devices is called Opticom. Essentially, the tech works by detecting a specific pattern of infrared light emitted by the Mobile Infrared Transmitter (MIRT) installed in a police car, fire truck, or ambulance when the MIRT is switched on. When the receiver detects the light, the traffic system then initiates a signal change as the emergency vehicle approaches an intersection, safely redirecting the traffic flow so that the emergency vehicle can pass through the intersection as if it were regular traffic and potentially avoid a collision.
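Here’s a minimal sketch, in Python, of the receiver-side logic just described: watch a stream of infrared pulses and report a match only when the flash rate fits an expected pattern. The strobe frequencies, tolerance, and pulse threshold are placeholders chosen for illustration; the article doesn’t say what values real Opticom receivers use.

```python
# Toy receiver-side sketch of traffic-signal preemption detection. The
# frequencies, tolerance, and pulse threshold are illustrative placeholders,
# not the values used by real Opticom hardware.

from statistics import mean

PREEMPTION_BANDS = {"low_priority": 9.6, "high_priority": 14.0}  # Hz, hypothetical
TOLERANCE_HZ = 0.2   # accept rates within +/- 0.2 Hz of a band
MIN_PULSES = 20      # require a sustained pattern, not a stray flash

def classify_strobe(pulse_times_s):
    """Given timestamps (in seconds) of detected IR pulses, return the matching
    priority band, or None if the pattern doesn't match any band."""
    if len(pulse_times_s) < MIN_PULSES:
        return None
    intervals = [b - a for a, b in zip(pulse_times_s, pulse_times_s[1:])]
    rate_hz = 1.0 / mean(intervals)
    for band, target_hz in PREEMPTION_BANDS.items():
        if abs(rate_hz - target_hz) <= TOLERANCE_HZ:
            return band
    return None

# 25 pulses spaced 1/14 s apart classify as "high_priority"; a real controller
# would then start its signal-change sequence for the approaching vehicle.
print(classify_strobe([i / 14.0 for i in range(25)]))
```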

This seems easy to do, but it’s also very illegal. It’s called “impersonating an emergency vehicle,” and it comes with hefty penalties if you’re caught.

Posted on February 22, 2023 at 7:30 AM

Defending against AI Lobbyists

When is it time to start worrying about artificial intelligence interfering in our democracy? Maybe when an AI writes a letter to The New York Times opposing the regulation of its own technology.

That happened last month. And because the letter was responding to an essay we wrote, we’re starting to get worried. And while the technology can be regulated, the real solution lies in recognizing that the problem is human actors—and those we can do something about.

Our essay argued that the much heralded launch of the AI chatbot ChatGPT, a system that can generate text realistic enough to appear to be written by a human, poses significant threats to democratic processes. The ability to produce high quality political messaging quickly and at scale, if combined with AI-assisted capabilities to strategically target those messages to policymakers and the public, could become a powerful accelerant of an already sprawling and poorly constrained force in modern democratic life: lobbying.

We speculated that AI-assisted lobbyists could use generative models to write op-eds and regulatory comments supporting a position, identify members of Congress who wield the most influence over pending legislation, use network pattern identification to discover undisclosed or illegal political coordination, or use supervised machine learning to calibrate the optimal contribution needed to sway the vote of a legislative committee member.
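As a toy illustration of that last item, here is what “calibrating” a contribution with supervised learning might look like in the simplest possible form: fit a one-feature logistic model of vote probability against contribution size on synthetic data, then read off the smallest amount whose predicted probability crosses a threshold. The data, the model, and the threshold are all invented for illustration; nothing here describes an actual system.

```python
# Toy illustration only: a one-feature logistic model of "votes yes" probability
# as a function of contribution size, fit on synthetic data, then used to read
# off the smallest contribution whose predicted probability crosses a threshold.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic history: contribution amounts (in $1k) and observed votes (1 = yes).
amounts = rng.uniform(0, 100, size=500)
votes = (rng.random(500) < 1 / (1 + np.exp(-(amounts - 40) / 10))).astype(float)

# Fit P(yes | amount) = sigmoid(w * x + b) by plain gradient descent.
x = (amounts - amounts.mean()) / amounts.std()   # standardize for stable steps
w = b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.5 * np.mean((p - votes) * x)
    b -= 0.5 * np.mean(p - votes)

# "Calibrate": smallest amount whose predicted probability exceeds 50%.
grid = np.linspace(0, 100, 1001)
probs = 1 / (1 + np.exp(-(w * (grid - amounts.mean()) / amounts.std() + b)))
print(f"estimated tipping point: ${grid[np.argmax(probs >= 0.5)]:.0f}k")
```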

Those activities are all examples of what we call AI hacking. Hacks are strategies that follow the rules of a system, but subvert its intent. Hacking is currently a human creative process, but future AIs could discover, develop, and execute these same strategies.

While some of these activities are the longtime domain of human lobbyists, AI tools applied to the same tasks would have unfair advantages. They can scale their activity effortlessly across every state in the country (human lobbyists tend to focus on a single state), they may uncover patterns and approaches that are unintuitive and unrecognizable to human experts, and they can do all of this nearly instantaneously, leaving human decision makers little chance to keep up.

These factors could make AI hacking of the democratic process fundamentally ungovernable. Any policy response to limit the impact of AI hacking on political systems would be critically vulnerable to subversion or control by an AI hacker. If AI hackers achieve unchecked influence over legislative processes, they could dictate the rules of our society: including the rules that govern AI.

We admit that this seemed far-fetched when we first wrote about it in 2021. But now that the emanations and policy prescriptions of ChatGPT have been given an audience in The New York Times and innumerable other outlets in recent weeks, it’s getting harder to dismiss.

At least one group of researchers is already testing AI techniques to automatically find and advocate for bills that benefit a particular interest. And one Massachusetts representative used ChatGPT to draft legislation regulating AI.

The AI technology of two years ago seems quaint by the standards of ChatGPT. What will the technology of 2025 seem like if we could glimpse it today? To us there is no question that now is the time to act.

First, let’s dispense with the concepts that won’t work. We cannot solely rely on explicit regulation of AI technology development, distribution, or use. Regulation is essential, but it would be vastly insufficient. The rate of AI technology development, and the speed at which AI hackers might discover damaging strategies, already outpaces policy development, enactment, and enforcement.

Moreover, we cannot rely on detection of AI actors. The latest research suggests that AI models trying to classify text samples as human- or AI-generated have limited precision and are ill-equipped to handle real-world scenarios. These reactive, defensive techniques will fail because the rate of advancement of the “offensive” generative AI is so astounding.

Additionally, we risk a dragnet that will exclude masses of human constituents who use AI to help them express their thoughts, or machine translation tools to help them communicate. If a written opinion or strategy conforms to the intent of a real person, it should not matter if they enlisted the help of an AI (or a human assistant) to write it.

Most importantly, we should avoid the classic trap of societies wrenched by the rapid pace of change: privileging the status quo. Slowing down may seem like the natural response to a threat whose primary attribute is speed. Ideas like increasing requirements for human identity verification, aggressive detection regimes for AI-generated messages, and elongation of the legislative or regulatory process would all play into this fallacy. While each of these solutions may have some value independently, they do nothing to make the already powerful actors less powerful.

Finally, it won’t work to try to starve the beast. Large language models like ChatGPT have a voracious appetite for data. They are trained on past examples of the kinds of content that they will be asked to generate in the future. Similarly, an AI system built to hack political systems will rely on data that documents the workings of those systems, such as messages between constituents and legislators, floor speeches, chamber and committee voting results, contribution records, lobbying relationship disclosures, and drafts of and amendments to legislative text. The steady advancement towards the digitization and publication of this information that many jurisdictions have made is positive. The threat of AI hacking should not dampen or slow progress on transparency in public policymaking.

Okay, so what will help?

First, recognize that the true threats here are malicious human actors. Systems like ChatGPT and our still-hypothetical political-strategy AI are still far from artificial general intelligences. They do not think. They do not have free will. They are just tools directed by people, much like lobbyists for hire. And, like lobbyists, they will be available primarily to the richest individuals, groups, and their interests.

However, we can use the same tools that would be effective in controlling human political influence to curb AI hackers. These tools will be familiar to any follower of the last few decades of U.S. political history.

Campaign finance reforms such as contribution limits, particularly when applied to political action committees of all types as well as to candidate-operated campaigns, can reduce the dependence of politicians on contributions from private interests. The unfair advantage of a malicious actor using AI lobbying tools is at least somewhat mitigated if a political target’s entire career is not already focused on cultivating a concentrated set of major donors.

Transparency also helps. We can expand mandatory disclosure of contributions and lobbying relationships, with provisions to prevent the obfuscation of the funding source. Self-interested advocacy should be transparently reported whether or not it was AI-assisted. Meanwhile, we should increase penalties for organizations that benefit from AI-assisted impersonation of constituents in political processes, and set a greater expectation of responsibility to avoid “unknowing” use of these tools on their behalf.

Our most important recommendation is less legal and more cultural. Rather than trying to make it harder for AI to participate in the political process, make it easier for humans to do so.

The best way to fight an AI that can lobby for moneyed interests is to help the little guy lobby for theirs. Promote inclusion and engagement in the political process so that organic constituent communications grow alongside the potential growth of AI-directed communications. Encourage direct contact that generates more-than-digital relationships between constituents and their representatives, which will be an enduring way to privilege human stakeholders. Provide paid leave to allow people to vote as well as to testify before their legislature and participate in local town meetings and other civic functions. Provide childcare and accessible facilities at civic functions so that more community members can participate.

The threat of AI hacking our democracy is legitimate and concerning, but its solutions are consistent with our democratic values. Many of the ideas above are good governance reforms already being pushed and fought over at the federal and state level.

We don’t need to reinvent our democracy to save it from AI. We just need to continue the work of building a just and equitable political system. Hopefully ChatGPT will give us all some impetus to do that work faster.

This essay was written with Nathan Sanders, and appeared on the Belfer Center blog.

Posted on February 17, 2023 at 7:33 AM

Experian Privacy Vulnerability

Brian Krebs is reporting on a vulnerability in Experian’s website:

Identity thieves have been exploiting a glaring security weakness in the website of Experian, one of the big three consumer credit reporting bureaus. Normally, Experian requires that those seeking a copy of their credit report successfully answer several multiple choice questions about their financial history. But until the end of 2022, Experian’s website allowed anyone to bypass these questions and go straight to the consumer’s report. All that was needed was the person’s name, address, birthday and Social Security number.
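The flaw Krebs describes is a classic failure of server-side authorization: the page that returns the report never checks that the identity-verification step was actually completed. Here’s a minimal sketch of that bug class, and the obvious fix, using a hypothetical Flask service; the route names and logic are invented for illustration and are not Experian’s actual code or URL structure.

```python
# Minimal sketch of the bug class described above: skipping straight to the
# report page works because the server never checks that the knowledge-based
# authentication (KBA) questions were passed. Hypothetical routes and logic.

from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "demo-only"

@app.route("/kba")
def knowledge_based_auth():
    # ...present and grade the multiple-choice financial-history questions...
    session["kba_passed"] = True          # set only when the answers are correct
    return "questions answered"

# Vulnerable pattern: anyone who knows the report URL can skip the questions,
# because this handler never checks that the KBA step was completed.
@app.route("/report-vulnerable")
def report_vulnerable():
    return "full credit report"

# Fixed pattern: the server re-checks its own record of the KBA outcome.
@app.route("/report")
def report():
    if not session.get("kba_passed"):
        abort(403)                        # no verified identity, no report
    return "full credit report"
```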

Posted on January 12, 2023 at 7:18 AM

Obligatory ChatGPT Post

Seems like absolutely everyone everywhere is playing with ChatGPT.

So I did, too….

Write an essay in the style of Bruce Schneier on how ChatGPT will affect cybersecurity.

As with any new technology, the development and deployment of ChatGPT is likely to have a significant impact on the field of cybersecurity. In many ways, ChatGPT and other AI technologies hold great promise for improving the ability of organizations and individuals to defend against cyber threats. At the same time, however, the use of these technologies also raises important questions and concerns about the potential risks and challenges they may pose.

One of the key ways in which ChatGPT is likely to affect cybersecurity is by enabling organizations to more quickly and accurately identify potential security threats. With its ability to analyze vast amounts of data and identify patterns and trends that might be difficult for humans to spot, ChatGPT could be a valuable tool for detecting and responding to cyberattacks. This could be especially useful in the case of complex, coordinated attacks that might otherwise be difficult to identify and defend against.

Another potential benefit of ChatGPT in the realm of cybersecurity is its ability to improve communication and coordination among different organizations and agencies. By enabling these groups to more easily share information and work together to defend against cyber threats, ChatGPT could help to make the overall cybersecurity landscape more effective and efficient. This could be especially important in the case of large-scale attacks that require a coordinated response from multiple organizations.

Of course, as with any new technology, there are also potential risks and challenges associated with the use of ChatGPT in the field of cybersecurity. One key concern is the potential for ChatGPT to be used by malicious actors to conduct cyberattacks. By automating the process of planning and executing attacks, ChatGPT could make it easier for attackers to carry out sophisticated, coordinated attacks at a much faster pace than would be possible without AI. This could potentially make it more difficult for organizations and individuals to defend against such attacks.

Another potential risk is the possibility that ChatGPT could be used to create fake news or propaganda, which could be used to spread misinformation or confusion. In the context of cybersecurity, the spread of false information could potentially lead to confusion and panic, making it more difficult for organizations and individuals to respond effectively to cyber threats. Additionally, the use of ChatGPT to create fake news or propaganda could also undermine trust in information and communications systems, potentially undermining the overall security of these systems.

Overall, the development and deployment of ChatGPT is likely to have a significant impact on the field of cybersecurity. While ChatGPT and other AI technologies hold great promise for improving the ability of organizations and individuals to defend against cyber threats, it is important to carefully consider the potential risks and challenges they may pose, and to take appropriate measures to mitigate these risks. This may include implementing strict security protocols and safeguards to prevent unauthorized access to ChatGPT and other AI systems, as well as ongoing monitoring and evaluation to ensure that these technologies are being used in a safe and responsible manner.

Not bad. I don’t think I can retire, but I probably can go on vacation for a few days. And, of course, it’s going to get better…a lot better…and probably sooner than we expect. This is a big deal.

Posted on December 13, 2022 at 7:17 AM

Complex Impersonation Story

This is a story of one piece of what is probably a complex employment scam. Basically, real programmers are having their resumes copied and co-opted by scammers, who apply for jobs (or, I suppose, get recruited from various job sites), then hire other people with Western looks and language skills to impersonate those first people in Zoom job interviews. Presumably, sometimes the scammers get hired and…I suppose…collect paychecks for a while until they get found out and fired. But that requires a bunch of banking fraud as well, so I don’t know.

EDITED TO ADD (10/11): Brian Krebs writes about fake LinkedIn profiles, which is probably another facet of this fraud system. Someone needs to unravel all of the threads.

Posted on October 10, 2022 at 6:09 AM

Authentication Failure

This is a weird story of a building owner commissioning an artist to paint a mural on the side of his building—except that he wasn’t actually the building’s owner.

The fake landlord met Hawkins in person the day after Thanksgiving, supplying the paint and half the promised fee. They met again a couple of days later for lunch, when the job was mostly done. Hawkins showed him photographs. The patron seemed happy. He sent Hawkins the rest of the (sorry) dough.

But when Hawkins invited him down to see the final result, his client didn’t answer the phone. Hawkins called again. No answer. Hawkins emailed. Again, no answer.

[…]

Two days later, Hawkins got a call from the real Comte. And that Comte was not happy.

Comte says that he doesn’t believe Hawkins’s story, but I don’t think I would have demanded to see a photo ID before taking the commission.

Posted on December 14, 2020 at 6:31 AM

Nation-State Espionage Campaigns against Middle East Defense Contractors

Report on espionage attacks using LinkedIn as a vector for malware, with details and screenshots. They talk about “several hints suggesting a possible link” to the Lazarus group (aka North Korea), but that’s by no means definite.

As part of the initial compromise phase, the Operation In(ter)ception attackers had created fake LinkedIn accounts posing as HR representatives of well-known companies in the aerospace and defense industries. In our investigation, we’ve seen profiles impersonating Collins Aerospace (formerly Rockwell Collins) and General Dynamics, both major US corporations in the field.

Detailed report.

Posted on June 23, 2020 at 6:22 AM

Bluetooth Vulnerability: BIAS

This is new research on a Bluetooth vulnerability (called BIAS) that allows someone to impersonate a trusted device:

Abstract: Bluetooth (BR/EDR) is a pervasive technology for wireless communication used by billions of devices. The Bluetooth standard includes a legacy authentication procedure and a secure authentication procedure, allowing devices to authenticate to each other using a long term key. Those procedures are used during pairing and secure connection establishment to prevent impersonation attacks. In this paper, we show that the Bluetooth specification contains vulnerabilities enabling to perform impersonation attacks during secure connection establishment. Such vulnerabilities include the lack of mandatory mutual authentication, overly permissive role switching, and an authentication procedure downgrade. We describe each vulnerability in detail, and we exploit them to design, implement, and evaluate master and slave impersonation attacks on both the legacy authentication procedure and the secure authentication procedure. We refer to our attacks as Bluetooth Impersonation AttackS (BIAS).

Our attacks are standard compliant, and are therefore effective against any standard compliant Bluetooth device regardless the Bluetooth version, the security mode (e.g., Secure Connections), the device manufacturer, and the implementation details. Our attacks are stealthy because the Bluetooth standard does not require to notify end users about the outcome of an authentication procedure, or the lack of mutual authentication. To confirm that the BIAS attacks are practical, we successfully conduct them against 31 Bluetooth devices (28 unique Bluetooth chips) from major hardware and software vendors, implementing all the major Bluetooth versions, including Apple, Qualcomm, Intel, Cypress, Broadcom, Samsung, and CSR.
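To see why the lack of mandatory mutual authentication matters, here is a toy model of the unilateral challenge-response described in the abstract. It is not the researchers’ code or the real BR/EDR state machine: HMAC-SHA256 stands in for Bluetooth’s E1 function, and role switching is reduced to “the attacker takes the verifier role.” The point is that the verifier challenges the claimant, but nothing forces the claimant to challenge back, so an impersonator who grabs the verifier role is never asked to prove it knows the link key.

```python
# Toy model of unilateral legacy authentication, for intuition only. This is
# not the researchers' code or the actual BR/EDR state machine.

import hashlib, hmac, os

def e1(link_key: bytes, challenge: bytes, claimant_addr: bytes) -> bytes:
    """Stand-in for E1: a response derivable only with the shared link key."""
    return hmac.new(link_key, challenge + claimant_addr, hashlib.sha256).digest()

class Device:
    def __init__(self, addr: bytes, link_key: bytes):
        self.addr, self.link_key = addr, link_key

    def challenge(self, peer) -> bool:
        """Act as verifier: send a challenge and check the peer's response."""
        au_rand = os.urandom(16)
        return hmac.compare_digest(peer.respond(au_rand),
                                   e1(self.link_key, au_rand, peer.addr))

    def respond(self, au_rand: bytes) -> bytes:
        """Act as claimant: answer a challenge using the link key."""
        return e1(self.link_key, au_rand, self.addr)

class Impersonator:
    """Claims the Bluetooth address of a trusted device but has no link key."""
    def __init__(self, spoofed_addr: bytes):
        self.addr = spoofed_addr

    def respond(self, au_rand: bytes) -> bytes:
        return os.urandom(32)   # can only guess -- fails if actually challenged

alice = Device(b"ALICE", link_key=os.urandom(16))
attacker = Impersonator(spoofed_addr=b"BOB")

# If Alice stays the verifier, the impersonation fails:
print(alice.challenge(attacker))   # False

# After a role switch the attacker is the verifier: it challenges Alice, accepts
# whatever she answers, and -- because mutual authentication is not mandatory --
# is never challenged itself, so the connection proceeds anyway.
alice.respond(os.urandom(16))      # Alice dutifully proves herself to the attacker
print("attacker proceeds without ever proving knowledge of the link key")
```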

News articles.

Posted on May 26, 2020 at 6:54 AM

New SHA-1 Attack

There’s a new, practical, collision attack against SHA-1:

In this paper, we report the first practical implementation of this attack, and its impact on real-world security with a PGP/GnuPG impersonation attack. We managed to significantly reduce the complexity of collision attacks against SHA-1: on an Nvidia GTX 970, identical-prefix collisions can now be computed with a complexity of 2^61.2 rather than 2^64.7, and chosen-prefix collisions with a complexity of 2^63.4 rather than 2^67.1. When renting cheap GPUs, this translates to a cost of 11k US$ for a collision, and 45k US$ for a chosen-prefix collision, within the means of academic researchers. Our actual attack required two months of computations using 900 Nvidia GTX 1060 GPUs (we paid 75k US$ because GPU prices were higher, and we wasted some time preparing the attack).

It has practical applications:

We chose the PGP/GnuPG Web of Trust as demonstration of our chosen-prefix collision attack against SHA-1. The Web of Trust is a trust model used for PGP that relies on users signing each other’s identity certificate, instead of using a central PKI. For compatibility reasons the legacy branch of GnuPG (version 1.4) still uses SHA-1 by default for identity certification.

Using our SHA-1 chosen-prefix collision, we have created two PGP keys with different UserIDs and colliding certificates: key B is a legitimate key for Bob (to be signed by the Web of Trust), but the signature can be transferred to key A which is a forged key with Alice’s ID. The signature will still be valid because of the collision, but Bob controls key A with the name of Alice, and signed by a third party. Therefore, he can impersonate Alice and sign any document in her name.
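The mechanics of that signature transfer are simple once the collision exists: the certification signature covers only the SHA-1 digest, so any two byte strings with the same digest share the same valid signature. The sketch below fakes the colliding pair with a placeholder (producing a real chosen-prefix collision is the expensive computation the paper describes) and models certification as a toy signature over the SHA-1 digest.

```python
# Sketch of why a collision lets a signature transfer between two certificates:
# the signer and the verifier only ever see the SHA-1 digest. The "colliding"
# byte strings below are placeholders (here literally identical); producing a
# real chosen-prefix collision with two different UserIDs is the 2^63.4
# computation the paper describes.

import hashlib, hmac, os

SIGNING_KEY = os.urandom(32)   # stand-in for the certifying third party's key

def certify(cert: bytes) -> bytes:
    """Toy certification signature computed over the SHA-1 digest only."""
    return hmac.new(SIGNING_KEY, hashlib.sha1(cert).digest(), hashlib.sha256).digest()

def verify(cert: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(signature, certify(cert))

# Placeholders for the attack's output: two certificates engineered so that
# sha1(cert_bob) == sha1(cert_alice_forged).
cert_bob = b"...certificate carrying Bob's UserID..."
cert_alice_forged = cert_bob   # stand-in for a distinct, colliding certificate

signature = certify(cert_bob)                # the Web of Trust signs Bob's key
print(verify(cert_alice_forged, signature))  # True: the signature transfers
```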

From a news article:

The new attack is significant. While SHA1 has been slowly phased out over the past five years, it remains far from being fully deprecated. It’s still the default hash function for certifying PGP keys in the legacy 1.4 version branch of GnuPG, the open-source successor to the PGP application for encrypting email and files. Those SHA1-generated signatures were accepted by the modern GnuPG branch until recently, and were only rejected after the researchers behind the new collision privately reported their results.

Git, the world’s most widely used system for managing software development among multiple people, still relies on SHA1 to ensure data integrity. And many non-Web applications that rely on HTTPS encryption still accept SHA1 certificates. SHA1 is also still allowed for in-protocol signatures in the Transport Layer Security and Secure Shell protocols.

Posted on January 8, 2020 at 9:38 AM

