Entries Tagged "social media"


Political Disinformation and AI

Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.

Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the US presidential election. Over the next seven years, a number of countries—most prominently China and Iran—used social media to influence foreign elections, both in the US and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different.

But there is a new element: generative AI and large language models. These have the ability to quickly and easily produce endless reams of text on any topic in any tone from any perspective. As a security expert, I believe it’s a tool uniquely suited to Internet-era propaganda.

This is all very new. ChatGPT was introduced in November 2022. The more powerful GPT-4 was released in March 2023. Other language and image production AIs are around the same age. It’s not clear how these technologies will change disinformation, how effective they will be or what effects they will have. But we are about to find out.

Election season will soon be in full swing in much of the democratic world. Seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among them: Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, the European Union and Mexico in June, and the US in November. Nine African democracies, including South Africa, will have elections in 2024. Australia and the UK don’t have fixed dates, but elections are likely to occur in 2024.

Many of those elections matter a lot to the countries that have run social media influence operations in the past. China cares a great deal about Taiwan, Indonesia, India, and many African countries. Russia cares about the UK, Poland, Germany, and the EU in general. Everyone cares about the United States.

And that’s only considering the largest players. Every US national election since 2016 has brought with it an additional country attempting to influence the outcome. First it was just Russia, then Russia and China, and most recently those two plus Iran. As the financial cost of foreign influence decreases, more countries can get in on the action. Tools like ChatGPT significantly reduce the price of producing and distributing propaganda, bringing that capability within the budget of many more countries.

A couple of months ago, I attended a conference with representatives from all of the cybersecurity agencies in the US. They talked about their expectations regarding election interference in 2024. They expected the usual players—Russia, China, and Iran—and a significant new one: “domestic actors.” That is a direct result of this reduced cost.

Of course, there’s a lot more to running a disinformation campaign than generating content. The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost it into the mainstream where it can go viral. Companies like Meta have gotten much better at identifying these accounts and taking them down. Just last month, Meta announced that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups, and 15 Instagram accounts associated with a Chinese influence campaign, and identified hundreds more accounts on TikTok, X (formerly Twitter), LiveJournal, and Blogspot. But that was a campaign that began four years ago, producing pre-AI disinformation.

Disinformation is an arms race. Both the attackers and defenders have improved, but the world of social media has also changed. Four years ago, Twitter was a direct line to the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets as sources for partisan opinion. That Twitter, with virtually every news editor reading it and everyone who was anyone posting there, is no more.

Many propaganda outlets moved from Facebook to messaging platforms such as Telegram and WhatsApp, which makes them harder to identify and remove. TikTok is a newer platform that is controlled by China and more suitable for short, provocative videos—ones that AI makes much easier to produce. And the current crop of generative AIs is being connected to tools that will make content distribution easier as well.

Generative AI tools also allow for new techniques of production and distribution, such as low-level propaganda at scale. Imagine a new AI-powered personal account on social media. For the most part, it behaves normally. It posts about its fake everyday life, joins interest groups and comments on others’ posts, and generally behaves like a normal user. And once in a while, not very often, it says—or amplifies—something political. These persona bots, as computer scientist Latanya Sweeney calls them, have negligible influence on their own. But replicated by the thousands or millions, they would have a lot more.
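To make the scale argument concrete, here is a minimal back-of-envelope sketch in Python. All of the numbers—fleet sizes, posts per day, and the fraction of political posts—are invented for illustration; nothing here comes from an observed campaign.

```python
# Back-of-envelope sketch: aggregate output of "persona bots" that post
# mostly benign content and only rarely say something political.
# All numbers below are invented for illustration.

def political_posts_per_day(num_bots: int,
                            posts_per_bot_per_day: float,
                            political_fraction: float) -> float:
    """Expected number of political posts injected per day across the fleet."""
    return num_bots * posts_per_bot_per_day * political_fraction

for fleet_size in (1_000, 100_000, 1_000_000):
    daily = political_posts_per_day(num_bots=fleet_size,
                                    posts_per_bot_per_day=2.0,   # assumed
                                    political_fraction=0.01)     # 1% of posts
    print(f"{fleet_size:>9,} bots -> ~{daily:,.0f} political posts/day")
```

Each individual account looks almost entirely apolitical, which is the point: the influence comes from the aggregate, not from any single bot.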

That’s just one scenario. The military officers in Russia, China, and elsewhere in charge of election interference are likely to have their best people thinking of others. And their tactics are likely to be much more sophisticated than they were in 2016.

Countries like Russia and China have a history of testing both cyberattacks and information operations on smaller countries before rolling them out at scale. When that happens, it’s important to be able to fingerprint these tactics. Countering new disinformation campaigns requires being able to recognize them, and recognizing them requires looking for and cataloging them now.

In the computer security world, researchers recognize that sharing methods of attack and their effectiveness is the only way to build strong defensive systems. The same kind of thinking also applies to these information campaigns: The more that researchers study what techniques are being employed in distant countries, the better they can defend their own countries.

Disinformation campaigns in the AI era are likely to be much more sophisticated than they were in 2016. I believe the US needs to have efforts in place to fingerprint and identify AI-produced propaganda in places like Taiwan, where a presidential candidate claims a deepfake audio recording has defamed him. Otherwise, we’re not going to see these campaigns when they arrive here. Unfortunately, researchers are instead being targeted and harassed.

Maybe this will all turn out okay. There have been some important democratic elections in the generative AI era with no significant disinformation issues: primaries in Argentina, first-round elections in Ecuador, and national elections in Thailand, Turkey, Spain, and Greece. But the sooner we know what to expect, the better we can deal with what comes.

This essay previously appeared in The Conversation.

Posted on October 5, 2023 at 7:12 AM

Banning TikTok

Congress is currently debating bills that would ban TikTok in the United States. We are here as technologists to tell you that this is a terrible idea and the side effects would be intolerable. Details matter. There are several ways Congress might ban TikTok, each with different efficacies and side effects. In the end, all the effective ones would destroy the free Internet as we know it.

There’s no doubt that TikTok and ByteDance, the company that owns it, are shady. They, like most large corporations in China, operate at the pleasure of the Chinese government. They collect extreme levels of information about users. But they’re not alone: Many apps you use do the same, including Facebook and Instagram, along with seemingly innocuous apps that have no need for the data. Your data is bought and sold by data brokers you’ve never heard of who have few scruples about where the data ends up. They have digital dossiers on most people in the United States.

If we want to address the real problem, we need to enact serious privacy laws, not security theater, to stop our data from being collected, analyzed, and sold—by anyone. Such laws would protect us in the long term, and not just from the app of the week. They would also prevent data breaches and ransomware attacks from spilling our data out into the digital underworld, including hacker message boards and chat servers, hostile state actors, and outside hacker groups. And, most importantly, they would be compatible with our bedrock values of free speech and commerce, which Congress’s current strategies are not.

At best, the TikTok ban considered by Congress would be ineffective; at worst, a ban would force us to either adopt China’s censorship technology or create our own equivalent. The simplest approach, advocated by some in Congress, would be to ban the TikTok app from the Apple and Google app stores. This would immediately stop new updates for current users and prevent new users from signing up. To be clear, this would not reach into phones and remove the app. Nor would it prevent Americans from installing TikTok on their phones; they would still be able to get it from sites outside of the United States. Android users have long been able to use alternative app repositories. Apple maintains a tighter control over what apps are allowed on its phones, so users would have to “jailbreak”—or manually remove restrictions from—their devices to install TikTok.

Even if app access were no longer an option, TikTok would still be available more broadly. It is currently, and would still be, accessible from browsers, whether on a phone or a laptop. As long as the TikTok website is hosted on servers outside of the United States, the ban would not affect browser access.

Alternatively, Congress might take a financial approach and ban US companies from doing business with ByteDance. Then-President Donald Trump tried this in 2020, but it was blocked by the courts and rescinded by President Joe Biden a year later. This would shut off access to TikTok in app stores and also cut ByteDance off from the resources it needs to run TikTok. US cloud-computing and content-distribution networks would no longer distribute TikTok videos, collect user data, or run analytics. US advertisers—and this is critical—could no longer fork over dollars to ByteDance in the hopes of getting a few seconds of a user’s attention. TikTok, for all practical purposes, would cease to be a business in the United States.

But Americans would still be able to access TikTok through the loopholes discussed above. And they will: TikTok is one of the most popular apps ever made; about 70% of young people use it. There would be enormous demand for workarounds. ByteDance could choose to move its US-centric services right over the border to Canada, still within reach of American users. Videos would load slightly slower, but for today’s TikTok users, it would probably be acceptable. Without US advertisers ByteDance wouldn’t make much money, but it has operated at a loss for many years, so this wouldn’t be its death knell.

Finally, an even more restrictive approach Congress might take is actually the most dangerous: dangerous to Americans, not to TikTok. Congress might ban the use of TikTok by anyone in the United States. The Trump executive order would likely have had this effect, were it allowed to take effect. It required that US companies not engage in any sort of transaction with TikTok and prohibited circumventing the ban. If the same restrictions were enacted by Congress instead, such a policy would leave business or technical implementation details to US companies, enforced through a variety of law enforcement agencies.

This would be an enormous change in how the Internet works in the United States. Unlike authoritarian states such as China, the US has a free, uncensored Internet. We have no technical ability to ban sites the government doesn’t like. Ironically, a blanket ban on the use of TikTok would necessitate a national firewall, like the one China currently has, to spy on and censor Americans’ access to the Internet. Or, at the least, it would require authoritarian government powers like India’s, which can force Internet service providers to censor Internet traffic. Worse still, the main vendors of this censorship technology are in those authoritarian states. China, for example, sells its firewall technology to other censorship-loving autocracies such as Iran and Cuba.

All of these proposed solutions raise constitutional issues as well. The First Amendment protects speech and assembly. For example, the recently introduced Buck-Hawley bill, which instructs the president to use emergency powers to ban TikTok, might threaten separation of powers and may be relying on the same mechanisms used by Trump and stopped by the court. (Those specific emergency powers, provided by the International Emergency Economic Powers Act, have a specific exemption for communications services.) And individual states trying to beat Congress to the punch in regulating TikTok or social media generally might violate the Constitution’s Commerce Clause—which restricts individual states from regulating interstate commerce—in doing so.

Right now, there’s nothing to stop Americans’ data from ending up overseas. We’ve seen plenty of instances—from Zoom to Clubhouse to others—where data about Americans collected by US companies ends up in China, not by accident but because of how those companies managed their data. And the Chinese government regularly steals data from US organizations for its own use: Equifax, Marriott Hotels, and the Office of Personnel Management are examples.

If we want to get serious about protecting national security, we have to get serious about data privacy. Today, data surveillance is the business model of the Internet. Our personal lives have turned into data; it’s not possible to block it at our national borders. Our data has no nationality, no cost to copy, and, currently, little legal protection. Like water, it finds every crack and flows to every low place. TikTok won’t be the last app or service from abroad that becomes popular, and it is distressingly ordinary in terms of how much it spies on us. Personal privacy is now a matter of national security. That needs to be part of any debate about banning TikTok.

This essay was written with Barath Raghavan, and previously appeared in Foreign Policy.

EDITED TO ADD (3/13): Glenn Gerstell, former general counsel of the NSA, has similar things to say.

Posted on February 27, 2023 at 7:06 AM

The EARN IT Act Is Back

Senators have reintroduced the EARN IT Act, requiring social media companies (among others) to administer a massive surveillance operation on their users:

A group of lawmakers led by Sen. Richard Blumenthal (D-CT) and Sen. Lindsey Graham (R-SC) have re-introduced the EARN IT Act, an incredibly unpopular bill from 2020 that was dropped in the face of overwhelming opposition. Let’s be clear: the new EARN IT Act would pave the way for a massive new surveillance system, run by private companies, that would roll back some of the most important privacy and security features in technology used by people around the globe. It’s a framework for private actors to scan every message sent online and report violations to law enforcement. And it might not stop there. The EARN IT Act could ensure that anything hosted online—backups, websites, cloud photos, and more—is scanned.

Slashdot thread.

Posted on February 4, 2022 at 9:44 AM

TikTok Can Now Collect Biometric Data

This is probably worth paying attention to:

A change to TikTok’s U.S. privacy policy on Wednesday introduced a new section that says the social video app “may collect biometric identifiers and biometric information” from its users’ content. This includes things like “faceprints and voiceprints,” the policy explained. Reached for comment, TikTok could not confirm what product developments necessitated the addition of biometric data to its list of disclosures about the information it automatically collects from users, but said it would ask for consent in the case such data collection practices began.

Posted on June 14, 2021 at 10:11 AM

AIs and Fake Comments

This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of US democracy—the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.

That’s what should happen. But as the New York report shows, it often doesn’t. The big telecommunications companies paid millions of dollars to specialist “AstroTurf” companies to generate public comments. These companies then stole people’s names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress. All of them said that they supported the corporations’ position on something called “net neutrality,” the idea that telecommunications companies must treat all Internet content equally and not prioritize any company or service. Three AstroTurf companies—Fluent, Opt-Intelligence and React2Media—agreed to pay nearly $4 million in fines.

The fakes were crude. Many of them were identical, while others were patchworks of simple textual variations: substituting “Federal Communications Commission” and “FCC” for each other, for example.
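That crudeness is exactly what makes such campaigns detectable. As a rough illustration, the sketch below normalizes away simple substitutions of the kind described above and then hashes each comment, so near-duplicates collapse onto the same fingerprint. The substitution list and sample comments are hypothetical; a real investigation would have to discover the variation patterns from the data rather than hard-code them.

```python
import hashlib
import re
from collections import Counter

# Hypothetical substitution pairs; a real investigation would have to
# discover these variation patterns from the data.
SUBSTITUTIONS = [
    ("federal communications commission", "fcc"),
]

def fingerprint(comment: str) -> str:
    """Normalize trivial variations, then hash, so near-duplicates collide."""
    text = comment.lower()
    text = re.sub(r"[^a-z0-9 ]", " ", text)          # drop punctuation
    for long_form, short_form in SUBSTITUTIONS:
        text = text.replace(long_form, short_form)    # canonicalize synonyms
    text = " ".join(text.split())                     # collapse whitespace
    return hashlib.sha256(text.encode()).hexdigest()

comments = [
    "The FCC should repeal these rules.",
    "The Federal Communications Commission should repeal these rules!",
    "I strongly support keeping the current rules.",
]
clusters = Counter(fingerprint(c) for c in comments)
print(clusters.most_common())   # the first two comments share one fingerprint
```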

Next time, though, we won’t be so lucky. New technologies are about to make it far easier to generate enormous numbers of convincing personalized comments and letters, each with its own word choices, expressive style and pithy examples. The people who create fake grass-roots organizations have always been enthusiastic early adopters of technology, weaponizing letters, faxes, emails and Web comments to manufacture the appearance of public support or public outrage.

Take Generative Pre-trained Transformer 3, or GPT-3, an AI model created by OpenAI, a San Francisco-based start-up. With minimal prompting, GPT-3 can generate convincing-seeming newspaper articles, résumé cover letters, even Harry Potter fan fiction in the style of Ernest Hemingway. It is trivially easy to use these techniques to compose large numbers of public comments or letters to lawmakers.

OpenAI restricts access to GPT-3, but in a recent experiment, researchers used a different text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be so ethical.

When the floodgates open, democratic speech is in danger of drowning beneath a tide of fake letters and comments, tweets and Facebook posts. The danger isn’t just that fake support can be generated for unpopular positions, as happened with net neutrality. It is that public commentary will be completely discredited. This would be bad news for specialist AstroTurf companies, which would have no business model if there weren’t a public they could pretend to be representing. But it would further empower other kinds of lobbyists, who at least can prove that they are who they say they are.

We may have a brief window to shore up the flood walls. The most effective response would be to regulate what UCLA sociologist Edward Walker has described as the “grassroots for hire” industry. Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties. Businesses that hire these organizations should be held liable for failures of oversight. It’s impossible to prove or disprove whether telecommunications companies knew their subcontractors would create bogus citizen voices, but a liability standard would at least give such companies an incentive to find out. This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.

This essay was written with Henry Farrell, and previously appeared in the Washington Post.

EDITED TO ADD: CSET published an excellent report on AI-generated partisan content. Short summary: it’s pretty good, and will continue to get better. Renée DiResta has also written about this.

This paper is about a lower-tech version of this threat. Also this.

EDITED TO ADD: Another essay on the same topic.

Posted on May 24, 2021 at 6:20 AM

Hiding Malware in Social Media Buttons

Clever tactic:

This new malware was discovered by researchers at Sansec, a Dutch cyber-security company that focuses on defending e-commerce websites from digital skimming (also known as Magecart) attacks.

The payment skimmer malware pulls its sleight of hand trick with the help of a double payload structure where the source code of the skimmer script that steals customers’ credit cards will be concealed in a social sharing icon loaded as an HTML ‘svg’ element with a ‘path’ element as a container.

The syntax for hiding the skimmer’s source code as a social media button perfectly mimics an ‘svg’ element named using social media platform names (e.g., facebook_full, twitter_full, instagram_full, youtube_full, pinterest_full, and google_full).

A separate decoder deployed separately somewhere on the e-commerce site’s server is used to extract and execute the code of the hidden credit card stealer.

This tactic increases the chances of avoiding detection even if one of the two malware components is found since the malware loader is not necessarily stored within the same location as the skimmer payload and their true purpose might evade superficial analysis.
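As a rough illustration of how a defender might hunt for this specific trick, the sketch below scans an HTML page for svg elements named after social platforms (the names come from the description above) and flags any whose path data is unusually large. The attribute choices and size threshold are assumptions; this is a heuristic sketch, not a reconstruction of Sansec’s detection method.

```python
from html.parser import HTMLParser

# Element names taken from the campaign described above; the size threshold
# and attribute choices are assumptions for this heuristic sketch.
SOCIAL_NAMES = ("facebook_full", "twitter_full", "instagram_full",
                "youtube_full", "pinterest_full", "google_full")
MAX_PLAUSIBLE_PATH = 2_000   # real icon paths are usually far shorter

class SvgSkimmerHeuristic(HTMLParser):
    """Flag social-icon <svg> elements whose <path> data is suspiciously large,
    which is where the double-payload skimmer hides its source code."""
    def __init__(self):
        super().__init__()
        self.findings = []
        self._in_suspect_svg = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        name = ((attrs.get("id") or "") + " " + (attrs.get("class") or "")).lower()
        if tag == "svg" and any(n in name for n in SOCIAL_NAMES):
            self._in_suspect_svg = True
        elif tag == "path" and self._in_suspect_svg:
            data = attrs.get("d") or attrs.get("value") or ""
            if len(data) > MAX_PLAUSIBLE_PATH:
                self.findings.append(
                    f"oversized path ({len(data)} chars) inside a social-icon svg")

    def handle_endtag(self, tag):
        if tag == "svg":
            self._in_suspect_svg = False

# Hypothetical usage: audit a saved checkout page.
scanner = SvgSkimmerHeuristic()
scanner.feed(open("checkout.html", encoding="utf-8").read())
print(scanner.findings or "nothing flagged")
```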

Posted on December 7, 2020 at 6:32 AM

Using Disinformation to Cause a Blackout

Interesting paper: “How weaponizing disinformation can bring down a city’s power grid”:

Abstract: Social media has made it possible to manipulate the masses via disinformation and fake news at an unprecedented scale. This is particularly alarming from a security perspective, as humans have proven to be one of the weakest links when protecting critical infrastructure in general, and the power grid in particular. Here, we consider an attack in which an adversary attempts to manipulate the behavior of energy consumers by sending fake discount notifications encouraging them to shift their consumption into the peak-demand period. Using Greater London as a case study, we show that such disinformation can indeed lead to unwitting consumers synchronizing their energy-usage patterns, and result in blackouts on a city-scale if the grid is heavily loaded. We then conduct surveys to assess the propensity of people to follow-through on such notifications and forward them to their friends. This allows us to model how the disinformation may propagate through social networks, potentially amplifying the attack impact. These findings demonstrate that in an era when disinformation can be weaponized, system vulnerabilities arise not only from the hardware and software of critical infrastructure, but also from the behavior of the consumers.

I’m not sure the attack is practical, but it’s an interesting idea.
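To see the mechanism the abstract describes, here is a toy model in Python. All the numbers—peak demand, capacity, shiftable load, compliance rates—are invented; the point is only that a modest compliance rate with a fake discount notification could push a heavily loaded grid past capacity.

```python
# Toy model of the attack in the paper: consumers who believe a fake
# "discount" notification shift some discretionary load into the peak hour.
# All numbers are invented for illustration.

baseline_peak_gw = 9.0    # assumed normal peak demand
capacity_gw      = 10.0   # assumed available generation
shiftable_gw     = 3.0    # assumed discretionary load that could be moved

for compliance in (0.1, 0.3, 0.5):          # fraction who follow the fake offer
    new_peak = baseline_peak_gw + compliance * shiftable_gw
    status = "BLACKOUT RISK" if new_peak > capacity_gw else "ok"
    print(f"compliance {compliance:.0%}: peak {new_peak:.1f} GW -> {status}")
```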

Posted on August 18, 2020 at 10:03 AM

