AI Regulation in Campaigns and Elections: 2024 and Beyond

 

Graphics by Taylor Whirley

Artificial intelligence (AI) is capable of doing your homework, giving advice through chatbots, performing facial recognition, creating covers of any song in any real artist’s voice, training autonomous vehicles, and so much more. But what happens when these remarkable capabilities are unleashed into a turbulent and vulnerable political climate?

AI has the potential to become a virulent force in both American and global politics, especially generative AI, which can work in tandem with traditional predictive AI to create personalized content at scale. With the 2024 election cycle looming, this is a particularly salient issue. A brief examination of the current state of AI capabilities makes clear why AI regulation, particularly within campaigns and elections, is so desperately needed. While many differing calls for regulation exist, all should share the goal of protecting voters from unknowingly consuming AI-generated content in future elections. Maintaining fairness and transparency around AI-generated content, misinformation, and disinformation should be paramount in any regulatory effort.

Effective legislative regulation of AI used for political purposes is urgently needed. The focus here is on what federal, state, and local governments can do through legislation to regulate the use of AI for political gain in the United States specifically. The most feasible and impactful measures include AI disclosure policies. Potential also exists in certain states and localities to regulate generative AI’s political speech as it relates to disinformation. Each of these options is explored below in search of ways AI might successfully be regulated to promote free and fair elections.

Both the use and accessibility of AI have increased dramatically in recent months. To understand why regulation matters, it is crucial to identify AI’s current and evolving capabilities, specifically as they relate to campaigns and elections. The rapid proliferation of chatbots within search engines and social media will make the 2024 election cycle the first in which large numbers of voters routinely absorb information actually produced by AI.

We are currently living in an infodemic: an unyielding bombardment of fast-spreading political information, misinformation, and disinformation. A study published in Science Advances found that generative AI models such as GPT-3 inform and misinform more effectively than humans do. To conduct the study, the research team mimicked the prominent social media platform formerly known as Twitter (now X), which is used by more than 368 million monthly active users who consume news and political information. They chose Twitter not only for the platform’s popularity but also for its ratio of bot-generated content: roughly 5% of Twitter accounts are bots, yet the content they generate accounts for 29% of all content posted on the app. Other social media platforms that share these characteristics are particularly vulnerable to facilitating the spread of AI-generated disinformation. When researchers asked participants to rate the accuracy of tweets written by real Twitter users and tweets generated by GPT-3, participants consistently found the AI-generated content more believable than the human-generated content. Generative AI can clearly produce remarkably convincing disinformation. Another alarming result of the study: humans could not meaningfully or reliably distinguish between tweets written by real Twitter users and those generated by GPT-3. These unsettling findings carry even more disconcerting implications for upcoming campaigns and elections.
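
To put those bot figures in perspective, a back-of-the-envelope calculation shows how much more prolific a typical bot account is than a typical human account. This is a minimal sketch: the 5% and 29% figures come from the study cited above, while the assumption that posting volume is uniform within each group is a simplification for illustration.

```python
# Back-of-the-envelope estimate of per-account bot amplification on Twitter.
# Assumes posting volume is uniform within each group (a simplification).

bot_user_share = 0.05      # ~5% of accounts are bots (figure cited above)
bot_content_share = 0.29   # ...yet they produce ~29% of all content

# Average output per bot account, relative to the overall average account
posts_per_bot = bot_content_share / bot_user_share                 # 5.8
# Average output per human account, relative to the overall average
posts_per_human = (1 - bot_content_share) / (1 - bot_user_share)   # ~0.75

amplification = posts_per_bot / posts_per_human
print(f"A typical bot posts roughly {amplification:.1f}x as much "
      f"as a typical human account.")
# -> A typical bot posts roughly 7.8x as much as a typical human account.
```

Under these assumptions, each bot account produces nearly eight times the content of an average human account, which is why a small fraction of automated accounts can dominate what users actually see.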

During a May 2023 U.S. Senate hearing, OpenAI CEO Sam Altman, one of the foremost developers of AI technology, described his concern that people could use generative AI and other language models to manipulate and persuade voters through one-on-one interactions. One such use of generative AI is the creation of deepfakes: fabricated videos in which someone’s likeness is placed into false, misleading, or otherwise ridiculous circumstances and statements. In one widely circulated example, users created a deepfake video in which Sen. Elizabeth Warren appears to declare that Republicans shouldn’t be able to vote. As AI capabilities have evolved, convincing deepfakes can now be created by almost anyone within seconds and at little to no cost, and deepfake audio can clone anyone’s voice from fairly little initial data. These deepfakes could create viral moments centered on artificial scandals and controversies. Left unlabeled, a viral deepfake could fool millions of voters and potentially sway their votes during an election season.

Generative AI uses existing data as training data to improve its models, and it cannot distinguish fact from fiction, propaganda from policy, or disinformation from truth. Current training data is rife with election disinformation, which generative AI can amplify dramatically in 2024 and beyond. The intentional, malicious use of AI online can create myriad other problems for free and fair elections. For example, a bad actor could create the illusion of widespread belief in untrue election narratives, release manipulative chatbots programmed for persuasion to millions of users, or use AI to send representatives falsified comments from nonexistent constituents. A particularly jarring aspect of some AI technology is its “black box” nature. Black boxes are “AI systems with internal workings that are invisible to the user”: a user can receive output from the technology but cannot inspect the code or strategies used to produce it. Black box AI technologies continue to proliferate rapidly and pose unique challenges to any attempt at legislative regulation.

The United States’ particularly high rates of political polarization make the regulation of AI an even more pressing issue. To win an election, especially at the federal level, campaigns must stay hyperfocused on the small subset of undecided voters. Using the vast amounts of available data, AI could easily identify those who have yet to make up their minds and tailor their advertising and chatbot experiences to give a campaign the greatest chance of achieving its desired outcome. Since the 2024 presidential election may well be decided by just tens of thousands of votes in a handful of states, every undecided vote targeted by AI will be crucial in determining the next U.S. President. What’s more, the ease and efficiency of new AI technologies mean that almost anyone can become a political influencer and a persuasive force in an election.

AI capabilities are evolving at an incredible and unprecedented rate. With this in mind, it is not hard to imagine a future in which political campaigns deploy AI technology with the sole objective of changing people’s voting behavior in favor of their candidate. Such technology would have no way of differentiating true from false or ethical from unethical, and it would pursue whatever path necessary to maximize vote share. It could devise strategies ranging from projecting distasteful advertisements whenever an opponent’s messaging appears on screen to manipulating social media feeds to portray a skewed sense of which candidate one’s friends, family, and peers support.

Now consider the possibility that the competing Republican and Democratic presidential campaigns each deploy such an AI with the lone goal of garnering the most votes for their candidate. The outcome of an entire national election could then be a direct result of which political party possesses the more effective AI tools. In that world, it would not be a stretch to say that the most persuasive machine, rather than the best candidate, had won the election. Democracy itself would be diminished, and the election would be neither entirely free nor fair: voters’ decisions would ultimately reflect the manipulation of AI technology more than their own freely formed political opinions.

Because AI technologies are relatively new, all efforts to legislate AI are fairly recent. One example is the California Privacy Rights Act, approved by California voters in November 2020 as an amendment to the California Consumer Privacy Act. It legislates data privacy as it relates to AI, an important step toward diminishing the effectiveness of targeted political ads and other AI persuasion tactics, and it introduces additional limitations on data retention, sharing, and the use of personal information. Similarly, the Vermont State Legislature passed a bill on the use and oversight of AI in state government. The bill created a Division of Artificial Intelligence responsible for reviewing “all aspects of artificial intelligence developed, employed, or procured by the state government” and for creating a code of ethics for the state government’s use of AI.

AI technology only recently began to occupy significant space on legislative agendas. As its capabilities have grown, so have attempts to regulate it at the federal, state, and local levels. Mentions of AI in U.S. Committee Reports rose from fewer than ten in the 114th Congress (2015–2016) to over 75 in the 117th (2021–2022). As seen in Figure 1, historically only two percent of federal AI bills were passed into law; by 2022, that figure had jumped to 10 percent. At the state level that same year, 35 percent of all AI bills were passed into law. The growing prominence of AI legislation signals a widening window of opportunity for legislative regulation.

Figure 1

When considering the types of regulation federal, state, and local governments ought to pursue to protect free and fair elections, disclosure of AI use and increased transparency are paramount. Real-world examples already exist of large private companies instituting transparency and AI-identification policies. In September 2023, social media heavyweight TikTok introduced a synthetic media policy requiring users to label “AI-generated content that contains realistic images, audio or video, in order to help viewers contextualize the video and prevent the potential spread of misleading content.” TikTok is also developing technology to detect and automatically label such videos as “AI-generated,” wording that MIT professor David Rand calls the “most effective labeling language across demographic groups globally.” Similarly, in November 2023, Microsoft released a set of election protection commitments. Rather than regulating the use of AI, Microsoft aims to protect elections by helping candidates and campaigns develop a unique digital watermark for all sponsored content, so that any false, misleading, or misrepresented material lacking the watermark can be readily identified as not belonging to the campaign. These watermarks can also be attached to images and videos to show when, how, and by whom content was created or altered, ensuring that voters have transparent provenance information.
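
To illustrate the underlying idea, a campaign watermark can be thought of as a cryptographic tag over a piece of content: anyone holding the campaign’s verification key can check whether a video or image actually originated, unaltered, from the campaign. The sketch below is not Microsoft’s actual implementation, whose details are proprietary; the key, function names, and use of an HMAC are assumptions made to keep the example self-contained (real provenance systems typically sign a structured manifest with public-key cryptography).

```python
import hashlib
import hmac

# Minimal sketch of a campaign "digital watermark" as a cryptographic tag.
# A real system would sign a structured provenance manifest with public-key
# cryptography; an HMAC is used here only for a self-contained example.

CAMPAIGN_KEY = b"appleseed-for-senate-secret-key"  # hypothetical signing key


def watermark(content: bytes) -> str:
    """Produce a tag that only the campaign's key holder could generate."""
    return hmac.new(CAMPAIGN_KEY, content, hashlib.sha256).hexdigest()


def verify(content: bytes, tag: str) -> bool:
    """Check that content carries a valid campaign tag (i.e., is unaltered)."""
    return hmac.compare_digest(watermark(content), tag)


ad = b"Official campaign video bytes..."
tag = watermark(ad)

print(verify(ad, tag))                           # True: authentic, unaltered
print(verify(ad + b" [deepfaked frame]", tag))   # False: altered content fails
```

The key property is the second check: any alteration after creation, such as a spliced-in deepfake frame, invalidates the tag, which is how the absence of a valid watermark flags content as not belonging to the campaign.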

A vital aspect of the generative AI regulation conversation centers on the First Amendment. If a proposed regulation would restrict what political content AI can generate, it must contend with the fierce debate over whether generative AI enjoys First Amendment protection. Some academics argue that the Amendment’s language should not be interpreted to cover “the freedom of speech by machines” alongside “the freedom of speech by humans.” Others counter that the content produced by generative AI is a form of speech belonging to the AI’s parent company. Under the Supreme Court’s current First Amendment doctrine, any government regulation of generative AI’s content would have to fit within the narrow tolerance for regulation of speech, since the government “shall make no law” abridging a parent company’s expression. Without a significant shift in First Amendment doctrine, AI technology will enjoy the same political speech protections that you and I do.

Yet some scholars push back, arguing that this reading requires the First Amendment to prevent our vulnerable democracy from protecting itself against generated, or “replicant,” speech. A possible restriction on AI political speech has arisen in the European Union, grounded in the genuine government interest in preserving democracy: the European Parliament’s Artificial Intelligence Act designates “AI systems to influence voters in campaigns” as “high risk” and subject to regulatory scrutiny. Ultimately, the arguments both for and against First Amendment rights for AI are compelling, and the EU’s “high-risk” designation finds a compromise between the competing positions. Although the U.S. is beholden to the Constitution, there may be room for a similar middle ground if the issue comes before the Supreme Court in the future. Given this complexity and uncertain feasibility, the regulations proposed below do not involve restrictions on AI political speech that would violate the First Amendment.

The examination of AI’s capabilities in politics makes the case for urgently needed legislation to help control the rampant spread of political disinformation. Ahead of upcoming election cycles, legislatures should work toward passing legislation that requires transparency and identification of AI used for political purposes. At the state level, frameworks and bill proposals are already taking shape. New York’s pending Disclosure of the Use of Artificial Intelligence bill would require that AI used for political communications be disclosed. California adopted a rule requiring AI chatbots to identify themselves as automated rather than human. The most transparent version of such disclosure might look something like: “This AI-generated advertisement was paid for by the Johnny Appleseed for Senate Committee, because [AI company] has predicted that it will increase your chances of voting for Johnny by [X%].” This approach is consistent with Leerssen et al.’s (2019) suggestion that the public should have access to the same information as AI ad buyers, specifically why a viewer is being targeted and which other groups are being targeted.
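
As a minimal sketch of how an ad-serving system might render such a mandated label, the snippet below fills in the template above from structured fields. The function name, field names, and example values (the AI company and predicted percentage) are hypothetical, invented for illustration rather than drawn from any bill.

```python
# Illustrative sketch: composing the kind of disclosure label proposed above.
# Field names and example values are hypothetical, not from any statute.

def disclosure_label(sponsor: str, ai_vendor: str,
                     predicted_lift_pct: float) -> str:
    """Render a voter-facing disclosure for an AI-generated political ad."""
    return (
        f"This AI-generated advertisement was paid for by {sponsor}, "
        f"because {ai_vendor} has predicted that it will increase your "
        f"chances of voting for the sponsored candidate "
        f"by {predicted_lift_pct:.0f}%."
    )


print(disclosure_label("the Johnny Appleseed for Senate Committee",
                       "ExampleAI",   # hypothetical AI company
                       12.0))         # hypothetical predicted lift
```

The point of structuring the label this way is that every element the public sees (sponsor, vendor, and predicted effect) maps to a field the ad buyer already possesses, matching the symmetry of information Leerssen et al. recommend.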

At the federal level, a bipartisan bill was introduced in September 2023 by Sens. Klobuchar (D-MN), Collins (R-ME), Hawley (R-MO), and Coons (D-DE) to amend the Federal Election Campaign Act of 1971. The amendment, the Protect Elections from Deceptive AI Act, would prohibit the use of AI to generate deceptive content falsely depicting federal candidates in political ads in order to influence federal elections. Earlier, the Deep Fakes Accountability Act of 2019 failed to pass, but it attempted to establish a regulatory framework for watermarking and disclosing “false impersonations with altered audio or visual elements.” Under that bill, disclosure would have included a verbal statement that audio or visual elements were altered, as well as an unobscured written description of the alteration displayed throughout the duration of the video.

To be widely and successfully implemented at the federal, state, and local levels, AI regulation must include cooperation and involvement from multiple stakeholders. If agencies, technology companies, and think tanks all adopt similar policies of disclosure, transparency, and AI risk assessment, voters can trust that free and fair elections are being maintained in the face of expansive AI capabilities. Lawmakers might also require algorithmic impact assessments for government AI systems, including independent audits of any AI systems used in election administration or in offices actively running elections. If AI companies were required to release the datasets and sources on which their models are trained, particularly for election-targeting ads, independent fact-checking could readily take place and black box models would become far less opaque. This option is not entirely feasible, as developers want to keep their competitive edge, but it would help defend election integrity in future cycles. The Cybersecurity and Infrastructure Security Agency (CISA) could monitor AI, and specifically harmful AI-generated political disinformation, much as the National Science Foundation’s AI Institute for Agent-based Cyber Threat Intelligence and Operation does for cyber threats. The executive branch could likewise develop AI detection tools for federal, state, and local election offices, investing in detection efforts to stay competitive in the race between tools that generate disinformation and tools that accurately detect AI-generated content. With these efforts in place, voters would have increased confidence in free and fair elections, as well as a foundation of transparency on which to understand their relationship with politically oriented AI.
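
To make the disclosure-audit idea concrete, here is a minimal sketch of an automated compliance check that an election office or independent auditor might run over a feed of political ad records. The record format, required fields, and example data are assumptions invented for this illustration; an actual audit framework would be defined by statute or agency rule.

```python
# Minimal sketch of an automated disclosure-compliance check for political ads.
# The record format and required fields are hypothetical, invented here;
# a real audit framework would be defined by statute or agency rule.

REQUIRED_FIELDS = ("sponsor", "ai_generated",
                   "disclosure_text", "training_data_source")


def audit_ad(ad: dict) -> list[str]:
    """Return a list of compliance violations for a single ad record."""
    violations = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in ad]
    if ad.get("ai_generated") and not ad.get("disclosure_text"):
        violations.append("AI-generated ad lacks a voter-facing disclosure")
    return violations


ads = [
    {"sponsor": "Committee A", "ai_generated": True,
     "disclosure_text": "This AI-generated ad was paid for by Committee A.",
     "training_data_source": "public voter files (disclosed)"},
    {"sponsor": "Committee B", "ai_generated": True},  # non-compliant example
]

for i, ad in enumerate(ads):
    for violation in audit_ad(ad):
        print(f"ad {i}: {violation}")
```

Even a check this simple shows why mandated, machine-readable disclosure fields matter: once the fields exist, flagging non-compliant ads becomes routine automation rather than manual investigation.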


With these examples and proposals in mind, it is clear that many viable legislative pathways exist for regulating AI in campaigns and elections. Regardless of the particulars, effective local, state, and federal AI regulation should involve collaboration among academia, industry, policy experts, and government agencies. As the policy window widens and AI capabilities become increasingly pervasive, AI regulation remains a promising avenue for protecting our elections.

 