Senate Bills Exploit AI Concerns to Push Unrelated Speech Restrictions

May 14, 2024 | Brian Hawkins

Artificial intelligence (AI) is a leading topic in national discourse, and just as election season heats up, Congress is inserting itself into the conversation with ill-conceived and problematic legislation that ostensibly regulates AI but, more harmfully, chills both political and issue speech, including speech by nonprofits.

On Wednesday, May 15, the U.S. Senate Committee on Rules and Administration will hold a Business Meeting to consider two AI-related bills, the “AI Transparency in Elections Act of 2024” (S. 3875) and the “Protect Elections from Deceptive AI Act” (S. 2770). Don’t let the aspirational titles fool you. Both proposals take advantage of the interest in (and confusion about) the application of AI in advertising to enact policies that have no effect other than to chill speech while burdening speakers.

The AI Transparency in Elections Act of 2024 is worth particular attention for its cynicism. On its surface, S. 3875 requires paid and unpaid political and issue advertisements depicting federal candidates that use AI-generated “text, images, audio, video, or other media” to display a disclaimer informing viewers of the use of AI. The mandatory disclaimer is controversial enough on its own, but the legislation also quietly expands the electioneering communications window in federal law to an absurd degree.

Under federal law, an “electioneering communication” is a broadcast, cable, or satellite communication that mentions a federal candidate – but does not advocate for or against that candidate – and that is targeted toward the relevant electorate within 30 days of a primary election or 60 days of a general election. Because of these criteria, the electioneering communications rules necessarily regulate speech about policy issues that mentions elected officials, many of whom are also candidates for federal office. Organizations that produce electioneering communications must file reports with the Federal Election Commission and disclose any supporters who earmark their contributions for the ad in question.

S. 3875 broadens the existing electioneering communications window for ads produced with AI to the period from 120 days before the primary through the date of the general election. In effect, the bill extends federal regulation of unpaid issue speech online and elsewhere to the entirety of an election year (other than the seven weeks following Election Day), dramatically increasing the type and amount of speech that is ultimately subject to reporting and disclosure.

The mandatory disclaimer similarly imposes its own burdens on speech. The legislation prescribes a lengthy script informing the viewer that the speaker “used artificial intelligence to generate the contents of this communication.” This script must be conveyed both as on-screen text and as an audio recording. This compelled speech crowds out the message of the speaker responsible for the advertisement, forcing the speaker to express the government’s message at the expense of their own.

Ample academic research suggests that disclaimers have limited value. Viewers do not change their opinion of an advertisement based on the information displayed in a disclaimer, and the disclosed information has little practical value for the viewer – a dynamic researchers have dubbed “junk disclosure.”

Taken together, S. 3875 broadens the universe of political and issue speech requiring a burdensome disclaimer that will swallow a speaker’s message and substitute the government’s, all while providing limited value, at best, for the audience.

Unfortunately, the Protect Elections from Deceptive AI Act is arguably worse for its attempt to prohibit an even broader swath of speech. The legislation would ban the publication of any “deceptive AI-generated audio or visual media,” as defined, that relates to a candidate for federal office. The definition of “deceptive AI-generated audio or visual media” is broad enough to cover memes and common editing software, effectively criminalizing an unfathomable array of online content discussing elected officials. A more appropriate name for this bill would be “the Protect Members of Congress from Memes Act.” In a revealing twist, S. 2770 also creates a private right of action allowing elected officials who are the subject of prohibited AI-generated content to sue for unlawful use of their image.

People United for Privacy has previously expressed concern about another proposal in Congress, the “REAL Political Advertisements Act” (S. 1596), which was nothing more than a dishonest attempt to exploit concerns about emerging technology to increase government control over political and issue speech online by bootstrapping the perennially rejected “Honest Ads Act” onto ongoing conversations about AI. Fortunately, that objectionable bill was shelved, but these latest bills are cut from the same cloth. Lawmakers would do well to better understand the technology they seek to regulate and to tread cautiously before upending existing law to regulate a new universe of speech.

The U.S. Senate Committee on Rules and Administration will debate both pieces of legislation at a Business Meeting on Wednesday, May 15 at 10:00 AM Eastern. Let’s hope Members of the Committee come to their senses and send these bills to the Recycle Bin, where they belong.