FCC Chair Proposes AI Disclosure Rules for Political and Issue Advertisements

May 30, 2024 | Luke Wachob

People United for Privacy recently warned policymakers and the public about legislation in the U.S. Senate that would expand restrictions on campaign and issue advocacy under the guise of addressing concerns about the use of artificial intelligence in political advertising. Of particular concern to PUFP is the impact these proposals would have on nonprofits that advocate on public policy issues rather than elections and candidates.

Unfortunately, the Senate is not the only source of the threat. On May 22, Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel, a Democratic appointee, announced a proposal to initiate a rulemaking that would require groups running political and issue ads to disclose the use of AI-generated content both on-air and in government filings. While different in some respects from the Senate proposals, the FCC plan raises many of the same concerns for nonprofit advocacy and free speech.

According to the Chairwoman’s release, “If adopted, this proposal aims to increase transparency by:

  • Seeking comment on whether to require an on-air disclosure and written disclosure in broadcasters’ political files when there is AI-generated content in political ads,
  • Proposing to apply the disclosure rules to both candidate and issue advertisements,
  • Requesting comment on a specific definition of AI-generated content, and
  • Proposing to apply the disclosure requirements to broadcasters and entities that engage in origination programming, including cable operators, satellite TV and radio providers and section 325(c) permittees.”

Because the proposed disclosure rules for AI-generated content would apply to both candidate and issue ads, numerous nonprofit groups that do not participate in any activities to support or oppose candidates would nonetheless be affected. The reporting and disclaimer requirements would create additional burdens for these nonprofits, including grassroots organizations, that are simply attempting to convey a message to the public about a policy issue. Moreover, the informational benefits of a disclaimer for AI-generated content are dubious at best, and disclaimers, by their very nature, displace part of the speaker’s message with speech mandated by the government.

Adding to the concern is the partisan structure of the FCC. The agency has only five members, three of whom are currently Democrats. By contrast, the Federal Election Commission (FEC) – the agency created to enforce campaign finance regulations – has six members, and no party may hold a majority of its seats. The FEC’s independent, bipartisan structure and its mandate to enforce campaign finance laws make it a more appropriate forum for hashing out regulations that touch so closely on First Amendment rights and political activity. Indeed, the FEC is currently wrestling with whether to amend its existing disclaimer regulations to require a disclosure for AI-generated content in political ads.

Perhaps anticipating this objection, the FCC Chair’s announcement couches the proposal in the authority bestowed upon the agency by the McCain-Feingold campaign finance law: “The Bipartisan Campaign Reform Act provides the Commission with authority regarding political advertising. There is also a clear public interest obligation for Commission licensees, regulatees, and permittees to protect the public from false, misleading, or deceptive programming and to promote an informed public – and the proposed rules seek to achieve that goal.”

Yet, not everyone at the FCC agrees with that claim. Commissioner Brendan Carr, a Republican appointee, released a statement opposing the proposal and identifying the FEC as the appropriate agency to lead on such matters. As Commissioner Carr explains:

“Congress has not given the FCC the type of freewheeling authority over these issues that would be necessary to turn this plan into law. And for good reason. The FCC can only muddy the waters…

Applying new regulations to candidate ads and issue ads but not to other forms of political speech just means that the government will be favoring one set of speakers over another. And applying new regulations on the broadcasters the FCC regulates but not on their largely unregulated online competitors only exacerbates regulatory asymmetries. All of this confirms that the FCC is not the right entity to consider these issues.”

The fate of the proposal remains to be seen. What is clear is that there is an organized effort underway to increase restrictions on the use of artificial intelligence in political and issue advertising – at the FCC, the FEC, and in the Senate. Policymakers must remember that groups besides candidates, political parties, and super PACs have rights at stake in these discussions. Nonprofits that promote ideas about government, legislation in Congress, proposals in regulatory agencies, and legal issues before the courts may also use AI tools to help craft their messages to the public.

Regulators should exercise caution when considering regulations to govern the nascent role of AI in political speech. That is especially true for agencies like the FCC that lack the proper authority and structural safeguards to implement a fair, effective, and constitutional policy.