
The FCC Proposes Requirements for Disclosures About the Use of Artificial Intelligence in Political Ads – Looking at Some of the Many Issues for Broadcasters

August 9, 2024

By: David Oxenford, Wilkinson Barker Knauer

On July 25, the FCC released a Notice of Proposed Rulemaking, first announced by the FCC Chairwoman three months ago (see our article here), proposing to require that the use of artificial intelligence in political advertising be disclosed when it airs on broadcast stations, local cable systems, or satellite radio or TV. The proposal was controversial even before the details were released, with many (including the Chair of the Federal Election Commission and some in Congress) questioning whether the FCC has the authority to adopt rules in this area, and whether it would be wise to adopt rules so close to the upcoming election (the Chairwoman had indicated an interest in completing the proceeding so that rules could be in place before November’s election). The timing of the NPRM’s release seems to rule out any new rules becoming effective before this year’s election (see below), and the NPRM itself asks whether the FCC’s mandate to regulate in the public interest and other specific statutory delegations of power are sufficient to cover regulation in this area. So the NPRM asks these fundamental questions, along with many basic questions about how any obligation adopted by the Commission would actually work.

The FCC is proposing that broadcasters and the other media it regulates be required to transmit an on-air notice (either immediately before, after, or during a political ad) to identify an ad that was created in whole or in part using AI.  In addition, broadcasters and other media subject to the rule would need to upload a notice to their online public files identifying any political ads that were created using AI.  The NPRM sets forth many questions for public comment – and also raises many practical and policy issues that will need to be considered by the FCC and the industry in evaluating these proposals.

Before getting to the specifics, it is important to note the context in which this proceeding arises. As we have noted before, many states have considered, and by our last count 19 have adopted, statutes or rules dealing with AI in political advertising (the most recent being New Hampshire, with a bill signed by its governor in mid-July). The FCC’s NPRM notes that there are state actions (though it indicates that only 11 states have rules, citing a tracker last updated in April), but it does not seem to consider many of the difficult issues that were weighed and addressed in those state laws – issues the NPRM treats in a broad and often perfunctory manner.

One of the lessons learned from dealing with many of these state legislative efforts to regulate AI in political advertising is that the most important party to be covered by these laws is the party who creates the ad. Only the producer of the ad truly knows whether it contains artificial intelligence or other deceptive audio or video content. While some states have made some attempt to impose a degree of responsibility on broadcasters and other media who distribute these ads (in these limited cases, generally requiring that media companies have policies against the use of unlabeled AI), most states have exempted media companies, including broadcasters, from any liability for deceptive AI used in paid advertising, for the simple reason that the media does not know whether or not an ad contains AI and, even when a media company receives a complaint about the alleged use of AI in a political ad, there is no foolproof way to quickly determine whether AI was in fact used. Even the means that do exist to evaluate media content for the use of AI routinely take weeks if not months to reach any results (far too long to be helpful in the fast-paced political buying world), and they need to be applied by experts who are not routinely on the payroll of broadcast stations (see our article here noting that, even in situations where law enforcement is involved, the evaluation can take months for even simple AI created by unskilled players using readily available free technology).

States have, in the vast majority of cases, placed the burden of adding disclaimers, and the penalties for not doing so, solely on the creator of the political message, not on the media company transmitting it. Why does the FCC not take that approach? Principally because the FCC does not have jurisdiction over the creators of political advertising, only over some of the media players who distribute it (those licensed or otherwise regulated by the FCC). In its haste to do something about AI in political advertising, the FCC has advanced proposals that cannot reach those who have the real power to limit undisclosed AI in political advertising – the creators of those AI ads – and instead proposes to impose burdens on those who have little power or ability to police the use of AI.

The final contextual point is one made in the dissents filed to these proposals by the Republican commissioners. The labeling requirements proposed by the FCC, even if they could be enforced by the broadcasters and other media companies covered by them, miss so much of the media universe that they may end up causing more confusion, not less. The FCC can only regulate broadcasters, satellite radio and TV, and the content (mostly ads) developed by local cable systems (and the FCC’s authority over those systems is statutorily limited). Online media – more and more becoming the focus of political content – is beyond FCC regulation, including streaming media that looks and sounds like that provided by broadcasters. Because this media is totally outside the FCC’s jurisdiction, the same ad could run on a broadcast station with a disclaimer and on an online platform without one. That certainly does not provide a consistent warning to consumers.

Perhaps because of this context, the FCC proposal is relatively limited – requiring that the broadcaster ask advertisers whether they are using AI in political ads, and then making sure that, for those ads voluntarily identified by the advertiser as containing AI, a disclaimer disclosing the use of that AI is included in the ads as broadcast (and noted in the broadcaster’s public file). While that limited proposal seems straightforward, the devil is in the details. The FCC asks about some of those details. First, it asks what the broadcaster should do if the advertiser does not agree to certify one way or the other. Is the broadcaster prohibited from running the ad without a certification? While the FCC does not ask, if the advertiser is a federal candidate who refuses to certify, can the broadcaster refuse to run the ad consistent with its obligations under Section 312 of the Communications Act to give reasonable access to all federal candidates and the Act’s prohibition in Section 315 on broadcasters censoring a candidate’s message?

The FCC seems to gloss over the concern that requiring disclosure of the use of AI in candidate ads appears to run contrary to the statutory requirement that broadcasters and local cable providers not censor a candidate’s ad (see our articles here and here for more on the “no censorship” requirement). The FCC points to cases saying that content-neutral disclaimers may be permissible – but is a disclaimer saying that the following ad may contain content that is not real really content neutral? Given all the scary publicity that AI has received, we expect that most audience members would find that such a disclaimer raises serious questions about the reliability of the content of the ad.

The FCC also asks what a broadcaster should do if it has a certification claiming that no AI was used in an ad and then receives a “credible” objection claiming that AI was in fact used. This is the hardest question to answer because, as noted above, broadcasters may simply have no way to confirm whether or not AI was used. If a broadcaster cannot run an ad when someone objects that it contains AI, you can bet that we will see many, many ads accused of containing AI, putting the broadcaster in the impossible position of trying to decide whether AI was in fact used in generating the ad.

That really leads to one of the fundamental questions raised by this rulemaking – when is an ad considered to have been created by AI such that it needs to contain the required disclaimer? The FCC’s proposed rules define AI-generated content broadly – as follows:

as an image, audio, or video that has been generated using computational technology or other machine-based system that depicts an individual’s appearance, speech, or conduct, or an event, circumstance, or situation, including, in particular, AI-generated voices that sound like human voices, and AI generated actors that appear to be human actors.

Unlike most state rules, the FCC proposal does not require that the content be “materially deceptive” or even that it depict a real person or event. Because “computational technology or other machine-based systems” are used in all sorts of production techniques, particularly in video production, the rule could cover a candidate standing in front of a greenscreen that inserts background images not actually present in the studio, or even techniques that enhance sound quality, balance brightness, or adjust other video elements without materially changing what real people did or said. Does all of that really require a disclaimer that will suggest to the public that the ad is projecting some fake image?

There are all sorts of other details in the NPRM on which there are likely to be many comments. The concern expressed when this proposal was first introduced – that these rules could come into play in the middle of the upcoming election – is now much less likely to materialize. With the NPRM just released last week, it still needs to be published in the Federal Register to start the comment period: 30 days from publication for comments, 45 days from publication for reply comments. How the many complex technical issues raised by this NPRM can be adequately addressed in that short comment period is a fair question – but more importantly, even if the NPRM were published in the Federal Register this week (and it is not scheduled for publication tomorrow), final reply comments would not be filed until the latter part of September. Even if the FCC were to adopt rules the day after comments were filed (an impossibility given the Administrative Procedure Act’s requirement that the FCC meaningfully consider and fully evaluate the comments in making any decision), the rules would have to be published again in the Federal Register, and the paperwork requirements would need to be sent to the Office of Management and Budget for review for compliance with the Paperwork Reduction Act (see our article here on the FCC’s decision to extend its foreign government sponsored programming verification requirements to issue ads, the implementation of which is currently stalled by the need for OMB approval). Thus, even though this proposal is important in dealing with political ads in the future, it is almost certain not to apply to ads running in connection with the November election.

Even though any new rules will not be effective in time for this year’s election, this is still an important proceeding. Broadcasters and other media regulated by the FCC should consider filing comments.

David Oxenford is MAB’s Washington Legal Counsel and provides members with answers to their legal questions with the MAB Legal Hotline. Access information here. (Members only access). There are no additional costs for the call; the advice is free as part of your MAB membership.
