The US Federal Communications Commission (FCC) has proposed that all political ads aired on television and radio disclose whether they feature AI-generated content.
The FCC's proposal was prompted by the realization that candidates and parties could tap artificial intelligence (AI) to create political advertisements, and the Commission wants any such use disclosed.
FCC Proposes Labeling of AI-Generated Political Ads
The Commission published the notice on Wednesday the 22nd, nearly three months after an AI-generated robocall targeted voters in New Hampshire.
Under the new FCC proposal, parties airing political ads would have to furnish a written disclosure, to be kept on file by broadcasters, affirming the inclusion of AI-generated content.
FCC chair Jessica Rosenworcel explained that the disclosure is necessary to keep consumers fully informed as AI-powered tools become a mainstay of society.
Rosenworcel added that consumers should know whenever AI is deployed to generate the political content they watch or listen to.
The FCC disclosure requirements would apply to the candidate, the party issuing the advertisement, and all parties involved in providing the origination programming. Rosenworcel added that entities whose programming is produced, or obtained via a license, for transmission to subscribers would also have to comply. The scope extends to cable, radio, and satellite TV providers.
The FCC notice only stipulates the disclosure requirement; the proposed policy does not impose an outright ban on AI-generated content. However, the FCC has previously taken restrictive measures against such content.
In February, the Rosenworcel-led FCC banned AI-generated robocalls after an audio deepfake imitating US President Joe Biden was circulated, tricking New Hampshire residents into forgoing voting in the state's primary election.
President Biden called for a ban on AI-powered voice impersonation in his March State of the Union address, decrying the inherent risks when AI-generated deepfakes deceive their recipients.
AI-Generated Content and Media in Election Campaigns
Biden's call to prohibit AI voice impersonation did not deter Matt Diemer, a congressional candidate for Ohio's 7th district, from tapping AI developer Civox's technology to reach voters.
Diemer hailed the Civox system as enabling him to voice his message to Ohio residents, saying he would use it to reach an audience estimated to exceed 730,000 citizens.
Diemer likened the AI-powered messages to blogs, emails, and text messages, saying they are no different from TikToks and tweets: just another channel through which one can interact with one's audience and build a stronger connection.
Diemer, who appears on a podcast hosted by the crypto media platform Decrypt, indicated that AI was the only new tool he leveraged. The disclosure coincides with news of a Republican presidential candidate declaring support for crypto and accepting crypto donations.
Unlike Donald Trump, who declared he would create a crypto army, Diemer said AI is the only technology added to his campaign machinery.
FCC Label Order to Avert Deceptive Communication
The FCC directive follows moves by leading generative AI model developers, including Microsoft, Meta, Google, OpenAI, and Anthropic, to bar the use of their large language model (LLM) platforms to generate political ads.
A Google spokesperson indicated that with elections being held around the world in 2024, adequate caution was needed in how Gemini is used; Google therefore restricted the election-related queries users can prompt it with.
With US elections approaching this fall, the FCC urged vigilance against the snares and misinformation made possible by deceptive AI-generated deepfakes.
The FCC indicated that AI is poised to play a critical role in creating political ads in 2024 and beyond.
The FCC added that using AI-generated content in political ads creates opportunities to convey deceptive information to sway voters, and warned that deepfakes are likely to flood the election campaign.
The agency illustrated how bad actors could alter images, videos, or audio recordings to distort messages and depict events that never occurred.
Editorial credit: Tada Images / Shutterstock.com