Meta will label political ads that use AI media to protect 2024 voters
ALBAWABA – Facebook parent company Meta will label political ads that use AI media, including audio and imagery altered or generated using Artificial Intelligence (AI) or other software, ahead of the 2024 elections.
Starting in 2024, all political advertisements containing AI media that run on Facebook or Instagram will be required to disclose the use of such elements, Meta said, as reported by the Associated Press (AP).
"Advertisers who run ads about social issues, elections and politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven't done or said," Meta global affairs president Nick Clegg said in a Threads post.
In addition, Meta's fact-checking partners, which include a unit of AFP, can tag content as "altered" if they determine it was created or edited in ways that could mislead people, including through the use of AI or other digital tools, the company said.
Meta Platforms Inc. and other tech companies have been criticized for not doing enough to address the risks that AI-altered or AI-generated media pose to political campaigns.

Notably, Wednesday’s announcement by Meta came on the same day that United States (US) House of Representatives lawmakers held a hearing on deepfake media, according to AP.
Fact-checkers as well as Meta will label political ads that use AI media
Advertisers will also have to reveal when AI is used to create completely fake yet realistic people or events, a Meta statement said on Wednesday, as carried by Agence France-Presse (AFP).
Meta will add notices to ads to let viewers know that what they are seeing or hearing is the product of software tools, the company said.
Meanwhile, officials in Europe are working on comprehensive regulations for the use of AI.
Microsoft this week also announced new measures it will take to help protect elections from "technology-based threats" such as AI.
"The world in 2024 may see multiple authoritarian nation-states seeking to interfere in electoral processes," Microsoft chief legal officer Brad Smith and corporate VP Teresa Hutson said in a blog post.
Concerns about increasingly powerful AI tools include the potential for them to be used to deceive voters during elections.