New York: Meta, the parent company of Facebook, has rolled out significant updates to its policies regarding digitally manipulated media, including deepfakes, just before the upcoming US elections. This move aims to enhance Meta’s ability to combat deceptive content arising from advancements in artificial intelligence technology. Monika Bickert, Vice President of Content Policy at Meta, announced the introduction of “Made with AI” labels, slated to be implemented from May onwards.
These labels will be attached to AI-generated content, spanning videos, images, and audio, across Meta’s platforms. This represents an expansion of their previous policy, which had been limited to addressing a narrow scope of manipulated media. Additionally, Bickert revealed plans to introduce distinctive labels for digitally altered media posing a significant risk of misinformation on critical issues, regardless of whether AI or traditional editing tools were used.
Meta’s Evolving Strategy: Transparency in Content Management
Meta’s new approach marks a shift from simply removing offending posts to keeping them on its platforms while informing users of their origin. Previously, Meta had announced plans to detect images generated by external AI tools using embedded invisible markers, though it did not provide a rollout date at the time.
According to a spokesperson speaking to the international news agency Reuters, Meta’s revised labeling strategy will cover content shared on Facebook, Instagram, and Threads. Different rules, however, apply to its other services, such as WhatsApp and Quest virtual reality headsets.
The spokesperson indicated that Meta will promptly implement the more prominent “high-risk” labels.
These updates come ahead of the US presidential election in November, amid concerns from tech experts regarding the potential impact of new generative AI technologies. Political campaigns have already begun leveraging AI tools in various countries, pushing the boundaries of Meta’s guidelines and those set by leading AI providers like OpenAI.
Meta’s Oversight Board Criticizes Existing Rules on Deepfakes
In February, Meta’s oversight board scrutinized the company’s existing rules on manipulated media, deeming them “incoherent.” The criticism stemmed from its review of a Facebook video posted last year featuring altered footage of US President Joe Biden, edited to create a false impression of inappropriate behavior.
Under Meta’s current policy, manipulated videos can remain on the platform if they aren’t AI-generated or don’t fabricate words attributed to individuals. The oversight board recommended expanding this policy to encompass non-AI content, audio-only content, and videos depicting fabricated actions, recognizing their potential for misinformation.