Meta Exposes Israeli Firm’s ‘Deceptive’ AI-Generated Content on Gaza War: Impersonating Jews, African Americans

Operating under guises such as Jewish students, African Americans, and concerned citizens, these accounts targeted audiences in the United States and Canada, Meta revealed in its quarterly security report.

New York: Meta has named the Israeli political marketing firm STOIC as the operator behind deceptive and “likely AI-generated” content on its Facebook and Instagram platforms.

The content included comments praising Israel’s handling of the Gaza war, strategically placed beneath posts from global news outlets and U.S. lawmakers, as reported by Reuters.

The report is Meta’s first to disclose the use of text-based generative AI since the technology emerged in late 2022, according to Reuters. It outlines six covert influence operations that Meta disrupted in the first quarter.

Meta’s head of threat investigations, Mike Dvilyanski, elaborated to Reuters, “There are several examples across these networks of how they use likely generative AI tooling to create content. Perhaps it gives them the ability to do that quicker or to do that with more volume. But it hasn’t really impacted our ability to detect them.”

Researchers have raised concerns that generative AI can produce human-like text, imagery, and audio quickly and cheaply, potentially amplifying disinformation campaigns and influencing elections.

Meta, alongside other tech giants, faces the challenge of addressing the potential misuse of emerging AI technologies, particularly in electoral contexts. Despite policies against such content, researchers have unearthed instances of image generators from companies like OpenAI and Microsoft generating photos containing voting-related disinformation.

Moreover, doubts persist about the effectiveness of digital labeling systems, which apply only to images and not to text, the report noted.