New York: OpenAI, led by Sam Altman, said on Thursday that it had disrupted five covert influence operations that sought to exploit its AI models for deceptive activity online.
Over the past three months, threat actors from Russia, China, Iran, and Israel used OpenAI’s models to generate short comments and longer articles in a range of languages, as well as fake names and bios for social media accounts.
The operations covered topics including Russia’s invasion of Ukraine, the conflict in Gaza, elections in India, and broader global politics, and were designed to manipulate public opinion and influence political outcomes, according to OpenAI.
The emergence of these campaigns adds to concerns about the misuse of advanced AI technology, which can produce human-like text, imagery, and audio quickly and at scale.
In response, OpenAI has formed a Safety and Security Committee, led by board members including CEO Sam Altman, to oversee the training of its next AI model.
OpenAI said that despite their efforts, the covert campaigns failed to gain increased audience engagement or reach through its services.
Notably, AI-generated content was only one element of these operations, which also relied on manually written texts and memes copied from across the internet.
Separately, Meta Platforms disclosed in its quarterly security report that it had found “likely AI-generated” content used deceptively on Facebook and Instagram, including comments praising Israel’s actions during the Gaza conflict posted beneath content from prominent news outlets and U.S. lawmakers.