Ahead of the multitude of elections scheduled this year in countries home to half the world's population, OpenAI, the creator of ChatGPT, has announced measures to combat the spread of disinformation. Despite ChatGPT's revolutionary success, concerns have arisen about the potential misuse of such tools to influence public opinion and political outcomes.
On Monday, OpenAI declared that it would prohibit the use of its technology, including ChatGPT and the image generator DALL-E 3, for political campaigns. The company aims to prevent any undermining of the democratic process and is actively working to understand the potential impact of its tools on personalized persuasion.
In a blog post, OpenAI stated, “We want to make sure our technology is not used in a way that could undermine” the democratic process. The company acknowledges the need for caution in deploying AI-driven tools, especially in political campaigning and lobbying, until their effectiveness and risks are better understood.
The World Economic Forum has identified AI-driven disinformation and misinformation as significant short-term global risks, capable of destabilizing newly elected governments in major economies. OpenAI’s commitment to restricting its tools for political use aligns with the growing awareness of the potential threats posed by AI-generated content.
To address concerns over the authenticity of generated content, OpenAI is developing tools to reliably attribute text produced by ChatGPT and to let users detect whether an image was created with DALL-E 3. The company also plans to implement the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials, which use cryptographic methods to make content traceable to its source.
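The core idea behind such credentials is binding provenance metadata to content with a cryptographic signature, so that any tampering with either is detectable. The following is a minimal sketch of that idea only; the real C2PA standard uses X.509 certificates and COSE signatures rather than the shared-secret HMAC used here, and all names and values are illustrative assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical signing key for illustration. In an actual C2PA workflow,
# credentials are signed with the issuer's private key and verified
# against a certificate chain, not a shared secret.
SIGNING_KEY = b"demo-secret-key"


def attach_credential(content: bytes, metadata: dict) -> dict:
    """Bind provenance metadata to content via a signature over both."""
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,
    }
    serialized = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
    return payload


def verify_credential(content: bytes, credential: dict) -> bool:
    """Check that the signature is intact and the content hash matches."""
    claimed_sig = credential.get("signature", "")
    payload = {k: v for k, v in credential.items() if k != "signature"}
    serialized = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, serialized, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(claimed_sig, expected):
        return False  # metadata or signature was tampered with
    return payload["content_sha256"] == hashlib.sha256(content).hexdigest()


image = b"\x89PNG...fake image bytes"
cred = attach_credential(image, {"generator": "DALL-E 3", "created": "2024-01-15"})
```

Because the signature covers both the content hash and the metadata, editing the image or rewriting the stated generator invalidates the credential, which is what makes provenance claims verifiable rather than merely asserted.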
OpenAI emphasizes its commitment to ensuring accurate information dissemination, especially regarding elections. ChatGPT, when queried about procedural questions related to US elections, will guide users to authoritative websites. The lessons learned from this initiative will inform OpenAI’s approach in other countries and regions.
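In practice, redirecting procedural questions to authoritative sources can be as simple as a routing layer in front of the model. The sketch below illustrates that pattern under stated assumptions: the keyword list, function names, and referral URL are all hypothetical placeholders, not OpenAI's actual implementation.

```python
# Illustrative keywords for procedural election questions (assumed, not
# drawn from any real system).
ELECTION_KEYWORDS = {"register to vote", "polling place", "ballot", "voting deadline"}

# Placeholder referral target; a real deployment would point at an
# official election-information site.
AUTHORITATIVE_SOURCE = "https://www.example-election-authority.org"


def route_query(query: str) -> str:
    """Return a referral message for election questions, else a sentinel
    indicating the query should be answered normally."""
    normalized = query.lower()
    if any(keyword in normalized for keyword in ELECTION_KEYWORDS):
        return f"For accurate, up-to-date information, please visit {AUTHORITATIVE_SOURCE}."
    return "ANSWER_NORMALLY"


print(route_query("Where is my polling place?"))
```

A production system would rely on a trained classifier rather than keyword matching, but the design choice is the same: intercept high-stakes procedural questions before generation and defer to an authoritative source.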
Furthermore, OpenAI highlights that DALL-E 3 has built-in guardrails preventing users from generating images of real people, including political candidates. This proactive approach by OpenAI mirrors efforts by other tech giants, such as Google and Meta, to limit election interference, particularly through the misuse of AI.
As the threat of AI-driven disinformation looms, OpenAI’s commitment to responsible use and transparency represents a step towards mitigating risks and preserving trust in political institutions.