New York: In a notable policy shift, OpenAI, led by Sam Altman, has revised its usage policy to allow military and warfare applications of its AI technologies. The change removes explicit language prohibiting the deployment of OpenAI's technology for military uses, a rewrite the company says is aimed at establishing universal principles, as reported by The Intercept.
OpenAI justified the revision by pointing to the need for universal principles that are easy to remember and apply, especially as everyday users around the world build GPTs with its tools. A company spokesperson cited "Don't harm others" as an example of such a principle: broad, easily grasped, and relevant in numerous contexts, with weapons and harm to others named explicitly as clear examples.
The real-world implications of the adjustment remain unclear. The Intercept's report raised concerns that large language models (LLMs) like ChatGPT could be drawn into "killing-adjacent tasks" such as writing code or processing procurement orders. TechCrunch noted, for instance, that OpenAI's platforms could be useful to army engineers summarizing extensive documentation on a region's water infrastructure.
Despite the shift, OpenAI maintains its ban on using AI for weapons development. Striking a balance between enabling military-related tasks and preventing weaponization remains a central challenge as applications of AI technology continue to evolve.