Nations Urge Human Control of Nukes as China Refuses to Rule Out AI-Managed Nuclear Weapons

The Seoul declaration emphasized that AI applications should prioritize ethical guidelines and human decision-making, a principle that China did not support.

Seoul: Around sixty countries endorsed a call for human control over nuclear weapons at the Responsible AI in the Military Domain (REAIM) Summit in Seoul, South Korea. Major powers including the United States and the United Kingdom backed the declaration; China was the notable exception.

The Call for Human Oversight

At the summit, around 60 out of 100 participating countries adopted the “Blueprint for Action”, a non-binding agreement emphasizing the need for human control over decisions regarding nuclear weapons. The declaration states it is crucial to “maintain human control and involvement for all actions… concerning nuclear weapons employment,” according to AFP.

While the declaration aims to ensure the ethical and human-centric use of AI in military applications, China declined to endorse it and continues to refrain from ruling out the potential use of AI to control its nuclear arsenal.

Understanding the ‘Blueprint for Action’ on Military AI

This year’s summit marked the second edition of the REAIM Summit, co-hosted by the United Kingdom, the Netherlands, Singapore, and Kenya. The latest version of the “Blueprint for Action” took a broader stance on military AI issues compared to its initial edition. Around 100 countries participated in this year’s summit, but only about 60 signed on to the blueprint.

Despite its broader framework, the declaration acknowledged that significant progress remains to be made. It called for further discussions among nations to develop clear policies and procedures regarding AI in military use. Dutch Defence Minister Ruben Brekelmans stated that while the inaugural summit focused on creating a shared understanding, this year’s summit sought to promote actionable steps.

The declaration outlined essential risk assessments, human control requirements, and confidence-building measures to mitigate the risks associated with AI in military applications. However, it lacked provisions for sanctions or punitive measures in the event of non-compliance.

China’s Refusal to Rule Out AI-Controlled Nuclear Weapons

China has consistently refused to rule out the use of AI in controlling its nuclear weapons, despite global consensus that the decision to launch such weapons should rest solely with humans. While militaries worldwide are currently utilizing AI for tasks like reconnaissance, surveillance, and data analysis, there is growing concern that AI could eventually be used to autonomously select nuclear targets.

In June, the White House revealed that China rejected a U.S. proposal to limit AI’s role in nuclear weapons decision-making. Tarun Chhabra, Director of Technology at the White House National Security Council, reiterated the longstanding U.S. position that AI should not be involved in decisions regarding nuclear weapon launches.

Chhabra stated, “Our position has been publicly clear for a very long time: We don’t think that autonomous systems should be getting near any decision to launch a nuclear weapon. That’s long-stated U.S. policy.”

Although it remains unclear how China plans to integrate AI into its nuclear command systems, historical precedent exists. During the Cold War, the Soviet Union implemented an autonomous “Dead Hand” system that could launch nuclear weapons if key leaders were incapacitated or killed. There are reports that Russia still maintains this system today, raising concerns about future AI developments in nuclear decision-making.