US Military Embraces AI in Combat Operations, Utilizing Tech for Target Selection

New York: The rise of ChatGPT in 2022 sparked widespread interest in AI, prompting users worldwide to explore its capabilities across a range of domains. As the technology advanced rapidly, experts speculated about its potential integration into military operations, a possibility that now appears to have materialized.

According to a Bloomberg report, the United States recently used artificial intelligence to identify targets for airstrikes in the Middle East, a defense official disclosed.

AI Directing Bomb Drops: A Paradigm Shift

The Bloomberg report reveals that the U.S. military has transitioned from theoretical discussions to practical application, employing artificial intelligence in live combat scenarios.

As detailed, the Pentagon has adopted computer vision algorithms to pinpoint targets for airstrikes. In operations on February 2nd, more than 85 airstrikes were carried out with AI assistance, hitting assets such as rockets, missiles, drone storage facilities, and militia operations centers across Iraq and Syria.

Schuyler Moore, Chief Technology Officer for U.S. Central Command, emphasized the role of computer vision in threat detection. Computer vision entails training algorithms to visually identify specific objects—a capability harnessed in these airstrikes. The algorithms deployed were developed under Project Maven, an initiative launched in 2017 to enhance automation within the Department of Defense.
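For readers unfamiliar with the technique, the sketch below is a minimal, purely illustrative example of computer-vision object detection using an off-the-shelf pretrained model from the torchvision library. It has no connection to Project Maven or any military system; the image file name and confidence threshold are placeholder assumptions.

```python
# Illustrative sketch only: a generic pretrained detector labels objects in an
# image and reports confidence scores. Model, file name, and threshold are
# arbitrary choices for demonstration.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a general-purpose object detector pretrained on the COCO dataset.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("scene.jpg").convert("RGB")  # placeholder input image
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# COCO category names shipped with the pretrained weights.
categories = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT.meta["categories"]

# Keep only detections the model is reasonably confident about.
for label, score, box in zip(predictions["labels"], predictions["scores"], predictions["boxes"]):
    if score.item() > 0.8:
        print(f"{categories[int(label)]}: {score.item():.2f} at {[round(v, 1) for v in box.tolist()]}")
```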

This use of AI for target identification mirrors similar initiatives in other nations. In December 2023, Israel unveiled “The Gospel,” a program that uses AI to recommend bombing targets in Gaza by analyzing extensive datasets. The system can identify up to 200 targets within 10 to 12 days, spanning weapons, vehicles, and even individuals.

However, Israeli authorities underscored that AI’s role in targeting represents just the initial phase, with human analysts overseeing a broader review process.

Yoshua Bengio’s Cautionary Words on AI in Warfare

Yoshua Bengio, a prominent Canadian computer scientist hailed as one of the pioneers of AI, had previously expressed apprehensions regarding AI’s integration into military contexts. In an interview with the BBC, Bengio raised concerns about the potential misuse of AI technology by various actors, including military entities and malicious individuals.

He cautioned against the risks associated with deploying AI systems programmed for nefarious purposes, stressing the challenges posed by systems surpassing human intelligence. Bengio acknowledged the toll these concerns had taken on his mental well-being, emphasizing the need for collective discourse to address the ethical implications of AI development.
