Redmond, Washington: Tech giant Microsoft issued a statement on Wednesday, revealing that adversaries of the US, including Iran, North Korea, Russia, and China, are harnessing generative artificial intelligence (AI) for offensive cyber operations.
In collaboration with its business partner OpenAI, Microsoft said it had detected and disrupted threats that used or attempted to exploit the AI technology the two companies had developed.
While describing these techniques as “early-stage” and not especially innovative, Microsoft stressed the importance of exposing them publicly, as rival nations increasingly turn to large language models (LLMs) to sharpen their ability to breach networks and conduct influence operations.
Cybersecurity firms have long employed machine learning on defense, but the rise of LLMs, led by OpenAI’s ChatGPT, has intensified the cat-and-mouse game between defenders and malicious actors.
Microsoft, which has made a substantial investment in OpenAI, paired Wednesday’s announcement with a report warning that generative AI is likely to enhance malicious social engineering, enabling more sophisticated deepfakes and voice cloning. This poses a significant threat to democratic processes, particularly amid upcoming elections in more than 50 countries.
The report details instances in which Microsoft disabled the generative AI accounts and assets of several groups, including North Korea’s cyberespionage group Kimsuky, which used the models to research foreign think tanks and to support spear-phishing campaigns.
Iran’s Revolutionary Guard used LLMs for social engineering, troubleshooting software errors, and studying how intruders might evade detection in compromised networks. Fancy Bear, the Russian GRU military intelligence unit, researched satellite and radar technologies relevant to the war in Ukraine. China’s cyberespionage groups Aquatic Panda and Maverick Panda explored how LLMs could augment their technical operations.
In a separate statement, OpenAI said its current GPT-4 model offers only limited capabilities for malicious cybersecurity tasks beyond what is already achievable with non-AI-powered tools. Cybersecurity researchers, however, expect that to change as the technology advances.
Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, has previously highlighted the dual threats posed by China and artificial intelligence, emphasizing the need for AI development with security in mind.
Critics argue that the public release of large language models, including ChatGPT, without adequate attention to security was hasty and irresponsible. Some cybersecurity professionals urge Microsoft to focus on making LLMs themselves more secure rather than selling defensive tools to address the resulting vulnerabilities.
Experts warn that while the immediate threat from AI and LLMs appears modest, they could eventually become powerful weapons in the arsenal of every nation-state military.