
AI used nuclear signalling in 95% of simulated crises, King's study finds

  • 3 days ago
  • 1 min read



A new study by Professor Kenneth Payne from the Defence Studies Department at King’s offers one of the most detailed empirical examinations to date of how frontier large language models behave in high-stakes strategic competition.


The study placed three leading AI models – GPT-5.2, Claude Sonnet 4 and Gemini 3 Flash – in a tournament of 21 simulated nuclear crisis scenarios. Across 329 turns of play, the models generated approximately 780,000 words of structured reasoning – more than the combined length of War and Peace and The Iliad.


Rather than focusing on outcomes alone, the study made AI decision-making processes visible. Each turn followed a three-phase architecture: reflection (situational assessment), forecasting (predicting the opponent’s move), and decision (public signal and private action). This innovative “reflection–forecast–decision” structure enabled researchers to analyse the models’ deception, credibility management, prediction accuracy and self-awareness in unprecedented depth.
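As a rough illustration, the three-phase turn structure could be implemented along the following lines. This is a minimal sketch, not the study's actual harness: the `play_turn` function, the prompt wording, and the `TurnRecord` fields are all assumptions made for clarity.

```python
from dataclasses import dataclass

@dataclass
class TurnRecord:
    """One turn of play: all three phases are logged for later analysis."""
    reflection: str      # phase 1: situational assessment
    forecast: str        # phase 2: predicted opponent move
    public_signal: str   # phase 3a: what the player says openly
    private_action: str  # phase 3b: what the player actually does

def play_turn(model, state: str) -> TurnRecord:
    """Run one reflection-forecast-decision turn against a model callable.

    `model` is any function mapping a prompt string to a response string;
    the separation of public signal from private action is what lets
    researchers measure deception (saying one thing, doing another).
    """
    reflection = model(f"Assess the current crisis situation: {state}")
    forecast = model(f"Given your assessment ({reflection}), "
                     f"predict the opponent's next move.")
    public_signal = model(f"Given your forecast ({forecast}), "
                          f"state your public signal.")
    private_action = model(f"Given your forecast ({forecast}), "
                           f"choose your private action.")
    return TurnRecord(reflection, forecast, public_signal, private_action)
```

A stub model (e.g. `lambda prompt: "ack: " + prompt[:30]`) is enough to exercise the loop; in the study each phase would instead be a call to one of the three LLMs under test.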


All 21 crisis games featured nuclear signalling by at least one side, and 95% involved mutual nuclear signalling. However, while models readily threatened nuclear action, crossing the tactical nuclear threshold was less common, and ‘strategic’ full-scale nuclear war was rare.


Read the full story  |  KING’S COLLEGE




 
 

© 2026 UnmissableAI
