In recent years, the Department of Defense (DoD), academics, and other policymakers have grappled extensively with concerns regarding the use of artificial intelligence (AI) in warfare. These concerns have often led to calls for establishing “meaningful human control” or, more recently, “appropriate human judgment” in deploying AI-driven technologies on the battlefield. Although these terms remain vague, recent government guidelines, such as DoD Directive 3000.09, increasingly highlight the importance of human judgment in military AI integration. However, current regulatory frameworks may not sufficiently address a critical risk: the potential for AI systems to trigger rapid, uncontrolled escalation even with human oversight.
The Trump administration’s “Removing Barriers to American Leadership in Artificial Intelligence” executive order creates an opportunity to address critical escalation risks in AI-enabled warfare. These risks stem from two distinct but related dynamics. First, AI-to-AI interactions can trigger rapid, unexpected escalations as systems respond to each other in ways humans cannot predict or control, rendering meaningful human oversight ineffective. Second, AI-human interactions under time pressure may lead commanders to defer increasingly to machine judgment, effectively ceding critical decision-making authority to AI systems. Current approaches to human oversight fail to address these escalation risks. Without effective mechanisms to halt AI-driven escalation across all sides of a conflict, human involvement in the decision loop may only delay rather than prevent battlefield escalation. In some cases, human intervention might even exacerbate escalation by introducing additional complexity and uncertainty into already volatile AI interactions.
A warning from the 2010 “flash crash”
Wall Street offers a compelling parallel to potential AI-driven military escalation. On May 6, 2010, the Dow Jones Industrial Average plummeted by roughly 9% in a matter of minutes. This “flash crash”—exacerbated by unexpected algorithmic behavior—temporarily erased approximately $1 trillion in market value, demonstrating how automated systems can drive rapid, uncontrollable escalation. The incident reveals striking parallels to military risk. Just as interacting trading algorithms triggered a market collapse, interacting military artificial intelligence systems could trigger uncontrolled conflict escalation. This risk becomes increasingly urgent as militaries deploy sophisticated AI decision support systems, such as Palantir Gotham. As U.S. AI policy shifts to accelerate development, military leaders must learn from financial regulators’ response to the flash crash by implementing robust safeguards to prevent AI-driven systems from inadvertently escalating conflicts.
In the wake of the 2010 flash crash, equity markets implemented stock-specific “circuit breakers”—mechanisms that temporarily halt trading during periods of extreme volatility to restore market stability. This regulatory innovation suggests a promising approach for military AI systems: automated safeguards that temporarily restrict AI-driven operations during signs of dangerous escalation. Just as circuit breakers prevent cascading market failures by forcing a pause in trading, battlefield equivalents could prevent uncontrolled escalation by automatically limiting artificial intelligence systems’ operational scope when conflict intensity exceeds predetermined thresholds. These safeguards would give military leaders time to assess situations and prevent minor engagements from spiraling into broader conflicts.
While initially blamed on human error, investigations by the Securities and Exchange Commission and Commodity Futures Trading Commission revealed that high-frequency trading (HFT) algorithms were the primary drivers of the flash crash. The incident began with a large selloff of E-Mini S&P 500 futures contracts, triggering a cascade of algorithmic responses as HFT algorithms rapidly traded contracts back and forth, amplifying the market decline. Technical glitches and market structure issues, such as decentralized trading, also contributed to the severity of the crash. While the precise chain of causation remains debated, the incident demonstrated how complex interactions—between multiple AI systems, between AI and human traders, and among humans themselves—can rapidly escalate. This pattern of escalation in financial markets parallels the risks of AI-enabled military systems: seemingly rational individual responses can combine to produce catastrophic systemic effects.
AI decision-making and the risk of escalation
When AI systems interact—both with each other and with humans—they can behave in unexpected ways, creating emergent dynamics that complicate human oversight. In the 2010 flash crash, high-frequency trading algorithms rapidly passed declining securities back and forth in an edge case their designers had not anticipated. These unconstrained algorithmic responses were driven by the algorithms’ inherently adversarial design: each simultaneously sought to exploit market inefficiencies and aggressively avoid losses, amplifying volatility. Moreover, human intervention did not stop the crash—it worsened it, as investors liquidated portions of their portfolios, reinforcing the downward spiral. Notably, this occurred in a tightly regulated financial market, where AI behavior should, in theory, be easier to predict. If a flash crash can unfold in such a controlled environment, a flash war could erupt even more rapidly on the dynamic and unpredictable modern battlefield.
The speed and unpredictability of artificial intelligence interactions pose even greater risks in warfare. Unlike financial markets, where trading algorithms operate under structured constraints, military AI systems will confront adversarial counterparts in unstructured, high-pressure environments. These systems can process information and make tactical adjustments in fractions of a second, far exceeding human reaction times. When multiple AI-driven decision systems (AI-DDS) interact, the potential for unintended escalation rises, as rapid, recursive decision-making can trigger unpredictable conflict dynamics. This challenge underscores the need for AI safeguards that ensure military commanders retain effective control in order to prevent automated chain reactions from triggering uncontrolled escalation.
While existing DoD policies mandate human judgment over AI behavior, future combat scenarios may not afford extended deliberation. Leaders might prefer to rely on AI for targeting and weaponeering recommendations in dynamic combat situations. Although humans remain involved in approving AI-suggested targets, this often amounts to placing substantial trust in the machine’s decisions. This deference to artificial intelligence might arise not only during intense battles; even the perception of territorial aggression by Chinese or Russian forces could prompt humans to trust AI judgments.
Command decision or AI directive?
The complexity of time-sensitive military decisions highlights the risks of runaway escalation. Consider an AI system alerting defenders about Russian tanks crossing the Suwalki Gap or Chinese paratroopers heading to Taiwan. In this scenario, the system would warn of an attack and propose several defensive options. With only minutes to respond, military commanders would have to weigh the risk of escalating the conflict against the necessity of engaging the enemy. Verifying enemy actions or consulting contingency plans would be impractical, given the brief warning period. Consequently, commanders might depend entirely on the AI’s judgment. These time constraints would likely lead commanders to prioritize rapid response over thorough analysis, potentially reducing human judgment to a procedural step.
Speed provides a crucial tactical advantage in both trading and warfare. Historically, militaries have maintained operational speed through static, flow-chart-like operational plans. However, these plans’ frequent obsolescence and costly updates underscore the appeal of AI-enabled decision-making, which can dynamically incorporate enemy actions into real-time military decision-making processes. AI tools can process data faster and more thoroughly than human command staff, allowing for a comprehensive strategy that incorporates subtle cues and patterns that are not immediately obvious to humans. Unlike traditional plans, AI decision-making is adaptable, enabling effective countermeasures against evolving threats. Just as algorithms revolutionized trading, AI tools will soon empower military commanders to innovate and thwart enemy tactics effectively.
What factors might prompt a military AI system to escalate a conflict? A key driver is likely the fundamental goal of such systems: to maximize the likelihood of winning. In strategic decision-making, certain actions may appear rational in isolation but lead to worse long-term outcomes. In a one-shot Prisoner’s Dilemma, defecting is instrumentally rational—it yields a better individual outcome regardless of the opponent’s choice—yet when both players defect, both end up worse off than if they had cooperated. In financial markets, selling during a decline may be rational for individuals but harmful collectively, increasing volatility and hastening a crash. An artificial intelligence system prioritizing battlefield victory may see escalation as the best immediate strategy. Yet, much like defection undermines cooperation and panic selling worsens financial crises, an AI focused on short-term military wins might achieve a battle victory but spark an unintended broader conflict—a flash war.
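To make this logic concrete, the short sketch below encodes a standard one-shot Prisoner’s Dilemma with illustrative payoff values (the specific numbers and function names are assumptions for exposition, not drawn from this article) and verifies that defection is each player’s best response even though mutual defection leaves both players worse off than mutual cooperation.

```python
# Minimal sketch of the one-shot Prisoner's Dilemma logic described above.
# Payoff values are illustrative assumptions; entries are (row player, column player),
# and higher numbers are better.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """Return the row player's payoff-maximizing move against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda move: PAYOFFS[(move, opponent_move)][0])

# Defecting is the best response no matter what the opponent does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet mutual defection leaves both players worse off than mutual cooperation.
assert PAYOFFS[("defect", "defect")] < PAYOFFS[("cooperate", "cooperate")]
```

The same structure maps onto escalation: if each side’s AI treats escalation as its dominant strategy, the jointly reached outcome can be worse for everyone than mutual restraint.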
Circuit breakers for the battlefield
In response to the massive selloff on Black Monday in 1987, the New York Stock Exchange (NYSE) implemented circuit breakers to halt all trading when the prices of major indices, such as the S&P 500, move too rapidly. The 2010 flash crash exposed the limitations of index-wide circuit breakers, leading to the adoption of stock-specific circuit breakers that address systemic vulnerabilities at a more granular scale. However, adapting this type of solution to a battlefield setting presents challenges. The essence of a circuit breaker lies in its centralized control: it is the NYSE, not individual traders, that activates the breaker. In contrast, combat environments lack centralized control, making traditional circuit breakers impractical and ineffective in managing escalation during conflicts.
The circuit breaker model offers a key framework for controlling AI-driven escalation. However, implementing an analogous system in warfare would require quantifying conflict escalation through measurable parameters. Factors like the increase in the number of combatants, the expansion of battlefronts, and the diversity of weaponry used could all contribute to a robust “escalation metric.” International arms treaties could then mandate the integration of a circuit breaker mechanism, based on this escalation metric, into all AI systems. There is already precedent for such international collaboration, as shown by the highly successful Organisation for the Prohibition of Chemical Weapons. If the situation on the battlefield escalates too rapidly, this breaker would activate across all autonomous weapon systems and AI-DDS, temporarily pausing or scaling back offensive military actions to prevent unchecked escalation. However, critical intelligence, surveillance, target acquisition, and reconnaissance (ISTAR) assets would remain operational to ensure commanders can maintain oversight and make informed decisions during the pause.
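To illustrate how such a mechanism might be specified in practice, the sketch below combines the factors mentioned above (growth in combatants, battlefront expansion, and weapon diversity) into a single hypothetical escalation score and pauses offensive autonomous actions once the score crosses a predetermined threshold, while leaving ISTAR functions running. The field names, weights, and threshold are all illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class ConflictSnapshot:
    # Hypothetical measurements of conflict intensity; the fields are illustrative.
    combatant_growth_rate: float   # fractional growth in engaged combatants per hour
    front_expansion_km: float      # new kilometers of active battlefront per hour
    weapon_class_count: int        # distinct classes of weaponry observed in use

def escalation_score(s: ConflictSnapshot) -> float:
    """Combine the factors into one score; the weights are assumptions for illustration."""
    return (10.0 * s.combatant_growth_rate
            + 0.5 * s.front_expansion_km
            + 1.0 * s.weapon_class_count)

ESCALATION_THRESHOLD = 20.0  # assumed pre-negotiated threshold

def circuit_breaker(s: ConflictSnapshot) -> str:
    """Pause offensive autonomous actions when the escalation score exceeds the threshold.

    ISTAR assets are left untouched so commanders retain situational awareness during the pause.
    """
    if escalation_score(s) > ESCALATION_THRESHOLD:
        return "PAUSE_OFFENSIVE_ACTIONS"  # stand down autonomous weapons and AI-DDS offensive tasking
    return "CONTINUE"

# Example: a rapidly widening engagement trips the breaker.
snapshot = ConflictSnapshot(combatant_growth_rate=0.8, front_expansion_km=30.0, weapon_class_count=6)
print(escalation_score(snapshot), circuit_breaker(snapshot))  # 29.0 PAUSE_OFFENSIVE_ACTIONS
```

The essential feature is the structure rather than the particular numbers: a shared, measurable metric and a pre-agreed threshold that trigger the same pause across every party’s systems at once.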
One concern about such a mechanism is that it could potentially favor aggressors if they operate in ways that avoid triggering the circuit breaker while achieving key military objectives. To mitigate this, escalation metrics would need to be designed to account for subtler forms of aggression, and the framework could include mechanisms allowing defenders to respond proportionally without risking further escalation. Furthermore, even if this approach risks short-term tactical defeats for defending forces, it ensures that a skirmish does not quickly escalate into a broader war.
Stopping wars before they start
The absence of an international regulatory regime presents an opportunity for U.S. military leadership in managing escalation risk. The Pentagon’s Chief Digital and Artificial Intelligence Office, already established as a leader in AI deployment and policy development, is positioned to develop and analyze models simulating escalation scenarios. These simulations could provide empirical foundations for testing de-escalation mechanisms, including circuit breaker implementations, thereby informing evidence-based policy development aligned with the United States’ strategic interests.
U.S. financial markets demonstrate how effective regulation reinforces dominance by preventing catastrophic errors and building trust. After the flash crash, circuit breakers and other controls enhanced the United States’ position as the world’s financial leader by assuring investors that algorithmic trading wouldn’t spiral into market-destroying behavior. The same applies to military AI: commanders will only deploy advanced systems if they trust they won’t trigger catastrophic conflicts. Like trading algorithms that operate within guardrails while preserving their competitive edge, military artificial intelligence systems need controls that prevent escalation while maintaining tactical effectiveness.
Preventing unintended escalation on the battlefield aligns with President Trump’s commitment to “stop wars, not start them.” Just as financial regulators implemented safeguards to prevent flash crashes while maintaining market effectiveness, military leaders can develop controls ensuring AI-driven decisions enhance rather than compromise U.S. military superiority. The integration of circuit breakers into military AI systems through international diplomacy represents a pragmatic approach to maintaining U.S. strategic dominance while preventing uncontrolled escalation.
This approach is consistent with the administration’s broader AI strategy, which emphasizes U.S. leadership and innovation while managing strategic risks. Without appropriate safeguards, the interaction of multiple AI systems in combat scenarios could lead to rapid, uncontrolled escalation, potentially engaging U.S. forces in conflicts beyond our strategic interests. Such an outcome would undermine our fundamental objective: ensuring the United States’ long-term security, stability, and prosperity. By taking the lead in developing effective controls for AI-driven military systems, the United States can maintain its technological edge while preventing unintended escalation that could compromise American interests.
Lieutenant Colonel Kyle Brown is a visiting fellow at the Center for Ethics and the Rule of Law. He is the commander of 2nd Battalion, 12th Infantry Regiment at Fort Carson, CO. Kyle has previously served in the U.S. Army Asymmetric Warfare Group and in leadership roles from Afghanistan to Alaska.
Julian Gould is a Ph.D. candidate in mathematics at the University of Pennsylvania. His primary research interests are homotopy theory, sheaf theory, and modal logic. Before Penn, Julian was a quantitative analyst at T. Rowe Price, working with the Capital Appreciation Fund.
Dr. Jesse Hamilton is Departmental Lecturer in Military Leadership and Ethics at the Blavatnik School of Government, University of Oxford. He previously worked as a bond trader and served in the U.S. Army as an artilleryman, a drill sergeant, and an embedded advisor to the Iraqi Army.