
Autonomous Weapons and the Military-AI Complex

This article examines lethal autonomous weapon systems (LAWS): why major militaries fund them, the defense contractors profiting from them (Palantir, Anduril, and others), civilian casualties in Gaza and Ukraine, and the absence of binding international treaties.

Why Major Militaries Are Funding LAWS

Major militaries, including the United States, China, Israel, Russia, and Turkey, are investing heavily in Lethal Autonomous Weapon Systems (LAWS), driven by strategic imperatives that go far beyond simple "force multiplication." The race to deploy autonomous systems has accelerated dramatically since 2022, with real-world combat in Ukraine and Gaza providing unprecedented testing grounds for AI-driven warfare.

Force Multiplication and Decision Superiority

AI-powered systems can process information and make decisions faster than human operators, providing what military strategists call "decision superiority" in combat situations. The U.S. Department of Defense's Joint All-Domain Command and Control (JADC2) system explicitly aims to consolidate data across all domains—land, sea, air, space, and cyber—using AI to deliver information advantages at "the speed of relevance." The Pentagon's 2025 budget allocated over $1.8 billion specifically for autonomous and AI-enabled combat systems, a 50% increase from the prior fiscal year.

The underlying logic is straightforward: a human operator can track and assess a handful of targets simultaneously, while an AI system can process thousands of sensor inputs, cross-reference intelligence databases, and generate targeting recommendations in milliseconds. In contested environments where electronic warfare degrades communications, autonomous systems that can operate without continuous human oversight become strategically decisive.

Reduced Political Barriers to War

Autonomous weapons allow for military aggression without risking soldiers' lives, potentially lowering the political barriers to armed conflict. As defense analysts note, leaders typically hesitate before sending troops into battle, but autonomous weapons enable action without the domestic political cost of casualties. This dynamic is already visible in drone-heavy conflicts: nations deploy unmanned systems in operations where they would never commit ground troops.

The psychological distance between a decision-maker and the consequences of force is stretched further when autonomous systems remove even the drone operator from the kill chain. Military ethicists warn this "moral buffer" effect could make governments more willing to initiate conflicts, conduct targeted killings, and sustain prolonged military campaigns that would otherwise face public opposition.

Scalability and Mass Deployment

AI-enabled weapons can be mass-manufactured and deployed at scale. Low-cost automated weapons, such as drone swarms equipped with explosives, could autonomously hunt human targets with high precision. The 2020 deployment of a Kargu-2 drone in Libya marked the first reported use of a lethal autonomous weapon in combat, according to a United Nations panel of experts.

The economics of scale are transforming military procurement. A single F-35 fighter jet costs approximately $80 million; a swarm of 1,000 autonomous drones with equivalent strike capability could cost under $10 million total. This asymmetry has prompted every major military to invest in both swarm offensive capabilities and counter-swarm defenses, creating a new arms race dynamic where quantity and AI sophistication matter more than individual platform performance.
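A back-of-the-envelope calculation makes the asymmetry concrete. The sketch below uses only the figures cited above; the per-drone cost is derived from the swarm total and is illustrative:

```python
# Cost asymmetry between one crewed platform and an autonomous swarm,
# using the approximate figures cited in the text.
F35_UNIT_COST = 80_000_000     # ~$80 million per F-35
SWARM_SIZE = 1_000             # drones in the hypothetical swarm
SWARM_TOTAL_COST = 10_000_000  # under $10 million total

per_drone = SWARM_TOTAL_COST / SWARM_SIZE
ratio = F35_UNIT_COST / SWARM_TOTAL_COST

print(f"Implied per-drone cost: ${per_drone:,.0f}")
print(f"One F-35 costs as much as {ratio:.0f} full swarms "
      f"({int(ratio * SWARM_SIZE):,} drones)")
```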

Compressing the Kill Chain

Modern military doctrine increasingly emphasizes compressing the "kill chain"—the time between identifying a target and engaging it. Traditional kill chains involving human decision-makers at each stage take minutes to hours. AI-enabled systems can compress this to seconds. The U.S. military's Project Maven, which applies machine learning to drone surveillance footage, demonstrated that AI could reduce target identification time from hours to minutes. Fully autonomous systems promise to compress this further to near-instantaneous response.
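The compression claim is ultimately latency arithmetic. The sketch below contrasts a human-gated kill chain with an automated one; every stage timing is an illustrative assumption consistent with the minutes-to-hours versus seconds framing above, not a published figure:

```python
# Illustrative kill-chain latency comparison (all stage timings are
# assumptions, in seconds).
human_gated = {  # human decision-makers at each stage
    "find": 600, "fix": 300, "track": 300,
    "target (legal/command review)": 1800, "engage": 120,
}
automated = {    # AI-driven pipeline, no human gates
    "find": 0.5, "fix": 0.2, "track": 0.2,
    "target": 1.0, "engage": 5.0,
}

for name, chain in (("human-gated", human_gated), ("automated", automated)):
    total = sum(chain.values())
    print(f"{name}: {total:,.1f} s total ({total / 60:.1f} min)")

# The difference between the two totals is the window in which an
# AI-equipped force acts while a human-gated force is still deliberating.
```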

This compression creates dangerous dynamics. When opposing forces both deploy AI-accelerated kill chains, the incentive shifts toward pre-emptive strikes and "launch on warning" doctrines similar to those that made Cold War nuclear standoffs so perilous. Military strategists worry about "flash wars"—conflicts that escalate from incident to full engagement faster than human decision-makers can intervene.

Ukraine: The World's First AI War Laboratory

Scale of Drone Deployment

The Russia-Ukraine war has become the defining testing ground for autonomous military systems. By late 2025, Ukraine was deploying approximately 9,000 drones per day against Russian forces, with around 500 domestic drone manufacturers producing up to 200,000 first-person-view (FPV) drones per month. The sheer scale of drone warfare in this conflict has no historical precedent and has fundamentally altered how militaries worldwide plan for future conflicts.

Both sides have rapidly iterated on drone technology, with development cycles measured in weeks rather than the years typical of traditional defense procurement. Ukrainian manufacturers have moved from off-the-shelf commercial drones to purpose-built strike platforms with increasing levels of autonomy, GPS-denied navigation, and electronic warfare resistance.

Autonomous Strike Systems in Combat

Swiss-American company Auterion launched its Nemyx drone swarm strike engine in September 2025 and began shipping more than 33,000 Skynode AI "strike kits" to Ukraine under a Pentagon contract. These kits convert conventional drones into semi-autonomous platforms capable of terminal guidance without operator input, representing a significant step toward fully autonomous combat operations.

Ukraine's Minister of Digital Transformation predicted that 2025 would bring a significant increase in the share of drones with autonomous targeting, with the first real drone-swarm uses expected. By late 2025, Ukrainian forces demonstrated coordinated attacks using groups of 10-20 drones operating with minimal human oversight, though true swarm behavior with hundreds of coordinated autonomous units remained in advanced testing rather than routine deployment.

Russian Autonomous Systems

The Russian Lancet loitering munition has been extensively deployed in Ukraine, with Russian sources claiming a 77.7% hit rate. The Lancet uses machine vision for terminal guidance, allowing it to lock onto and strike targets autonomously in its final approach phase. Russia has also deployed AI-enabled electronic warfare systems that can identify and classify drone signals, automatically selecting jamming frequencies and power levels.

Russia's Marker unmanned ground vehicle, tested in Ukraine, demonstrated autonomous patrol and target identification capabilities. While its combat effectiveness has been questioned, the platform represents Russia's commitment to ground-based autonomous combat systems alongside its more successful aerial programs.

Lessons Learned and Global Implications

Military planners worldwide are drawing several conclusions from Ukraine: autonomous systems dramatically reduce per-strike costs; electronic warfare countermeasures drive development of AI-based navigation that does not rely on GPS; attrition rates for autonomous systems are acceptable in ways they never would be for crewed platforms; and the integration of AI targeting with mass-produced hardware creates a model replicable by any industrialized nation. NATO, China, India, and dozens of smaller militaries have accelerated autonomous weapons programs citing Ukraine's battlefield evidence.

AI Targeting in Gaza: The Lavender System

How Lavender Works

In April 2024, investigative reporting based on the accounts of Israeli intelligence officers revealed the existence of Lavender, an AI system used to generate targeting lists during operations in Gaza. The system processes surveillance data, communications intercepts, social network analysis, and behavioral patterns to automatically identify individuals suspected of affiliation with Hamas or Palestinian Islamic Jihad. The intelligence sources acknowledged the system has an error rate of approximately 10%, meaning roughly one in ten people flagged for targeting is incorrectly identified.

A companion system called "Where's Daddy?" tracked Lavender-flagged individuals to their homes, enabling strikes when targets were with their families. This combination of automated target generation and home-strike execution raised profound questions about proportionality under international humanitarian law.

Civilian Casualty Policies

Israeli military policies reportedly deemed 15 to 20 civilian deaths acceptable for every junior Hamas operative identified by the algorithm, and for senior commanders, the acceptable civilian cost could exceed one hundred lives. Human oversight of Lavender's targeting recommendations was reportedly minimal—officers spent approximately 20 seconds reviewing each AI-generated target before approving strikes, according to Israeli intelligence sources who spoke to journalists.
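Taken together, the reported figures compound in a way simple arithmetic makes plain. In the sketch below, the error rate, review time, and casualty thresholds come from the reporting cited above; the number of flagged individuals is a purely hypothetical input:

```python
# Compounding the reported parameters of an AI targeting pipeline.
flagged_total = 10_000           # HYPOTHETICAL number of flagged individuals
error_rate = 0.10                # ~10% acknowledged misidentification rate
review_seconds = 20              # reported human review time per target
civilians_per_strike = (15, 20)  # reported threshold for junior operatives

misidentified = flagged_total * error_rate
review_hours = flagged_total * review_seconds / 3600
low = misidentified * civilians_per_strike[0]
high = misidentified * civilians_per_strike[1]

print(f"Expected misidentified targets: {misidentified:,.0f}")
print(f"Total human review time: {review_hours:,.0f} hours "
      f"for {flagged_total:,} targets")
# Upper bound, assuming every misidentified target is struck at the
# stated threshold:
print(f"Civilian deaths from misidentification alone: {low:,.0f}-{high:,.0f}")
```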

The scale of AI-assisted targeting in Gaza represents the most extensive documented use of algorithmic warfare against a civilian population. Human rights organizations have called for independent investigation into whether the combination of a known 10% error rate, minimal human review, and high civilian casualty thresholds constitutes a systemic violation of the laws of armed conflict.

International Response

The revelations about Lavender and associated systems prompted immediate calls from the International Committee of the Red Cross (ICRC), Human Rights Watch, and Amnesty International for a moratorium on AI targeting systems in populated areas. The International Court of Justice's proceedings regarding Israel's military operations in Gaza have included evidence related to AI targeting systems, marking the first time autonomous weapon use has featured in international legal proceedings at this level.

Who Profits from Autonomous Weapons?

Palantir Technologies

Palantir has become a central player in military AI, developing the AI Platform (AIP) for command, targeting, and intelligence synthesis. The company secured over $1.3 billion in U.S. government contracts in fiscal year 2025 alone, with its military AI revenue growing 40% year-over-year. Palantir's technology integrates with virtually every branch of the U.S. military, providing the data infrastructure layer that connects sensors, intelligence databases, and weapons systems.

The company's Maven Smart System, developed under the Pentagon's Project Maven, uses machine learning to analyze drone and satellite imagery. Palantir has positioned itself as the essential middleware of modern warfare: the company that connects AI capabilities to kinetic military operations. Its technology has reportedly been used in operations, including a mission in Venezuela that raised questions about appropriate use, and has been adopted by the UK Ministry of Defence and multiple NATO allies.

Anduril Industries

Founded by Oculus VR co-founder Palmer Luckey, Anduril has grown from a border surveillance startup to one of the Pentagon's most important autonomous weapons contractors. The company's "Lattice" AI operating system powers autonomous drones, counter-drone systems, underwater vehicles, and ground-based sensors, positioning it at the forefront of the new military-industrial complex.

In January 2025, Anduril announced Arsenal-1, a 5-million-square-foot autonomous weapons manufacturing facility in Pickaway County, Ohio, set to begin production in 2026 and create approximately 4,000 jobs. The facility represents a shift toward mass production of autonomous military hardware at scales previously associated only with traditional defense primes like Lockheed Martin and Raytheon. Anduril also secured a 10-year, $642 million contract to deliver AI-powered counter-unmanned aerial system technology for the Marine Corps.

The company's YFQ-44A autonomous combat aircraft, part of the Air Force's Collaborative Combat Aircraft (CCA) program, began flight testing in October 2025. This program aims to produce AI-piloted fighter jets that can operate alongside crewed aircraft, with production decisions expected in fiscal year 2026.

Shield AI

Shield AI deploys "Hivemind" systems for drone swarm coordination and autonomous operations without requiring GPS or communication links. The company's technology enables swarms of drones to operate autonomously in contested environments where GPS jamming and electronic warfare would disable conventional systems. In October 2025, Shield AI unveiled a fully autonomous VTOL fighter jet concept, signaling its ambition to move beyond small drones into crewed-aircraft-class autonomous combat systems.

The Air Force selected both RTX (Raytheon) and Shield AI in September 2025 to provide mission autonomy software for the CCA program, integrating Hivemind's autonomous capabilities onto the drone fighter prototypes built by General Atomics and Anduril. Shield AI's valuation exceeded $4 billion in its latest funding round, reflecting investor confidence in the autonomous combat aircraft market.

Israeli Defense Firms

Israel has been at the forefront of AI warfare, with the "Lavender" system used for targeting in Gaza and extensive combat-tested autonomous systems. Israel Aerospace Industries (IAI) produces the Harop and Harpy loitering munitions, which have been exported to Azerbaijan, India, and other nations. Elbit Systems develops autonomous ground vehicles and AI-enabled targeting systems that have seen combat deployment.

Israeli defense companies, supported by government cloud services purchased from Google and Amazon under Project Nimbus (a $1.2 billion contract), have developed the world's most combat-tested AI targeting infrastructure. The commercial success of these systems, validated by real-world combat data from Gaza and other operations, gives Israeli firms a significant competitive advantage in the global autonomous weapons export market.

Other Key Players

  • Scale AI: Develops "Defense Llama" for military command applications and holds contracts with the Pentagon for AI data labeling and model evaluation
  • STM (Turkey): Manufacturer of the Kargu loitering munition, the first LAWS confirmed to have autonomously engaged targets in combat (Libya, 2020)
  • Milrem (Estonia): THeMIS UGV deployed in Ukraine for logistics and reconnaissance, with armed variants under development
  • Skydio (U.S.): Autonomous reconnaissance drones deployed extensively in Ukraine, with AI-based obstacle avoidance and target tracking
  • General Atomics: Producer of the MQ-9 Reaper and developer of the Gambit series of autonomous combat drones for the CCA program
  • L3Harris Technologies: Developing autonomous underwater vehicles and AI-enabled electronic warfare systems

Drone Swarm Technology and Mass Autonomous Warfare

Current Swarm Capabilities

Drone swarm technology has progressed from laboratory demonstrations to near-operational capability. The U.S. military's Replicator initiative, launched in 2023, aims to field "multiple thousands" of small autonomous systems by 2026. The program prioritizes attritable (expendable) autonomous platforms that can be produced cheaply and deployed in overwhelming numbers, fundamentally shifting warfare from expensive precision platforms to cheap autonomous mass.

China has demonstrated swarms of over 200 drones operating in coordinated formations, with military exercises showing autonomous target assignment, formation flying, and obstacle avoidance. Chinese state media has shown tests of drone swarms launched from the backs of trucks, cargo ships, and modified civilian vehicles, demonstrating the logistical flexibility of swarm deployment.

The Pentagon's War Game Failures

In late 2025, the New York Times reported that the Pentagon lost multiple war games against simulated adversaries deploying autonomous drone swarms. In these exercises, conventional U.S. forces—aircraft carriers, fighter jets, and traditional defense systems—were overwhelmed by swarms of thousands of cheap autonomous drones. The results prompted urgent calls from senior military leaders to accelerate autonomous weapons development and rethink fundamental assumptions about force structure.

The war game results revealed a critical vulnerability: existing air defense systems designed to engage individual high-value targets (cruise missiles, manned aircraft) are poorly suited to countering hundreds or thousands of small, cheap drones approaching simultaneously from multiple directions. The cost asymmetry is stark—a $2 million interceptor missile used against a $500 drone represents an economically unsustainable defense posture.
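The exchange arithmetic is easy to verify. In this sketch the interceptor and drone prices come from the text; the defense budget and swarm size are hypothetical:

```python
# Defender-versus-swarm cost exchange, using the figures cited above.
INTERCEPTOR_COST = 2_000_000  # $2 million per interceptor missile
DRONE_COST = 500              # $500 per attacking drone

ratio = INTERCEPTOR_COST / DRONE_COST
print(f"Cost exchange ratio: {ratio:,.0f}:1 against the defender")

# Hypothetical engagement: a $100M interceptor budget vs. a 5,000-drone raid.
defense_budget = 100_000_000
swarm_size = 5_000
interceptors = defense_budget // INTERCEPTOR_COST
attack_cost = swarm_size * DRONE_COST
print(f"Budget buys {interceptors} interceptors against a swarm "
      f"that cost the attacker ${attack_cost:,}")
```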

Counter-Swarm Technologies

The swarm threat has spawned a parallel industry in counter-swarm defense systems. Directed energy weapons (lasers and high-powered microwaves) are being developed by Lockheed Martin, Raytheon, and Northrop Grumman to defeat drones at a cost of pennies per shot. The U.S. Army's DE M-SHORAD system, mounting a 50-kilowatt laser on a Stryker vehicle, entered testing in 2025 with deployment expected by 2027.

Electronic warfare systems that can jam swarm communications and GPS signals provide another layer of defense, though autonomous swarms designed to operate without external communications are inherently resistant to jamming. AI-enabled counter-drone systems that can autonomously detect, classify, and engage drone threats are being developed by Anduril, Rafael (Israel), and several Chinese firms, creating an autonomous-versus-autonomous arms race.

Autonomous Naval and Underwater Warfare

Unmanned Surface Vessels

The U.S. Navy's Ghost Fleet Overlord program has demonstrated large unmanned surface vessels (USVs) capable of autonomous navigation across thousands of miles of open ocean. These vessels, operating without crew, can carry sensors, weapons, and electronic warfare equipment, serving as scouts, decoys, or autonomous missile platforms. The Navy plans to field a fleet of over 150 unmanned and optionally manned vessels by the early 2030s.

Ukraine has pioneered the combat use of unmanned surface vessels, deploying AI-guided explosive boats against Russian naval targets in the Black Sea. These attacks, which sank or damaged multiple Russian warships, demonstrated that cheap autonomous maritime platforms can threaten billion-dollar warships—a lesson that has accelerated USV programs worldwide.

Autonomous Underwater Vehicles

Boeing's Orca Extra Large Unmanned Undersea Vehicle (XLUUV) is the U.S. Navy's first production autonomous submarine, capable of operating independently for months on missions including mine laying, surveillance, and anti-submarine warfare. The Orca program has faced delays and cost overruns, but represents the beginning of autonomous undersea warfare.

China is reportedly developing autonomous underwater vehicles for South China Sea operations, including systems designed to monitor and potentially interdict undersea communications cables. The prospect of autonomous submarines operating in contested waters without human control raises unique escalation risks—an autonomous system that damages critical infrastructure could trigger conflict before human decision-makers understand what happened.

Seabed Warfare

NATO has identified autonomous seabed warfare as a critical emerging domain. Autonomous underwater drones can be pre-positioned on the ocean floor near critical infrastructure—undersea cables, pipelines, naval bases—and activated remotely or autonomously when triggered by specific conditions. This capability creates persistent threats that are nearly impossible to detect or neutralize, fundamentally changing the calculus of maritime security.

Failure Modes and Civilian Casualties

Documented Failures in Combat

The Russian Lancet loitering munition, while reportedly achieving a 77.7% hit rate in Ukraine, has also been documented striking civilian targets including agricultural equipment, civilian vehicles, and residential structures. AI vision systems trained to identify military vehicles have consistently confused tractors, construction equipment, and civilian trucks with military targets, particularly in conditions of poor visibility or when targets are partially obscured.

In Gaza, AI-assisted targeting has been linked to significant civilian casualties, with reporting suggesting that automated target generation contributed materially to the high civilian death toll. The combination of Lavender's acknowledged 10% error rate with the acceptance of high civilian casualty thresholds produced what human rights investigators describe as systematic and foreseeable civilian harm at industrial scale.

Technical Failure Modes

AI weapons systems exhibit several dangerous failure patterns:

  • Brittleness: Systems trained in specific environments often fail catastrophically when deployed in unfamiliar conditions—a system trained on Ukrainian terrain performs differently in desert, jungle, or urban environments
  • Adversarial Vulnerability: Research has demonstrated that small, carefully designed visual perturbations (adversarial patches) can cause AI vision systems to misclassify military targets as civilians or vice versa, potentially weaponizing the AI's own recognition system against it
  • Opacity: Black-box decision-making obscures human accountability—when a deep neural network decides a target is valid, no one can fully explain why
  • Overtrust: Military personnel tend to defer to AI recommendations under time pressure, even when those recommendations are flawed; this "automation bias" worsens as operators grow accustomed to system reliability
  • Misidentification: AI systems have incorrectly identified civilian objects as military targets, with testing revealing that changes in lighting, weather, or angle can cause confident misclassification of clearly civilian objects
  • Cascading Failure: Networked autonomous systems can propagate errors at machine speed. A single misidentification shared across a swarm can result in coordinated engagement of civilian targets before any human can intervene, as the sketch after this list illustrates
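To see how a cascading failure unfolds, consider a deliberately abstract simulation of a label-sharing swarm. Nothing here models a real system; the network size, thresholds, and sharing rule are all invented for illustration:

```python
import random

# Abstract simulation of error propagation in a label-sharing swarm:
# one confident misclassification, once broadcast, lowers every peer's
# effective engagement threshold. All numbers are illustrative.
random.seed(42)

SWARM_SIZE = 30
ENGAGE_ALONE = 0.9         # own confidence needed with no broadcast
ENGAGE_CORROBORATED = 0.2  # own confidence needed once a peer broadcasts

# Each node's independent, noisy confidence that a (truly civilian)
# object is military; node 0 is the single misidentifying unit.
own_conf = [random.uniform(0.0, 0.4) for _ in range(SWARM_SIZE)]
own_conf[0] = 0.95

def engaging(confs):
    broadcast = any(c >= ENGAGE_ALONE for c in confs)
    threshold = ENGAGE_CORROBORATED if broadcast else ENGAGE_ALONE
    return sum(c >= threshold for c in confs)

print(f"Without the faulty node: {engaging(own_conf[1:])}/{SWARM_SIZE - 1}")
print(f"With one faulty broadcast: {engaging(own_conf)}/{SWARM_SIZE}")
# One confident error recruits every moderately uncertain peer, with no
# human checkpoint between broadcast and coordinated engagement.
```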

The Accountability Vacuum

When autonomous weapons cause civilian casualties, establishing accountability becomes nearly impossible. The chain of responsibility—spanning programmers who wrote the algorithms, engineers who trained the models, commanders who deployed the systems, and the AI system itself—creates what legal scholars call an "accountability gap." No individual made the lethal decision, yet someone must bear responsibility under international humanitarian law.

This problem intensifies with each generation of autonomous systems. First-generation systems like the Kargu-2 operated under relatively direct human supervision. Current systems like Lavender generate targets with minimal human review. Future fully autonomous swarms may engage targets with no human involvement whatsoever. International humanitarian law's requirement for individual criminal responsibility for war crimes becomes increasingly difficult to satisfy as autonomy increases.

Collaborative Combat Aircraft: Autonomous Fighter Jets

The CCA Program

The U.S. Air Force's Collaborative Combat Aircraft (CCA) program represents the most ambitious autonomous combat aviation project in history. The program aims to produce over 1,000 AI-piloted fighter-class aircraft that will operate alongside crewed jets like the F-35, performing missions including air-to-air combat, suppression of enemy air defenses, and strike operations. The Air Force selected General Atomics and Anduril for Increment 1 prototypes, with both companies beginning live flight tests in mid-to-late 2025.

In February 2026, the Air Force began testing mission autonomy software packages from RTX and Shield AI on the CCA prototypes. These packages integrate Shield AI's Hivemind autonomous pilot with the airframes, enabling the CCAs to conduct combat missions including target identification, threat avoidance, and weapons employment with varying levels of human oversight.

International Programs

The United States is not alone in pursuing autonomous combat aircraft. The UK's Tempest program (now part of the Global Combat Air Programme with Italy and Japan) includes an autonomous "loyal wingman" component. Australia's MQ-28 Ghost Bat, developed with Boeing, has completed flight testing and is expected to enter service by 2028. China has displayed multiple autonomous combat drone prototypes at international air shows, including designs optimized for swarm operations.

These programs collectively signal a transformation in air warfare: within a decade, autonomous combat aircraft will likely outnumber crewed fighters in major air forces. The implications for arms control, escalation dynamics, and the nature of air combat are profound and largely unaddressed by existing international frameworks.

International Treaties and Governance

Current State: No Binding Regulations

Currently, no binding international treaties govern the development, deployment, or use of lethal autonomous weapons. The UN Convention on Certain Conventional Weapons (CCW) has discussed autonomous weapons since 2014, but consensus has remained elusive due to opposition from major military powers. After a decade of inconclusive CCW discussions, momentum has shifted to the UN General Assembly as the primary diplomatic venue.

The 2025-2026 Treaty Push

In May 2025, UN Secretary-General António Guterres called autonomous weapons systems "politically unacceptable" and "morally repugnant," urging conclusion of a legally binding treaty by the end of 2026. ICRC President Mirjana Spoljaric Egger joined the call, and 96 countries attended the first-ever UN General Assembly meeting dedicated specifically to autonomous weapons systems.

In November 2025, the UN General Assembly adopted a landmark resolution on autonomous weapons with 156 states in favor and only 5 against (the United States, Russia, India, Israel, and Belarus), with 8 abstentions. Over 120 countries now support negotiations toward a binding treaty that would prohibit autonomous weapons systems operating without meaningful human control or that target people directly.

States Blocking Regulation

The United States, Russia, India, and Israel have consistently blocked binding regulations on LAWS, arguing that existing international humanitarian law is sufficient to govern autonomous weapons. The U.S. position, articulated in DoD Directive 3000.09 (updated in 2023), requires "appropriate levels of human judgment" rather than "meaningful human control" over lethal decisions, a semantic distinction that permits significantly greater autonomy than most other nations advocate.

These blocking states have exploited consensus-based decision-making in the CCW to prevent progress for over a decade. The shift to General Assembly proceedings, which operate by majority vote rather than consensus, represents an attempt to bypass this obstruction. However, even a General Assembly resolution is non-binding, and the states most actively developing LAWS are precisely those opposing regulation.

Civil Society and Advocacy

The Campaign to Stop Killer Robots, a coalition of over 250 organizations in 70 countries, has led global advocacy for a preemptive ban on fully autonomous weapons since 2013. The campaign draws explicit parallels to successful international efforts to ban landmines (1997 Ottawa Treaty) and cluster munitions (2008 Convention on Cluster Munitions), arguing that autonomous weapons represent a similarly indiscriminate threat to civilian populations.

Nobel Peace Prize laureates, the International Committee of the Red Cross, and prominent AI researchers including Yoshua Bengio and Stuart Russell have publicly supported a ban. The 2026 treaty deadline set by the UN Secretary-General is widely seen as the last realistic window for preventive regulation before autonomous weapons proliferate beyond the point where arms control can be effective.

The Race Against Time

Military technology experts warn that the window for meaningful regulation is closing rapidly. Unlike nuclear weapons, which required massive state-level infrastructure, autonomous weapons can be developed by relatively small teams using commercially available AI models and drone hardware. As the technology becomes cheaper and more accessible, the number of state and non-state actors capable of deploying autonomous weapons will grow exponentially, making arms control agreements increasingly difficult to negotiate and enforce. The 2026 deadline is not merely aspirational—it may represent the last realistic opportunity for preventive governance.

Ethical Frameworks and Military Doctrine

Meaningful Human Control

The concept of Meaningful Human Control (MHC) has emerged as the central ethical and legal standard proposed by treaty advocates. MHC requires that human operators must have sufficient information, time, and authority to make genuine decisions about the use of force—not merely rubber-stamp AI recommendations. The ICRC defines MHC as requiring humans to understand the context of engagement, supervise the weapon's operation, and have the ability to intervene and deactivate at any point.
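In engineering terms, MHC amounts to a hard gate in the engagement workflow: the system may recommend, but force release requires an informed, unhurried, and revocable human decision. The sketch below is a conceptual illustration of the ICRC's three elements, not any fielded architecture; every name and threshold in it is invented:

```python
from dataclasses import dataclass
from enum import Enum, auto

# Conceptual sketch of a Meaningful-Human-Control gate (all names and
# values invented). The ICRC's three elements map to three checks:
# genuine information, genuine time, and genuine authority to refuse.

class Decision(Enum):
    APPROVE = auto()
    REJECT = auto()
    ABORT = auto()   # must remain available at any point

@dataclass
class Recommendation:
    target_id: str
    context_summary: str | None  # what the operator is actually shown
    review_seconds: float        # time the operator actually had

MIN_REVIEW_SECONDS = 60.0  # illustrative floor against rubber-stamping

def mhc_gate(rec: Recommendation, human: Decision) -> bool:
    """Force may be released only if every MHC element is satisfied."""
    if rec.context_summary is None:              # no genuine information
        return False
    if rec.review_seconds < MIN_REVIEW_SECONDS:  # no genuine time
        return False
    return human is Decision.APPROVE             # otherwise, no genuine authority

# A 20-second review fails the gate regardless of the operator's click:
rec = Recommendation("T-104", "pattern-of-life summary", review_seconds=20.0)
assert mhc_gate(rec, Decision.APPROVE) is False
```

Whether such a gate is feasible for time-critical engagements is precisely the dispute taken up in the next paragraph.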

Critics within the defense establishment argue that MHC is technically infeasible for time-critical engagements (such as missile defense) and competitively disadvantageous against adversaries who do not impose similar constraints. Proponents counter that the laws of armed conflict have always imposed constraints on military effectiveness in exchange for civilian protection, and autonomous weapons are no exception.

Military Academy Perspectives

The U.S. Military Academy at West Point, through its Lieber Institute for Law and Land Warfare, has produced extensive analysis on autonomous weapons and the laws of armed conflict. West Point ethicists have argued that autonomous weapons can, in some scenarios, be more discriminating than human soldiers—machines do not panic, do not seek revenge, and do not suffer from fatigue-induced errors. However, they also acknowledge that current AI systems lack the contextual judgment necessary for complex targeting decisions in urban environments.

The UK Royal Military Academy Sandhurst and French military academies have adopted more cautious positions, emphasizing that the decision to take human life should always involve human moral agency. These institutional perspectives reveal a deep philosophical divide within professional military communities about whether warfare can be ethically delegated to machines.

The Asymmetry Problem

Nations that adopt restrictions on autonomous weapons face a strategic disadvantage against adversaries who do not. This "asymmetry problem" is the core obstacle to voluntary restraint. If Country A requires meaningful human control over every engagement while Country B deploys fully autonomous swarms, Country B's systems will react faster and engage at scales Country A cannot match. This competitive dynamic pushes every nation toward greater autonomy, regardless of ethical preferences—a textbook arms race spiral.