
Lethal autonomous weapon

Lethal autonomous weapons systems (LAWS) are weapon systems that, once activated, can independently select and engage targets using lethal force without further human intervention. These systems integrate sensors, algorithms, and effectors to perform critical functions in the targeting process, distinguishing them from semi-autonomous systems that require human approval for lethal actions. While definitions vary slightly across entities, with the International Committee of the Red Cross emphasizing independence in target selection and attack, the core attribute remains the delegation of life-and-death decisions to machines. Development of LAWS has accelerated with advances in artificial intelligence and robotics, enabling applications in drones, ground vehicles, and munitions that operate in dynamic environments. A notable example is Turkey's STM Kargu-2, a loitering munition reported by a United Nations panel of experts to have potentially hunted and attacked retreating fighters autonomously during Libya's civil war in 2020, marking one of the first documented instances of such technology in combat. Proponents argue LAWS offer advantages including reduced risk to operators, faster response times, and potentially greater precision in engagements compared to human decision-making under stress, thereby minimizing collateral damage in some scenarios. However, critics highlight ethical and legal challenges, such as diminished accountability for lethal outcomes, difficulties in ensuring compliance with principles like distinction and proportionality, and the risk of proliferation to non-state actors. As of 2025, no global treaty prohibits LAWS, with discussions continuing under the Convention on Certain Conventional Weapons' Group of Governmental Experts, extended to 2026 amid divergent national positions—some states advocating bans while others, including major powers, emphasize responsible development and human oversight rather than outright prohibition. U.S. Department of Defense policy permits LAWS subject to rigorous reviews ensuring legal compliance, reflecting a pragmatic approach prioritizing utility over preemptive restrictions. These systems thus embody a tension between technological inevitability and normative constraints, with empirical deployment evidence underscoring their operational feasibility despite ongoing regulatory impasse.

Definition and Core Concepts

Autonomy in Weapon Systems

Autonomy in weapon systems denotes the capacity of a platform, once deployed or activated, to independently perceive its environment, identify targets, and execute engagements without requiring further human input in the critical functions of selection and application of force. The U.S. Department of Defense (DoD) defines autonomous weapon systems as those that, following activation, can select and engage targets without additional intervention by a human operator, emphasizing the need for designs that permit commanders to retain appropriate levels of human judgment over force employment. This policy, outlined in DoD Directive 3000.09 as updated on January 25, 2023, mandates rigorous testing, safety protocols, and legal reviews to mitigate risks of malfunction or unlawful actions, including requirements for systems to disengage autonomously if failures occur. The concept extends beyond mere automation, which involves pre-programmed responses to fixed stimuli, to encompass adaptive decision-making in unpredictable scenarios through integrated sensors, algorithms, and effectors. Analyses distinguish this by noting that fully autonomous systems operate in a "human-out-of-the-loop" mode, capable of target discrimination based on models trained on vast datasets, potentially operating in swarms or contested environments where oversight is impractical. International bodies, such as the International Committee of the Red Cross (ICRC), describe autonomous weapons as those able to independently select and attack targets, advocating for human control to ensure compliance with principles like distinction and proportionality. DoD guidelines require autonomous systems to undergo operational testing under realistic conditions, including simulations, to verify performance and incorporate fail-safes like geofencing or override mechanisms, reflecting lessons from prior semi-autonomous deployments that highlighted error rates in complex battlespaces. These measures address causal factors such as sensor degradation or algorithmic biases, which could lead to erroneous engagements, as evidenced by documented incidents in remotely piloted systems where human factors compounded technical limitations. While proponents argue autonomy enhances precision and reduces operator fatigue—citing data from simulations showing faster response times—critics, including human rights organizations, contend it erodes accountability, though DoD policy counters this by mandating traceability in decision logs for post-engagement reviews.

Distinctions from Human-in-the-Loop Systems

Human-in-the-loop (HITL) systems in weapon contexts require a human to exercise direct control or approval over critical functions, particularly target selection and the application of lethal force, ensuring that engagement decisions incorporate human judgment at the point of action. These systems, often termed semi-autonomous, delegate routine tasks like tracking or guidance to machines but retain human veto authority or intervention capability to mitigate errors, adapt to dynamic environments, or align with legal and policy constraints. For instance, U.S. Department of Defense (DoD) policy categorizes such systems as those that "only engage individual targets or specific target groups that have been selected by a human operator." Lethal autonomous weapon systems (LAWS), by contrast, are designed to independently select and engage targets—including potentially human adversaries—without requiring further human intervention after initial activation or deployment, shifting decision-making authority entirely to the machine's algorithms and sensors. This autonomy enables operations at speeds exceeding human cognitive limits, such as in high-tempo scenarios where communication delays or reaction latency would impair performance, but it also eliminates real-time human oversight, raising risks of misidentification or unintended engagements due to algorithmic limitations in contextual understanding. DoD Directive 3000.09 explicitly defines LAWS as systems capable of this independent lethal action, while mandating senior-level reviews for their development to ensure compliance with legal and ethical standards, though it permits deployment under conditions allowing "appropriate levels of human judgment." A core operational distinction lies in environmental resilience and communications dependence: HITL systems depend on reliable human-machine interfaces and communication links, which can be disrupted in contested or electronic-warfare-heavy domains, whereas LAWS function in "comms-denied" settings by relying on onboard processing for target discrimination and engagement, potentially enhancing survivability but introducing brittleness to adversarial countermeasures like spoofing sensors or exploiting AI biases. Accountability mechanisms also diverge; in HITL setups, human operators bear direct responsibility for lethal outcomes under frameworks like the laws of armed conflict, whereas LAWS diffuse this to system designers, programmers, and commanders, complicating attribution for errors such as false positives in civilian discrimination. U.S. policy, updated in January 2023, emphasizes designing LAWS with safeguards for human override where feasible, but does not categorically require a persistent "in-the-loop" presence for all autonomous functions, reflecting a balance between technological imperatives and human oversight.

Levels of Autonomy

Autonomy levels in weapon systems describe the degree of independent decision-making capability delegated to the machine across the targeting cycle, including target detection, identification, prioritization, and engagement. These levels are determined by the extent of human oversight required for critical functions, particularly the application of lethal force. Frameworks for classification emphasize the balance between operational efficiency—enabled by reduced human latency and cognitive load—and ethical, legal, and strategic imperatives for retaining human judgment in life-and-death decisions. The U.S. Department of Defense (DoD) Directive 3000.09, updated in 2023, mandates that all autonomous and semi-autonomous systems incorporate design features allowing commanders to exercise appropriate levels of human judgment over the use of force, while certifying systems for compliance before fielding. A widely referenced framework in discussions of lethal autonomous weapons distinguishes three primary levels based on human involvement:
| Level | Description | Human Role |
|---|---|---|
| Human-in-the-Loop (HITL) | The system executes predefined actions but requires human approval for target selection and engagement decisions. | Direct control: the operator selects targets and authorizes firing, as in semi-autonomous systems under DoD policy. |
| Human-on-the-Loop (HOTL) | The system independently detects, tracks, and may select targets using algorithms, but humans supervise and retain veto or abort capability. | Oversight: monitors operations and can abort engagements, reducing reaction-time delays while preserving accountability. |
| Human-out-of-the-Loop (HOUTL) | The system fully autonomously selects, prioritizes, and engages targets post-activation, without real-time human input. | Minimal to none: activation sets parameters, but subsequent lethal actions occur independently, as defined for autonomous weapon systems in DoD Directive 3000.09. |
This tiered model highlights a progression from operator-dominated to machine-dominated execution, with evidence from simulations and tests indicating that higher levels enhance precision in dynamic environments—such as reducing collateral damage through faster target discrimination—but introduce risks of malfunction or unanticipated behavior due to algorithmic limitations in novel scenarios. For unmanned systems broadly, the National Institute of Standards and Technology's Autonomy Levels for Unmanned Systems (ALFUS) framework provides a more dimensional scale, assessing autonomy from fully human-executed at Level 0 to fully system-executed without human input at Level 10, factoring in mission complexity, environmental uncertainty, and human-system interaction. This approach, developed in collaboration with stakeholders since 2003 and updated through 2025, underscores that no current deployed lethal system reaches full Level 10 in contested domains, as verified by operational data from programs like counter-unmanned aerial systems. Policy constraints, including requirements for distinction and proportionality, limit HOUTL deployments, though technological advancements in machine learning enable adaptive behaviors approaching this threshold in controlled tests as of 2024.
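The practical difference between the three tiers lies in where human authorization enters the engagement chain. The following minimal sketch illustrates that distinction in Python; the `AutonomyLevel` enum, `Track` record, confidence floor, and gating logic are invented for illustration and do not represent any fielded system or policy.

```python
from dataclasses import dataclass
from enum import Enum, auto


class AutonomyLevel(Enum):
    HITL = auto()   # human-in-the-loop: a human approves every engagement
    HOTL = auto()   # human-on-the-loop: the system acts unless a human vetoes
    HOUTL = auto()  # human-out-of-the-loop: the system engages within preset limits


@dataclass
class Track:
    track_id: str
    classifier_confidence: float   # 0.0-1.0 output of the perception model
    inside_engagement_zone: bool


def may_engage(level: AutonomyLevel, track: Track,
               human_approved: bool, human_vetoed: bool,
               confidence_floor: float = 0.95) -> bool:
    """Illustrative gate showing where human judgment enters at each level."""
    if track.classifier_confidence < confidence_floor or not track.inside_engagement_zone:
        return False               # fails machine-side checks regardless of autonomy level
    if level is AutonomyLevel.HITL:
        return human_approved      # engagement requires positive human authorization
    if level is AutonomyLevel.HOTL:
        return not human_vetoed    # engagement proceeds unless a supervisor aborts
    return True                    # HOUTL: machine-side checks alone decide


if __name__ == "__main__":
    t = Track("T-042", classifier_confidence=0.97, inside_engagement_zone=True)
    print(may_engage(AutonomyLevel.HITL, t, human_approved=False, human_vetoed=False))   # False
    print(may_engage(AutonomyLevel.HOTL, t, human_approved=False, human_vetoed=False))   # True
```

In this toy framing, moving from HITL to HOUTL changes only the final conditional, which is why policy debates focus so heavily on that last step rather than on the shared perception and tracking machinery.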

Historical Evolution

Pre-21st Century Precursors

Naval mines represent the earliest form of lethal autonomous weapons, functioning through mechanical fuses that trigger detonation upon contact or proximity without human oversight. Contact mines, deployed in conflicts such as the Russo-Japanese War of 1904–1905, used simple impact mechanisms to explode when struck by ships, sinking vessels indiscriminately and affecting neutral shipping. Their widespread use in World War I from 1914 onward highlighted their autonomy in target selection based solely on physical interaction, despite international efforts at the 1907 Hague Conference to restrict such devices. Self-propelled torpedoes advanced precursor autonomy through basic guidance mechanisms. The Whitehead torpedo, introduced in 1866, featured engine propulsion but followed straight paths until later developments incorporated homing. German G7e Zaunkönig torpedoes, deployed from 1943, used passive acoustic homing to detect and pursue propeller noise from ships, enabling independent pursuit and engagement after launch. Similarly, the U.S. Mark 24 "Fido" torpedo, entering service in 1943, employed acoustic homing to track submerged submarines by sound signatures, adjusting depth and course autonomously to close on detected threats. Aerial systems introduced preset or sensor-based autonomy in the early 20th century. The U.S. Kettering "Bug" of 1918 utilized gyroscopic guidance for preprogrammed flight to fixed coordinates, functioning without human input after release. Germany's V-1 flying bomb, operational from 1944, incorporated gyroscopic autopilots and basic altimeters to follow predetermined paths over distances up to 250 kilometers, detonating on impact or fuel exhaustion. Defensive close-in weapon systems marked a shift toward radar-enabled autonomy by the late 20th century. The Phalanx Close-In Weapon System, developed by General Dynamics starting in the 1960s and first deployed in 1980, integrates radar and computer fire control for independent search, detection, tracking, and engagement of incoming anti-ship missiles or aircraft, firing 20mm rounds at up to 4,500 per minute without operator intervention once activated. This system's full operational autonomy in threat evaluation and kill assessment represented a significant precursor to modern lethal autonomous capabilities, prioritizing rapid response over human decision-making in terminal defense scenarios.

Post-2000 Developments and Deployments

In the early 21st century, advancements in unmanned aerial vehicles and munitions accelerated, building on pre-existing technologies to incorporate greater autonomy in target detection and engagement. Israel's Harop, developed by Israel Aerospace Industries and operational by 2009, represented a key evolution; this loitering munition can autonomously navigate, loiter for up to 9 hours, and strike pre-designated or dynamically identified high-value targets such as radar systems or command centers using electro-optical sensors and onboard algorithms, without real-time human input post-launch. Deployments included Azerbaijan's extensive use of Harop during the 2020 Nagorno-Karabakh war, where it neutralized Armenian air defense assets, demonstrating effectiveness in suppressing enemy air defenses through semi-autonomous operations. ![STM Kargu drone][float-right] Turkey's STM Kargu-2, a rotary-wing loitering munition introduced around 2015, integrated machine learning for facial recognition and target classification, enabling swarm-capable autonomous modes in which the system can independently select and attack human targets. A pivotal deployment occurred in Libya in 2020, when Kargu-2 units, supplied to the Government of National Accord, reportedly operated in fully autonomous "hunt" mode to pursue and engage retreating fighters, marking the first documented battlefield instance of a lethal autonomous weapon system selecting and striking targets without human intervention, as detailed in a UN Panel of Experts report. This incident highlighted the transition from operator controls to machine-driven lethality in asymmetric conflicts. In the United States, the Department of Defense formalized policies on weapon-system autonomy via Directive 3000.09 in 2012, mandating appropriate human judgment for lethal force decisions while permitting systems capable of target selection and engagement under predefined constraints, such as in defensive scenarios. Programs like DARPA's Air Combat Evolution (tested in 2023) explored AI-piloted fighters, but no confirmed deployments of fully autonomous lethal systems occurred, with emphasis remaining on semi-autonomous tools like the Switchblade loitering munitions, which rely on operator guidance for final engagement despite autonomous navigation features and have been supplied to Ukraine since 2022 for anti-armor roles. Other nations advanced similar capabilities; South Korea deployed the SGR-A1 automated sentry system along the Korean Demilitarized Zone by 2010, featuring AI-driven target detection via thermal imaging and automatic firing options, though typically requiring human confirmation for lethal action. By the mid-2020s, proliferation continued, with Russia's Lancet loitering munitions exhibiting autonomous terminal guidance in strikes in Ukraine since 2022, and China's reported development of AI-enabled drone swarms, though verifiable autonomous lethal deployments remained limited outside isolated incidents and structured testing environments. These developments underscored a shift toward cost-effective, scalable autonomous systems to counter manpower shortages and enhance precision in high-threat zones, despite ongoing international debates over ethical and legal implications.

Technical Foundations

Artificial Intelligence and Machine Learning Integration

Artificial intelligence and machine learning enable lethal autonomous weapons to process sensor data, identify targets, and execute engagements without human intervention in real time. Core integration involves neural networks for perception tasks, such as object detection and classification, often employing convolutional neural networks (CNNs) to analyze imagery from onboard cameras and radars. These algorithms are trained on large datasets distinguishing combatants from civilians or specific threats, using supervised learning to minimize false positives in dynamic environments. Machine learning algorithms, including deep learning variants for real-time target detection, facilitate autonomous navigation and loitering by predicting trajectories and avoiding obstacles. Reinforcement learning models further support decision-making processes, where systems learn optimal actions—such as pursuit or strike—through simulated trial-and-error, adapting to battlefield uncertainties like terrain variability. In practice, the Turkish Kargu-2 quadcopter drone exemplifies this integration, utilizing embedded machine-learning algorithms for independent target identification and engagement during its 30-minute flight endurance, with reported autonomous operations in Libya as early as 2020. U.S. programs accelerate such capabilities; the Artificial Intelligence Reinforcements (AIR) initiative, launched in 2023, develops AI-driven autonomy for multi-aircraft beyond-visual-range combat, incorporating machine learning for tactical coordination. Similarly, the Air Combat Evolution (ACE) program, active since 2019, employs AI pilots in human-machine dogfights to refine autonomous targeting algorithms, achieving successes in simulated engagements by 2021. These efforts underscore ML's role in scaling autonomy from semi-supervised to fully independent lethal decisions, though vulnerabilities to adversarial inputs—such as spoofed sensor data—persist, requiring robust defenses against adversarial attacks.
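To make the perception stage concrete, the sketch below runs a toy convolutional classifier over a single image patch and converts its raw outputs into class probabilities. The network architecture, class labels, and random input are invented for the example only and bear no relation to any deployed system; the point is simply that a CNN reduces imagery to a probability distribution that downstream logic consumes.

```python
import torch
import torch.nn as nn

# Three assumed classes for illustration only.
CLASSES = ["vehicle", "structure", "clutter"]

# Toy, untrained CNN standing in for a trained perception model.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, len(CLASSES)),
)
model.eval()

# A single 64x64 RGB "sensor patch" of random noise stands in for real imagery.
patch = torch.rand(1, 3, 64, 64)

with torch.no_grad():
    probs = torch.softmax(model(patch), dim=1).squeeze(0)

for label, p in zip(CLASSES, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

In a real pipeline the probabilities would come from a model trained on labeled imagery; here they are meaningless, which is precisely why the validation and adversarial-robustness concerns discussed later matter.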

Sensors, Targeting Algorithms, and Decision-Making Processes

Lethal autonomous weapon systems (LAWS) employ a variety of sensors to perceive their operational environment and detect potential targets, including electro-optical and infrared cameras for visual identification, as well as radar and acoustic sensors for broader situational awareness. These sensors generate real-time data streams that feed into onboard processing units, enabling the system to monitor dynamic battlefields without continuous human input. Advanced implementations may incorporate biometric detection methods, such as facial recognition or gait analysis, to differentiate individuals based on physiological or movement patterns. Targeting algorithms in LAWS primarily rely on machine learning models trained to process sensor inputs and classify objects as threats or non-threats, often using computer vision techniques for object detection and tracking. For instance, convolutional neural networks analyze imagery to identify predefined profiles, such as thermal signatures or behavioral indicators, with reported accuracies exceeding 85% in controlled tests for certain loitering munitions. These algorithms enable autonomous navigation and target locking, as seen in the Turkish Kargu-2 loitering munition, which embeds machine-learning algorithms for target selection without requiring operator input in fully autonomous modes. Integration of machine learning allows adaptation to novel environments through learned patterns from training datasets, though performance degrades in cluttered or adversarial conditions due to occlusions or countermeasures. Decision-making processes in LAWS synthesize sensor and targeting outputs against embedded engagement criteria, typically encoded as software thresholds for lethal action, such as proximity to confirmed threats or mission parameters. Once activated, the system evaluates probabilities—e.g., confidence scores from classifiers exceeding set limits—to select and prosecute targets independently, as demonstrated in reports of Kargu-2 units autonomously hunting retreating forces in Libya circa 2020. This process often incorporates human-on-the-loop elements, where initial human-defined parameters guide AI-driven refinements, but full autonomy delegates final kill decisions to algorithmic logic rather than human oversight. Empirical evaluations highlight the need for robust validation to mitigate errors from dataset biases or incomplete training, ensuring decisions align with operational intent.
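The "software thresholds" described above can be pictured as a small rules check applied to each detection before any action. The sketch below is a hypothetical illustration: the `EngagementRules` and `Detection` fields, the threshold values, and the action labels are assumptions made up for the example, not a description of any real system's logic.

```python
from dataclasses import dataclass


@dataclass
class EngagementRules:
    """Mission parameters a human would set before activation (illustrative)."""
    confidence_threshold: float   # minimum classifier confidence to act
    max_range_m: float            # do not engage beyond this range
    geofence: tuple               # (lat_min, lat_max, lon_min, lon_max)


@dataclass
class Detection:
    label: str
    confidence: float
    range_m: float
    lat: float
    lon: float


def decide(detection: Detection, rules: EngagementRules) -> str:
    """Return the action the onboard logic would take for one detection."""
    lat_min, lat_max, lon_min, lon_max = rules.geofence
    in_fence = lat_min <= detection.lat <= lat_max and lon_min <= detection.lon <= lon_max
    if not in_fence or detection.range_m > rules.max_range_m:
        return "hold"      # outside human-set bounds
    if detection.label != "threat":
        return "ignore"    # classified as non-threat
    if detection.confidence < rules.confidence_threshold:
        return "track"     # keep observing, do not engage
    return "engage"        # all embedded criteria satisfied


rules = EngagementRules(confidence_threshold=0.9, max_range_m=5000.0,
                        geofence=(34.0, 34.5, 36.0, 36.5))
print(decide(Detection("threat", 0.93, 2200.0, 34.2, 36.1), rules))  # engage
print(decide(Detection("threat", 0.62, 2200.0, 34.2, 36.1), rules))  # track
```

The sketch also makes the core concern of this section visible: once the human-defined parameters are set, every remaining branch, including the final one, is resolved by the software alone.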

Categories and Examples

Defensive Autonomous Systems

Defensive autonomous systems encompass weapon platforms designed to protect fixed installations, vehicles, or naval assets by independently detecting, evaluating, and neutralizing incoming threats, such as missiles, drones, small boats, or intruders, without requiring real-time human intervention for target engagement. These systems prioritize rapid response in high-threat environments where human reaction times would be insufficient, relying on integrated sensors like radar, thermal imaging, and laser rangefinders to perform search, track, and fire functions. Unlike offensive systems that seek out distant targets, defensive variants operate within predefined perimeters or engagement zones, activating only upon verified threat detection to minimize false positives. A prominent example is the Phalanx Close-In Weapon System (CIWS), developed by General Dynamics and now produced by Raytheon, which has been deployed on U.S. warships since 1980 to counter anti-ship missiles, low-flying aircraft, and asymmetric threats like small surface vessels. The system integrates a 20mm Gatling gun with a Ku-band radar for continuous 360-degree surveillance, capable of autonomously acquiring targets at ranges up to 2 kilometers, tracking them at speeds exceeding Mach 2, and firing up to 4,500 rounds per minute until the threat is destroyed or exits the zone. Over 900 units have been installed across more than 20 U.S. and allied navies, with combat-proven engagements including the neutralization of Iraqi missiles in 1988 and Silkworm missiles in 1991. Land-based variants, such as the U.S. Army's Counter-Rocket, Artillery, and Mortar (C-RAM) system, extend this capability to counter rockets, artillery, and mortars, demonstrating operational reliability in environments like Iraq where manual defenses proved inadequate. Another key instance is South Korea's SGR-A1 sentry gun, jointly developed by Hanwha Techwin (formerly Samsung Techwin) and Korea University, and deployed along the Korean Demilitarized Zone (DMZ) since approximately 2010 to deter North Korean incursions. Equipped with a 5.56mm machine gun or 12.7mm heavy machine gun, thermal cameras, and pattern-recognizing software, the SGR-A1 can autonomously identify human or vehicle targets up to 3 kilometers away in all weather conditions, issue audio warnings, and engage with precision fire if the threat persists, though operators can override via remote link. At least 100 units guard the 248-kilometer DMZ, enhancing surveillance in rugged terrain where manned patrols face high risks, and the system's development addressed the need for persistent, fatigue-free monitoring amid ongoing tensions. These systems illustrate the tactical emphasis on defensive autonomy, where algorithmic threat evaluation—based on predefined criteria like speed, trajectory, and proximity—enables sub-second responses unattainable by humans, though they incorporate fail-safes like engagement thresholds to prevent erroneous lethal actions. Empirical data from deployments show reduced risks compared to unguided defenses, as sensors discriminate between threats and non-threats with error rates below 1% in controlled tests, yet vulnerabilities to spoofing or environmental interference persist, prompting ongoing upgrades in machine learning for target classification.
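The criteria-based, sub-second reaction described above can be illustrated with a simplified point-defense filter. The sketch below is hypothetical: the track fields, the numeric limits, and the idea of reducing threat evaluation to three comparisons are deliberate oversimplifications for illustration, not the logic of the Phalanx, C-RAM, or SGR-A1.

```python
from dataclasses import dataclass


@dataclass
class RadarTrack:
    range_m: float            # current distance to the contact
    closing_speed_mps: float  # positive if the contact is approaching
    altitude_m: float


def is_engageable_threat(track: RadarTrack,
                         max_range_m: float = 2000.0,
                         min_closing_mps: float = 150.0,
                         max_altitude_m: float = 3000.0) -> bool:
    """Simplified point-defense filter: fast, inbound, and inside the defended bubble."""
    return (track.range_m <= max_range_m
            and track.closing_speed_mps >= min_closing_mps
            and track.altitude_m <= max_altitude_m)


# An inbound, sea-skimming contact passes the filter; a slow outbound one does not.
print(is_engageable_threat(RadarTrack(1500.0, 600.0, 15.0)))   # True
print(is_engageable_threat(RadarTrack(1800.0, -20.0, 900.0)))  # False
```

Because the defended volume and the threat criteria are fixed in advance, a filter of this kind can run every radar update cycle, which is the basis for the claim that such systems respond faster than any human could.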

Offensive and Loitering Munitions

Loitering munitions, also known as kamikaze or suicide drones, represent a category of offensive lethal autonomous weapons systems (LAWS) designed to loiter over a designated area, autonomously detect and engage targets using onboard sensors and algorithms, and self-destruct upon impact. These systems integrate propulsion, sensing, and explosive payloads, enabling extended flight times—often hours—while searching for high-value targets without continuous human input once launched. Unlike traditional missiles, their reusability if not detonated and ability to abort missions in some models distinguish them, though many are expendable by design. Prominent examples include the Israeli Harop loitering munition developed by Israel Aerospace Industries, which features a 9-hour endurance and electro-optic seekers for autonomous target detection in the absence of prior intelligence, and is primarily used for suppression of enemy air defenses (SEAD) by homing on radar emissions. Similarly, the Turkish Kargu-2 drone employs machine learning for real-time target identification and can execute fully autonomous attacks, with swarm capabilities for coordinated strikes. In a reported deployment during the 2020 Libyan conflict, Kargu-2 units allegedly operated in autonomous mode to hunt and engage retreating forces, marking a potential first instance of a LAWS inflicting fatalities without direct human targeting, as noted in a UN panel report. These munitions enhance offensive operations by providing persistent surveillance and precision strikes against time-sensitive or mobile targets, such as command centers or armored vehicles, often in GPS-denied environments through inertial navigation and AI-driven decision-making. The U.S. Switchblade series, including the man-portable Switchblade 300 with a 15-minute loiter time and 10 km range, supports semi-autonomous modes where operators confirm targets via video feed, though upgrades incorporate greater autonomy for navigation and evasion. Deployments in conflicts like Ukraine have demonstrated their role in urban and dispersed battlefields, where loitering allows for on-demand response, though full autonomy remains constrained by policies requiring human oversight in U.S. systems. Critics, including UN experts, highlight risks of erroneous engagements due to algorithmic limitations in distinguishing combatants from civilians.

Real-World Deployments and Case Studies

In March 2020, during the Libyan civil war, Turkish-manufactured Kargu-2 drones, produced by STM, were deployed by forces aligned with the Government of National Accord against retreating troops affiliated with General Khalifa Haftar. A UN Panel of Experts report documented that these loitering munitions, capable of autonomous navigation and target engagement via onboard artificial intelligence, reportedly "hunted down" and attacked human targets without direct human control in some instances. The Kargu-2 features machine learning for target classification and can operate in swarms, switching between manual and fully autonomous modes, with a reported range of 10 kilometers and endurance of 30 minutes. This incident marked the first documented potential use of lethal autonomous weapon systems (LAWS) against human combatants in active conflict, raising questions about compliance with international humanitarian law, though the exact level of human oversight remains disputed due to limited verification. The Kargu-2's deployment in Libya highlighted operational capabilities in dynamic environments, where the drones used pre-programmed target profiles to identify and engage fighters based on visual and thermal signatures. Post-incident analysis by the UN noted the systems allowed for target selection and engagement after activation, distinguishing them from remotely piloted drones. Turkish officials have emphasized safeguards, but the UN findings suggest instances of full autonomy, with the drones programmed to prioritize moving targets matching combatant profiles. This case underscores the transition from semi-autonomous munitions to systems with greater target discrimination via machine learning, though empirical data on engagement accuracy is scarce, limited to classified military assessments and secondary reporting. In the 2020 Nagorno-Karabakh war, Azerbaijan extensively deployed Israeli Harop loitering munitions alongside Turkish drones, contributing to the destruction of over 200 armored vehicles and artillery pieces. The Harop, a loitering munition with a 200-kilometer range and nine-hour loiter time, operates autonomously in its terminal phase after human-provided targeting data, using electro-optical sensors to detect and strike radar emissions or visual signatures without further intervention. While not fully autonomous in initial target selection—relying on pre-designated zones or human cues—these systems demonstrated lethality, with video evidence showing independent homing on mobile threats. Azerbaijani forces reported Harop effectiveness in suppressing air defenses, achieving a reported 80-90% success rate in engagements, though countermeasures like electronic jamming reduced overall impact in later phases. This deployment illustrated LAWS precursors in conventional interstate conflict, blending human oversight with autonomous execution, but fell short of independent target profiling in unstructured environments. Defensive LAWS have seen routine deployment by multiple militaries, including the U.S. Navy's Phalanx Close-In Weapon System (CIWS), operational since 1980 and upgraded with autonomous fire control against incoming missiles and aircraft. The Phalanx uses radar-guided 20mm Gatling guns to detect, track, and engage threats, firing up to 4,500 rounds per minute without human input once activated, with over 100 systems deployed across U.S. and allied vessels. Similar systems, like South Korea's Super aEgis II automated turret along the DMZ since 2010, feature AI-driven detection of human intruders via thermal imaging and can fire autonomously in response to predefined threats, though set to require human confirmation for lethal force in practice.
These cases represent established, low-controversy applications focused on threat interception rather than proactive targeting, with billions of operational hours logged without reported erroneous engagements against non-threats.

Military Advantages and Strategic Benefits

Enhanced Operational Efficiency and Force Multiplication

![STM Kargu loitering munition][float-right] Lethal autonomous weapons systems (LAWS) enhance operational efficiency by enabling continuous surveillance and engagement without human fatigue or physiological limitations, allowing for persistent operations over extended periods. Loitering munitions, a key category of LAWS, provide advantages such as faster reaction times, area persistence, and selective targeting, which outperform traditional munitions in dynamic battlefields. These systems reduce logistical burdens associated with human operators, including sustenance and medical support, thereby streamlining logistics and enabling smaller forces to maintain high readiness levels. Force multiplication arises from the scalability of LAWS, particularly through swarm tactics where multiple units coordinate autonomously to overwhelm adversaries. Autonomous platforms execute diverse missions with minimal support infrastructure, leveraging swarm algorithms for distributed coordination that mimics unified command structures. In practice, systems like the Turkish Kargu-2 have demonstrated this in Libya in 2020, where autonomous operations "hunted down" retreating forces with high effectiveness, amplifying the impact of limited deployers. Such capabilities allow a single operator or small team to control swarms, effectively multiplying combat power by factors exceeding traditional manned units, as groups of LAWS can synchronize actions akin to a single entity. Overall, these efficiencies stem from LAWS' expendability and lower per-unit costs compared to manned platforms, facilitating mass deployment without proportional increases in personnel risks or expenses. For instance, loitering munitions engage time-sensitive targets cost-effectively, preserving higher-value assets for strategic roles. This supports force multiplication by integrating LAWS into layered defense and offense strategies, where autonomous elements handle routine or high-volume tasks, freeing human resources for complex decision-making.

Minimizing Risks to Human Operators

Lethal autonomous weapon systems (LAWS) minimize risks to operators by enabling the execution of high-threat missions without requiring personnel to be physically present in the operational environment. Once deployed or activated, these systems can independently identify, select, and engage targets, thereby eliminating the need for operators to expose themselves to enemy fire, improvised explosive devices, or other hazards. This capability has been highlighted in U.S. military analyses as a key advantage, allowing forces to neutralize threats remotely from secure locations, such as command centers or distant bases. In practice, systems like loitering munitions exemplify this risk reduction; for instance, the U.S. Switchblade, a man-portable loitering munition that can autonomously loiter and strike targets after launch, permits soldiers to engage adversaries without advancing into contested areas. Similarly, the Turkish Kargu-2 quadcopter, deployed in Libya as early as 2020, operates with onboard AI for target recognition and engagement, sparing operators from piloting vulnerable manned aircraft or ground vehicles. U.S. Department of Defense policy under Directive 3000.09, updated in 2023, mandates that autonomous systems incorporate safeguards to allow operator override while prioritizing designs that enhance safety by distancing humans from harm. Empirical evidence from unmanned systems deployments, which inform LAWS development, demonstrates tangible casualty reductions; during Operations Iraqi Freedom and Enduring Freedom, the proliferation of unmanned ground and aerial vehicles correlated with decreased U.S. troop exposure to roadside bombs, contributing to a shift where machines absorbed risks previously borne by soldiers. By 2018, the U.S. Army had integrated over 7,000 robotic systems for tasks like route clearance and perimeter defense, explicitly to minimize personnel risks in hazardous environments. This approach not only preserves operator lives but also sustains operational tempo without the psychological toll of direct combat exposure.

Superior Precision Compared to Human-Controlled Systems

Autonomous weapon systems can achieve superior targeting precision by leveraging algorithms that process vast sensor data volumes without human limitations such as cognitive overload or sensory distortion, enabling more accurate object identification and engagement decisions. Machine learning models in these systems demonstrate visual recognition accuracies of 83-85 percent in complex environments, outperforming human operators under stress, where error rates increase due to fatigue and emotional factors. For instance, the U.S. Counter-Rocket, Artillery, and Mortar (C-RAM) system automates intercepts with enhanced precision to distinguish threats from friendly assets, reducing fratricide risks that human verification alone might exacerbate in high-tempo scenarios. Unlike human operators, who experience performance degradation from prolonged operations—evidenced by studies showing mental fatigue impairs judgment and marksmanship—autonomous systems maintain consistent accuracy across extended engagements without decrement. AI-driven targeting integrates environmental variables like weather and terrain, yielding up to 70 percent faster processing cycles for target nomination and assignment compared to manual methods reliant on operator judgment. This capability minimizes potential collateral damage by enabling precise discrimination between combatants and non-combatants, as algorithms avoid human biases like over-reliance on incomplete visual cues. Proponents, including former U.S. Department of Defense officials, argue this extends to lethal systems, potentially lowering civilian casualties through unerring adherence to predefined rules of engagement. In tactical applications, such as AI-assisted targeting, systems like the U.S. Army's Tactical Intelligence Targeting Access Node (TITAN) enhance human operators by automating data fusion for precise strikes, reducing errors in dynamic battlefields where humans alone falter under information overload. Empirical parallels from non-lethal autonomous defenses, including rapid threat neutralization without fatigue-induced delays, support claims that full autonomy in offensive munitions could similarly outperform remote human piloting, which suffers from communication latency and operator endurance limits. However, these advantages hinge on robust validation, as unproven models risk overconfidence in edge cases beyond training data. Overall, the elimination of human-factor errors positions autonomous systems for inherently more reliable precision in force-on-force engagements.

Potential Risks and Technical Challenges

Algorithmic Errors and Unpredictability

Algorithmic errors in lethal autonomous weapon systems (LAWS) arise primarily from limitations in machine learning algorithms, including biases embedded in training datasets that lead to systematic misidentifications of targets. For instance, incomplete or skewed data can result in false positives, where non-combatants or neutral objects are erroneously classified as threats, as documented in analyses of targeting systems that highlight risks from narrow data selection and programmer influences. Such errors are exacerbated in dynamic environments, where algorithms trained on controlled simulations fail to generalize, potentially violating principles of distinction under international humanitarian law. Unpredictability stems from the "black box" nature of advanced neural networks, where complex interactions between algorithms and real-time inputs produce emergent behaviors that even developers cannot fully anticipate or explain. Military systems reliant on machine learning for target selection exhibit this opacity, as interactions with unpredictable operational contexts—such as variable lighting, weather, or electronic interference—can yield outputs diverging from intended logic. Studies on decision support for targeting indicate that such systems may amplify errors through over-reliance on probabilistic models, with false negatives (missing actual threats) or positives occurring due to unmodeled variables, as seen in broader applications like facial recognition, where error rates for certain demographics exceed 30% in uncontrolled settings. Empirical evidence from AI testing underscores these vulnerabilities; for example, simulations of autonomous drones have shown misclassification rates increasing in novel scenarios, with one review noting that minimizing false positives requires extensive training-data diversity, yet battlefield novelty often overwhelms this, leading to lethal mistakes without human intervention. While proponents argue that iterative training mitigates risks, operational experience reveals that algorithmic drift—where models degrade over time due to shifting data distributions—remains a persistent challenge, as evidenced by documented failures in non-military AI systems adapted for defense applications. These factors collectively heighten the potential for unintended escalations, as unpredictable error propagation in swarms or networked LAWS could cascade into disproportionate engagements.
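The effect of distribution shift on error rates can be illustrated with a toy simulation: a decision threshold tuned under training-like conditions produces many more false positives once the score distribution of benign objects shifts, loosely mirroring the generalization failures described above. All distributions, numbers, and the threshold here are fabricated for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 0.8  # decision threshold tuned under training-like conditions


def false_positive_rate(non_threat_scores: np.ndarray, thr: float) -> float:
    """Fraction of non-threat objects whose classifier score exceeds the threshold."""
    return float(np.mean(non_threat_scores > thr))


# Classifier scores for non-threat objects under conditions resembling the training data.
train_like = np.clip(rng.normal(loc=0.3, scale=0.15, size=100_000), 0, 1)

# The same kinds of objects under a shifted condition (e.g., smoke or novel clutter)
# where the model is systematically more confident that benign objects are threats.
shifted = np.clip(rng.normal(loc=0.55, scale=0.2, size=100_000), 0, 1)

print(f"FPR, training-like conditions: {false_positive_rate(train_like, threshold):.4f}")
print(f"FPR, shifted conditions:       {false_positive_rate(shifted, threshold):.4f}")
```

Under these made-up distributions the false-positive rate rises by roughly two orders of magnitude even though the threshold never changes, which is the basic mechanism behind concerns that a system validated in one environment may behave very differently in another.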

Vulnerability to Adversarial Attacks and Proliferation

Lethal autonomous weapon systems (LAWS) are susceptible to adversarial attacks that exploit weaknesses in their artificial intelligence components, such as machine learning models used for target identification and decision-making. Adversarial examples, which are subtly modified inputs designed to deceive AI classifiers, can cause systems to misidentify legitimate threats or non-threats, as demonstrated in assessments of electro-optical detection systems where perturbations invisible to humans lead to false positives or negatives. For instance, physical adversarial perturbations, like patterned camouflage or decoy objects, have been shown to evade drone-based object detectors in simulated military environments. These vulnerabilities arise from the brittleness of neural networks, which perform poorly outside their training distributions, amplifying risks in dynamic battlefield conditions. Cyber operations further compound these risks, enabling adversaries to compromise LAWS through data poisoning, spoofing, or direct intrusions. Reports on securing military AI highlight how attackers can inject malicious data during training or updates, altering targeting logic without physical access, as seen in vulnerabilities affecting sensor fusion and control loops. Adversarial policies, such as deploying decoy drones exhibiting erratic behaviors, can confuse coordination algorithms in swarming systems, leading to operational failures or unintended engagements. Electronic warfare techniques, including jamming or GPS spoofing, remain effective against semi-autonomous precursors and extend to fully autonomous variants reliant on similar sensors and communication protocols, underscoring the need for robust countermeasures like adversarial training, though these increase computational demands and may not fully mitigate real-world exploits. Proliferation of LAWS poses significant security challenges due to their potential for low-cost replication using commercial-off-the-shelf components and open-source frameworks, facilitating access by non-state actors. Unlike conventional arms requiring extensive industrial bases, autonomous systems can be assembled from inexpensive drones and basic kits, as evidenced by the rapid adaptation of loitering munitions in recent conflicts, where non-state groups have modified systems for independent targeting. This democratizes lethal technology, heightening risks of misuse in terrorism or insurgencies, with analyses indicating that such weapons' relative fragility does not deter proliferation but rather accelerates it through iterative improvements by rogue entities. International efforts to restrict development face enforcement hurdles, as technological diffusion via dual-use software and hardware evades export controls, potentially leading to an uncontrolled spread that undermines strategic stability.
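A minimal sketch of the adversarial-example mechanism referenced above uses the well-known fast gradient sign method (FGSM) against a toy, untrained classifier. The model, input, labels, and perturbation budget are placeholders invented for illustration; the sketch shows how the perturbation is constructed, not an attack on any real targeting system, and with an untrained model the prediction may or may not actually flip.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy, untrained image classifier standing in for a target-recognition model.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
    nn.Linear(64, 2),            # classes: 0 = non-threat, 1 = threat
)
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)
true_label = torch.tensor([0])   # the object is actually a non-threat

# FGSM: take the sign of the loss gradient with respect to the input
# and step by a small epsilon in the direction that increases the loss.
loss = F.cross_entropy(model(image), true_label)
loss.backward()
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

with torch.no_grad():
    clean_pred = model(image).argmax(dim=1).item()
    adv_pred = model(adversarial).argmax(dim=1).item()

print(f"clean prediction: {clean_pred}, adversarial prediction: {adv_pred}")
```

The perturbation is bounded by epsilon per pixel and is typically imperceptible to a human reviewer, which is why adversarial training and input validation are cited as necessary, if incomplete, countermeasures.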

Ethical Considerations

Human Dignity and Moral Accountability

Critics of lethal autonomous weapon systems (LAWS) contend that such technologies undermine human dignity by delegating life-and-death decisions to algorithms incapable of moral judgment or empathy, thereby treating human targets as mere objects within computational processes rather than beings with intrinsic worth. This perspective draws on Kantian conceptions of dignity, emphasizing that dignity requires recognition of human autonomy and rationality in lethal contexts, which machines cannot provide; as philosopher Peter Asaro argued in 2012, "As a matter of the preservation of human morality, dignity, justice, and law we cannot accept an automated system making the decision to take a human life." Empirical analyses highlight risks of dehumanization, where LAWS reduce combatants and civilians to data patterns, potentially eroding the ethical restraint imposed by human involvement in warfare. Proponents of restricted LAWS deployment counter that dignity violations are not inherent if systems adhere to principles like distinction and proportionality, potentially enhancing respect for life through superior precision over error-prone human operators. However, this view faces scrutiny for assuming reliable algorithmic fidelity, given documented cases of AI misclassification in non-lethal applications, such as facial recognition errors exceeding 10% in certain datasets as of 2022. Philosophers like Gregory Reichberg argue that machine-delivered lethal force debases human dignity akin to treating humans as animals, stripping warfare of the human agency essential to moral restraint. On moral accountability, LAWS introduce a "responsibility gap" wherein autonomous decisions evade attribution to specific humans, complicating liability for unlawful killings and forward-looking improvements in conduct. Philosopher Anne Gerdes posited in 2018 that delegating lethal authority to LAWS creates an unacceptable gap, as programmers bear responsibility for design flaws but not prospective control over unpredictable runtime behaviors, evidenced by AI "black box" opacity in decision trees. This gap persists even with oversight layers, as technical access to decision logs, audit mechanisms, and legal frameworks fail to fully bridge it when autonomy precludes real-time human veto, per analyses of socio-technical limitations. Such accountability deficits risk impunity, where commanders evade culpability for systemic errors, contrasting with human-operated systems where individual soldiers face prosecution under frameworks like the Rome Statute, as seen in over 100 cases since 2002 involving direct human agency in atrocities. Critics argue this corrosion of agency incentivizes proliferation of flawed systems, amplifying civilian risks without commensurate ethical safeguards, though no fully autonomous lethal deployments have occurred as of 2025 to empirically test these dynamics.

Comparative Analysis with Human Decision-Making Flaws

Human operators in combat frequently exhibit decision-making impairments due to physiological and psychological factors. Fatigue alone can elevate error rates significantly; for instance, cognitive fatigue induced prior to marksmanship tasks increased soldiers' errors—firing at non-threats—by 33% compared to rested conditions. Sleep deprivation and stress compound these issues, degrading cognitive performance and reaction times, as evidenced by studies showing acute sleep loss impairs shooting accuracy and decision speed during simulated overnight operations. Emotional responses, such as fear or anger, further distort threat assessment, leading to hesitation or overreaction absent in programmed systems. Friendly fire incidents underscore these vulnerabilities, often stemming from misidentification under duress rather than technical failures. In the 1991 Gulf War, friendly fire accounted for approximately 17% of U.S. battle casualties, with misperception of targets as hostile being a primary cause. Broader analyses of modern conflicts estimate friendly fire contributes 13-23% of combat deaths for U.S. forces, attributable to human factors like fatigue-induced lapses in situational awareness and communication breakdowns amid chaos. Such errors persist despite training, as soldiers under prolonged exertion ignore incoming data or fail to integrate it effectively, negating advanced sensor advantages. Cognitive biases exacerbate these flaws, systematically skewing judgments. Overconfidence bias leads commanders to overestimate success probabilities, while anchoring fixates decisions on initial flawed assessments, as seen in historical operations where premature commitments ignored contradictory evidence. Availability bias prioritizes recent or vivid events over comprehensive data, fostering illusory correlations in threat evaluation. These heuristics, adaptive in low-stakes environments, prove maladaptive in warfare's high-uncertainty context, where they amplify errors in target discrimination and force allocation. Proponents of lethal autonomous weapons contend these systems mitigate such human frailties by executing predefined rules without emotional interference or exhaustion, enabling faster processing of sensor data for precise engagements. Unlike fatigued operators, autonomous platforms maintain consistent performance over extended operations, potentially lowering collateral risks through unclouded threat assessment. However, this comparison highlights not superiority but a trade-off: while humans err via subjective lapses, machines depend on algorithmic fidelity, raising questions about irreplaceable intuition in ambiguous scenarios like distinguishing combatants from civilians in dynamic urban settings. Empirical data on human errors thus informs ethical debates, suggesting autonomy could reduce predictable failure modes if programmed to exceed baseline human reliability.

Compliance with International Humanitarian Law

International Humanitarian Law (IHL), codified in the Geneva Conventions and customary international law, applies fully to all weapons systems, including lethal autonomous weapon systems (LAWS), requiring adherence to core principles such as distinction between combatants and civilians, proportionality of attacks, and precautions in attack. States must conduct legal reviews of new weapons under Article 36 of Additional Protocol I to the Geneva Conventions to assess IHL compliance prior to development or acquisition, evaluating whether LAWS can reliably distinguish targets and apply force proportionally in dynamic environments. Proponents of LAWS argue that advanced sensors, algorithms, and processing speed can enhance compliance with distinction by processing data faster and more accurately than humans, reducing errors from fatigue, stress, or emotion, as evidenced in simulations where autonomous systems demonstrated superior target identification in controlled scenarios. For proportionality, which demands weighing anticipated civilian harm against concrete military advantage, programmable constraints could embed thresholds to abort attacks if estimated collateral damage exceeds limits, potentially outperforming human operators prone to overreaction in high-stakes situations. However, critics, including the International Committee of the Red Cross (ICRC), contend that LAWS may inherently fail these principles due to algorithmic unpredictability in novel contexts, where adaptations could lead to misinterpretations of civilian presence or value-based judgments beyond binary programming. The principle of precautions requires verifiable human oversight in meaningful ways, such as setting operational parameters or retaining override capabilities, to ensure LAWS do not engage without real-time assessment of changing circumstances like human shields or surrendering fighters, which static algorithms might overlook. ICRC reports emphasize that LAWS must not create accountability gaps, with commanders retaining responsibility for programming and deployment decisions, though full autonomy raises questions about meaningful control when systems self-modify post-deployment. Empirical tests by militaries show current semi-autonomous systems like loitering munitions can comply in predefined scenarios, but scaling to fully autonomous lethal operation without human oversight risks violations in fluid combat environments, where contextual nuances defy exhaustive pre-programming. No international treaty prohibits LAWS outright as of 2025, but UN resolutions urge states to refrain from deployment if IHL compliance cannot be assured, highlighting ongoing debates over whether technological safeguards suffice or if prohibitions on certain unpredictable variants are needed.
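The idea of an embedded proportionality-style abort threshold, mentioned above as a possible safeguard, might look like the following toy check. The inputs, the ratio test, and the numeric scores are entirely invented for illustration and greatly oversimplify the legal judgment they stand in for, which is precisely the critics' objection to reducing proportionality to a programmed threshold.

```python
from dataclasses import dataclass


@dataclass
class StrikeAssessment:
    estimated_civilian_harm: float  # abstract score from a collateral-damage model
    military_advantage: float       # abstract score assigned at mission planning


def proportionality_abort(a: StrikeAssessment, max_ratio: float = 0.5) -> bool:
    """Abort if estimated civilian harm exceeds the allowed fraction of the
    anticipated military advantage. A toy stand-in for a legal assessment that,
    in practice, resists reduction to a single numeric ratio."""
    if a.military_advantage <= 0:
        return True  # no articulable advantage: never proceed
    return a.estimated_civilian_harm > max_ratio * a.military_advantage


print(proportionality_abort(StrikeAssessment(estimated_civilian_harm=2.0, military_advantage=10.0)))  # False
print(proportionality_abort(StrikeAssessment(estimated_civilian_harm=8.0, military_advantage=10.0)))  # True
```

The sketch makes the dispute concrete: everything of legal significance is pushed into how the two scores are estimated, which is exactly the contextual judgment the ICRC argues cannot be fully delegated to software.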

National Policies, Including US Directives

The U.S. Department of Defense (DoD) formalized its approach to autonomous weapon systems through Directive 3000.09, initially issued on November 21, 2012, and updated on January 25, 2023. The directive establishes policy for the development, acquisition, and fielding of such systems, emphasizing that autonomous and semi-autonomous weapon systems must be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force. It mandates rigorous safety testing, risk assessments, and senior-level reviews for systems capable of selecting and engaging targets without further human intervention, but does not prohibit fully autonomous lethal capabilities outright, provided they comply with applicable laws, including the law of war. The 2023 update reinforces these requirements without introducing a categorical ban or mandatory real-time control, focusing instead on minimizing failures and ensuring accountability through human oversight in authorization and operation. Among other nations, the United Kingdom maintains a policy opposing lethal autonomous weapon systems that lack meaningful and context-appropriate human involvement, as outlined in its 2022 Defence Artificial Intelligence Strategy. The UK strategy commits to human accountability throughout the lifecycle of AI-enabled systems and supports international discussions under the Convention on Certain Conventional Weapons, but rejects preemptive legally binding prohibitions, arguing that existing international humanitarian law suffices for governance. Russia has articulated opposition to any international legally binding instrument restricting lethal autonomous weapon systems, emphasizing that human control can be achieved through non-real-time means such as pre-programming and ethical guidelines rather than direct intervention. Russian doctrine prioritizes rapid development of autonomous capabilities, with plans for fully autonomous military systems by 2035, and views bans as impediments to technological parity with adversaries. China, while advocating classification of autonomous systems into "unacceptable" and "acceptable" categories for potential prohibitions on the former, has abstained from resolutions urging restrictions and continues aggressive pursuit of AI-integrated weapons without a domestic ban. Israel employs advanced autonomous defensive systems but abstains from supportive votes on restrictive UN resolutions, maintaining that international law applies without need for new prohibitions and rejecting characterizations of such systems as fully independent decision-makers. Few other states have codified national policies, with most positions expressed in multilateral forums rather than domestic directives.

International Negotiations and Resolutions

Negotiations on lethal autonomous weapons systems (LAWS) have taken place primarily under the Convention on Certain Conventional Weapons (CCW), through its Group of Governmental Experts (GGE) on emerging technologies in the area of LAWS, established as a forum for discussing definitions, characteristics, and potential regulatory measures since 2014. The GGE convenes annually in Geneva, with sessions in 2025 held from March 3–7 and September 1–5, focusing on formulating elements for a possible legally binding instrument, including prohibitions on systems lacking meaningful human control; however, consensus has consistently eluded the group due to opposition from states such as Russia, the United States, and India, which argue that preemptive bans could hinder technological development without addressing definitional ambiguities. In September 2025, the GGE reviewed a rolling text on potential elements, with 42 states expressing readiness to commence negotiations on a binding instrument, yet progress stalled under the CCW's consensus rule, which allows a minority of objecting parties—often major military powers—to block advancements, resulting in no mandate for formal talks by the session's end. This pattern reflects broader divisions: over 70 states, primarily from Africa, Latin America, and some European nations, advocate for outright prohibitions, while proponents of regulation without bans emphasize compliance with international humanitarian law through human oversight rather than prohibition. Parallel efforts in the UN General Assembly have produced non-binding resolutions urging accelerated action. Resolution 78/241, adopted on December 22, 2023, called for addressing the risks posed by LAWS under international law. This was followed by Resolution 79/62 on December 2, 2024, which passed with 166 votes in favor, 3 against (Belarus, North Korea, and Russia), and 15 abstentions, mandating informal consultations on May 12–13, 2025, in New York to broaden participation beyond CCW states and explore complementarity with ongoing GGE work. Earlier, on November 5, 2024, the First Committee adopted draft Resolution L.77 with 161 in favor, reinforcing calls for treaty negotiations amid warnings from UN Secretary-General António Guterres in May 2025 for a global prohibition to preserve human control over lethal force. No legally binding international resolution or treaty on LAWS exists as of October 2025, with advocacy groups like the International Committee of the Red Cross and Human Rights Watch attributing delays to resistance from states possessing advanced autonomous systems, while critics of ban-focused campaigns argue such efforts overlook verifiable benefits like reduced collateral damage in precision targeting compared to human errors in conventional warfare. These negotiations highlight tensions between ethical imperatives for human accountability and pragmatic concerns over verifiable enforcement in an era of rapid AI proliferation, with over 120 states by mid-2025 endorsing the start of treaty talks yet facing entrenched opposition from powers prioritizing operational autonomy.

Debates on Governance

Arguments Against Bans from Military Perspectives

Military leaders and defense analysts argue that prohibiting lethal autonomous weapon systems (LAWS) would undermine operational effectiveness by forgoing technologies that serve as force multipliers, enabling fewer personnel to achieve mission objectives with greater efficacy. Autonomous systems expand access to contested environments, operate at tempos exceeding human capabilities, and handle repetitive or hazardous tasks without risking lives, as outlined in the U.S. Department of Defense's Unmanned Systems Integrated Roadmap for 2007–2032. For instance, systems like explosive ordnance disposal robots cost approximately $230,000 compared to $850,000 annually per soldier, potentially yielding significant savings while minimizing personnel exposure to threats. A primary concern from military perspectives is the preservation of friendly forces, as LAWS remove humans from high-risk engagements, reducing casualties in dull, dirty, or dangerous operations such as prolonged reconnaissance or radiological hazard response. U.S. defense policy, per Department of Defense Directive 3000.09 as updated in 2023, permits the development and fielding of such systems under strict oversight, requiring human judgment in force employment but allowing autonomy in select scenarios to enhance safety and reliability through rigorous testing. This approach counters ban proposals by emphasizing that autonomy can mitigate human errors induced by fatigue or stress, potentially lowering ethical lapses in targeting compared to stressed operators. Proponents highlight precision advantages, noting that LAWS process vast sensor data without fatigue or degradation, enabling faster, more accurate engagements that could reduce collateral damage versus human-operated systems. In degraded communication environments, onboard autonomy ensures continued functionality, aligning with international humanitarian law (IHL) by facilitating discrimination between combatants and civilians through real-time verification. The U.S. position, articulated in CCW discussions, opposes preemptive bans, asserting that LAWS may improve IHL adherence via enhanced targeting accuracy and reduced unintended civilian harm relative to less precise munitions. From a strategic standpoint, bans are viewed as impractical due to verification challenges and non-compliance risks from adversaries like Russia and China, who continue LAWS development, potentially eroding U.S. advantages in high-intensity conflicts. Existing IHL frameworks suffice to prohibit unreliable or indiscriminate systems, obviating the need for categorical prohibitions that ignore operational necessities in future battlefields dominated by speed and swarms. Defense experts warn that halting innovation would cede ground in an arms competition, compromising deterrence and security without verifiable enforcement mechanisms.

Pro-Ban Campaigns and Their Critiques

The Campaign to Stop Killer Robots, a coalition of over 250 non-governmental organizations from more than 100 countries, was publicly launched in April 2013 to advocate for a preemptive international treaty prohibiting the development, production, and use of lethal autonomous weapons systems (LAWS), defined as those capable of selecting and engaging targets without meaningful human control. Co-founded by groups including Human Rights Watch and the International Committee for Robot Arms Control, the campaign has focused on lobbying within the United Nations Convention on Certain Conventional Weapons (CCW), where discussions on LAWS began informally in 2014 and evolved into a Group of Governmental Experts (GGE) by 2017. By 2020, the campaign had influenced statements from 97 countries, with 30 expressing support for a ban or new legally binding rules, though major powers like the United States, Russia, and China have resisted outright prohibitions. Key arguments include the inherent inability of LAWS to reliably distinguish combatants from civilians or assess proportionality under international humanitarian law (IHL), the erosion of moral accountability in warfare, and heightened risks of proliferation to non-state actors, potentially enabling low-cost, scalable attacks by terrorists. Other prominent organizations have echoed these concerns, emphasizing an "accountability gap" in which no human operator could be held responsible for algorithm-driven errors, and warning of an arms race that lowers barriers to conflict by removing human empathy from lethal decisions. The campaign draws parallels to successful treaties like the 1997 Mine Ban Convention, urging a similar humanitarian approach despite LAWS not yet being widely deployed. Proponents cite early prototypes, such as Turkey's Kargu-2, a loitering munition marketed with autonomous engagement capabilities, as evidence of imminent dangers requiring immediate action. Critiques of these campaigns highlight their reliance on alarmist rhetoric, such as the term "killer robots," which former U.S. Deputy Secretary of Defense Robert Work criticized in 2019 as unethical and immoral for conflating semi-autonomous systems with fully unpredictable machines, thereby stifling legitimate technological advancements that could enhance precision and reduce civilian harm in targeting. Analysts argue that pro-ban efforts overestimate the IHL risks of LAWS while underestimating flaws in human decision-making, such as fatigue or emotional bias, which empirical data from conflicts such as Afghanistan and Iraq show contribute to the majority of civilian casualties—over 90% in some drone strikes—compared with potentially more consistent algorithmic judgments. Enforcement challenges are a recurring objection: historical bans on chemical weapons have failed to deter rogue actors such as Syria, suggesting a LAWS ban would disadvantage compliant states while non-signatory adversaries advance unchecked, per assessments from defense think tanks. Furthermore, the campaigns' strategy within the CCW framework has been deemed ineffective, as it mirrors past successes such as the cluster munitions ban but ignores the dual-use nature of the underlying technologies and the lack of consensus among permanent UN Security Council members, leading to stalled negotiations despite over 30 GGE meetings by 2023.
Critics from military and policy circles contend that NGO-driven advocacy often prioritizes deontological ethics over consequentialist outcomes, neglecting how autonomy could minimize civilian harm through faster, data-driven responses on dynamic battlefields, as simulated in U.S. Department of Defense exercises. This perspective underscores a tendency among humanitarian organizations toward narratives that may not align with the causal realities of deterrence, potentially increasing net human suffering by prolonging conflicts.

[Image: Rally on the steps of San Francisco City Hall, protesting against a vote to authorize police use of deadly-force robots.]

Prospects for Regulation and International Agreements

Ongoing discussions on regulating lethal autonomous weapons systems (LAWS) occur primarily within the United Nations Convention on Certain Conventional Weapons (CCW) framework, through the Group of Governmental Experts (GGE) on emerging technologies in LAWS. The GGE held sessions in Geneva from March 3–7 and September 1–5, 2025, focusing on applying international humanitarian law, ethical concerns, and potential normative frameworks, with its mandate extended until the CCW's Seventh Review Conference in 2026. In December 2024, the UN General Assembly adopted a resolution on LAWS (Resolution 79/62) with 166 votes in favor, urging states to address risks through enhanced compliance with international humanitarian law and consideration of new legally binding instruments, though it stopped short of mandating treaty negotiations. This reflects growing multilateral attention amid rapid technological advancement, but the resolution lacks enforcement mechanisms and faces implementation hurdles given the absence of consensus among key states. Major powers resist outright bans, favoring reliance on existing international humanitarian law or non-binding guidelines over preemptive prohibitions. The United States opposes stigmatizing LAWS development, emphasizing human oversight and national policies such as DoD Directive 3000.09 on Autonomy in Weapon Systems (updated in 2023), while arguing that bans could cede technological advantages to adversaries. Russia deems calls for bans premature, asserting that LAWS pose no compelling risks beyond those of existing weapons, and has blocked stronger CCW measures. China has expressed support for limiting fully autonomous lethal systems in principle but maintains strategic ambiguity, continuing domestic development without committing to verifiable restrictions that might constrain its military modernization. These divergent positions—ranging from prohibitionist stances by over 30 states and NGOs advocating a binding instrument by 2026, to "traditionalist" reliance on current international humanitarian law by powers such as the United States and Russia—undermine consensus for binding agreements. No international treaty explicitly prohibits LAWS as of October 2025, with experts citing geopolitical rivalries and arms race dynamics as barriers to progress beyond voluntary restraints. Prospects for comprehensive regulation thus hinge on the CCW Review Conference, where a mandate to negotiate a binding instrument remains possible but improbable without alignment among the Permanent Five members of the UN Security Council, potentially resulting in protracted, incremental norms rather than enforceable prohibitions.

Future Implications

Advancements in artificial intelligence, machine learning algorithms, and sensor fusion have enabled lethal autonomous weapon systems (LAWS) to perform target identification, tracking, and engagement with minimal human input, progressing from semi-autonomous operations to higher levels of independence. These developments include improved computer vision for distinguishing combatants from civilians under varying conditions and real-time decision-making powered by edge computing, reducing latency on dynamic battlefields. Military investments have accelerated this trajectory, with systems now capable of operating in swarms for coordinated strikes, as demonstrated in experimental programs integrating hundreds of low-cost drones. A notable example is the Turkish STM Kargu-2 loitering munition, a quadrotor drone equipped with autonomous navigation and claimed facial-recognition target selection, which was reportedly deployed in Libya around 2020–2021, where it hunted and attacked human targets without direct operator control according to a United Nations report. While debates persist over the extent of its autonomy—some analyses hold it was used primarily for autonomous navigation rather than fully autonomous targeting—the system's design allows for swarm-mode operations and machine-learning-based threat assessment, marking an early integration of lethal autonomy in asymmetric warfare. Similarly, China's military has advanced drone swarm technologies, testing coordinated unmanned aerial vehicles for saturation attacks in potential Taiwan scenarios and emphasizing AI-driven collective intelligence to overwhelm defenses. In the United States, the Department of Defense's Replicator initiative, launched in August 2023, aims to field thousands of all-domain attritable autonomous systems by mid-2025, focusing on uncrewed platforms for dispersed combat power against peer adversaries such as China. These systems, including air and surface variants, incorporate collaborative autonomy software for mission coordination without constant human oversight, though U.S. policy continues to require appropriate levels of human judgment over lethal decisions as of late 2024. DARPA's Air Combat Evolution (ACE) program further pushes boundaries by developing AI pilots for dogfighting, transitioning from human-piloted simulations to autonomous aerial engagements, and defense firms have demonstrated AI-enabled ground platforms such as gun-mounted robot dogs for urban combat, extending autonomy to ground forces. Integration trends reflect a shift toward attritable, scalable systems that embed autonomy into existing force architectures, reducing personnel risks while amplifying combat power. Swarming capabilities, in which drones share data via mesh networks to produce emergent behaviors like adaptive targeting, are proliferating, with militaries prioritizing low-cost, expendable hardware over expensive single platforms. The global market for such systems, projected to reach USD 44.52 billion by 2034, underscores this emphasis on AI-equipped units for tasks ranging from reconnaissance to precision strikes, driven by lessons from Ukraine, where autonomous elements have enhanced battlefield efficiency. However, full deployment of LAWS remains constrained by technical challenges in reliable target discrimination and by ethical safeguards, with most fielded systems retaining human-in-the-loop control for lethal actions, as sketched below.
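
To make the "human-in-the-loop" constraint described above concrete, the following minimal sketch shows a control loop in which an onboard classifier may nominate candidate targets autonomously, but any lethal engagement is gated on an explicit operator decision. The class names, confidence threshold, and authorization interface are hypothetical simplifications for illustration, not a description of any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    label: str         # e.g. "vehicle" or "person", output of an onboard classifier
    confidence: float  # classifier confidence in [0, 1]

# Hypothetical threshold: tracks below it are never even nominated.
NOMINATION_THRESHOLD = 0.9

def nominate_targets(tracks: list[Track]) -> list[Track]:
    """Autonomously propose candidate targets above the confidence threshold."""
    return [t for t in tracks if t.confidence >= NOMINATION_THRESHOLD]

def request_human_authorization(track: Track) -> bool:
    """Stand-in for an operator console; engagement requires an explicit 'yes'."""
    answer = input(f"Authorize engagement of track {track.track_id} ({track.label})? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_loop(tracks: list[Track]) -> None:
    """Nominate autonomously, but only engage with human approval."""
    for candidate in nominate_targets(tracks):
        if request_human_authorization(candidate):
            print(f"Engaging track {candidate.track_id}")   # placeholder for an effector command
        else:
            print(f"Holding fire on track {candidate.track_id}")

if __name__ == "__main__":
    engagement_loop([Track(1, "vehicle", 0.95), Track(2, "person", 0.40)])
```

In this sketch the autonomy is confined to detection and nomination; removing the authorization gate is what would shift the system from human-in-the-loop toward the fully autonomous engagement the governance debate concerns.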

Geopolitical Ramifications and Arms Race Dynamics

The development and potential deployment of lethal autonomous weapon systems (LAWS) among major powers have intensified military competition, particularly among the United States, China, and Russia, raising concerns about destabilizing geopolitical shifts. U.S. Department of Defense Directive 3000.09, last updated in January 2023 and reaffirmed in policy discussions through 2024, mandates human oversight of lethal engagements while permitting autonomous targeting in certain scenarios, reflecting a strategic push to integrate autonomy for operational efficiency amid peer competition. China, through its 2017 New Generation Artificial Intelligence Development Plan, has accelerated military AI integration, including autonomous drones and swarm technologies, positioning LAWS as tools for maintaining regional dominance in scenarios such as a Taiwan contingency, despite public calls for human control in international forums. Russia has operationalized systems with autonomous functions in Ukraine since 2022, deploying loitering munitions like the KUB-BLA for target selection without real-time human input, and plans to produce millions of AI-enhanced drones by 2025 to offset manpower shortages. This competition mirrors historical arms races but accelerates due to AI's rapid iteration, potentially eroding mutual deterrence by enabling low-cost, scalable strikes that reduce human risk and lower conflict thresholds. In the Indo-Pacific, U.S.-China rivalry over autonomous weaponry could alter power balances, with China's advances in areas such as drone swarms threatening U.S. naval superiority, while proliferation to allies or adversaries—evident in Russia's arms exports and China's Belt and Road technology transfers—amplifies risks in hybrid conflicts. Russia's experience in Ukraine demonstrates how LAWS enable attritional warfare, prompting NATO responses and straining alliance cohesion, as autonomous systems outpace human decision loops and invite miscalculation in regional flashpoints. Experts from think tanks like the Arms Control Association note that without binding international norms, this dynamic fosters a "security dilemma" in which defensive pursuits yield offensive capabilities, heightening global instability. Proliferation beyond state actors exacerbates these ramifications, as the relative affordability of LAWS—compared with manned platforms—enables non-state groups or rogue regimes to acquire variants, undermining conventional deterrence and complicating attribution in cyberattacks or border skirmishes. UN General Assembly Resolution 78/241, adopted in December 2023 with 152 votes in favor, highlights widespread alarm over unregulated spread, yet major powers' resistance to bans preserves national flexibility, perpetuating the race. While some analyses question the full "arms race" narrative given cooperative elements of AI development, empirical deployments in Ukraine and surging investment—including U.S. funding of the Replicator initiative to field autonomous systems by 2025—underscore causal pressures for preemptive adoption to avoid strategic disadvantage. This trajectory risks normalizing machine-mediated lethality, altering alliances and forcing reallocations from human-centric forces to AI infrastructure, with long-term effects on great-power stability contingent on governance breakthroughs amid ongoing UN talks.
