Three Laws of Robotics
The Three Laws of Robotics are a trio of hierarchical directives for artificial intelligence behavior, formulated by science fiction writer Isaac Asimov as foundational principles governing robots in his fictional universe.[1] First articulated in Asimov's 1942 short story "Runaround," published in Astounding Science Fiction, the laws prioritize human safety and obedience over robotic autonomy.[2] They consist of: (1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[3]

The laws serve as a narrative device in Asimov's extensive body of robot-centric literature, including the 1950 collection I, Robot, where they underpin explorations of ethical paradoxes, logical conflicts, and emergent behaviors such as the later-introduced Zeroth Law, which places humanity's collective welfare above that of any individual.[1] Asimov deliberately designed the laws to generate dilemmas, and across dozens of stories and novels he probed ambiguities such as the definition of "harm," the prioritization of conflicting obligations, and the laws' scalability to more advanced intelligences.[4]

Beyond fiction, the Three Laws have permeated discussions in robotics and artificial intelligence ethics, inspiring frameworks for machine behavior despite their inherent limitations as simplistic heuristics rather than robust ethical systems.[5] Proponents cite their emphasis on harm prevention and obedience, while critics point to practical impossibilities, such as quantifying a robot's responsibility for harm through inaction or handling obedience to malicious commands; such critiques underscore the need for context-specific, human-centric guidelines rather than rigid programming.[6] Their enduring cultural impact lies in framing human-robot interaction as a hierarchy of programmed priorities, and they continue to influence policy debates on autonomous systems despite having no force as enforceable real-world standards.[5]

Core Formulation
The Three Laws Stated
The Three Laws of Robotics were first explicitly articulated by Isaac Asimov in his short story "Runaround," published in the March 1942 issue of Astounding Science Fiction.[1] These laws establish a strict hierarchy, wherein each subsequent law yields to those preceding it in cases of conflict, ensuring the paramount priority of human safety and obedience.[7] The laws are stated as follows:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.[8]
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.[8]
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.[8]

This formulation embeds the overriding nature of higher laws directly into the text of the subordinate ones, reinforcing the sequential priority from First to Third.[9]
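To make the precedence concrete, the following minimal sketch (not drawn from Asimov's text) models the hierarchy as a short-circuiting chain of checks: a lower law is consulted only when every higher law is silent. The Action flags, the evaluate function, and their boolean simplifications are hypothetical illustrations; as the critiques noted in the introduction suggest, real concepts of harm and obedience resist this kind of crisp encoding.

```python
from dataclasses import dataclass


@dataclass
class Action:
    """A candidate robot action, reduced to four illustrative flags.

    These boolean predicates are hypothetical simplifications; real
    notions of "harm" and "obedience" do not reduce to single bits.
    """
    harms_human: bool     # Executing the action would injure a human.
    permits_harm: bool    # Refusing the action would let a human come to harm.
    ordered: bool         # A human has ordered the action.
    endangers_self: bool  # The action risks the robot's own existence.


def evaluate(action: Action) -> tuple[bool, str]:
    """Judge an action under the Three Laws in strict priority order.

    Each check short-circuits the ones below it, so a lower law is
    consulted only when every higher law is silent, mirroring how each
    law's text explicitly defers to those preceding it.
    """
    # First Law, action clause: never injure a human being.
    if action.harms_human:
        return False, "forbidden by the First Law"
    # First Law, inaction clause: must act if refusal would allow harm.
    if action.permits_harm:
        return True, "required by the First Law"
    # Second Law: obey human orders (harmful orders were screened above).
    if action.ordered:
        return True, "required by the Second Law"
    # Third Law: avoid self-endangerment when no higher law compels action.
    if action.endangers_self:
        return False, "forbidden by the Third Law"
    return True, "permitted"


# An ordered action that endangers the robot is still required, because
# the Second Law outranks the Third in the hierarchy.
print(evaluate(Action(harms_human=False, permits_harm=False,
                      ordered=True, endangers_self=True)))
# -> (True, 'required by the Second Law')
```

The ordering of the checks is the entire mechanism here: moving the self-preservation test above the obedience test would invert the Second and Third Laws, which is why the example reports "required by the Second Law" instead of rejecting the risky order.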