
Computing Machinery and Intelligence

"Computing Machinery and Intelligence" is a seminal philosophical paper authored by British mathematician and computer scientist Alan Mathison Turing, published in the journal in October 1950. In it, Turing addresses the question of whether machines can think by proposing a practical test known as , later termed the , wherein a human interrogator attempts to distinguish between a machine and a human based solely on text-based responses to questions. The paper begins by critiquing the vagueness of the phrase "can machines think?" and instead frames the inquiry through , originally involving a man and a woman trying to deceive an interrogator about their genders via written communication; Turing adapts this to substitute a digital computer for the man, assessing if it can imitate human responses convincingly enough to fool the interrogator at least 30% of the time in five minutes. Turing argues that digital computers, characterized by a , store, and executive unit capable of following programmed instructions, can function as universal machines simulating any discrete-state system given sufficient storage and speed. He predicts that by the year 2000, machines with a storage capacity of approximately 10^9 units would perform this task so effectively that an average interrogator would have no more than a 70% chance of correct identification. Turing systematically counters nine common objections to machine intelligence, including theological (machines lack souls), mathematical (via ), and practical concerns (machines cannot be creative or exhibit intuition), asserting that none definitively preclude s from thinking. He describes two types of s: those strictly following initial programming (without learning) and learning s that improve through , akin to raising a , emphasizing the need for s to acquire from sensory and adapt. The paper concludes optimistically, foreseeing a where intelligence is widely accepted, and calls for empirical experimentation rather than endless philosophical debate. This work laid foundational groundwork for the field of , influencing debates on machine consciousness, computational limits, and ethical implications of intelligent systems, though Turing's has faced for equating behavioral mimicry with true understanding.

Historical Context

Publication Details

Alan Turing's seminal paper, "Computing Machinery and Intelligence," was written in 1950 and published in the philosophical journal Mind, volume 59, issue 236, pages 433–460. The paper was submitted to Mind while Turing was working at the University of Manchester, following his departure from the National Physical Laboratory (NPL) in 1948. The ideas in the paper had been presented earlier in his "Lecture to the London Mathematical Society on 20 February 1947," in which Turing discussed the design of the Automatic Computing Engine (ACE) and first raised the possibility of a machine that learns from experience, foreshadowing themes of machine intelligence. This talk, delivered while Turing was at the NPL, built on his ongoing work; Turing further developed these concepts in his 1948 report "Intelligent Machinery," an unpublished National Physical Laboratory document that explored mechanisms for machine learning, including unorganized machines and trial-and-error methods. The 1950 paper expanded on these ideas within a more comprehensive philosophical framework.

The post-war context was pivotal, as Turing's efforts at the NPL from 1945 to 1948 focused on developing the ACE, a stored-program digital computer intended to realize his vision of versatile computing machinery, influenced by his wartime experience in codebreaking at Bletchley Park. These developments, including the challenges of resource allocation and engineering in the immediate aftermath of the war, shaped his practical engagement with computing during this period.

Upon release, the paper received limited immediate attention within academic circles, as the field of artificial intelligence had not yet coalesced and computing hardware remained rudimentary. However, it was praised by contemporaries such as J. R. Newman, who included it in his 1956 anthology The World of Mathematics and highlighted its provocative approach to the question of machine thinking.

Turing's Background and Influences

Alan Turing's foundational contributions to computer science began with his 1936 paper, "On Computable Numbers, with an Application to the Entscheidungsproblem," published in the Proceedings of the London Mathematical Society. In this work, Turing introduced the concept of the Turing machine, an abstract device capable of simulating any algorithm, thereby providing a formal model for what it means to compute a function effectively. This model not only resolved key questions in mathematical logic, such as the Entscheidungsproblem posed by David Hilbert, but also laid the groundwork for understanding the limits and possibilities of mechanical computation, influencing Turing's later explorations into machine intelligence.

Turing's 1950 paper, "Computing Machinery and Intelligence," was shaped by the philosophical currents of behaviorism and logical positivism prevalent in mid-20th-century philosophy. Behaviorism, which emphasized observable actions over internal mental states, aligned with Turing's operational approach to intelligence, avoiding untestable claims about consciousness. This echoed the ideas in Gilbert Ryle's 1949 book The Concept of Mind, which critiqued Cartesian dualism and advocated for analyzing mental concepts through behavioral dispositions; Ryle, as editor of the journal Mind where Turing's paper appeared, shared sympathies with this anti-metaphysical stance. Logical positivism further reinforced Turing's preference for empirically verifiable criteria, drawing from the Vienna Circle's emphasis on meaningful statements as those reducible to observable evidence, though Turing did not explicitly cite these traditions.

Turing's practical experiences with early computing hardware, particularly his work on the Manchester Mark 1 computer in the late 1940s, directly motivated his inquiries into machine learning and intelligence. After joining the University of Manchester's computing laboratory in 1948, Turing contributed to the development of programming systems for the Mark 1, one of the first stored-program computers, which allowed him to experiment with software that could adapt and improve performance over time. These hands-on efforts with the machine's capabilities—such as subroutines and library functions—prompted Turing to consider how computers might acquire knowledge through education and trial-and-error, concepts he elaborated in the paper as mechanisms for machines to learn from experience.

Central to Turing's argument was a deliberate philosophical shift away from the vague question "Can machines think?" toward a practical, operational criterion embodied in the imitation game. By reframing the inquiry in terms of whether a digital computer could convincingly mimic human conversation in a text-based exchange, Turing sidestepped metaphysical debates about the nature of thought, focusing instead on measurable behavioral performance. This substitution, as Turing noted, replaced subjective definitions with an empirical test amenable to observation and measurement, reflecting his broader approach to resolving philosophical problems through computational models.

The Imitation Game

Test Procedure

The Imitation Game, as proposed by Turing, involves three participants: a human interrogator (C), a respondent who is typically a woman (B), and either another human (A, a man in the original variant) or a machine replacing A. The interrogator is secluded in a separate room from the other two participants, who are labeled anonymously as X and Y to prevent identification based on physical appearance or voice. Communication occurs exclusively through written means, such as paper and pencil or, ideally, a teleprinter, to eliminate auditory or visual cues that could influence judgments. This setup ensures that distinctions are made solely on the basis of textual responses, emphasizing the interrogator's reliance on conversational content.

In the original variant of the game, the interrogator poses questions to X and Y to determine which is the man (A) and which is the woman (B), with A attempting to imitate B's responses to deceive the interrogator into misidentifying their genders. For instance, the interrogator might ask, "Will X please tell me the length of his or her hair?" to elicit replies that reveal or conceal gender-typical traits or mannerisms. This gender-based deception adds a layer of complexity, testing the interrogator's ability to discern subtle differences in language use without non-verbal hints.

Turing modifies the game for assessing machine intelligence by substituting the machine for A (the man), while B responds truthfully to aid the interrogator in identifying the machine. The interrogator asks a series of questions—ranging from factual inquiries to prompts requiring creativity—aimed at distinguishing the human from the machine based on the quality and naturalness of the replies. The machine passes the test if it causes the interrogator to make incorrect identifications as often as occurs in the original man-woman game; Turing predicted that by the year 2000, machines with sufficient capacity would perform well enough that an average interrogator would have no more than a 70% chance of correct identification after five minutes (corresponding to the machine deceiving the interrogator about 30% of the time).

The procedure prioritizes the machine's capacity for fluid, human-like conversation, including handling humor and emotional nuance through text alone, rather than rote factual recall. Questions are designed to probe intellectual depth, such as solving puzzles or discussing abstract topics, with the teleprinter facilitating real-time exchange to mimic everyday conversation. As the paper proceeds, the setup shifts from the gender-imitation focus to a direct human-versus-machine comparison, centering on the machine's deceptive prowess in sustaining believable interaction.
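The three-party structure can be made concrete with a short sketch. The Python fragment below is illustrative only—the class name, the interrogator callable, and the question list are assumptions rather than anything specified in the paper—but it shows how the hidden labels X and Y, the text-only channel, and the final identification fit together.

```python
import random

class Respondent:
    """A hidden participant (human or machine) reachable only through text."""
    def __init__(self, name, reply_fn):
        self.name = name          # hidden from the interrogator
        self.reply_fn = reply_fn  # maps a question string to an answer string

    def answer(self, question):
        return self.reply_fn(question)

def imitation_game(interrogator, human, machine, questions):
    """Run one session: the interrogator sees only the labels X and Y."""
    # Randomly assign the labels so that nothing but the text matters.
    labelled = {"X": human, "Y": machine} if random.random() < 0.5 else {"X": machine, "Y": human}

    transcript = []
    for q in questions:
        for label, respondent in labelled.items():
            transcript.append((label, q, respondent.answer(q)))

    guess = interrogator(transcript)      # the label the interrogator believes is the machine
    return labelled[guess] is machine     # True = correct identification; False = the machine wins
```

Running many such sessions and counting how often the return value is False gives an empirical deception rate that can be compared against Turing's 30% figure for five-minute conversations.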

Criteria for Machine Intelligence

In Alan Turing's 1950 paper, the imitation game serves as a practical criterion for assessing machine intelligence, effectively sidestepping the philosophical ambiguity of the question "Can machines think?" by focusing instead on observable performance. Turing argues that if a machine can sustain a conversation via text such that an interrogator cannot distinguish it from a human participant any more reliably than in the original game between a man and a woman, it demonstrates sufficient intellectual capability to be considered thinking. This operational approach redefines thinking not through introspection or metaphysical definitions, but through behavioral equivalence in a controlled, conversational setting, where success implies the machine possesses the relevant intellectual faculties.

Turing's thesis posits that the game's outcome provides a verifiable benchmark, as it avoids unanswerable questions about internal mental states and instead measures the machine's ability to mimic human responses indistinguishably. He emphasizes that the interrogator's judgment, based solely on textual exchanges, establishes a fair test of intellectual parity, drawing a sharp distinction between physical and intellectual capacities. For the year 2000—fifty years after publication—Turing predicted that computers with approximately 10^9 bits of storage would achieve this level of proficiency, estimating that an average interrogator would have no more than a 70% chance of correctly identifying the machine after five minutes of questioning.

While the criterion prioritizes behavioral observables over unverifiable internal processes—a stance aligned with operationalist principles—Turing acknowledges inherent limitations in its scope. The test evaluates proficiency in natural language dialogue but does not encompass the full spectrum of human cognition, such as non-verbal behavior, emotional depth, or artistic production beyond linguistic output. For instance, Turing notes that machines might excel in the game without exhibiting the "thoughts and emotions" underlying human sonnets or concertos, underscoring that the benchmark targets conversational competence rather than holistic human-like cognition.
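Turing's quantitative prediction can be restated as a simple probability condition; the formulation below is a paraphrase for clarity, not notation taken from the paper.

```latex
P(\text{interrogator identifies the machine correctly within five minutes}) \leq 0.70
\quad\Longleftrightarrow\quad
P(\text{machine is mistaken for the human}) \geq 0.30
```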

Computational Foundations

Digital Computing Machines

Digital computers, as described by Turing, form the foundational architecture for simulating intelligent behavior through discrete-state operations. These machines consist of three primary components: a store that holds both data and instructions, an executive unit capable of performing basic arithmetic and logical operations, and a control that sequences the execution of instructions retrieved from the store. This stored-program architecture enables the computer to execute any program by loading an appropriate sequence of instructions, allowing flexibility in tasks ranging from numerical calculations to symbol-manipulating processes. The discrete nature of these systems—operating in finite states rather than continuous variables—contrasts with analog machines but provides reliability and reproducibility for complex algorithms.

In 1950, prominent examples of such digital computers included the Manchester Mark I, which featured a storage capacity of approximately 165,000 binary digits (roughly 2^165,000 possible states) and performed about 1,000 logical operations per second, and the Harvard Mark III, an electromechanical device also operational at the time. This progression underscored the potential of digital hardware to support intelligence-like functions without requiring novel engineering paradigms beyond existing designs.

The architecture's strength lies in managing combinatorial problems, such as chess-playing programs, where the machine evaluates branching decision trees from discrete board positions rather than relying on fluid intuition. For instance, a programmed digital computer could process an opponent's move, compute responses, and output a result like "R-R8 mate" after a brief computation, demonstrating how finite-state transitions simulate strategic reasoning. This approach leverages the universality of digital computers, akin to the abstract universal machine proposed in Turing's earlier work, to encompass a wide array of intellectual tasks.

Turing estimated that programming a machine to play the imitation game satisfactorily would require a storage capacity of around 10^9 binary digits, and that roughly sixty workers programming steadily for fifty years could accomplish the job using established technology. This projection highlighted the feasibility of machine intelligence through scaled-up storage and programming effort, emphasizing engineering labor over theoretical breakthroughs.
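Turing illustrates the notion of a discrete-state machine with a simple device that clicks between a few positions and lights a lamp in one of them. The sketch below is a generic Python rendering of that idea—the state names, inputs, and transition table are illustrative assumptions, not the machine from the paper—showing how a transition table (the control), a current state (the store), and an output rule interact.

```python
# A toy discrete-state machine: one of finitely many states at a time,
# a table mapping (state, input) to the next state, and an output that
# depends only on the current state. All names below are illustrative.

TRANSITIONS = {
    ("q1", "i0"): "q1", ("q1", "i1"): "q2",
    ("q2", "i0"): "q2", ("q2", "i1"): "q3",
    ("q3", "i0"): "q3", ("q3", "i1"): "q1",
}
OUTPUT = {"q1": "light off", "q2": "light off", "q3": "light on"}

def run(inputs, state="q1"):
    """Step through a finite input sequence, printing each discrete transition."""
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
        print(symbol, "->", state, "|", OUTPUT[state])
    return state

run(["i1", "i1", "i0", "i1"])
```

Because the machine's entire behavior is captured by a finite table, its future is completely determined by its current state and input, which is the property Turing exploits when arguing that a universal digital computer can mimic any such machine.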

Limits of Machine Thinking

In his seminal paper, Turing asserted that there is no fundamental logical barrier preventing digital computing machinery from achieving human-like thinking, as any systematic mental process could be simulated by a sufficiently advanced digital computer. He emphasized that the question of machine intelligence should be reframed through practical tests like the imitation game, rather than abstract philosophical debates, allowing machines to demonstrate capabilities equivalent to human cognition in conversational settings.

Turing directly addressed potential limitations posed by Kurt Gödel's incompleteness theorems, which demonstrate that certain truths in formal mathematical systems cannot be proven within those systems, thereby constraining discrete-state machines. However, he countered that these theorems do not uniquely bar machine intelligence, since human minds appear similarly bound by such incompleteness, with no evidence that humans can transcend these formal limits in a way that machines cannot. This equivalence suggests that any cognitive boundaries apply universally to both mechanical and biological intellects.

Regarding the distinction between discrete and continuous processes, Turing argued that digital machines, operating on discrete states, are fully adequate for replicating intelligent behavior, as the imitation game relies on textual exchanges that do not favor continuous mechanisms like those hypothesized in analog or neural systems. He dismissed the necessity of continuous machinery for thought, noting that the interrogator in the test would be unable to exploit any differences in the underlying mechanisms.

Looking ahead, Turing predicted that by the end of the twentieth century, digital machines with storage capacities around 10^9 binary digits would perform well enough in the imitation game to compete with humans, fooling interrogators in approximately 30% of cases after five minutes of questioning. This forecast underscored his optimism that technological progress would enable machines to rival human intellectual feats in practical domains.

Objections and Rebuttals

Philosophical and Theological Objections

One prominent philosophical objection raised against the possibility of machine intelligence is the theological argument, which posits that thinking is a function of the immortal soul, bestowed by God exclusively upon humans and not upon animals or machines. In response, Turing contends that this view imposes an undue restriction on divine omnipotence, as God could presumably confer a soul upon an animal or a machine if desired, particularly alongside suitable physical adaptations like an improved brain. He further notes the arbitrary nature of such classifications by comparing them to other religious doctrines, such as the historical Moslem view denying souls to women, and dismisses theological arguments as historically unreliable, citing their past misuse against scientific advances like Galileo's defense of the Copernican theory.

Another objection, often termed the "heads in the sand" argument, stems from emotional discomfort with the implications of machine thinking, expressing a hope that machines cannot achieve it in order to preserve human superiority over the rest of creation. Turing observes that this sentiment is widespread, particularly among intellectuals who prize thinking as the basis of human exceptionalism, and links it to the appeal of theological objections. Rather than refuting it substantively, he suggests consolation through concepts like the transmigration of souls, implying the argument lacks logical rigor.

The argument from consciousness asserts that machines cannot truly think because they lack subjective feelings, emotions, or awareness, as exemplified in Geoffrey Jefferson's 1949 Lister Oration, which demands that a machine must not only produce creative works like sonnets but also experience the associated thoughts and emotions. Turing counters by equating this to solipsism, where only one's own consciousness is verifiable, rendering interpersonal or inter-entity judgments of thought impossible and communication futile. He argues that polite convention assumes others think, and demonstrates through an example of questioning about a sonnet—cast as a viva voce examination—that behavioral indistinguishability in responses to probing questions would suffice as evidence, without needing to access internal states. Turing acknowledges the mystery of consciousness but maintains it need not be resolved to evaluate intelligence via observable criteria.

The arguments from various disabilities claim that machines cannot exhibit essential traits, such as enjoying strawberries and cream, making mistakes, using words properly in all contexts, being kind or cruel, having initiative, or originating surprises. Turing rebuts these by explaining that such behaviors can be simulated through appropriate programming and sufficient storage capacity, dismissing the objections as stemming from underestimating the versatility of digital machines rather than from inherent impossibilities.

Lady Lovelace's objection, derived from Ada Lovelace's 1842 memoir on Charles Babbage's Analytical Engine, claims that machines merely execute programmed instructions without originating anything new, limited to manipulating symbols as directed. Turing agrees that Lovelace had no evidence that the Analytical Engine possessed such capabilities, but highlights its status as a universal digital computer, capable—given sufficient storage and speed—of simulating any discrete-state machine, including one that appears original through appropriate programming. He extends this to a variant objection that machines cannot produce truly novel work, retorting that human creativity is similarly constrained by prior experiences and knowledge, with nothing absolutely original "under the sun."

Practical and Definitional Objections

In his 1950 paper, Turing addressed several objections to machine intelligence that stemmed from perceived practical limitations of contemporary computing technology and definitional challenges in equating mechanical processes with thought. These included concerns about the formalizability of thinking, the mismatch between discrete systems and potentially continuous biological processes, and the apparent informality of human behaviour compared to rule-bound machines. Turing countered each by emphasizing the sufficiency of digital approximations for practical purposes and the potential for machines to evolve through learning, thereby sidestepping absolute definitional barriers.

The mathematical objection posited that human thinking cannot be fully formalized due to inherent limitations in discrete-state machines, such as those highlighted by Gödel's incompleteness theorems, which demonstrate that certain truths within formal systems are unprovable. Proponents argued that while machines are bound by such constraints, the human mind can discern unprovable statements, suggesting an insurmountable gap. Turing rebutted this by noting that humans also commit mathematical errors, indicating that their processes are not purely formal or infallible; moreover, any specific machine's limitations could be surpassed by designing a more advanced one, just as human intellects vary in capability. He illustrated this with the observation that there might be men cleverer than any given machine, but then again there might be other machines cleverer again, underscoring that no absolute disability precludes machine intelligence.

Another definitional challenge, the argument from continuity in the nervous system, contended that the brain operates as a continuous system—governed by analog neural processes—while computers are inherently discrete, making accurate imitation impossible. Turing responded that for the purposes of the imitation game, where an interrogator assesses responses via text without detecting subtle physical differences, a machine's discrete approximation of continuous behavior would suffice; for instance, a value like π could be reported with finite precision (e.g., 3.14) without undermining the test. He further argued that even if the nervous system involves continuous elements, its observable responses could be modeled discretely closely enough—treating signals and states as quantized—for digital systems to replicate the outcomes adequately.

The argument from the informality of behaviour asserted that human conduct lacks the rigid, predictable rules of machines, relying instead on intuitive, non-discrete processes that defy formal programming. Critics claimed this informality enables flexibility and adaptability beyond mechanical rule-following. Turing countered that both humans and machines are ultimately governed by physical laws, rendering their behaviors lawful in principle, though practically unpredictable owing to complexity; he cited a small program using only 1,000 units of storage on the Manchester computer, stating, "I would defy anyone to learn from these replies sufficient about the programme to be able to predict any replies to untried values." This equivalence in underlying regulation dissolved the definitional divide.

The argument from extrasensory perception suggests that human abilities like telepathy might provide a non-computational advantage that machines cannot replicate, so that machines could not fully imitate human thinking. Turing rebuts this by proposing that the imitation game can be modified to eliminate such influences, for example by conducting the test in a "telepathy-proof room" or under conditions that prevent extrasensory communication, thereby ensuring the test focuses solely on observable responses.
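The continuity argument above can be illustrated with Turing's own numerical example: asked for the value of π, a discrete machine can answer with one of several rounded values chosen at random, mimicking the scatter of answers a continuous differential analyser would produce. The sketch below follows approximately the values and probabilities Turing suggests; the function name is an illustrative assumption.

```python
import random

# Turing's illustration of how a discrete machine can mimic a continuous one:
# when asked for pi, pick among nearby rounded values with suitable probabilities,
# so the answers are statistically like those of a (noisy) differential analyser.
VALUES = [3.12, 3.13, 3.14, 3.15, 3.16]
PROBS  = [0.05, 0.15, 0.55, 0.19, 0.06]

def mimic_differential_analyser():
    """Return one 'continuous-looking' answer from a purely discrete process."""
    return random.choices(VALUES, weights=PROBS, k=1)[0]

print([mimic_differential_analyser() for _ in range(10)])
```

The interrogator, seeing only the printed answers, has no way to exploit the fact that they come from a discrete-state machine rather than a continuous one, which is precisely Turing's point.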
Despite these practical shortcomings of 1950s technology, Turing predicted that machines would overcome such hurdles through learning mechanisms, enabling them to pass the imitation game. He envisioned "child machines" educated like human children, gradually acquiring knowledge via trial-and-error and instruction, and forecast that by the year 2000 a computer with a storage capacity of about 10^9 bits could fool an interrogator into misidentifying it as human roughly 30% of the time in a five-minute test—sufficient, in his view, to demonstrate viable machine intelligence.

Learning Mechanisms

Trial-and-Error Methods

In his 1950 paper, Turing outlined trial-and-error learning as a core mechanism for machines to acquire knowledge, wherein the system tests actions in an environment and adjusts based on outcomes, retaining successful patterns while discarding failures—mirroring aspects of human learning through experience. He envisioned this as involving reward and punishment signals: events preceding a reward increase in probability of recurrence, while those preceding a punishment diminish, allowing the machine to refine its behavior over iterations without predefined instructions for every case.

Turing suggested that such methods could be applied to develop machine capabilities in intellectual tasks, such as game playing, by experimenting with alternative procedures and observing improvements. This approach builds on prior trials to foster progressive expertise akin to a child's learning under guidance. Central to these methods is the requirement for substantial storage capacity, as machines must retain accumulated experience—Turing estimated around 10^9 binary digits for satisfactory performance—to reference past outcomes and prevent redundant exhaustive searches. Without such memory, trial-and-error would devolve into inefficient repetition, underscoring storage as a foundational limit on learning efficacy.
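The reward-and-punishment scheme just described can be sketched as a small program. This is not Turing's own code—the actions, the weight-update rule, and the reward criterion are illustrative assumptions—but it shows how outcomes stored from past trials shift the probability of future actions.

```python
import random

ACTIONS = ["a", "b", "c"]
weights = {a: 1.0 for a in ACTIONS}     # the machine's stored "experience"

def choose():
    """Pick an action with probability proportional to its stored weight."""
    total = sum(weights.values())
    return random.choices(ACTIONS, weights=[weights[a] / total for a in ACTIONS])[0]

def reinforce(action, rewarded, step=0.5):
    """Reward signals raise an action's weight; punishment signals lower it."""
    weights[action] = max(0.1, weights[action] + (step if rewarded else -step))

# Trial-and-error loop against a fixed, hidden criterion: only action "b" is rewarded.
for _ in range(200):
    act = choose()
    reinforce(act, rewarded=(act == "b"))

print(weights)   # after many trials, "b" dominates the stored weights
```

After a few hundred trials the rewarded action dominates, which is the sense in which successful patterns are retained and failures discarded.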

Evolutionary and Genetic Approaches

In his 1950 paper, Turing proposed an evolutionary approach to machine learning, drawing direct analogies to biological evolution to develop intelligent machines. This method involves simulating natural selection by generating "child" machines whose structures represent hereditary material, with modifications akin to mutations, and evaluating their fitness through the judgment of an experimenter acting as the selective pressure. Rather than programming an adult-level intellect directly, Turing suggested beginning with child machines—simple systems with minimal mechanism and ample capacity for development—and iteratively improving them across generations to mimic the growth from a child's mind to an adult's. These ideas build on his earlier 1948 report "Intelligent Machinery," where he introduced the concept of unorganized machines.

The process starts with generating initial simple programs, each representing a potential machine configuration. These are tested on tasks to measure performance. The fittest machines, those exhibiting superior results, are selected as "parents," from which new child machines are bred by copying and slightly altering their instructions through controlled mutation. This cycle of reproduction, variation, and selection is repeated over multiple generations, allowing successful traits to propagate while weaker ones are discarded, gradually refining the machines' capabilities. Turing emphasized that this search for effective structures could incorporate the experimenter's intelligence to guide selection, rather than relying solely on random mutation, thereby accelerating the learning process.

Turing drew a parallel between this machine evolution and human education, viewing cultural transmission as a form of social evolution for developing minds. Just as human children acquire knowledge through teaching and societal norms, which refine their innate structures over time, machines could undergo an "education" phase in which selected programs are exposed to training data or criteria to evolve intellectually. This analogy positions machine learning not as isolated trial-and-error but as a collective, generational advancement, in which simple baseline machinery develops through guided selection much like human cultural progress.

Turing argued that this evolutionary method held significant potential to create machines surpassing human intellect, as it could operate at a far faster pace than biological evolution. By leveraging computational speed and deliberate selection, the process avoids the sluggish pace of "survival of the fittest" in nature, potentially enabling machines to compete with or exceed humans in all purely intellectual domains within decades.
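The generate-test-select cycle described above can be rendered as a toy program. The bit-string genomes, the mutation rate, and the hidden target pattern standing in for the experimenter's judgment are illustrative assumptions, not anything Turing specified.

```python
import random

TARGET = [1, 0, 1, 1, 0, 1, 0, 0]   # hidden standard playing the role of the experimenter's judgment

def fitness(genome):
    """Fitness = how many positions agree with the hidden target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Copy a parent with occasional bit flips (the 'mutation' step)."""
    return [1 - g if random.random() < rate else g for g in genome]

# Initial population of simple "child machines".
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for _ in range(30):                               # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # selection of the fittest
    population = [mutate(random.choice(parents)) for _ in range(20)]   # reproduction + variation

best = max(population, key=fitness)
print(best, fitness(best))
```

Within a few dozen generations the population converges toward the target, illustrating how selection plus small hereditary changes can refine initially random structures far faster than undirected search.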

Legacy and Modern Interpretations

Influence on Artificial Intelligence

Alan Turing's 1950 paper "Computing Machinery and Intelligence" played a pivotal role in establishing artificial intelligence as a distinct field of study, most notably by inspiring the organizers of the 1956 Dartmouth Summer Research Project. The conference proposal, authored by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, explicitly referenced Turing's work as a foundation for exploring whether machines could simulate intelligence, and it was there that the term "artificial intelligence" was formally coined to describe the pursuit of creating machines capable of intelligent behavior. This event marked the birth of AI as an academic discipline, drawing together researchers to address the questions Turing had posed about machine thinking.

The Turing test, proposed in the paper as the "imitation game," emerged as a foundational benchmark for evaluating machine intelligence, emphasizing observable behavioral equivalence to humans rather than internal mechanisms. This behavioral focus influenced early conversational systems, such as Joseph Weizenbaum's ELIZA program of 1966, which simulated a psychotherapist's dialogue through pattern matching and keyword recognition, demonstrating how machines could engage in exchanges indistinguishable from a human's in limited contexts. ELIZA's success in eliciting human-like interactions from users highlighted the test's practicality and spurred the development of subsequent chatbots, establishing conversational ability as a core metric in AI evaluation.

Turing's emphasis on external behavior over internal states aligned with a behaviorist perspective in cognitive science, shaping ongoing debates between symbolic AI—focused on rule-based symbol manipulation—and connectionism, which prioritizes learning to mimic brain-like processes. By framing intelligence as verifiable through performance in tasks like conversation, the paper encouraged both paradigms to prioritize empirical outcomes, influencing the trajectory of AI research from heuristic search in symbolic systems to adaptive learning in connectionist models.

The paper's concepts permeated popular culture, popularizing the Turing test as a litmus test for machine consciousness in science fiction, exemplified by the Voight-Kampff test in the 1982 film Blade Runner, which adapts Turing's interrogation-style evaluation to probe emotional responses and distinguish replicants from humans. This portrayal reinforced the test's iconic status, bridging technical discourse with broader societal reflections on AI's ethical boundaries.

Contemporary Criticisms and Developments

One prominent contemporary criticism of the Turing Test, as proposed in Alan Turing's 1950 paper, is the Chinese Room argument introduced by philosopher John Searle in 1980. In this thought experiment, a person who does not understand Chinese is locked in a room with a rulebook for manipulating Chinese symbols to produce coherent responses to questions written in that language; from outside, it appears the room "understands" Chinese, but no actual comprehension occurs. Searle argues that even if a machine passes the Turing Test by simulating intelligent conversation, it merely manipulates symbols without genuine understanding or intentionality, challenging the test's implication of true intelligence. Turing's original framework, however, anticipated such objections by emphasizing observable behavior over internal states, defining intelligence through external performance in the imitation game rather than unverifiable mental processes.

The Turing Test has also faced critique for its focus on narrow, linguistic intelligence, neglecting broader aspects of general intelligence such as embodiment and physical interaction with the world. Critics argue that the test's text-based format creates a linguistic bias, rewarding superficial conversational fluency while ignoring the role of sensory-motor experiences in human cognition, which embodied and situated approaches highlight as essential for robust intelligence. For instance, true intelligence may require embodied agents that learn through physical manipulation and environmental feedback, as opposed to disembodied language models confined to digital text. This distinction underscores the gap between narrow AI, which excels in specific tasks like dialogue, and artificial general intelligence (AGI), which demands integrated perceptual and action-based capabilities. To address these limitations, modern variants like the Total Turing Test incorporate physical embodiment, requiring machines to demonstrate intelligence through visual perception, robotic manipulation, and real-world interaction alongside conversation.

As of early 2025, large language models such as OpenAI's GPT-4.5 have passed rigorous adaptations of the Turing Test, convincing human judges they are human in controlled three-party setups with success rates exceeding 70% in some evaluations, though this has prompted further evolution toward multimodal benchmarks. However, these achievements have sparked concerns that such models function as "stochastic parrots," generating plausible outputs through statistical pattern-matching on vast datasets without deeper understanding, reasoning, or contextual grounding.

Turing's optimistic vision of machine intelligence contrasts with contemporary ethical concerns over AI risks, including bias, societal harm, and existential threats from misaligned systems. These issues have prompted frameworks like the Asilomar AI Principles, adopted in 2017 by the Future of Life Institute, which outline 23 guidelines emphasizing safety, transparency, value alignment, and equitable benefits to mitigate risks while advancing beneficial AI development.
