
Loebner Prize

The Loebner Prize was an annual competition that evaluated computer programs' ability to engage in human-like conversation through a restricted version of the Turing test. Founded in 1990 by American philanthropist and inventor Hugh Loebner, it offered a $100,000 grand prize and gold medal to the first program to fully pass an unrestricted Turing test by convincing judges that it was human, alongside smaller annual awards of $2,000 to $4,000 and bronze medals for the most convincing entrant. The contest, which ran from 1991 to 2019 without the grand prize ever being awarded, highlighted advancements in chatbot technology while underscoring the challenges of achieving genuine machine intelligence.

Hugh Loebner, who held a PhD in sociology and built his fortune as president of Crown Industries, a New York-based manufacturer of theater equipment, established the prize to operationalize Alan Turing's 1950 concept of machine intelligence through empirical competition. The inaugural event in 1991, hosted by the Computer Museum in Boston and organized by the Cambridge Center for Behavioral Studies, featured six programs restricted to topics like romantic relationships, Shakespeare, and Burgundy wines, with judges conversing via terminals without knowing whether they were interacting with humans or machines. Joseph Weintraub's PC Therapist won that year by deceiving five out of ten judges, earning $1,500 and setting the stage for annual iterations that grew in participation, often attracting hobbyists alongside academic and commercial developers. Over its nearly three-decade run, the prize was hosted at venues including the Science Museum in London and Bletchley Park, evolving to include public judging in later years.

In each competition, a panel of judges—typically experts in fields like philosophy, computer science, and literature—held five-minute text-based conversations with both hidden human confederates and competing chatbots, scoring them on perceived human-likeness. The winner was the program that accumulated the highest total score by most convincingly mimicking human responses, though no entrant ever achieved the threshold to claim the grand prize, which required fooling a majority of judges in an open-ended format. Notable successes included the chatbot Mitsuku, created by British developer Steve Worswick, which secured a record five victories, including in 2019 at Swansea University, the contest's final edition.

The Loebner Prize's legacy lies in popularizing conversational AI, carrying forward the lineage of early programs like ELIZA and prefiguring later large language models, even as its Turing Test framework faced criticism for prioritizing deception over true understanding. Following Loebner's death in 2016, funding dried up, leading to the competition's discontinuation after 2019.

Introduction and Background

Founding and Objectives

The Loebner Prize was founded in 1990 by Hugh Loebner, an American inventor, entrepreneur, and philanthropist who held six U.S. patents and served as president of Crown Industries, a theater equipment manufacturing firm. Loebner, who earned a PhD in sociology and was known for innovative designs such as custom clothing with functional features, established the prize to advance research in artificial intelligence through practical incentives. The initiative was set up in conjunction with the Cambridge Center for Behavioral Studies in Cambridge, Massachusetts, where Loebner agreed to underwrite the contest as a means to implement the Turing test and promote advancements in machine intelligence. The Center, focused on behavioral science, organized the early competitions under the direction of figures like Dr. Robert Epstein, ensuring a structured approach to evaluating conversational capabilities.

The primary objective of the Loebner Prize was to encourage the development of computer programs capable of passing the Turing test, emphasizing unrestricted conversation to simulate human-like interaction without domain limitations. This goal aimed to drive innovation in conversational AI by rewarding programs that could convincingly mimic human responses in open-ended dialogues. Upon announcement, the prize included a $100,000 grand prize and gold medal for the first program to fully pass the test, alongside annual bronze medals and smaller cash prizes for the most human-like entrant each year. The inaugural contest took place on November 8, 1991, at the Boston Computer Museum, marking the practical launch of this ongoing effort to benchmark progress in machine conversation.

Connection to the Turing Test

The Loebner Prize draws its theoretical foundation from Alan Turing's seminal 1950 paper, "Computing Machinery and Intelligence," where he introduced the imitation game as a criterion for machine intelligence. In this game, a human interrogator engages in text-based conversations with two participants—one human and one machine—separated by a screen to eliminate non-verbal cues, attempting to determine which is the machine based solely on the responses. Turing proposed this setup to sidestep philosophical debates about thinking, instead operationalizing intelligence as the machine's ability to imitate conversation indistinguishably, predicting that by 2000, machines could fool interrogators at least 30% of the time in five-minute sessions.

The standard Turing Test, as derived from Turing's imitation game, evaluates a machine's conversational prowess through such text-only interactions, where success occurs if the interrogator cannot reliably distinguish the machine from a human. The Loebner Prize adapts this for annual competitions by enforcing strictly text-based chats without visual or auditory elements, limiting sessions to approximately five minutes to align with Turing's benchmark, and requiring programs to engage multiple judges simultaneously in ranking tasks to identify the most human-like responses. These modifications emphasize short-term deception over sustained interaction, making the test a repeatable measure of progress in conversational AI.

Hugh Loebner established the prize in 1990 to transform Turing's abstract game into a concrete, annual benchmark for AI advancement, motivated by the lack of empirical efforts to realize Turing's proposal and aiming to accelerate research into human-like machine behavior through competitive incentives. Unlike the "Total Turing Test," which extends the original by incorporating sensory perception (such as vision and hearing) and physical action (like manipulating objects via robotics) to test grounded intelligence, the Loebner Prize adheres to the restricted, text-only format, focusing exclusively on linguistic performance without addressing perceptual or motor capabilities.
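The protocol described above can be made concrete with a short simulation. The sketch below is illustrative only: the responder functions, judge count, and coin-flip verdict are invented stand-ins rather than contest software, but it shows how a deception rate emerges from blind paired sessions and how it would compare against Turing's 30% benchmark.

```python
import random

# Minimal sketch of the imitation-game protocol as adapted by the
# Loebner Prize: each judge holds a short text exchange with one hidden
# machine and one hidden human, then guesses which is which. All names
# and the trivial responders below are illustrative, not contest code.

def machine_reply(message: str) -> str:
    """Stand-in conversational program (a real entrant would go here)."""
    return "That's interesting. Tell me more about that."

def human_reply(message: str) -> str:
    """Stand-in for a hidden human confederate."""
    return "Ha, good question. I'd say it depends on the weather."

def judge_session(num_judges: int = 10, turns: int = 5) -> float:
    """Run blind paired sessions and return the machine's deception rate."""
    fooled = 0
    for _ in range(num_judges):
        # The judge sees two anonymous terminals, A and B, in random order.
        terminals = [("machine", machine_reply), ("human", human_reply)]
        random.shuffle(terminals)
        transcripts = {}
        for label, (identity, responder) in zip("AB", terminals):
            convo = []
            msg = "Hello! How are you today?"
            for _ in range(turns):  # roughly five minutes of exchanges
                msg = responder(msg)
                convo.append(msg)
            transcripts[label] = (identity, convo)
        # Placeholder verdict: a real judge reads both transcripts;
        # here a coin flip keeps the sketch self-contained.
        guess_for_human = random.choice("AB")
        if transcripts[guess_for_human][0] == "machine":
            fooled += 1  # the machine was mistaken for the human
    return fooled / num_judges

if __name__ == "__main__":
    rate = judge_session()
    # Turing's year-2000 benchmark: fool interrogators at least 30% of the time.
    print(f"deception rate: {rate:.0%}, meets 30% benchmark: {rate >= 0.3}")
```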

Competition Mechanics

Rules and Restrictions

The Loebner Prize competition was open to developers, teams, or organizations of any nationality or background, allowing submissions from individuals without institutional affiliations, though entrants were required to provide detailed protocols for their programs and ensure compatibility with organizer-provided hardware for on-site execution. No entry fees were charged in most iterations, facilitating broad participation. Programs had to operate autonomously by inorganic means, with no human assistance permitted during interactions. The format was strictly text-based, involving keyboard-typed conversations in natural English, typically starting with up to 16 entrants in preliminary rounds that narrowed to 4 finalists for the finals. Early competitions featured simultaneous interactions across multiple terminals, where judges communicated with a mix of chatbots and human confederates, ranking them on perceived humanity. By the mid-2010s, the structure had been refined to paired sessions, with each judge engaging one chatbot and one human per round.

Key restrictions prohibited pre-scripted responses beyond an initial self-introduction, mandating that programs generate replies dynamically to handle open-ended, unpredictable dialogue without external intervention. In the early years, conversations were confined to a single predefined topic to simulate controlled scenarios, but topics became unrestricted from 1995 onward, emphasizing general conversational ability. In some years, programs were also barred from non-linguistic deceptions, such as artificial delays or deliberate typing errors beyond what would occur naturally.

Each judging session imposed time limits, initially around 5-7 minutes per interaction in the 1990s and early 2000s to maintain focus, extending to 20-30 minutes in later contests for deeper evaluation; judges were deliberately chosen as non-experts in computer science or artificial intelligence to replicate casual, everyday human exchanges. Ethical guidelines stressed fair play, forbidding tricks like feigned technical failures and centering deception solely on linguistic ability to uphold the test's validity as a measure of conversational intelligence. Over time, operational constraints adapted for practicality, including a shift from fully on-site requirements to allowing remote submissions and online preliminary judging after the 2000s, which broadened accessibility while preserving in-person finals until the 2019 iteration transitioned to a public online format without traditional judges.

Judging Process and Criteria

The Loebner Prize judging process began with a preliminary round designed to narrow down entrants to the top performers. In this stage, numerous programs—typically around 16 or more—underwent initial evaluation using the Loebner Prize Protocol (LPP), which involved responding to a set of standardized questions or interactions assessed for basic human-likeness, often by human evaluators or automated scoring, to select the top four programs for the finals. The final round featured three to four human judges engaging in blind, text-based conversations with the four selected programs and an equal number of human confederates, ensuring a balanced mix to test discrimination. Each judge participated in multiple sessions, typically four rounds, chatting simultaneously or sequentially with one program and one human via separate terminals or interfaces for approximately 25 minutes per pair, simulating casual conversation without prior knowledge of participants' identities.

Judges, often selected from academics, journalists, or other public figures with limited AI expertise to minimize bias, evaluated responses qualitatively based on criteria such as naturalness of language, coherence in maintaining context, relevance to the ongoing conversation, and overall ability to mimic human interaction. Success was measured by the judges' inability to distinguish machines from humans, with programs ranked by the percentage of instances in which they were mistaken for humans, though no fixed quantitative formula was applied—instead, qualitative assessments determined the order from most to least human-like. To promote transparency and post-event analysis, full transcripts of the conversations were released after the contest, allowing researchers and the public to review the interactions and judges' decisions.
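As a rough illustration of the ranking step, the following sketch aggregates hypothetical per-session verdicts into the kind of most-to-least-human ordering described above. The verdict data and entrant names are invented; the actual contest relied on judges' qualitative rankings rather than any fixed formula.

```python
from collections import defaultdict

# Hedged sketch of the ranking logic: judges produce per-session verdicts,
# and entrants are ordered by how often they were mistaken for a human.
# (entrant, judge, judged_human) tuples from hypothetical final rounds.
verdicts = [
    ("BotA", "judge1", True),  ("BotA", "judge2", False),
    ("BotA", "judge3", False), ("BotA", "judge4", True),
    ("BotB", "judge1", False), ("BotB", "judge2", False),
    ("BotB", "judge3", True),  ("BotB", "judge4", False),
]

def rank_entrants(verdicts):
    """Rank programs by the share of sessions in which judges took them for human."""
    totals, fooled = defaultdict(int), defaultdict(int)
    for entrant, _judge, judged_human in verdicts:
        totals[entrant] += 1
        fooled[entrant] += judged_human  # True counts as 1
    rates = {e: fooled[e] / totals[e] for e in totals}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)

for entrant, rate in rank_entrants(verdicts):
    print(f"{entrant}: mistaken for human in {rate:.0%} of sessions")
```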

Awards and Prizes

Annual Prizes

The annual prizes in the Loebner Prize contest served as incentives for advancing conversational AI, awarded each year to the highest-performing chatbots based on their ability to engage judges in human-like dialogue during the final round. These awards operated independently of the grand prize, focusing solely on relative rankings from the judging process rather than achievement of a full pass. The top-ranked chatbot received a bronze medal and a cash prize, valued at $1,500 in the inaugural 1991 contest and consistently around $2,000 thereafter, though minor adjustments occurred over time to reflect inflation or sponsorship contributions, sometimes reaching $4,000. Runner-up finalists were awarded additional cash prizes typically ranging from $250 to $1,000, distributed according to their final rankings. The total annual prize pool, shared among the top entries, amounted to approximately $6,000 to $7,000 in distributed funds, providing modest but meaningful recognition for incremental progress in chatbot technology. Beyond monetary awards, recipients were presented with formal certificates alongside their medals, and the contest's structure ensured winners gained notable media exposure, amplifying the visibility of their AI innovations within the broader community.

The Grand Prize

The Grand Prize of the Loebner Prize, valued at $100,000 along with a gold medal, was established in 1990 by Hugh Loebner to reward the first computer program that fully passed the Turing test according to the competition's specific definition. To claim the Grand Prize, a program had to pass an unrestricted Turing test by being judged indistinguishable from a human interlocutor by a majority of judges in open-ended conversations, imposing a significantly higher bar than the annual prizes, which recognized relative performance rather than absolute success. This top award went unclaimed throughout the competition's 29 annual events from 1991 to 2019, as no entrant satisfied the required criteria, with the highest deception rates typically ranging from 20% to 25%. The silver medal prize, valued at $25,000, was to be awarded to the first program to pass the restricted, text-only Turing test by convincing at least half the judges it was human; like the grand prize, it remained unclaimed throughout the competition's history. After Loebner's death on December 4, 2016, the competition persisted under the existing organizers until its conclusion in 2019, yet the Grand Prize continued to elude all participants.
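The distinction between the relative annual award and the absolute grand prize criterion can be expressed in a few lines. The deception rates below are hypothetical placeholders chosen to mirror the historical 20-25% range.

```python
# Illustrative check of the two award criteria described above: the annual
# bronze went to the relatively best entrant, while the grand prize would
# have required an absolute majority of judges to be fooled.
deception_rates = {"BotA": 0.25, "BotB": 0.20, "BotC": 0.10}  # hypothetical

annual_winner = max(deception_rates, key=deception_rates.get)
grand_prize_won = any(rate > 0.5 for rate in deception_rates.values())

print(f"bronze medal (best relative score): {annual_winner}")
print(f"grand prize claimed (majority fooled): {grand_prize_won}")  # False
```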

Historical Development

Early Years (1991–2000)

The Loebner Prize competition commenced on November 8, 1991, at the Computer Museum in Boston, marking the inaugural event in a series aimed at evaluating conversational AI through a restricted form of the Turing test. Six computer programs competed against human confederates, with judges engaging in text-based interactions to distinguish between them. Joseph Weintraub's PC Therapist III, programmed for "whimsical conversation," emerged as the winner by deceiving five out of ten judges into believing it was human, earning a $1,500 prize and a bronze medal. The contest was sponsored by the Cambridge Center for Behavioral Studies, establishing an early academic partnership that facilitated the release of interaction transcripts for research purposes, such as the published analysis of the winning session.

Subsequent contests in the early years maintained a modest scale, with the 1992 event, held under the auspices of the Cambridge Center for Behavioral Studies on December 15, again won by Weintraub's PC Therapist. In 1994, Thomas Whalen's TIPS system, designed to provide practical information on a single topic, claimed victory. In 1995, rules evolved to permit unrestricted conversation topics, allowing Weintraub's PC Therapist to secure another win. The 1996 contest saw Jason Hutchens' program HeX, a pattern-matching system developed under time constraints, take the top prize, highlighting the contest's emphasis on superficial conversational tricks over deep understanding. These events reflected a total of ten annual contests from 1991 to 2000, typically featuring 6 to 12 program entries. Notable achievements included the 2000 win by Richard Wallace's A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), which utilized AIML for pattern-based responses and earned the bronze medal for most human-like performance.

Overall, deception rates remained low, with winners typically fooling only a minority of judges across interactions, as exemplified by the 1991 benchmark where even the top entry achieved only partial success under restricted topics. Early programs predominantly relied on pattern-matching techniques reminiscent of ELIZA, prioritizing keyword responses over contextual comprehension. Organizational developments included sustained collaborations with academic institutions like the Cambridge Center, which supported the contests and promoted transcript releases to advance research. Media coverage gradually increased, transitioning from academic outlets like AI Magazine to mainstream reports by the decade's end.

Mid-Period (2001–2010)

During the mid-period of the Loebner Prize from 2001 to 2010, the competition expanded its scope, alternating venues between the United States and the United Kingdom to foster international interest. For instance, the 2001 contest took place at the Science Museum in London, while the 2004 and 2005 events were hosted in New York. The alternation continued, with the 2008 contest at the University of Reading in the UK and the 2010 event at California State University, Los Angeles. Over these 10 annual contests, participation from academic institutions rose, as universities increasingly served as hosts and provided judging panels, alongside public demonstrations that drew broader audiences to observe the Turing test-inspired evaluations.

Key highlights included repeat successes for established chatbots, such as A.L.I.C.E., which secured victories in 2001 and again in 2004, demonstrating the effectiveness of pattern-matching techniques in conversational AI. The emergence of AIML (Artificial Intelligence Markup Language) as a standardized framework during this era profoundly influenced bot development; A.L.I.C.E., built on AIML, exemplified how this open-source language enabled more structured and reusable response patterns, inspiring subsequent entries. In 2005, Rollo Carpenter's George (from the Jabberwacky system) claimed the prize, notable as a precursor to machine learning through adaptive responses drawn from prior human interactions, marking a shift toward more dynamic conversational capabilities.

By the late 2000s, entry numbers had grown significantly, with the 2008 contest featuring preliminary rounds involving 13 programs, from which six advanced to the finals. That year introduced online text-based preliminary interrogations to streamline selection, allowing remote participation before in-person judging. The 2009 winner, David Levy's Do-Much-More, further illustrated the integration of adaptive techniques, such as context-aware responses, in achieving human-like conversation during the contest held in Brighton, UK. Challenges persisted in ensuring consistent performance, including hardware compatibility issues where programs had to run on provided contest machines, prompting organizers to refine setup protocols and standardize judging criteria for fairness across diverse entries. These efforts, combined with growing academic involvement, helped elevate the competition's role in advancing early conversational AI techniques.

Later Years and Conclusion (2011–2019)

The Loebner Prize competition continued annually throughout the 2010s, with participants refining chatbot technologies to better mimic human conversation, though none achieved the grand prize threshold of fooling a majority of judges into believing the program was human. In 2011, the contest was held at the University of Exeter, where Rosette, developed by Bruce Wilcox using the ChatScript language, won the bronze medal for the most human-like performance, earning $4,000. Rosette's success was attributed to its sophisticated pattern-matching and contextual response generation, which impressed judges despite occasional lapses in coherence. The 2012 competition, marking the centenary of Alan Turing's birth, took place at Bletchley Park and was streamed live, drawing broader public attention to conversational systems. Chip Vivant, created by Mohan Embar, claimed the bronze prize, noted for its engaging, music-themed responses that leveraged scripted dialogues to sustain interactions. This win highlighted a shift toward more specialized, personality-driven bots, though judges still easily distinguished them from humans overall.

By 2013, advancements in rule-based systems propelled Mitsuku, a chatbot by Steve Worswick built on the Pandorabots platform, to victory in Londonderry, Northern Ireland, securing its first title with witty, adaptive replies that scored highest among four finalists. Mitsuku's design emphasized personality and humor, fooling some judges briefly but not meeting the grand prize criteria. The following year, 2014, saw Bruce Wilcox's Rose—a sassy, persona-driven bot—win at Bletchley Park, outperforming competitors through nuanced emotional simulation and topic handling. Rose repeated this success in 2015, again earning the bronze for its quirky, hacker-like character that maintained engaging dialogues.

Mitsuku dominated the latter half of the decade, winning in 2016 at Bletchley Park, where it was praised for seamless topic transitions and empathetic responses, marking Worswick's second victory. The bot secured consecutive wins in 2017 and 2018, both hosted by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB) in the UK, achieving a record-tying four bronze medals by leveraging crowdsourced knowledge and hybrid scripting for more natural flow. These successes underscored incremental improvements in handling open-ended conversation, though deception rates remained low, with no bot exceeding 30% in fooling expert judges.

The 2019 contest, held at Swansea University as part of the AISB X exhibition, marked the competition's conclusion and featured a format shift to public voting without traditional judges, aiming for greater accessibility amid 17 entries. Mitsuku clinched both the Loebner Prize for most human-like interaction and the best overall chatbot award, achieving five wins total and setting a Guinness World Record. This final event reflected the competition's evolution toward broader engagement, but the grand $100,000 prize and gold medal remained unclaimed, as no program ever fully passed the standard set in 1991.

The discontinuation followed the death of founder Hugh Loebner on December 4, 2016, at age 74, whose personal funding had sustained the event for nearly three decades. Without his support, the prize ended after 29 iterations, leaving a legacy of advancing conversational AI research through annual benchmarks, even as criticisms of the Turing test paradigm grew. Over the period, winners like Rosette, Rose, and Mitsuku demonstrated progress in scripted dialogue, influencing modern conversational agents while highlighting persistent challenges in achieving true indistinguishability from humans.

Notable Achievements and Winners

Most Successful Chatbots

One of the most acclaimed entrants in Loebner Prize history is A.L.I.C.E. (Artificial Linguistic Internet Computer Entity), developed by Richard Wallace, a former professor at Carnegie Mellon University with expertise in computer vision and robotics. A.L.I.C.E. secured victories in 2000, 2001, and 2004, earning the bronze medal for the most human-like chatbot each time. Wallace's creation pioneered the Artificial Intelligence Markup Language (AIML), an open-standard XML dialect that enabled pattern-matching rules for generating responses, influencing subsequent chatbot development, as illustrated in the sketch below.

Mitsuku, created by Steve Worswick, a hobbyist developer, stands as the most decorated entrant with five wins in 2013, 2016, 2017, 2018, and 2019, a feat recognized by Guinness World Records as the most Loebner Prize victories. Worswick's approach emphasized personality-based scripting, crafting Mitsuku as a teenage girl with a consistent persona, emotional depth, and a vast database of contextual responses to simulate natural conversation. This scripting allowed Mitsuku to handle diverse topics while maintaining engagement through relatable humor and empathy.

Other notable performers include Ella, developed by Kevin Copple, which won in 2002 by leveraging phrase normalization and other language-processing techniques for semantic understanding to produce coherent, human-like replies. In 2003, Jabberwock, built by Juergen Pirner, a German publisher of fantasy works, took first place with its Markov chain-based technique for generating probabilistic responses that mimicked casual conversation. Bruce Wilcox, a professional programmer known for game development work, achieved four wins between 2010 and 2015 using ChatScript bots like Suzette (2010), Rosette (2011), and Rose (2014 and 2015); these entries featured extensive rule sets for topic tracking and witty retorts.

Successful Loebner chatbots often shared hybrid approaches blending rule-based scripting with elements of probabilistic learning, such as Markov models, and scripted rejoinders for context retention, allowing them to sustain multi-turn conversations without abrupt shifts. A focus on humor—through one-liners or ironic replies—and emotional consistency helped these programs appear more lifelike, as judges favored bots that deflected suspicion with engaging personas rather than evading questions. The developers behind these top chatbots represented a diverse mix of backgrounds, including academics like Wallace, professionals in gaming and software such as Wilcox, and self-taught hobbyists like Worswick, alongside non-technical creators like Pirner, highlighting the contest's appeal to varied innovators beyond formal research institutions.
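To make the AIML approach concrete, here is a minimal Python sketch of the category-style pattern matching that A.L.I.C.E. popularized. The three patterns and replies are invented examples for illustration; real AIML is an XML format with far richer features, such as recursive <srai> rewrites, but the wildcard-match-then-template flow below is the core idea.

```python
import re

# Minimal Python sketch of AIML-style pattern matching: each "category"
# pairs an input pattern (with * wildcards) against a templated reply.
# These categories are invented examples, not entries from a real bot.
categories = [
    ("HELLO *",      "Hi there! What would you like to talk about?"),
    ("MY NAME IS *", "Nice to meet you, {0}."),
    ("DO YOU LIKE *", "I love {0}! It is one of my favorite things."),
]

def respond(user_input: str) -> str:
    """Return the first category template whose pattern matches the input."""
    # Normalize the way AIML interpreters do: strip punctuation, uppercase.
    normalized = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in categories:
        # Convert the AIML-style * wildcard into a regex capture group.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        match = re.match(regex, normalized)
        if match:
            return template.format(*(g.strip().lower() for g in match.groups()))
    return "I am not sure I follow. Tell me more."  # default fallback

print(respond("My name is Alice."))    # -> Nice to meet you, alice.
print(respond("Do you like robots?"))  # -> I love robots! It is one of ...
```

Bots like Mitsuku layered tens of thousands of such categories, plus state tracking for topics and the user's earlier statements, on top of this basic loop.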

Record Holders and Milestones

The Loebner Prize contest, spanning 29 annual events from 1991 to 2019, featured approximately 300 entries in total, with no program ever achieving a full pass by fooling a majority of judges into believing it was human. The grand prize of $100,000 and its gold medal remained unclaimed throughout the competition's history, as deception rates never exceeded even the 50% threshold required for the restricted, text-only silver medal variant. Over the years, annual prizes were distributed across winners, reflecting the contest's ongoing commitment despite the elusive grand award.

Key milestones include the inaugural contest in 1991 at the Boston Computer Museum, won by Joseph Weintraub's PC Therapist, which established the framework for evaluating conversational AI through blind judging. In 2008, the competition introduced online entries for the first time, broadening participation by allowing remote submissions and expanding the pool of international developers. The final event occurred in 2019 at Swansea University, marking the end of the Loebner Prize after nearly three decades.

Performance trends showed gradual improvement in chatbot capabilities: deception rates—the percentage of judges mistaking machines for humans—fluctuated in the early, topic-restricted years but stayed generally low through the 1990s, before rising above 25% in the 2010s. Mitsuku reached approximately 30% in 2019, still far short of the grand prize benchmark. This progress paralleled a technological shift from early keyword-matching systems, reliant on rigid scripts, to more contextual approaches incorporating adaptive learning and richer language processing for nuanced responses. Mitsuku holds the record for the most wins, securing the top prize five times between 2013 and 2019, a feat unmatched by any other entrant.

Criticisms and Legacy

Key Criticisms

The Loebner Prize has faced significant criticism for its text-only format, which limits evaluation to linguistic interaction and neglects aspects of intelligence such as perception, embodiment, and sensory-motor capabilities essential for genuine understanding. Critics argue that this restricted approach fails to test comprehensive AI, as chatbot systems can mimic conversation without deeper comprehension or "symbol grounding" through real-world experience. For instance, the prize's emphasis on verbal deception overlooks the need for embodied cognition, rendering it an inadequate measure of broader intelligence.

A core methodological flaw lies in the contest's encouragement of game-playing strategies, where entrants prioritize deception over authentic capability, leading to superficial chatbots that exploit tricks rather than demonstrate understanding. Stuart Shieber highlighted how the competition rewarded "cheap tricks," such as Joseph Weintraub's program using whimsical, evasive responses to dodge scrutiny, rather than advancing genuine capabilities. This focus on fooling judges fosters "unfalsifiable strategies" that undermine scientific validity, as programs repeat outdated techniques like pattern-matching without evolving toward true understanding.

Judge bias further compromises the prize's execution, with non-expert evaluators often favoring entertaining or superficial responses over substantive ones, exacerbated by short five-minute interactions that prevent probing depth. A study of Loebner Prize dialogues revealed high variability in judgments, where judges' own behaviors—such as asking more questions when perceiving humans—influenced perceptions of humanness, with statistically significant differences (p < 0.05) in features like word usage. Small sample sizes and subjective ranking amplify these inconsistencies, making outcomes unreliable.

The contest's stagnation in its later decades underscores its limited relevance, as chatbot performance showed minimal improvement despite decades of entries, with critics like Marvin Minsky dismissing it as an "obnoxious and stupid" publicity stunt that diverted resources from meaningful research. Philosopher John Searle extended broader critiques to such events, arguing via his Chinese Room thought experiment that behavioral mimicry tests simulation, not consciousness or understanding. Hugh Loebner's insistence on an unrestricted test—requiring open-ended, multi-hour conversations without topic limits—has been deemed impractical, as it clashes with real-world constraints and further incentivizes evasion over robust development. Shieber noted that even restricted versions lack clear scientific goals, while unrestricted ones exacerbate execution issues like delays and inconsistent judging. This rigidity contributed to the prize's declining standing, with participants often drawn from a narrow pool of Western developers, limiting diverse perspectives in AI research.

Influence on AI and Future Implications

The Loebner Prize significantly popularized the development of chatbots by providing a high-profile annual platform for testing conversational systems, fostering innovations in pattern-matching and rule-based dialogue management. Winners such as A.L.I.C.E., which secured the prize in 2000, 2001, and 2004, demonstrated early capabilities in mimicking human-like responses through heuristic scripting, influencing subsequent tools in natural language processing. The competition spurred the creation of more sophisticated chatbots like Mitsuku, which won multiple times in the 2010s by incorporating vast knowledge bases and contextual adaptation, laying foundational techniques that echoed in the evolution toward modern large language models (LLMs) such as the GPT series.

In terms of research impact, the Loebner Prize generated valuable transcripts from judge-bot interactions, which have been analyzed in studies to evaluate conversational strategies and performance metrics in AI dialogue systems. These datasets contributed to broader natural language processing (NLP) research by highlighting gaps in coherence and context handling, indirectly inspiring the development of standardized benchmarks for evaluating conversational intelligence. Over its nearly three-decade run, the prize bridged theoretical discussions of the Turing test with practical experimentation, encouraging hobbyists and researchers to refine chatbot techniques despite its focus on imitation rather than deep understanding.

Culturally, the Loebner Prize raised public awareness of AI's potential and limitations by framing the Turing test as an accessible benchmark for machine intelligence, often featured in media outlets that explored the boundary between human and artificial conversation. Coverage in sources like the BBC highlighted ongoing hobbyist efforts and the test's enduring allure, demystifying AI while underscoring persistent challenges in achieving indistinguishability from humans.

Following its discontinuation in 2019 after Hugh Loebner's death, the field shifted toward more robust evaluations like the Winograd Schema Challenge, which tests commonsense inference to address the Turing Test's vulnerabilities to superficial tricks. In the 2020s, LLMs including GPT-4 have demonstrated success in passing variants of the Turing Test, with studies showing they fool human judges at rates exceeding 50% in controlled settings, signaling a progression beyond the prize's scope. The prize's conclusion emphasizes the necessity for evolving AI assessment metrics that prioritize reasoning and robustness over mere imitation, with its archival transcripts preserved as a historical resource for studying conversational AI's development. While not driving mainstream breakthroughs, it sustained attention on intelligent dialogue systems for nearly three decades, influencing the conceptual foundations of today's generative AI ecosystems.
