References
- [1] [PDF] ELIZA effect - Computational Law (Nov 15, 2019). The ELIZA effect, in computer science, is the tendency to unconsciously assume computer behaviors are analogous to human behaviors, that is ...
- [2] What Is the Eliza Effect? | Built In. The Eliza Effect is the tendency to falsely attribute human thought processes and emotions to AI, and believe an AI is more intelligent than it actually is.
- [3] ELIZA—a computer program for the study of natural language communication between man and machine. Author: Joseph Weizenbaum, Massachusetts ...
- [4] [PDF] weizenbaum.eliza.1966.pdf. ELIZA is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer ...
- [5] The ELIZA Effect - Why We Love AI - NN/G (Oct 6, 2023). Users quickly attribute human-like characteristics to artificial systems, which reflect their personality back to them. This phenomenon is called the ELIZA ...
- [6] Weizenbaum's nightmares: how the inventor of the first chatbot ... (Jul 25, 2023). As Colin Fraser, a data scientist at Meta, has put it, the application is "designed to trick you, to make you think you're talking to someone ...
- [7] The ELIZA Effect: Avoiding emotional attachment to AI coworkers | IBM. In 1966, Joseph Weizenbaum created a chatbot program called ELIZA that applied simple rules to transform the language of a person's input into a response ...
- [8] Joseph Weizenbaum Writes ELIZA: A Pioneering Experiment in ... Between 1964 and 1966 German and American computer scientist Joseph Weizenbaum at MIT wrote the computer program ELIZA.
- [9] How ELIZA's Lisp Adaptation Derailed Its Original Research Intent (Sep 10, 2024). Weizenbaum built his ELIZA in MAD-SLIP on the IBM 7090, which was the primary machine at MIT's Project MAC, on the 5th-through-9th floors of ...
- [10] My search for the mysterious missing secretary who shaped chatbot ... (Mar 22, 2024). In his accounts of Eliza, Weizenbaum repeatedly worries about a particular user: "My secretary watched me work on this program over a long period ..."
- [11]
- [12] The Biology and Evolution of the Three Psychological Tendencies to ... At the core of anthropomorphism lies a false positive cognitive bias to over-attribute the pattern of the human body and/or mind.
- [13] The Double-Edged Sword of Anthropomorphism in LLMs - PMC - NIH (Feb 26, 2025). Humans may have evolved to be "hyperactive agency detectors". Upon hearing a rustle in a pile of leaves, it would be safer to assume that an ...
- [14] Anthropomorphism in AI: hype and fallacy | AI and Ethics (Feb 5, 2024). This essay focuses on anthropomorphism as both a form of hype and fallacy. As a form of hype, anthropomorphism is shown to exaggerate AI capabilities and ...
- [15] The mind behind anthropomorphic thinking: attribution of mental ... Agency detection and social cognition: recent imaging studies support the long-standing belief about how the brain deals with different aspects of the world ...
- [16] Rethinking LLM Anthropomorphism Through a Multi-Level ... - arXiv (Aug 25, 2025). In 1966, Weizenbaum introduced the "ELIZA effect", in which simple linguistic cues can elicit deep emotional responses from interpreters ...
- [17] AI anthropomorphism and its effect on users' self-congruence and ... AI agents seem progressively more humanlike, not only in terms of their physical appearance, but also in the way they mimic emotions and the personality traits ...
- [18] An empirical investigation of agency detection in threatening situations. It has been hypothesized that humans have evolved a hypersensitivity to detect intentional agents at a perceptual level, as failing to detect these agents ...
[19]
None### Empirical Data from User Studies Comparing ELIZA to Other Systems
- [20] Longitudinal Study of Self-Disclosure in Human–Chatbot ... (Mar 4, 2023). They found that humans participated more frequently in intimate self-disclosure than the chatbot, although the study also showed how self-...
- [21] (PDF) "I Hear You, I Feel You": Encouraging Deep Self-disclosure ... (Apr 25, 2020). Prior research has explored users' intended disclosures in hypothetical scenarios [10,23,70], as well as their actual disclosure behaviours ...
- [22] Cultural Variation in Attitudes Toward Social Chatbots - PMC. Our findings suggest there is cultural variability in attitudes toward chatbots and that these differences are mediated by differences in anthropomorphism.
- [23] [PDF] ELIZA—A Computer Program For the Study of Natural Language ... ELIZA is a program operating within the MAC time-sharing system at MIT which makes certain kinds of natural language conversation between man and computer ...
- [24] How the first chatbot predicted the dangers of AI more than 50 ... - Vox (Mar 5, 2023). If Eliza changed us, it was because simple questions could still prompt us to realize something about ourselves. The short responses had no room ...
- [25] The Inventor of the Chatbot Tried to Warn Us About A.I. (May 8, 2024). It was creating "the illusion of understanding," as he described it. But Weizenbaum didn't anticipate how much some people wanted to be fooled.
- [26] [PDF] Computer Power and Human Reason - blogs.evergreen.edu. Thus ELIZA could be given a script to enable it to maintain a conversation about cooking eggs or about managing a bank checking account, and so on. Each ...
- [27] Computer power and human reason (Apr 24, 1976). Joseph Weizenbaum, Computer Power and Human Reason, W. H. Freeman Co ... In his 1966 paper on ELIZA (cited as 1965), Weizenbaum writes ...
- [28] The Chinese Room Argument - Stanford Encyclopedia of Philosophy (Mar 19, 2004). The argument and thought-experiment now generally known as the Chinese Room Argument was first published in a 1980 article by American philosopher John Searle.
- [29] John Searle's Chinese Room Argument. We need to exclude a system like Weizenbaum's Eliza that merely looks for certain words in the input and makes certain syntactic transformations on each ...
- [30] Stop Treating AI Models Like People - Marcus on AI (Apr 17, 2023). What we are seeing now is simply an extension of the same "ELIZA effect", 60 years later, where humans are continuing to project human qualities ...
- [31] The danger of anthropomorphic language in robotic AI systems (Jun 18, 2021). When describing the behavior of robotic systems, we tend to rely on anthropomorphisms. Cameras "see," decision algorithms "think," and classification systems ...
- [32] [PDF] All Too Human? Mapping and Mitigating the Risks from ... Anthropomorphic AI risks include emotional connections leading to over-reliance, privacy and autonomy infringement, and potential harm to users and society.
- [33] Kenneth Colby Develops PARRY, An Artificial Intelligence Program ... PARRY was described as "ELIZA with attitude". PARRY was tested in the early 1970s using a variation of the Turing Test. A group of experienced ...
- [34] The computational therapeutic: exploring Weizenbaum's ELIZA as a ... (Feb 21, 2018). This paper explores the history of ELIZA, a computer programme approximating a Rogerian therapist, developed by Joseph Weizenbaum at MIT in the 1970s, as an ...
- [35] Investigating Affective Use and Emotional Well-being on ChatGPT. Our analysis employs two main methods, conversation analysis and user surveys, to examine how users experience and express emotions in these exchanges.
- [36] [PDF] Investigating Affective Use and Emotional Well-being on ChatGPT (Mar 20, 2025). User surveys: we surveyed over 4,000 users to understand self-reported behaviors and experiences using ChatGPT. Randomized controlled trial ...
- [37] Eliza Grows Up: The Evolution of Conversational AI (Sep 25, 2024). This phenomenon, known as the Eliza effect, occurs when people attribute human-like intelligence and emotional awareness to machines, even when ...
- [38] Exploring Artificial Intelligence [7] - Taylored Solutions (Sep 4, 2025). This became known as the ELIZA effect: our human tendency to attribute deeper meaning, empathy, or intelligence to machines, even when the ...
- [39] Reasoning skills of large language models are often overestimated (Jul 11, 2024). New CSAIL research highlights how LLMs excel in familiar scenarios but struggle in novel ones, questioning their true reasoning abilities ...
- [40] Multi-turn Evaluation of Anthropomorphic Behaviours in Large ... (Feb 10, 2025). The tendency of users to anthropomorphise large language models (LLMs) is of growing interest to AI developers, researchers, and policy-makers.
- [41] vectara/hallucination-leaderboard - GitHub. This leaderboard uses HHEM-2.1, Vectara's commercial hallucination evaluation model, to compute the LLM rankings. You can find an open-source variant of that ...
- [42] Survey and analysis of hallucinations in large language models (Sep 29, 2025). Hallucination in Large Language Models (LLMs) refers to outputs that appear fluent and coherent but are factually incorrect ...
- [43] Phare LLM Benchmark: an analysis of hallucination in leading LLMs (Apr 30, 2025). LLM benchmark reveals how LLMs confidently generate hallucinations and spread misinformation. It exposes critical AI security and safety risks ...
- [44] Large Language Models are biased to overestimate profoundness (Oct 22, 2023). This study evaluates GPT-4 and various other LLMs in judging the profoundness of mundane, motivational, and pseudo-profound statements.
- [45] AI, Loneliness, and the Value of Human Connection (Sep 22, 2025). A study of over 1,100 AI companion users found that people with fewer human relationships were more likely to seek out chatbots, and that heavy ...
- [46] The Rise of AI Companions: How Human-Chatbot Relationships ... (Jun 16, 2025). Our analysis reveals that companionship use of chatbots is associated with lower well-being, especially among users with more intensive ...
- [47] Many teens are turning to AI chatbots for friendship and emotional ... (Oct 1, 2025). Psychologists across the discipline are studying how digital technology positively and negatively affects kids' and teens' friendships.
- [48] Chatbots Are Not People: Designed-In Dangers of Human-Like A.I. ... (Sep 26, 2023). Anthropomorphic design can increase the likelihood that users will start using a technology, overestimate the technology's abilities, continue ...
- [49] AI's hype and antitrust problem is coming under scrutiny (Dec 10, 2024). The FTC is coming after the AI industry for too much hype and not enough competition.
- [50] Opportunity Costs of State and Local AI Regulation | Cato Institute (Jun 10, 2025). In this paper I investigate the opportunity costs associated with new state and local regulations that are intended to control the creation or use of AI.
- [51] The coming AI backlash will shape future regulation | Brookings (May 27, 2025). If autonomous vehicles harm people or facial recognition is biased, there will be widespread demands for public accountability and oversight.