General Problem Solver

The General Problem Solver (GPS) is a pioneering computer program in the field of artificial intelligence, developed between 1957 and 1959 by Allen Newell, Herbert A. Simon, and J. C. Shaw at the RAND Corporation. Intended as a universal system for tackling diverse intellectual tasks, GPS employed a general-purpose heuristic method called means-ends analysis to simulate human-like problem-solving by reducing differences between initial states and desired goals through recursive subgoal generation. At its core, GPS represented problems using a formal structure consisting of objects (elements of the problem domain), operators (actions that modify objects), and goals (target states). The means-ends analysis strategy operated by evaluating the discrepancies between the current problem state and the goal, selecting an appropriate operator to minimize those differences, and treating any resulting obstacles as new subgoals to be solved iteratively—a process that emphasized planning and divide-and-conquer tactics over exhaustive search. This approach drew from observations of human cognition, aiming to model adaptive reasoning applicable across domains rather than relying on domain-specific rules alone. Demonstrating practical utility, GPS successfully addressed a range of benchmark problems, including proving theorems in symbolic logic (such as those from Whitehead and Russell's Principia Mathematica), solving cryptarithmetic puzzles like SEND + MORE = MONEY, and navigating tasks in trigonometry and the Tower of Hanoi puzzle. These achievements highlighted its ability to handle combinatorial challenges within structured "microworlds," though performance degraded in larger search spaces due to exponential growth in possibilities and the program's dependence on predefined representations. 
GPS holds enduring significance as one of the earliest symbolic AI systems, establishing key paradigms in heuristic search, goal-directed reasoning, and computational modeling of cognition during the field's formative "golden age" of the 1950s and 1960s. By bridging computer science and psychology, it influenced the development of later programs and theories, underscoring the potential—and initial limitations—of general intelligence simulation while inspiring ongoing research in automated problem-solving.

History and Development

Origins and Motivation

The field of artificial intelligence emerged in the early 1950s amid growing interest in simulating human intelligence using computers, with the 1956 Dartmouth Summer Research Project serving as a pivotal catalyst that formalized symbolic AI as a discipline. Organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon, the conference proposed exploring how machines could form abstractions, solve problems reserved for humans, and improve themselves, conjecturing that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." Allen Newell and Herbert A. Simon, listed among the interested participants, were inspired by this vision to pursue computational models of cognition. Newell and Simon's motivation for the General Problem Solver (GPS) stemmed from their studies in human cognition, building directly on their 1956 development of the Logic Theorist, the first AI program designed to prove mathematical theorems and mimic human reasoning processes. At the RAND Corporation, where Newell worked as a researcher and Simon consulted, they sought to generalize these insights into a framework for simulating human problem-solving, drawing from psychological experiments that recorded subjects' thought protocols during tasks like logic puzzles. This work reflected a broader ambition to bridge psychology and computing, treating programs as theories of human behavior to uncover the mechanisms of adaptive intelligence. RAND's institutional backing played a central role, supporting Newell, Simon, and collaborator J.C. Shaw in their efforts to develop general-purpose tools for decision-making, aligned with the organization's focus on defense-related strategic analysis during the Cold War era. The specific goal of GPS was to create a program capable of solving any formal problem by systematically reducing differences between the current state and the goal state, inspired by observed human strategies in psychological protocols.
With means-ends analysis as its core method, this approach aimed to demonstrate how heuristic processes could enable computers to exhibit intelligent behavior across diverse domains.

Creation Process and Timeline

The development of the General Problem Solver (GPS) began in the summer of 1957 at the RAND Corporation, where Allen Newell, J.C. Shaw, and Herbert A. Simon initiated work on a program to simulate general human-like problem solving, building directly on the 1956 Logic Theorist, their earlier heuristic theorem-proving system. Shaw's creation of the Information Processing Language (IPL), one of the first list-processing languages, was integral to this phase, enabling efficient symbolic manipulation and representation of problem states through linked lists, which addressed limitations in earlier machine-code implementations. The collaborative effort involved cross-institutional coordination between RAND and Carnegie Institute of Technology, with protocol analysis of human subjects—using think-aloud techniques to capture problem-solving behaviors—guiding the design from the outset. By late 1957, the first working version of GPS was operational, demonstrating basic means-ends analysis on formalized problems. Iterative refinements followed through 1958, incorporating feedback from human protocol data to enhance subgoal formation and search mechanisms; this included testing on simple puzzles like the Tower of Hanoi and the missionaries-and-cannibals problem to validate generality across domains. A key milestone was the 1958 publication of "Elements of a Theory of Human Problem Solving" in Psychological Review, which outlined the program's theoretical underpinnings and early empirical fits to human data. Public demonstration and broader dissemination occurred in 1959 with the presentation of "Report on a General Problem-Solving Program" at the International Conference on Information Processing in Paris, showcasing GPS's application to diverse tasks such as symbolic logic proofs and chess endgames. 
The program's evolution continued into the early 1960s, culminating in a detailed exposition in the 1963 anthology Computers and Thought, where Newell and Simon's chapter "GPS, a Program that Simulates Human Thought" highlighted its simulation of cognitive processes and influence on artificial intelligence research.

Key Contributors

Allen Newell, a researcher with a background in mathematics and physics who transitioned into computer science at the RAND Corporation, played a central role in the development of the General Problem Solver (GPS) by leading the design of its programming architecture and overall structure. Newell, along with colleagues, co-authored key reports on GPS, emphasizing its mechanisms for simulating human-like problem solving. His contributions extended to broader AI research, earning him the A.M. Turing Award in 1975 shared with Herbert A. Simon for foundational work in artificial intelligence and information processing. Herbert A. Simon, a psychologist and economist, provided the cognitive theoretical foundation for GPS, drawing on human decision-making models to inform its problem-solving strategies. Simon contributed to GPS through analyses of human protocols in problem solving, which helped shape the program's heuristic approaches. For his pioneering research into decision-making processes, including bounded rationality concepts influential in GPS's design, Simon received the Nobel Prize in Economic Sciences in 1978. J.C. Shaw, a computer scientist at the RAND Corporation, was essential for the technical execution of GPS, implementing components of the Information Processing Language (IPL) used to code the program and handling early programming tasks. Shaw's work on IPL, developed in collaboration with Newell and Simon, enabled the list-processing capabilities critical to GPS's operation on the JOHNNIAC computer. Though less recognized than his co-developers, Shaw's contributions were integral to realizing the system's functionality. The development of GPS also benefited from institutional support at RAND, where Shaw advanced IPL as a versatile programming tool, and the organization's computational resources, including access to the JOHNNIAC, facilitated the program's implementation and testing.

Theoretical Foundations

Means-Ends Analysis

Means-ends analysis serves as the core heuristic strategy in the General Problem Solver (GPS), a method that systematically identifies discrepancies between the current problem state and the desired goal state, then selects operators to diminish those differences. This approach enables GPS to navigate complex problems by focusing on functional transformations rather than exhaustive enumeration, classifying objects according to their roles in achieving intermediate ends. The process operates recursively: once a difference is detected, GPS generates a subgoal aimed at applying a suitable operator to reduce it, subsequently addressing any new differences that emerge from this application until base cases—such as no remaining differences or primitive operations—are reached. This recursive decomposition creates a hierarchy of subgoals, allowing the system to break down intricate tasks into manageable steps while maintaining a directed path toward the overall objective. Central to this strategy are components such as operator selection, which prioritizes transformations relevant to the specific difference identified, often through pattern matching against operator preconditions and effects. Backtracking is incorporated to handle failures, wherein unsuccessful subgoals prompt GPS to abandon the current path and explore alternative operators or differences. The theoretical grounding of means-ends analysis draws from empirical studies of human problem solving, particularly think-aloud protocols collected from students solving logic problems, which revealed patterns of difference detection and subgoal formation mirroring the GPS procedure. These protocols demonstrated that problem solvers intuitively employ similar heuristics, oscillating between states and means to achieve them, thereby validating the model's psychological plausibility. Formally, the means-ends analysis in GPS follows these structured steps:
  1. Identify the difference: Compare the current state (object A) with the goal state (object B) to pinpoint the primary discrepancy (d).
  2. Select an operator: Search the available operators for one whose effects can reduce d, based on relevance criteria such as partial matching.
  3. Apply and test: Establish a subgoal to enable the operator's application if necessary, execute it to transform the state, and evaluate the outcome.
  4. Recur on new differences: If the goal is not achieved, repeat the process on any residual or newly introduced differences arising from the transformation.
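These four steps can be sketched in a few lines of modern code. The sketch below is illustrative only: the set-of-facts state representation, the `Op` record, the `achieve` function, and the toy errand domain are invented for exposition and do not reproduce GPS's IPL-V structures.

```python
from collections import namedtuple

# An operator lists its preconditions and its add/delete effects.
Op = namedtuple("Op", ["name", "preconds", "adds", "deletes"])

def achieve(state, goal, ops, depth=10):
    """Means-ends loop: find differences, pick an operator, recurse."""
    if depth < 0:
        return None                      # depth bound guards against looping
    diff = goal - state                  # step 1: identify the differences
    if not diff:
        return state, []                 # base case: no differences remain
    for d in diff:
        for op in ops:
            if d not in op.adds:
                continue                 # step 2: operator must reduce d
            # step 3: subgoal -- achieve the operator's preconditions first
            sub = achieve(state, op.preconds, ops, depth - 1)
            if sub is None:
                continue
            mid, plan = sub
            new_state = (mid - op.deletes) | op.adds
            # step 4: recur on whatever differences remain after applying op
            rest = achieve(new_state, goal, ops, depth - 1)
            if rest is None:
                continue
            final, tail = rest
            return final, plan + [op.name] + tail
    return None                          # no operator helped: reject subgoal

ops = [
    Op("walk-to-stop", frozenset(), frozenset({"at-stop"}), frozenset({"at-home"})),
    Op("buy-ticket", frozenset(), frozenset({"have-ticket"}), frozenset()),
    Op("ride-bus", frozenset({"at-stop", "have-ticket"}),
       frozenset({"at-work"}), frozenset({"at-stop"})),
]
final, plan = achieve(frozenset({"at-home"}), frozenset({"at-work"}), ops)
print(plan)   # a valid plan ending in 'ride-bus'
```

Note how failure at step 2 or 3 falls through to the next candidate operator, which is the backtracking behavior described above.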

Problem Space Representation

In the General Problem Solver (GPS), the problem space is conceptualized as a directed graph in which nodes represent distinct states of the problem, including the initial state, intermediate states, and the goal state, while edges denote operators that transform one state into another. This structure allows for a systematic exploration of possible configurations, where the initial state captures the given problem situation, and the goal state defines the desired outcome. For instance, in a puzzle like the Tower of Hanoi, states might represent disk configurations on pegs, with operators corresponding to legal moves between pegs. Problems are formally represented in GPS using symbolic expressions, often as lists or well-formed formulas in predicate logic, enabling the manipulation of abstract objects through defined rules. States are encoded as structured lists of symbols, such as logical expressions (e.g., propositional clauses) or algebraic terms, while operators are specified as transformations with preconditions and effects, like inference rules in theorem proving. This representation supports domains requiring symbolic reasoning, such as geometry theorem proofs, where states might involve geometric figures and operators apply axioms to derive new relations. The framework's reliance on discrete symbolic manipulation assumes finite, enumerable spaces without inherent support for continuous variables or probabilistic transitions. GPS achieves generality by separating the core problem-solving engine from domain-specific knowledge, allowing the same heuristic mechanisms to apply across varied tasks as long as operators and goals are clearly defined. Domain knowledge, including operator definitions and state evaluations, is provided externally via correlative tables that map abstract heuristics to concrete elements, such as linking "difference reduction" to specific logical substitutions. 
This modularity enables adaptation to puzzles, logical deductions, or planning tasks without altering the solver's fundamental structure, though it limits efficacy to problems with explicit, finite representations.
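The engine/domain separation described above can be illustrated with a generic explorer of the problem-space graph that receives the domain purely as an initial state, a goal test, and an operator (successor) function. The water-jug domain and the breadth-first traversal below are illustrative choices for this sketch (GPS itself searched depth-first under means-ends guidance), not a reconstruction of the original program.

```python
from collections import deque

def explore(initial, is_goal, successors):
    """Walk the state graph; return the operator path to a goal state."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for op_name, next_state in successors(state):
            if next_state not in seen:        # visit each node once
                seen.add(next_state)
                frontier.append((next_state, path + [op_name]))
    return None                               # goal unreachable

def jug_successors(state):
    """Domain knowledge: operators for a 4-litre and a 3-litre jug."""
    a, b = state
    return [("fill-A", (4, b)), ("fill-B", (a, 3)),
            ("empty-A", (0, b)), ("empty-B", (a, 0)),
            ("pour-A-B", (a - min(a, 3 - b), b + min(a, 3 - b))),
            ("pour-B-A", (a + min(b, 4 - a), b - min(b, 4 - a)))]

path = explore((0, 0), lambda s: s[0] == 2, jug_successors)
print(len(path))   # 6: the shortest operator sequence leaving 2 litres in jug A
```

Swapping in a different `successors` function and goal test retargets the same engine to another domain, mirroring the correlative-table idea.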

Architecture and Implementation

Information Processing Language

The Information Processing Language (IPL) was developed in 1956 by Allen Newell, Cliff Shaw, and Herbert A. Simon at the RAND Corporation as a pioneering list-based programming language tailored for manipulating symbolic data structures in early artificial intelligence research. This language emerged from their efforts to create tools for complex information processing, initially applied in programs like the Logic Theorist, which demonstrated automated theorem proving. IPL's design emphasized flexibility for symbolic computation, marking a shift from numerical-oriented languages toward those suited for heuristic and adaptive problem-solving systems. Key features of IPL included robust support for recursive list processing, allowing routines to traverse and modify nested hierarchical structures such as algebraic expressions or problem representations. It incorporated pattern matching mechanisms for symbol comparison and location within lists, along with substitution operations to insert, replace, or transform elements dynamically. Additionally, IPL featured automatic garbage collection to reclaim unused memory, managed through dedicated processes that erased cells and lists while maintaining available space lists, ensuring efficient operation in resource-constrained environments. These capabilities made IPL particularly adept at handling the symbolic manipulations required for simulating cognitive processes. In the General Problem Solver (GPS), IPL played a foundational role by enabling the representation of problem states as nested lists and defining operators as functions that transformed these structures. This allowed GPS to model diverse problems through symbolic encoding, facilitating the heuristic search central to its operation. The IPL-V variant, refined by 1958, was the specific version used to implement GPS, providing the interpretive framework for its core routines. 
IPL-V exhibited no direct hardware dependencies, operating as an interpretive system adaptable to various platforms, including the IBM 704 and subsequent machines like the IBM 709 and Burroughs 220. Its innovations in list processing and recursion profoundly influenced later languages, notably LISP, which adopted IPL's approach to treating lists as both data and code structures. This legacy underscored IPL's contribution to symbolic programming paradigms in AI.

Core Program Components

The General Problem Solver (GPS) program was structured around several key modules that facilitated its operation as a symbol-manipulation system implemented in IPL-V. The primary modules included the problem definer, which initialized the problem by setting up the initial goal and context, such as converting external goals into internal representations using routines like E22. The solver module, centered on the recursive execution routine R10, managed the core problem-solving process by interpreting and applying methods through subroutines like R11, which handled method segments for transforming objects and reducing differences. Complementing these, the tracer module, part of the experimenter routines (E-routines), monitored and recorded the program's behavior, generating detailed traces of goal formations, operator applications, and subgoal interactions for analysis. Central to GPS's functionality were its data structures, which represented the problem environment in a list-processing format. Object lists maintained the current and derived states, such as derivation lists tracking expressions, alongside goal attributes (e.g., G1-G5 for goal properties like type and results). Operator tables, stored in locations like Y51, enumerated the available operators specific to the task environment, such as the 12 rules for logical transformations. Difference tables, implemented in Y52, linked identified differences between states to relevant operators, forming a connection table that guided operator selection (e.g., associating "change connective" differences with specific rules). The program's control flow operated through a centralized loop managed by R10, utilizing a signal system (S-symbols in Y1) to evaluate states, select operators, and queue subgoals recursively. This involved prioritizing differences, applying operators via Q-routines as unit actions, and propagating results back through the goal hierarchy, with mechanisms to limit depth and reject infeasible paths.
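The connection table linking differences to operators can be pictured as a simple lookup structure. The difference names and rule labels below are invented stand-ins for illustration, not the actual contents of the Y52 table.

```python
# Hypothetical connection table: each difference type maps to the
# transformation rules deemed relevant to reducing it.
CONNECTION_TABLE = {
    "change-connective": ["rule-5", "rule-6"],
    "add-term":          ["rule-9"],
    "delete-term":       ["rule-7", "rule-8"],
    "change-sign":       ["rule-1", "rule-2"],
}

def relevant_operators(difference):
    """Return the rules the solver would consider for a detected difference."""
    return CONNECTION_TABLE.get(difference, [])   # unknown difference: no rules

print(relevant_operators("change-connective"))    # ['rule-5', 'rule-6']
```

The table-driven lookup is what lets operator selection proceed without recomputing operator effects for every state transition.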
A dedicated protocol generator produced human-like reasoning traces by outputting step-by-step records of the program's decisions, such as goal-subgoal relationships and rule applications, which could be validated against psychological protocols from human subjects. These traces, generated via routines like E24 and E26 for printing goals and objects, facilitated comparison between machine and human problem-solving behaviors. In terms of hardware context, GPS ran in batch mode on early computers like the IBM 7090, occupying approximately 4,700 words in core memory for its essential components, within a total system requiring up to 32,576 words including IPL-V and task environments.

Functionality and Problem-Solving Process

Heuristic Search Mechanism

The heuristic search mechanism in the General Problem Solver (GPS) utilizes a depth-first search strategy to traverse the problem space, modeled as a directed graph from the initial state to the goal state via successive operator applications. This approach incorporates heuristics to prune unproductive paths, focusing exploration on promising branches rather than exhaustive enumeration, thereby simulating human-like efficiency in navigating complex state spaces. Backtracking is employed when a dead-end is reached, allowing the program to retreat to prior states and explore alternative operators, with predefined limits on search depth to prevent computational explosion in branching structures. Operator selection relies on means-ends analysis, a core heuristic that evaluates differences between the current state and the goal, then chooses operators based on relevance scores indicating their potential to minimize those differences. These scores are computed via table lookups: a difference table identifies key discrepancies, while an operator table matches them to applicable rules, prioritizing selections that address multiple differences simultaneously or generate subgoals during the search process. This table-driven method enables rapid heuristic evaluation without recomputing effects for each state transition. On 1950s hardware like the IBM 704, GPS demonstrated practical performance by solving small problems—such as logic puzzles or planning tasks with limited states—in several minutes, underscoring its viability for modest-scale challenges. However, it scaled poorly to larger problems, where the combinatorial growth of the state space led to prohibitive computation times, often exceeding hours or becoming infeasible without further optimizations.
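A depth-limited, backtracking search of the kind described above can be sketched as follows. The successor function, the toy numeric domain, and the depth bound are illustrative assumptions for this sketch, not GPS's actual mechanism.

```python
def depth_limited_search(state, is_goal, successors, limit, path=None, seen=None):
    """Depth-first search with a depth cap; returns an operator path or None."""
    path = [] if path is None else path
    seen = {state} if seen is None else seen
    if is_goal(state):
        return path
    if limit == 0:
        return None                       # depth cap reached: backtrack
    for op, nxt in successors(state):
        if nxt in seen:
            continue                      # prune states already on this path
        found = depth_limited_search(nxt, is_goal, successors,
                                     limit - 1, path + [op], seen | {nxt})
        if found is not None:
            return found
    return None                           # dead end: retreat to the caller

# Toy domain: reach 10 from 1 by doubling or incrementing.
succ = lambda n: [("double", n * 2), ("inc", n + 1)]
plan = depth_limited_search(1, lambda n: n == 10, succ, limit=5)
print(plan)   # ['double', 'double', 'double', 'inc', 'inc']
```

Note that the depth bound keeps the search finite but sacrifices optimality: here depth-first finds a five-step plan even though a four-step one exists, echoing the trade-off the heuristics imposed on GPS.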

Subgoal Formation and Reduction

In the General Problem Solver (GPS), subgoal formation occurs as part of its means-ends analysis strategy when no operator can be directly applied to the current problem state to reach the goal. Specifically, GPS identifies differences between the current object and the desired object, then creates a subgoal to reduce one of these differences by achieving the preconditions necessary for a relevant operator. This process follows the principle of subgoal reduction, which states: "Make progress by substituting for the achievement of a goal the achievement of a set of easier goals." The reduction steps begin with selecting a relevant operator that could bridge the identified difference, followed by forming a new subgoal to satisfy its preconditions if they are not met. GPS then recursively applies the same process to this subgoal, breaking it down further until primitive actions—operators directly applicable without additional subgoals—are reached. Solutions are composed bottom-up: once a subgoal is solved, its resulting object is substituted back into the parent goal, potentially enabling the next operator application. Termination of the subgoal process happens in base cases where the difference between objects is zero (indicating the goal is achieved) or where an operator becomes directly applicable without further decomposition. If no progress can be made—such as when no relevant operator reduces the difference—the subgoal is rejected, and GPS backtracks to explore alternatives. Subgoals are integrated into the broader search mechanism by being enqueued for processing, with their solutions substituted into the main problem once resolved, allowing GPS to build a hierarchical solution path. Heuristic guidance prioritizes which differences to target first during this integration. 
This approach draws an analogy to human cognitive planning, where complex tasks are decomposed into simpler sub-tasks, much like reasoning through intermediate steps in everyday problem-solving, such as arranging transportation to reduce distance in a travel scenario.

Examples and Demonstrations

Tower of Hanoi Solution

The Tower of Hanoi puzzle features three pegs and a set of disks of graduated sizes stacked in decreasing order on the source peg, with the goal of transferring the entire stack to the target peg while ensuring that no disk is ever placed atop a smaller one, and only one disk may be moved at a time. In the General Problem Solver (GPS), this puzzle is formulated within a problem space where states represent specific configurations of disk positions across the pegs, encoded using tree structures in the Information Processing Language (IPL) to denote pegs as nodes and disks as branches. Operators consist of legal single-disk moves between pegs that respect the size constraint, while differences are computed as discrepancies in disk locations between the current state and the goal state, such as the absence of disks from the target peg. GPS applies means-ends analysis to select operators that reduce these differences, prioritizing those with the greatest impact on subgoal achievement. The solution process begins with the initial state, where all disks reside on the source peg (labeled A), and the goal state requires them all on the target peg (C), with the auxiliary peg (B) available for temporary storage. The primary difference identified is the stack's location on A rather than C, prompting GPS to form a subgoal: move the largest disk (disk n) from A to C. To achieve this, GPS recognizes that the n-1 smaller disks must first be cleared from atop it, leading to a recursive subgoal of moving those n-1 disks from A to B. This recursion continues downward: for n-1 disks, GPS subgoals moving the (n-1)th largest to B by first relocating the top n-2 to C, and so forth, until reaching the base case of a single disk, which can be moved directly. Once the largest disk is on C, GPS recursively moves the n-1 disks from B to C atop it, inverting the auxiliary role. 
For a 3-disk instance, this yields the optimal 7-move sequence: disk 1 from A to C, disk 2 from A to B, disk 1 from C to B (completing the first recursive move of 2 disks to B); disk 3 from A to C; disk 1 from B to A, disk 2 from B to C, disk 1 from A to C (completing the second recursive move). GPS successfully solved the 3-disk instance in the minimal 7 moves, showcasing its ability to generate efficient solutions through recursive subgoal formation without backtracking errors. For the 4-disk version, comprising 15 moves across a state space of 81 configurations, GPS also produced the optimal path flawlessly, as the differences and operator ordering aligned ideally with means-ends analysis. This application underscored GPS's prowess in tackling recursive puzzles via hierarchical subgoal formation, effectively navigating structured problem spaces; however, for instances beyond 4 disks, the exponential growth in subgoals and search depth caused significant slowdowns, limiting practical scalability on 1950s hardware.
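The recursive subgoal decomposition described above corresponds to the classic recursive formulation of the puzzle, sketched here; the function name and move format are illustrative.

```python
def hanoi(n, source, target, spare):
    """Return the move list (disk, from_peg, to_peg) for n disks."""
    if n == 0:
        return []                                   # base case: nothing to move
    return (hanoi(n - 1, source, spare, target)     # subgoal: clear n-1 disks
            + [(n, source, target)]                 # move the largest disk
            + hanoi(n - 1, spare, target, source))  # restack n-1 on top of it

moves = hanoi(3, "A", "C", "B")
print(len(moves))        # 7, the minimal count (2**3 - 1)
print(moves[0])          # (1, 'A', 'C'): disk 1 from A to C, as in the text
```

The generated sequence matches the 7-move solution enumerated above, with the move of disk 3 from A to C appearing exactly in the middle.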

Theorem Proving Applications

The General Problem Solver (GPS) applied means-ends analysis to theorem proving in formal logic by representing inputs as sets of Horn clauses, formulating goals as target theorems to be proven, and defining operators as logical inference rules, including modus ponens for deriving conclusions from implications and premises. In this framework, the initial problem state comprised axioms and given premises in clausal form, while differences between current states and the goal were identified through pattern matching to select applicable rules, enabling step-by-step derivations toward the theorem. This approach drew directly from propositional sentential calculus, as in Whitehead and Russell's system, where transformations reduced differences such as mismatched propositions or connective structures. For geometry theorem proving, GPS conceptualized Euclidean proofs with states defined by axioms, postulates, and geometric figures (e.g., points, lines, and circles), while differences represented unproven steps or missing relations between elements. Operators corresponded to Euclidean transformations and theorems, such as congruence or similarity rules, applied to bridge these differences and construct proofs. The process involved forming subgoals for intermediate lemmas—breaking complex proofs into simpler verifiable steps—and conducting heuristic search through hypothesis spaces of possible derivations, often using auxiliary representations like diagrams to guide operator selection. This mirrored the broader problem space representation, where geometric configurations served as objects manipulated toward the goal theorem statement. Building on the earlier Logic Theorist program, which proved theorems from Principia Mathematica, GPS demonstrated its capacity for formal deduction in symbolic logic and extended these heuristic search techniques to handle a wider range of structured problems. 
However, these applications required hand-crafted sets of domain-specific operators and match routines, limiting scalability without extensive human intervention. Additionally, GPS struggled with complex logics beyond Horn clause forms, where the explosion of possible derivations in non-clausal predicate systems rendered the search inefficient and often intractable.
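The modus ponens-driven style of derivation over Horn clauses described above can be sketched as a small forward chainer; the clause format and rule set are invented for illustration and do not reflect GPS's match routines.

```python
def forward_chain(facts, rules, goal):
    """Derive facts via modus ponens; report whether goal becomes provable.

    Each rule is (body, head): if every proposition in body is proven,
    the head may be added -- the Horn-clause form of modus ponens.
    """
    derived = set(facts)
    changed = True
    while changed and goal not in derived:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived:
                derived.add(head)        # body fully proven: conclude head
                changed = True
    return goal in derived

rules = [({"p"}, "q"),           # p -> q
         ({"q", "r"}, "s"),      # q & r -> s
         ({"s"}, "t")]           # s -> t
print(forward_chain({"p", "r"}, rules, "t"))   # True: p,r |- q, s, t
print(forward_chain({"p"}, rules, "t"))        # False: s is never derivable
```

Unlike this exhaustive saturation, GPS's means-ends analysis worked backward from the goal theorem, using differences to pick which rule to try next.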

Limitations and Criticisms

Computational Challenges

The General Problem Solver (GPS) encountered profound computational challenges stemming from the combinatorial explosion in its search process, where the branching factor—determined by the number of available operators—expanded rapidly, resulting in an exponential growth in the number of states to evaluate. For modest problems, this could generate millions of states; in more complex scenarios like chess endgames, a branching factor of approximately 30 over a depth of 40 would require exploring around 10^{59} potential objects, far beyond feasible computation. These issues were compounded by the hardware limitations of 1950s computing systems on which GPS was implemented, such as the IBM 704, which offered a maximum memory of 32,768 36-bit words (roughly 144 KB). This constrained the program's ability to maintain large search trees in core memory, forcing reliance on slower magnetic drum or tape storage for swapping, which dramatically increased runtime and made even moderate problems resource-intensive. Empirical demonstrations underscored these bottlenecks: GPS efficiently resolved the 4-disk Tower of Hanoi puzzle in seconds by generating a compact search tree of about 81 nodes, but scaling to the 5-disk version, requiring 243 nodes and 31 minimal moves, consumed hours due to the escalating state space. Larger or real-world tasks proved infeasible, as the program's exhaustive heuristic exploration quickly overwhelmed available resources. To mitigate the explosion, GPS employed means-ends analysis heuristics, which prioritized operators reducing the largest differences between current and goal states, thereby pruning unpromising branches and focusing on relevant subgoals. However, these techniques merely delayed rather than resolved the issue, as GPS included no adaptive learning from failed searches to refine future performance. 
In contemporary terms, GPS's strategy resembles basic heuristic search for NP-hard problems without informed guidance, contrasting with later optimizations like A* search, which integrates cost estimates to minimize explored nodes efficiently.
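The cited figure can be checked directly: a branching factor of 30 sustained over a depth of 40 yields on the order of 10^59 states.

```python
import math

branching, depth = 30, 40
leaves = branching ** depth              # size of the full search tree's frontier
print(math.floor(math.log10(leaves)))    # 59: i.e. roughly 10^59 states
```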

Theoretical and Practical Constraints

The General Problem Solver (GPS) operates under the theoretical assumption of a complete and consistent body of knowledge about the problem space, including well-defined goals, initial states, and operators for transformation, rendering it brittle in the face of incomplete or noisy data. This design philosophy presumes that all necessary information is explicitly available and error-free, with no built-in mechanisms to address uncertainty, such as probabilistic reasoning or handling ambiguous inputs, which limits its applicability to idealized scenarios rather than real-world contexts where information is often partial or unreliable. Newell and Simon themselves acknowledged these constraints in their comprehensive analysis, noting that the theory underlying GPS remains "both tentative and incomplete" in accounting for how problem spaces and strategies are formed in human cognition. Practically, GPS demands extensive human intervention for setup, including the manual definition of operators and difference tables by domain experts to specify how differences between current and goal states can be reduced, making it non-adaptive to novel domains without significant reprogramming. This reliance on predefined, task-specific rules confines the system to toy problems like theorem proving or puzzles, where the problem space can be fully enumerated, and prevents seamless extension to broader applications requiring dynamic adjustment. Consequently, GPS fails to integrate with perceptual processes for real-time data acquisition or learning mechanisms for refining knowledge over time, isolating it from more holistic cognitive simulations. From a psychological perspective, GPS oversimplifies human problem-solving by enforcing a universal means-ends analysis strategy, ignoring diverse human approaches such as intuition, analogy, or parallel processing, and thus mismatches empirical observations of cognition. 
Critics, including Marvin Minsky, highlighted the lack of modularity in such early systems, arguing that intelligence requires decomposable components rather than a monolithic solver, a point echoed in broader assessments of symbolic AI's rigidity. Newell and Simon's 1972 reflections further conceded these gaps, emphasizing that no single mechanism suffices across domains and that GPS's heuristic search does not capture the origins of strategic diversity in human behavior.

Legacy and Influence

Impact on Artificial Intelligence

The General Problem Solver (GPS) marked a paradigm shift in artificial intelligence by establishing symbolic AI as a dominant framework, in which problems are represented and solved through the manipulation of discrete symbols rather than numerical computation. Developed by Allen Newell, J. C. Shaw, and Herbert A. Simon, GPS introduced means-ends analysis, a heuristic method that identifies differences between current and goal states and applies operators to reduce them, allowing the system to operate independently of specific problem domains. This separation of general problem-solving strategies from domain-specific knowledge enabled more flexible architectures, profoundly influencing the design of knowledge-based systems that encode expertise as rules and facts for targeted applications.

GPS also served as a critical bridge between AI and cognitive science, providing empirical validation for computational models of the human mind through its simulation of heuristic reasoning under resource constraints. The program's approach aligned closely with Herbert Simon's theory of bounded rationality, which posits that decision-makers satisfice (select satisfactory rather than optimal solutions) because of limited information and cognitive capacity, as demonstrated by GPS's efficient search in complex spaces without exhaustive exploration. By modeling human-like problem solving as a procedural process, GPS underscored the viability of AI in replicating cognitive phenomena, fostering interdisciplinary research that viewed intelligence as an information-processing activity.

Among its milestones, GPS formed the conceptual foundation for the physical symbol system hypothesis articulated by Newell and Simon in 1976, which asserts that any system capable of general intelligent action must physically manipulate symbols; GPS exemplified this through its symbolic operations on problem representations.
The heuristic-programming tradition that GPS helped establish fed the proliferation of expert systems in the 1970s and 1980s, such as DENDRAL and MYCIN, which adopted rule-based heuristics to achieve domain-specific intelligence in fields like chemistry and medicine. Beyond these advancements, GPS promoted heuristic programming as a core technique in AI, emphasizing practical, approximate methods over exhaustive algorithms, and it remains a foundational reference in AI textbooks for illustrating early cognitive architectures. This enduring influence contributed significantly to Newell and Simon receiving the 1975 ACM A.M. Turing Award, which honored their pioneering work in establishing computer science as an empirical inquiry into symbols and search.

Evolution into Modern Systems

The Soar cognitive architecture, developed by Allen Newell and colleagues beginning in 1983, serves as a direct successor to the General Problem Solver (GPS), extending its problem-space hypothesis into a unified framework for general intelligence. Soar incorporates GPS-style problem solving through means-ends analysis and heuristic search within problem spaces, but adds chunking, a learning mechanism that compiles the resolutions of impasses into production rules for faster future performance. This allows Soar to handle multiple tasks and adapt across domains without domain-specific programming, addressing some of GPS's rigidity in operator application. Laird, Rosenbloom, and Newell formalized Soar as a production-system architecture capable of deliberate problem solving, learning, and perception integration, building explicitly on Newell's earlier work with GPS.

Related planning systems also built on GPS. STRIPS (Stanford Research Institute Problem Solver), introduced in 1971 by Richard Fikes and Nils Nilsson, adopted GPS's representation of problems as state transformations via operators but shifted toward automated planning in blocks-world domains, describing each action with precondition, add, and delete lists. STRIPS formalized GPS's means-ends analysis into a more structured backward-chaining approach, enabling efficient theorem-proving-based planning for robotics and scheduling tasks, though it retained GPS's reliance on explicit state descriptions.

Similarly, Prolog (from the French "programmation en logique"), developed in 1972 by Alain Colmerauer and colleagues, drew on the symbolic representation and resolution-based theorem-proving roots shared with GPS (evident in the Logic Theorist precursor) to create a declarative language for logic programming, in which problems are solved via unification and backtracking search over Horn clauses. This line of development carried GPS's influence on knowledge representation forward, allowing Prolog to automate deductive inference in natural language processing and expert systems.
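The precondition-add-delete action representation that STRIPS introduced can be sketched as follows. This is an assumed minimal rendering, not STRIPS's actual notation: the `Operator` class and the blocks-world predicate names are illustrative.

```python
# Sketch of a STRIPS-style operator (illustrative names throughout):
# a state is a set of ground facts; applying an operator removes its
# delete list and adds its add list, provided the preconditions hold.
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    name: str
    preconditions: frozenset
    add_list: frozenset
    delete_list: frozenset

    def applicable(self, state):
        return self.preconditions <= state

    def apply(self, state):
        # STRIPS semantics: remove the delete list, then add the add list.
        return (state - self.delete_list) | self.add_list

# Blocks-world example: pick block A up off the table.
pickup_a = Operator(
    name="pickup(A)",
    preconditions=frozenset({"clear(A)", "ontable(A)", "handempty"}),
    add_list=frozenset({"holding(A)"}),
    delete_list=frozenset({"clear(A)", "ontable(A)", "handempty"}),
)

state = frozenset({"clear(A)", "ontable(A)", "handempty"})
if pickup_a.applicable(state):
    state = pickup_a.apply(state)
print(sorted(state))  # ['holding(A)']
```

Representing states as plain sets of facts is what makes backward chaining over preconditions tractable, and it is also the explicit-state-description limitation the text notes STRIPS inherited from GPS.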
Echoes of GPS persist in modern automated theorem provers such as Coq, whose proof-search tactics guide proof construction in dependent type theory in a manner reminiscent of GPS's operator selection, though Coq focuses on interactive verification rather than fully automated solving. In game AI, GPS's heuristic search principles resurface in systems like AlphaGo: Monte Carlo Tree Search (MCTS) in AlphaGo's architecture extends GPS-style state-space exploration with value estimation heuristics to prune vast game trees, enabling superhuman performance in Go by 2016. These applications show how GPS's core idea of reducing differences between current and goal states via guided search has informed scalable search algorithms in combinatorial domains.

GPS concepts have also reemerged in the hybrid neuro-symbolic AI systems of the 2020s, where symbolic reasoning modules inspired by GPS's explicit problem spaces integrate with neural networks for end-to-end learning, as in frameworks that combine differentiable search with logical operators to improve interpretability and generalization in tasks like visual question answering. Critiques in the deep learning era, however, highlight GPS's lack of scalability for real-world complexity: its heuristic search still explodes combinatorially in large spaces without data-driven priors, contrasting with neural models' efficiency on large datasets while underscoring the need for hybrid approaches to recapture symbolic rigor. Despite these limitations, GPS ideas continue to inform neurosymbolic planning, where neural approximations guide symbolic subgoal formation.

Documentation of GPS, including original papers and program descriptions, is preserved in academic archives such as the Newell-Simon collection at Carnegie Mellon University.
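The backtracking search over Horn clauses at the core of Prolog, mentioned above, can be sketched in propositional form. This is a simplified assumption-laden sketch: real Prolog adds unification over terms, and the rules, facts, and atom names below are illustrative.

```python
# Propositional backward chaining with backtracking (a Prolog-flavored
# sketch; unification over terms is omitted). Atoms are strings and a
# rule is a (head, body) pair; recursive rule sets could loop forever,
# so this toy version assumes an acyclic knowledge base.

def prove(goals, rules, facts):
    """Return True if every goal follows from the facts via the Horn rules."""
    if not goals:
        return True
    first, rest = goals[0], goals[1:]
    if first in facts:
        return prove(rest, rules, facts)
    # Try each rule whose head matches; on failure, backtrack to the next.
    for head, body in rules:
        if head == first and prove(list(body) + rest, rules, facts):
            return True
    return False

# Tiny knowledge base that chains through two rules.
rules = [("mortal", ["human"]), ("human", ["greek"])]
facts = {"greek"}
print(prove(["mortal"], rules, facts))    # True
print(prove(["immortal"], rules, facts))  # False
```

Replacing a proven goal with the body of a matching rule is the resolution step; trying the next rule when a branch fails is the backtracking that the text credits to Prolog's search strategy.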

    The General Problem Solver (GPS) was an AI program proposed by Herbert Simon, J.C. Shaw, and Allen Newell. It was the first useful computer program that came ...