
Lisp (programming language)

Lisp is a family of high-level programming languages originally developed by John McCarthy in 1958 at the Massachusetts Institute of Technology (MIT) as an algebraic list-processing language for artificial intelligence research, making it the second-oldest programming language still in widespread use after Fortran. It features a distinctive syntax based on prefix notation and symbolic expressions (S-expressions), where programs are represented as data structures, enabling homoiconicity and powerful metaprogramming through macros. Key characteristics include support for recursion, lambda expressions, dynamic typing, garbage collection, and the ability to treat code as manipulable data via an eval function, all of which were pioneered in its early implementations on machines like the IBM 704. The language's development began with ideas from the 1956 Dartmouth Summer Research Project on Artificial Intelligence, leading to the first implementation in 1959 and the influential Lisp 1.5 dialect by 1962, which introduced many foundational concepts, such as list structures and automatic storage management. Over the decades, Lisp evolved into diverse dialects, including Maclisp in the 1970s for enhanced performance and error handling, Interlisp with advanced iteration constructs, and the standardized Common Lisp in 1984, which unified features for portability across systems. Lisp machines, developed in the 1970s and commercialized by companies like Symbolics and Lisp Machines, Inc. (LMI) in the 1980s, optimized hardware for Lisp execution, further solidifying its role in AI. Lisp's influence extends profoundly to artificial intelligence, where it became the dominant language for symbolic computation and expert systems due to its facilities for list manipulation and symbolic processing. It also shaped modern programming paradigms, inspiring functional languages like ML and Haskell, as well as concepts in garbage collection, higher-order functions, and domain-specific languages used in languages such as Python and Ruby.
Despite competition from more mainstream languages, contemporary dialects like Clojure and Racket continue to support interactive development, extensible syntax, and applications in AI, computer algebra, and rapid development environments.

History

Origins in the 1950s

Lisp originated in 1958 when John McCarthy, an assistant professor at the Massachusetts Institute of Technology (MIT), developed it as a programming language to formalize elements of symbolic computation for artificial intelligence (AI) problem-solving. McCarthy's work was part of MIT's Artificial Intelligence Project, aimed at creating tools for symbolic computation that could support early AI experiments, such as the proposed Advice Taker system for processing declarative and imperative sentences with common-sense reasoning. The key motivations for Lisp stemmed from the need to simulate human reasoning in software, particularly for the symbolic-processing tasks central to AI. This design was inspired by the Information Processing Language (IPL), developed earlier for the 1956 Dartmouth summer project, though McCarthy sought a more general and machine-independent approach than IPL's hardware-specific implementation on the JOHNNIAC computer. Lisp emphasized computation with symbolic expressions rather than numbers, using linked lists as the primary data structure to represent complex hierarchies of information efficiently. The initial implementation of Lisp occurred on the IBM 704 computer, where McCarthy and his team hand-compiled functions into assembly language, relying entirely on recursive functions for control flow without support for loops or arrays. This recursive paradigm allowed elegant definitions of algorithms, such as computing factorials or greatest common divisors, directly mirroring mathematical definitions while leveraging list structures for data representation. McCarthy detailed these concepts in his seminal paper, "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I," which formally defined Lisp's syntax and evaluation model. One early challenge in this implementation was managing dynamic memory allocation for the growing list structures, leading McCarthy's group to invent garbage collection in 1959. This automatic process reclaimed unused storage cells when the free-storage list was exhausted, preventing memory-exhaustion errors and enabling Lisp's flexible handling of symbolic data without explicit deallocation.
The garbage collector, though initially slow—taking several seconds per run—reclaimed thousands of storage registers at a time on the IBM 704.

Early Development and AI Connections

Following the implementation of Lisp 1.5, significant expansion occurred at the MIT Artificial Intelligence Laboratory in the mid-1960s, where Maclisp was developed starting in 1966 as the primary dialect for the PDP-6 and PDP-10 systems. This dialect addressed the needs of AI research by providing enhanced support for interactive programming and symbolic computation on these DEC minicomputers, which became central to the lab's work. A key milestone preceding this was the first complete Lisp compiler, written in Lisp itself by Timothy Hart and Michael Levin at MIT in 1962, which enabled more efficient execution and self-hosting of the language. Lisp's interactive read-eval-print loop (REPL), demonstrated in a time-sharing environment on the IBM 704 in 1960 or 1961, profoundly influenced the development of interrupt-driven, interactive time-sharing systems. This setup allowed real-time input processing via interrupts, highlighting Lisp's suitability for exploratory development and inspiring broader adoption of interactive computing paradigms. A prominent application was SHRDLU, a natural-language understanding system created by Terry Winograd in 1970 at MIT, which used Lisp to enable a virtual robot to understand and execute commands in a blocks world, demonstrating early successes in language comprehension and symbolic manipulation. Early variants emerged to support networked research efforts, including BBN Lisp in 1967, developed by Bolt, Beranek and Newman on the SDS 940 under an ARPA contract to distribute Lisp systems for research across sites. That same year, this system (initially 940 Lisp, later renamed Interlisp) introduced advanced debugging tools such as tracing function calls, pretty-printing code, and breakpoints for inspecting parameters, facilitating complex program development. In the 1970s, garbage collection saw refinements like incremental reference counting in Xerox Lisp machines and spaghetti stacks in BBN Lisp/Interlisp, which optimized memory management for large-scale symbolic processing without full collection pauses.
The concept of dedicated Lisp machines was also conceived in the early 1970s at MIT, with initial prototypes by 1974 incorporating specialized hardware support for list processing and garbage collection to boost efficiency. During the 1970s, Lisp implementations focused on efficiency for theorem proving, exemplified by the Pure Lisp Theorem Prover (PLTP) developed by Robert Boyer and J Strother Moore from 1971 to 1973, which automated proofs about recursive functions using induction, completing proofs in seconds to minutes on contemporary hardware. This work emphasized structural sharing and heuristic simplification to handle theorems like list reversal and concatenation, establishing Lisp's role in scalable automated reasoning for program verification.

Dialect Evolution Through the 20th Century

In the 1980s, the Lisp ecosystem experienced significant fragmentation as various dialects proliferated to meet specialized needs in research and development. Major implementations included ZetaLisp, developed for Symbolics Lisp machines, which extended earlier Lisp Machine Lisp with features like the SETF macro for generalized assignment, the Flavors object system, and enhanced support for complex data structures such as multi-dimensional arrays. Lisp Machines, Inc. (LMI), founded in 1979 to commercialize MIT's CADR design, pursued a dialect based on Maclisp, emphasizing portability and efficiency for AI applications on their machines. Meanwhile, Scheme, originally conceived in 1975 by Gerald Sussman and Guy Steele for its minimalist design and lexical scoping, gained traction in the 1980s as a lightweight alternative focused on research and teaching, distinguishing itself from more feature-rich dialects through its emphasis on recursion and first-class continuations. This divergence, driven by hardware-specific optimizations and institutional preferences, prompted concerns from funding agencies like DARPA about fragmentation, setting the stage for unification efforts. Standardization initiatives emerged to address this fragmentation, beginning with Common Lisp in 1981 when representatives from MIT, Carnegie Mellon University (CMU), and Lisp machine vendors convened to design a unified dialect balancing expressiveness and portability. The first draft, known as the "Swiss Cheese Edition," appeared in summer 1981, followed by a more complete version by early 1983, culminating in Guy Steele's Common Lisp: The Language (CLtL1) in 1984, which defined core semantics including dynamic typing, packages, and condition handling. The ANSI X3J13 committee, formed in 1986, refined these through iterative drafts, incorporating the Common Lisp Object System (CLOS) in 1988 and finalizing the ANSI X3.226-1994 standard in 1994 after extensive community input.
For Scheme, the Revised Fourth Report (R4RS), completed in spring 1988 and published in 1991, formalized portable features like hygienic macros and a core numeric tower, serving as the basis for the IEEE P1178 standard ratified that year. The Revised Fifth Report (R5RS), released in 1998, extended this with multiple return values, dynamic-wind for exception safety, and refined exception handling, promoting wider adoption in educational and embedded contexts. The late 1980s marked the decline of dedicated Lisp hardware, as the rise of affordable personal computers like those from Apple and IBM eroded the market for specialized machines. By 1987, general-purpose hardware had advanced sufficiently under Moore's law to run Lisp interpreters and compilers effectively, rendering expensive Lisp machines obsolete for most applications and contributing to the second AI winter. Companies such as Symbolics and LMI faced bankruptcy or pivoted away from hardware, with production ceasing by the early 1990s. Amid these shifts, key events underscored the field's maturation: the Lisp and Symbolic Computation journal launched its first issue in 1988, providing a dedicated forum for research on dialects, macros, and symbolic processing. In the 1990s, open-source efforts gained momentum, exemplified by CMU Common Lisp (CMUCL), which originated as Spice Lisp in 1980 at Carnegie Mellon University for the SPICE multiprocessor project but was renamed in 1985 after being ported to Unix workstations. CMUCL's compiler, rewritten in 1985 as the Python system (unrelated to the Python language), supported multiple architectures and influenced ANSI conformance; it evolved into Steel Bank Common Lisp (SBCL) via a 1999 fork, emphasizing maintainability and native compilation for broader accessibility.

Developments from 2000 to 2025

In the early 2000s, Common Lisp experienced a revival through the development of Steel Bank Common Lisp (SBCL), a high-performance open-source implementation forked from CMU Common Lisp in 1999 and achieving significant stability by 2002 with enhanced native compilation and cross-platform support. This implementation emphasized optimizing for modern hardware, including advanced garbage collection and type inference, making it suitable for production systems. Complementing SBCL, Quicklisp emerged in 2010 as a centralized library manager, simplifying library installation and dependency resolution across environments by hosting over 1,500 libraries in a single repository. Parallel to Common Lisp's resurgence, new dialects gained prominence for specific domains. Clojure, released in 2007 by Rich Hickey, integrated Lisp principles with the Java Virtual Machine (JVM), prioritizing immutable data structures and software transactional memory to address concurrency challenges in multi-threaded applications. Similarly, Racket, renamed from PLT Scheme in 2010, evolved in the 2010s as a versatile platform for education and scripting, offering modular language extensions and a robust ecosystem for teaching programming concepts through domain-specific languages. Lisp's historical ties to artificial intelligence persisted into the 21st century, building on its use in early machine-learning research, such as Yann LeCun's 1989 implementation of backpropagation in Lisp for convolutional networks. In the 2020s, this legacy informed discussions on integrating Lisp with modern AI tools; for instance, the European Lisp Symposium (ELS) 2025 featured talks on leveraging Common Lisp for large language models (LLMs) and processing large data volumes with flexible abstractions. These presentations highlighted Lisp's symbolic-manipulation strengths in hybrid AI systems combining neural and symbolic approaches.
The Lisp community sustained momentum through recurring events like the International Lisp Conference (ILC), which from 2000 onward facilitated advancements in language implementations and applications, with editions in 2002 and 2005 focusing on practical deployments. By 2025, symposium discussions emphasized Lisp's long-term stability for long-lived projects, citing its unchanged ANSI standard since 1994 and mature implementations as advantages for maintaining codebases over decades in evolving AI landscapes. Recent tooling advancements further bolstered Lisp's utility. Roswell, a Common Lisp environment manager, saw updates through 2023–2025, including improved implementation switching and script distribution features via its GitHub repository, aiding developers in reproducible setups. Additionally, integrations with WebAssembly enabled web deployment; by 2025, projects like Web Embeddable Common Lisp allowed running Common Lisp code natively in browsers through compiled modules, supporting interactive applications without JVM dependencies. Lisp found niche growth in specialized areas. In game development, the GOAL dialect—originally created by Naughty Dog in the late 1990s using Allegro Common Lisp—received modern extensions through the open-goal project, porting games like Jak and Daxter to PC while preserving Lisp's high-level expressiveness for low-level engine scripting. For embedded systems, Lisp variants like MakerLisp (2019) and LambLisp (2025) targeted real-time control, offering lightweight interpreters embeddable in C++ environments for edge devices and microcontrollers.

Dialects and Implementations

Major Historical Dialects

Maclisp, developed in the mid-1960s at MIT's Project MAC for the DEC PDP-6 and PDP-10 computers, became a foundational dialect for AI research due to its extensions supporting advanced symbolic computation. It introduced arbitrary-precision integer arithmetic and syntax extensibility through parse tables and macros, enabling flexible language customization. For practical applications, Maclisp incorporated streams for input/output operations, which facilitated portable data handling across systems, and a 1973 compiler that generated efficient numerical code comparable to FORTRAN, driven by needs in projects like Macsyma. These features, including support for embedded languages such as PLANNER and CONNIVER, made Maclisp central to early AI development through the 1960s and 1970s. Interlisp, originating in 1966 at Bolt, Beranek and Newman and evolving through the 1970s and 1980s, emphasized interactive programming environments tailored for experimentation. Its key innovation was the DWIM (Do What I Mean) facility, an error-correction tool that analyzed and fixed common user mistakes during interactive sessions by inferring intended actions, significantly enhancing productivity in exploratory programming. Interlisp provided comprehensive interactive tools, such as a structure editor and the BREAK package, allowing programmers to manipulate code as symbolic expressions and enabling programs to analyze or modify other programs dynamically. These capabilities, pioneered under Warren Teitelman's influence, positioned Interlisp as a leader in user-friendly programming environments until the 1980s. ZetaLisp, developed in the late 1970s for Symbolics Lisp machines, optimized Lisp for high-performance AI and CAD applications on dedicated hardware. It featured incremental compilation, automatic storage management, and hardware-supported type checking and garbage collection, enabling efficient interactive development. A standout contribution was the Flavors object system, an early message-passing framework with multiple inheritance and generic functions, which allowed non-hierarchical object structures and influenced subsequent Lisp object-oriented designs.
Implemented on machines like the Symbolics 3600 and 3670 from the late 1970s through the 1980s, ZetaLisp integrated with the Genera environment for advanced windowing and debugging, solidifying its role in professional Lisp programming. T, a dialect of Scheme developed in the early 1980s at Yale but rooted in MIT research, prioritized lexical scoping and continuations for expressive functional programming. It implemented full first-class continuations through optimized tail recursion and a CATCH construct for non-local exits, allowing programmers to capture and manipulate control contexts dynamically. Building on Steele and Sussman's prototypes, T tested efficient Scheme implementation on conventional architectures like the VAX and MC68000, with portable interpreters and compilers that maintained compatibility between interpreted and compiled code. This focus on continuations highlighted T's influence on minimalistic, continuation-passing dialects in the late-1980s transition toward modern Lisp variants. The proliferation of dialects like Maclisp, Interlisp, ZetaLisp, and T in the 1960s through 1980s, each optimized for specific platforms such as the PDP-10 or Lisp machines, resulted in incompatible features and syntax, severely limiting code portability across systems. Hardware and operating-system dependencies, like the PDP-10's 36-bit words or TENEX OS specifics, compounded fragmentation, making it challenging for researchers to share programs. These portability issues motivated the formation of the Common Lisp effort in the early 1980s, which unified key elements from these dialects into a standardized, portable dialect to resolve divergences and support broader adoption.

Standardized Dialects

The ANSI Common Lisp standard, formally known as ANSI INCITS 226-1994 (reaffirmed in 1999), defines a comprehensive dialect of Lisp designed to promote portability of programs across diverse systems. This nearly 1,100-page specification outlines over 1,000 functions, macros, and variables, providing a robust foundation for general-purpose programming. Key elements include the condition system, which enables advanced error handling through signaling conditions and establishing handlers without immediate stack unwinding, allowing for restarts and interactive debugging. Additionally, it incorporates the Common Lisp Object System (CLOS), an integrated object-oriented framework supporting multiple inheritance, multimethods, and dynamic redefinition. Compliance with the ANSI Common Lisp standard is verified through test suites such as the RT regression-testing library, originally developed at MIT and extended for the GNU Common Lisp (GCL) ANSI test suite, which encompasses over 20,000 individual tests covering core language features and libraries. Prominent implementations adhering to this standard include Steel Bank Common Lisp (SBCL), a high-performance native-code compiler, and CLISP, which supports both interpreted and compiled execution modes. In contrast, Scheme's standardization emphasizes minimalism and elegance, with the Revised Fifth Report on Scheme (R5RS), published in 1998, defining a core language that is statically (lexically) scoped and requires proper tail-call optimization to support efficient iteration expressed through recursion without stack growth. This 60-page report focuses on a small set of primitives, first-class procedures, and continuations, making it ideal for teaching and exploratory programming while ensuring portability. The subsequent Revised Sixth Report on Scheme (R6RS), ratified in 2007, expands on R5RS by introducing a modular library system, Unicode support, exception handling, and enhanced data structures like bytevectors, while maintaining lexical scoping and tail-call requirements but adding phases for compile-time and run-time separation.
Scheme implementations compliant with these standards include GNU Guile, an extensible library for embedding Scheme in applications, and Chez Scheme, a high-performance compiler with full R6RS support and optimizations for production use. The primary distinction between these standards lies in their philosophical approaches: ANSI Common Lisp offers a feature-rich environment suited for large-scale systems development, with extensive built-in facilities like CLOS and the condition system, whereas R5RS and R6RS prioritize simplicity and a minimal core, facilitating easier implementation and use in educational contexts, though at the cost of requiring more external libraries for advanced functionality.
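The condition system's separation of signaling, handling, and restarting can be sketched in a few lines; the names parse-failure, parse-int, and parse-all below are illustrative, not part of the standard:

```lisp
;; Sketch of the ANSI condition system: a handler high on the stack
;; selects a restart established lower down, without unwinding first.
(define-condition parse-failure (error)
  ((text :initarg :text :reader failure-text)))

(defun parse-int (s)
  (restart-case
      (or (parse-integer s :junk-allowed t)
          (error 'parse-failure :text s))
    (use-value (v) v)))              ; restart: substitute a value

(defun parse-all (strings)
  (handler-bind ((parse-failure
                  (lambda (c)
                    (declare (ignore c))
                    (invoke-restart 'use-value 0))))  ; recover with 0
    (mapcar #'parse-int strings)))

(parse-all '("12" "oops" "7"))       ; => (12 0 7)
```

Because the handler runs before the stack unwinds, it can inspect the condition and choose among available restarts—the mechanism behind the interactive debugging the standard describes.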

Modern Dialects and Implementations

In the 21st century, Lisp dialects have adapted to modern environments, emphasizing interoperability with mainstream platforms, enhanced concurrency models, and performance optimizations. Clojure, first released in 2007 by Rich Hickey, is a functional Lisp dialect designed for the Java Virtual Machine (JVM), featuring persistent immutable data structures as core types to facilitate safe concurrent programming. It incorporates software transactional memory (STM) for handling concurrency, allowing atomic updates to shared state without traditional locks, and interoperates seamlessly with Java libraries through direct access to JVM classes and methods. Racket, evolving from PLT Scheme and renamed in 2010, serves as a multi-paradigm platform for language-oriented programming, supporting dialects tailored for domains like documentation and testing (via libraries such as Scribble and RackUnit) and graphics (through packages like pict and slideshow). Its ecosystem includes DrRacket, an integrated development environment that promotes educational use by providing interactive teaching languages and visualization tools for beginners. Racket's design facilitates the creation of domain-specific languages through its powerful macro system, making it suitable for both research and education. For Common Lisp, Steel Bank Common Lisp (SBCL), forked from CMU Common Lisp in 1999 and actively developed since the early 2000s, stands out as a high-performance implementation with an optimizing compiler that generates native code for multiple architectures. It employs advanced type inference to enable optimizations like unboxed arithmetic and precise garbage collection, often achieving speeds comparable to C in numerical computations. SBCL supports features such as native threads on major platforms and foreign function interfaces for C libraries, broadening its applicability in production systems. Recent advancements include the integration of Chez Scheme into Racket in 2019, where Racket was rebuilt atop Chez's efficient compiler to leverage its nanopass framework for faster execution and compilation times, passing the full Racket test suite while maintaining backward compatibility.
Additionally, Wisp, introduced in 2015 (SRFI-119) as an indentation-sensitive syntax for Scheme and other Lisps, transforms whitespace-based syntax into standard S-expressions, aiming to improve readability while preserving homoiconicity and macro expressiveness. Cross-platform portability has advanced through WebAssembly (Wasm) support in various Lisp implementations during the 2020s, enabling browser-based execution; for instance, projects like uLisp and Medley Interlisp have compiled to Wasm for embedded and web environments, allowing Lisp code to run efficiently in sandboxes without native plugins.

Core Language Features

Syntax: Symbolic Expressions and Lists

Lisp's syntax revolves around symbolic expressions, or S-expressions, which serve as the fundamental units for both data and code representation. An S-expression is defined recursively: it is either an atomic symbol—such as a string of capital letters and digits—or a compound expression formed by an ordered pair of two S-expressions, denoted as (e1 · e2), where e1 and e2 are themselves S-expressions. In practice, this structure uses parentheses to enclose lists, making expressions like (add 1 2) a typical form, where "add" is an atomic symbol and 1 and 2 are numeric atoms. At the core of Lisp's list structure is the cons cell, a primitive data constructor that builds linked lists by pairing a head element (the car) with a tail (the cdr), represented as cons[e1; e2] or (e1 · e2). Lists are chains of cons cells terminating in the empty list, denoted as NIL, an atomic symbol that represents both the empty list and false in logical contexts. For instance, the list (A B C) abbreviates (A · (B · (C · NIL))), forming a proper list that ends in NIL; improper lists, by contrast, terminate in a non-list value, such as (A · B), which is a cons cell rather than a true list. Lisp employs a prefix notation for its reader syntax, where operators precede their arguments without infix operators or precedence rules, ensuring uniform parsing of all expressions as nested lists. To treat an expression literally without evaluation, Lisp provides quote, originally expressed as (QUOTE E) to yield E unchanged, later abbreviated in many dialects as 'E. This mechanism allows direct manipulation of symbolic structures. A defining feature of Lisp's syntax is its homoiconicity, where the program itself is represented in the same form as data, enabling seamless programmatic inspection and transformation of program structure. For example, a definition like (LABEL F (LAMBDA (X) (CONS X X))) is an S-expression that can be processed as a list, with its elements accessible via standard list operations. This uniformity underpins Lisp's flexibility in symbolic computation, distinguishing it from languages with separate syntactic forms for code and data.
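These list primitives can be sketched in a short REPL-style session (results shown in comments):

```lisp
;; Lists are chains of cons cells ending in NIL.
(cons 'a (cons 'b (cons 'c nil))) ; => (A B C)
(car '(a b c))                    ; => A        the head
(cdr '(a b c))                    ; => (B C)    the tail
(cons 'a 'b)                      ; => (A . B)  an improper "dotted" pair
'(add 1 2)                        ; quote yields the list as data,
                                  ; not a call to ADD
```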

Semantics: Evaluation and Quoting

Lisp's interactive development is centered around the read-eval-print loop (REPL), a cycle that reads user input as S-expressions, evaluates them, and prints the results, enabling incremental development and experimentation. This loop, formalized in early Lisp systems like LISP 1.5, processes expressions sequentially until termination, with evaluation occurring in a specified environment. The core semantics, defined in the original design, operate via the eval function, which takes an expression and an environment—a mapping associating symbols with values—and returns the expression's value. Atoms, such as numbers or symbols, are handled directly: numbers yield themselves, while symbols are looked up in the environment to retrieve their bound values, or an error occurs if unbound. For lists (non-atomic S-expressions), eval first examines the operator (the first element); if it is a special form, special rules apply; otherwise, the remaining elements (arguments) are evaluated via evlis, and the operator is applied to those values using apply. Special forms like quote, lambda, and if (or its predecessor cond) bypass standard function application to enable conditional execution, function creation, and unevaluated structures. The quote form, (quote <datum>) or abbreviated '<datum>, returns its argument unevaluated as a literal S-expression, preventing recursive evaluation of lists or symbols. Lambda constructs a procedure from parameters and body without evaluating the body immediately, binding parameters upon application. If evaluates its predicate; if true, it evaluates and returns the consequent, skipping the alternate; if false, it evaluates the alternate or returns a default. Quasiquotation extends quoting for templating, introduced informally in the 1970s and standardized later, using backquote syntax `<template> to quote a structure while allowing unquoting with (, <expr>) to insert evaluated subexpressions and splicing with comma-at (,@ <expr>) to insert lists inline. For example, `(+ ,x ,@y) evaluates to a list like (+ 1 2 3) if x is 1 and y is (2 3), facilitating code generation in macros.
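A brief sketch of quoting, explicit evaluation, and quasiquotation (results shown in comments):

```lisp
(quote (+ 1 2))        ; => (+ 1 2)   the list itself, unevaluated
(eval '(+ 1 2))        ; => 3         explicit evaluation of that data

;; Quasiquotation: backquote templates with unquote (,) and splice (,@).
(let ((x 1)
      (y '(2 3)))
  `(+ ,x ,@y))         ; => (+ 1 2 3) x inserted, y spliced inline
```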
Lisp dialects vary in environment models: early systems and Common Lisp's special variables use dynamic scoping, where bindings are resolved at runtime based on the current call stack's association list, allowing outer bindings to affect inner functions unexpectedly. In contrast, Scheme employs lexical scoping, where variable bindings are determined by the static program structure, ensuring a function captures the environment in which it was defined, promoting predictable behavior. Common Lisp supports both, with lexical scoping as the default for non-special variables and dynamic scoping for declared specials.
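The difference can be sketched in Common Lisp, which supports both models; make-lex and read-dyn are illustrative names:

```lisp
;; Lexical (default): the closure sees the LET binding where it was created.
(defun make-lex ()
  (let ((x 10))
    (lambda () x)))
(funcall (make-lex))     ; => 10

;; Dynamic (special): the binding in force at CALL time wins.
(defvar *x* 10)          ; proclaims *x* special
(defun read-dyn () *x*)
(let ((*x* 99))
  (read-dyn))            ; => 99  rebound dynamically for this call
(read-dyn)               ; => 10  outer binding restored on exit
```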

Functions, Lambdas, and Closures

Lisp embodies a functional paradigm where functions are first-class objects, meaning they can be passed as arguments to other functions, returned as results, and assigned to variables, a design directly inspired by the lambda calculus as adapted in John McCarthy's foundational work. This approach allows for concise expression of computations through composition and abstraction, distinguishing Lisp from imperative languages of its era that treated functions as second-class entities. Anonymous functions in Lisp are defined using lambda expressions, which take the form (lambda (parameters) body), where parameters specifies the arguments and body contains the expressions to evaluate. This syntax, rooted in Church's lambda calculus and adapted by McCarthy for symbolic computation, enables the creation of functions without names, facilitating higher-order programming. For instance, a simple lambda to compute the square of a number is (lambda (x) (* x x)), which can be immediately applied or stored. Named functions, in contrast, are typically defined using the defun macro in dialects like Common Lisp, as (defun name (parameters) body), which expands to a lambda expression and establishes a global binding. Function application occurs via built-in operators such as funcall, which invokes a function with explicit arguments, or apply, which spreads a list of arguments to the function, supporting dynamic invocation essential for meta-programming. A key feature enabled by lambda expressions is the formation of closures, where a closure captures and retains the lexical environment in which it was defined, allowing access to non-local variables even after the defining scope has exited. In Common Lisp, which adopts lexical scoping influenced by Scheme, evaluating (function (lambda (x) ...)) or a lambda form produces such a closure, preserving bindings from the surrounding lexical environment.
This mechanism supports higher-order functions that generate customized closures, such as a counter: (let ((count 0)) (lambda () (incf count))), which maintains its internal state across invocations. Recursion serves as the primary control mechanism in Lisp for iterative processes, particularly in list manipulation, eschewing explicit loops in favor of self-referential function calls for elegance and alignment with mathematical definitions. For example, the classic factorial function is defined recursively as (defun factorial (n) (if (<= n 1) 1 (* n (factorial (- n 1))))). In Scheme, a standardized dialect, implementations must support proper tail recursion, ensuring that tail calls—where the recursive invocation is the last operation—do not consume additional stack space, enabling efficient unbounded recursion akin to iteration. Higher-order functions further exemplify Lisp's functional strengths, accepting other functions as arguments to process collections like lists. The mapcar function applies a given function to each element of a list (or multiple lists), returning a new list of results, as in (mapcar (lambda (x) (* x 2)) '(1 2 3)) yielding (2 4 6). Equivalents for filtering, such as remove-if-not, retain elements satisfying a predicate: (remove-if-not (lambda (x) (evenp x)) '(1 2 3 4)) produces (2 4). These utilities, integral to list processing, underscore Lisp's homoiconic nature where functions operate seamlessly on symbolic data structures.
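The counter idiom above can be sketched as a full closure-generating function; make-counter is an illustrative name:

```lisp
;; Each call to MAKE-COUNTER creates a fresh COUNT binding;
;; the returned closure keeps that binding alive.
(defun make-counter ()
  (let ((count 0))
    (lambda () (incf count))))

(let ((c1 (make-counter))
      (c2 (make-counter)))
  (funcall c1)     ; => 1
  (funcall c1)     ; => 2
  (funcall c2))    ; => 1  each closure holds independent state
```

Because the two closures capture distinct lexical environments, their counts never interfere—the property that makes closures a lightweight alternative to objects for encapsulating state.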

Control Structures and Macros

Lisp's control structures support conditional execution and iteration through macros that integrate seamlessly with its list-based syntax. The cond macro in Common Lisp evaluates a series of clauses, each consisting of a test form followed by zero or more consequence forms; it returns the results of the first successful clause's consequences or nil if none succeed. Scheme's cond operates analogously, treating an else clause specially if its test is the symbol else. For dispatching on discrete values, Common Lisp's case macro compares a key form against a list of clause keys using eql, executing the matching clause's body; the variants ccase and ecase signal errors if no match occurs. Scheme provides a similar case that uses eqv? for comparisons and defaults to an else clause. Iteration in Lisp emphasizes flexibility over rigid loops. Common Lisp's loop macro offers an extensible domain-specific language for complex iterations, supporting variable initialization, stepping, termination tests, accumulation (e.g., sum, collect), and nesting across collections like lists or numbers. For simpler imperative loops, do binds variables with initial values, executes a body until an end-test succeeds, and applies step forms after each iteration; do* evaluates bindings and steps sequentially rather than in parallel. Scheme's do mirrors this structure, initializing variables, testing for termination before body execution, and updating via step expressions, with the final value coming from a result expression upon exit. Lisp's macro system empowers users to create custom special forms that expand at compile time, effectively extending the language's syntax without runtime overhead. In Common Lisp, defmacro defines a macro as a lambda-like form that takes arguments and produces code to replace invocations, evaluated during compilation or interpretation. For example, the when macro implements conditional execution without an else branch:
lisp
(defmacro when (condition &body body)
  `(if ,condition (progn ,@body)))
This expands (when (> x 0) (print x) (incf x)) to (if (> x 0) (progn (print x) (incf x))), splicing the body forms into a progn so that multiple statements can be executed. Macros perform this transformation before evaluation, enabling optimizations and new abstractions like domain-specific control flows. Macro authoring relies on tools to build and manipulate code templates safely. Lisp's backquote (`) creates quasi-quoted lists that preserve structure, while comma (,) unquotes expressions for evaluation within the template, and ,@ splices lists; this combination simplifies generating code with dynamic parts. To prevent variable capture—where a macro's temporary variables shadow user-defined ones—gensym generates unique symbols (e.g., #:G1234) uninterned in any package, ensuring no clashes during expansion. For instance, a macro defining a local let-binding must use gensym for its internal variables to avoid capturing external bindings with the same name. In contrast, Scheme's define-syntax with syntax-rules produces hygienic macros that automatically rename bound identifiers to avoid unintended captures, preserving lexical scoping without manual intervention. This hygiene in Scheme contrasts with Common Lisp's non-hygienic approach, where explicit techniques like gensym are required for safety, though Scheme's system limits some low-level manipulations possible in Common Lisp.
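The capture problem that gensym prevents can be seen in a small sketch; swap! is a hypothetical macro written for illustration:

```lisp
;; A value-swapping macro. The temporary variable must be a fresh
;; gensym: a fixed name like TMP would be captured if the caller
;; also has a variable called TMP.
(defmacro swap! (a b)
  (let ((tmp (gensym "TMP")))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))

;; Works even when the caller's own variable is named tmp:
(let ((tmp 1) (other 2))
  (swap! tmp other)
  (list tmp other))  ; => (2 1)
```

Had the macro expanded into (let ((tmp tmp)) ...) with a literal tmp, the caller's binding would shadow the macro's temporary and the swap would silently misbehave.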

Advanced Paradigms

Object Systems in Lisp

Lisp's object-oriented capabilities evolved significantly from early experimental systems in the 1970s to standardized frameworks in the 1980s and beyond. Initial developments occurred within the MIT Lisp Machine environment, where ZetaLisp incorporated Flavors, an object system introduced in 1979 that supported multiple inheritance and message-passing mechanisms influenced by research tools like Planner and Conniver. Flavors, developed by Howard Cannon and David A. Moon, was integrated into programming environments for tasks such as window systems and emphasized non-hierarchical object structures. By the mid-1980s, as standardization efforts advanced under the X3J13 committee formed in 1986, these ideas merged with Xerox PARC's CommonLoops to form the Common Lisp Object System (CLOS), finalized in the ANSI standard in 1994. CLOS, developed primarily in the 1980s through collaborative efforts involving Gregor Kiczales, Daniel G. Bobrow, and others, provides a robust object-oriented extension to Common Lisp centered on classes, methods, and generic functions. Classes are defined using the defclass macro, which specifies slots—named units of instance state—with options for allocation (local to instances or shared across them) and automatic generation of accessor methods. Inheritance supports multiple superclasses, resolved via a class precedence list to handle conflicts, with all classes forming a hierarchy rooted at the universal superclass t. Generic functions serve as the core dispatch mechanism, allowing behavior to vary based on the classes of all arguments through multiple dispatch, rather than the single-argument dispatch typical in other systems. Methods, defined with defmethod, specialize on parameter classes or on individual objects (via eql specializers) and can employ qualifiers like :before, :after, and :around for method-combination strategies that compose behaviors flexibly, such as short-circuiting or wrapping primary methods. This design integrates seamlessly with Lisp's functional paradigm, treating objects, classes, generic functions, and methods as first-class entities that can be manipulated like any Lisp data.
Generic functions extend Common Lisp's procedural abstraction, enabling polymorphism across multiple arguments while preserving closures and higher-order functions; for instance, methods can capture lexical environments, blending object state with functional composition. CLOS also aligns with Lisp's type hierarchy, where built-in types like float (with subtypes such as single-float and double-float) coexist with user-defined classes, supporting runtime type selection without disrupting existing code. In contrast, Scheme dialects offer less standardized object support, relying on libraries or extensions rather than built-in systems. SRFI-9, a Scheme Request for Implementation finalized in 2000, introduces record types via define-record-type, which defines structured data with constructors, predicates, accessors, and optional modifiers, enabling object-like encapsulation without full inheritance or dynamic dispatch. These records have distinct identities separate from core Scheme types, serving as a foundation for ad-hoc object-oriented patterns in implementations like Guile or Racket, though more advanced systems often build atop them using closures for methods.
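A minimal CLOS sketch ties these pieces together; the account class and deposit generic function are illustrative names, not from any standard library:

```lisp
;; A class with one slot, an :initarg for construction, and an accessor.
(defclass account ()
  ((balance :initarg :balance :initform 0 :accessor balance)))

;; A generic function; behavior is selected by the class of ACCT.
(defgeneric deposit (acct amount)
  (:documentation "Add AMOUNT to the account's balance."))

(defmethod deposit ((acct account) amount)
  (incf (balance acct) amount))

;; An :after method composes with the primary method via method
;; combination; auditing or logging would typically go here.
(defmethod deposit :after ((acct account) amount)
  (declare (ignore amount))
  nil)

(let ((a (make-instance 'account :balance 100)))
  (deposit a 50)
  (balance a))  ; => 150
```

Adding a second method specializing deposit on another class would exercise the multiple-dispatch machinery described above without changing any existing code.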

Metaprogramming and Code as Data

Lisp's homoiconicity, a core feature originating from its design, enables the representation of programs as data structures—specifically, S-expressions—which are lists that serve as both source code and manipulable objects. This allows developers to inspect and modify the abstract syntax tree (AST) directly, as the AST is structurally identical to the list-based code representation. At runtime, functions like eval can execute dynamically generated lists as code, while at compile time, macros facilitate structural transformations of these lists for optimization or extension. Reader macros extend this capability by customizing the Lisp reader's parsing of input streams, associating special characters with functions that transform raw input into Lisp objects before standard evaluation. For instance, the #| sequence initiates a multi-line comment that the reader skips until the matching |#, effectively ignoring the enclosed text during reading. This mechanism supports syntactic extension by allowing the creation of domain-specific notations embedded within Lisp code, such as custom infix operators or data literals, without altering the core language syntax. The eval function further empowers metaprogramming by evaluating arbitrary forms in the current environment, facilitating the implementation of domain-specific languages (DSLs) through programmatic code construction and execution. Developers can construct lists representing DSL syntax—leveraging Lisp's list manipulation primitives like cons and append—and then invoke eval to interpret them as executable Lisp code, enabling embedded languages for configuration, querying, or scripting within applications. This approach contrasts with static code generation in other languages, as it supports DSL evolution at runtime while maintaining full integration with the host Lisp environment. Compiler macros provide compile-time hooks by offering optional expansions for function calls, allowing selective optimizations such as inlining small functions or folding constants to reduce runtime overhead.
Unlike ordinary macros, compiler macros are applied at the compiler's discretion and may decline to expand a particular call by returning the original form, and their expansions can take compilation-time context such as declarations into account. This enables advanced code transformations, like specializing arithmetic operations for known types, performed directly on the code lists at compile time. Dialects exhibit variations in metaprogramming power: Common Lisp grants unrestricted access to the full macro system for direct code manipulation, permitting non-hygienic expansions that can intentionally capture identifiers for advanced effects. In contrast, Scheme enforces hygiene in its syntax-rules macros to prevent accidental variable capture, ensuring that macro-introduced identifiers do not interfere with the surrounding lexical scope, though this constrains flexibility compared to Common Lisp's approach. These differences reflect trade-offs between safety and expressive power in code-as-data manipulation.
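The code-as-data workflow described above can be shown in a few forms; *form* and *doubled* are illustrative variable names:

```lisp
;; Code as data: build a form as an ordinary list, then evaluate it.
(defvar *form* (list '+ 1 2 3))   ; the list (+ 1 2 3)
(eval *form*)                     ; => 6

;; Transform the code structurally before evaluating: reuse the
;; operands of *form* under a different operator.
(defvar *doubled* (cons '* (cons 2 (cdr *form*))))  ; (* 2 1 2 3)
(eval *doubled*)                  ; => 12

;; Macro expansions are themselves lists that can be inspected;
;; the exact shape of the expansion is implementation-dependent.
(macroexpand-1 '(when t 1 2))
```

Because *form* is just a list, any list-processing function—append, subst, mapcar—can serve as a code transformer before the final eval.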

Concurrency and Parallelism

Lisp dialects have evolved to support concurrency and parallelism through libraries and language features that address shared-state synchronization and task distribution, often building on the language's functional and dynamic nature. These mechanisms enable multi-threaded execution while mitigating risks like race conditions, though implementations vary by dialect due to differences in runtime environments and standards. In Common Lisp, concurrency is primarily facilitated by the Bordeaux-Threads library, a portable compatibility layer providing primitives such as threads, mutexes, condition variables, and semaphores for shared-state synchronization across implementations like SBCL and Clozure CL. For parallelism, the lparallel library offers high-level constructs including task channels, kernel submissions, and promise-based futures, allowing efficient distribution of computations over thread pools without direct thread management. Clojure emphasizes immutable data and software transactional memory (STM) for safe concurrent state management. Atoms provide uncoordinated, synchronous updates to single identities via compare-and-set operations, ensuring atomicity without locks. Refs enable coordinated, synchronous changes across multiple identities through transactions, while agents support asynchronous, independent updates to individual locations, queuing actions for sequential execution by a thread pool. Complementing these, the core.async library introduces channels for communicating sequential processes, inspired by Go's model, enabling non-blocking asynchronous communication and multiplexing via go blocks that park rather than block threads. Scheme supports concurrency via SRFI-18, a standard specifying multithreading with threads, mutexes, condition variables, and time-based operations, allowing implementations to provide portable thread creation and synchronization.
In Gambit-C, an implementation of Scheme, lightweight green threads are managed entirely by the runtime, enabling massive concurrency with millions of threads on a single OS thread, suitable for I/O-bound or server applications without relying on host OS threading. A key challenge in Lisp concurrency arises from garbage collection (GC), which can introduce stop-the-world pauses that disrupt real-time or latency-sensitive applications, particularly in generational collectors like those in SBCL where major collections may halt all threads for tens of milliseconds. Some dialects and libraries explore actor models for message-passing concurrency to avoid shared mutable state; for instance, actor systems built on Common Lisp using Bordeaux-Threads isolate state within individual actors, reducing synchronization overhead. In the 2020s, advances in WebAssembly (Wasm) integration have enabled Lisp implementations to leverage browser and server environments with emerging threading support, such as shared memory and atomic operations in Wasm's threads proposal, facilitating distributed parallelism in dialects that target Wasm runtimes for serverless applications.

Applications and Use Cases

Role in Artificial Intelligence

Lisp played a pivotal role in the inception of artificial intelligence research, providing a foundation for symbolic computation that influenced early AI programs. Although the Logic Theorist, developed in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw using the Information Processing Language (IPL), predated Lisp and demonstrated automated theorem proving, its concepts of heuristic search and symbolic manipulation directly inspired subsequent AI efforts. John McCarthy's development of Lisp in 1958 was motivated by the need for a language supporting recursion and list processing to formalize such algorithms, making it the dominant tool for AI experimentation. A landmark early application was ELIZA, Joseph Weizenbaum's 1966 program simulating a psychotherapist; originally implemented in MAD-SLIP on MIT's Project MAC system and later ported to Lisp, it showcased pattern-based dialogue generation. In the 1980s, Lisp dominated the expert systems boom, leveraging specialized Lisp machines for efficient symbolic processing and garbage collection. These systems, such as OPS5—a production rule language for rule-based reasoning—were implemented atop Lisp and used for applications like diagnostic and planning tasks in industry. Similarly, Intellicorp's Knowledge Engineering Environment (KEE), a frame-based tool for building knowledge bases, ran on Lisp machines like the Symbolics 3670, enabling graphical modeling of rules, objects, and inference engines for commercial expert systems. Lisp machines facilitated rapid prototyping and deployment, with hardware optimizations for list operations supporting the era's optimism around knowledge representation. Lisp's strength in symbolic reasoning stems from its homoiconic nature, where code and data are interchangeable lists, facilitating the pattern matching and search algorithms central to symbolic AI. The LISP70 system, for instance, introduced pattern-directed computation via rewrite rules, allowing flexible symbolic manipulation for tasks like theorem proving.
In AI planning, Lisp enabled representations of states and actions as nested lists, with pattern matching to unify goals and preconditions, as exemplified in early planners like STRIPS, which influenced hierarchical task networks and later planning formalisms. In the 2020s, Lisp continues to contribute to symbolic AI, particularly in hybrid systems integrating large language models (LLMs) for enhanced reasoning. Architectures like persistent Lisp REPLs allow LLMs to dynamically generate and execute Lisp code for symbolic tasks, bridging neural generation with logical inference. Libraries such as CLML provide machine-learning tools in Common Lisp, supporting statistical methods like clustering and neural networks alongside symbolic extensions for interpretable models. The European Lisp Symposium 2025 highlighted these trends, featuring a keynote on Lisp's place in the AI era, a paper on deep learning in Common Lisp using frameworks like MGL, and a talk on Lisp's AI applications, underscoring its role in semantic processing and open reasoning paradigms.
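The list-based pattern matching used by such planners can be illustrated with a toy matcher; match and var-p are hypothetical helpers written for this sketch, not code from STRIPS or LISP70:

```lisp
;; Pattern variables are symbols whose names start with ?.
(defun var-p (x)
  (and (symbolp x) (char= (char (symbol-name x) 0) #\?)))

;; MATCH unifies a pattern against an input expression, returning an
;; alist of variable bindings on success or :fail on mismatch.
(defun match (pattern input &optional (bindings '()))
  (cond ((eq bindings :fail) :fail)          ; propagate earlier failure
        ((var-p pattern)                     ; bind or check a variable
         (let ((found (assoc pattern bindings)))
           (cond ((null found) (cons (cons pattern input) bindings))
                 ((equal (cdr found) input) bindings)
                 (t :fail))))
        ((and (consp pattern) (consp input)) ; recurse over sublists
         (match (cdr pattern) (cdr input)
                (match (car pattern) (car input) bindings)))
        ((equal pattern input) bindings)     ; literal atoms must agree
        (t :fail)))

;; (match '(on ?x table) '(on block-a table))  => ((?X . BLOCK-A))
```

A planner built in this style represents a goal such as (on ?x table) and matches it against state facts, with the returned bindings instantiating action preconditions.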

Operating Systems and Embedded Applications

Lisp machines, developed in the late 1970s and 1980s by organizations such as MIT's Artificial Intelligence Laboratory and companies including Symbolics and Lisp Machines Incorporated (LMI), featured operating systems entirely implemented in Lisp dialects like ZetaLisp. These systems, such as Symbolics' Genera released around 1982, integrated the operating system, utilities, and programming environment into a cohesive Lisp-based framework, supporting multiple independent processes within a single address space via an event-driven scheduler. Genera pioneered innovations like a unified address space where functions and data were treated as structured Lisp objects, enabling automatic garbage collection for storage management and hardware-assisted type checking for reliability. Its Generic Network System provided seamless protocol-agnostic networking, allowing uniform file transfers and communication across diverse systems like Chaosnet, DECnet, and TCP/IP without requiring user-level protocol expertise. LMI's operating system, a derivative of MIT's earlier Lisp Machine system software, similarly emphasized extensibility and integration, powering machines like the LMI Lambda for research and development. In modern contexts, Lisp continues to influence operating systems and embedded applications through dialects suited to constrained environments. For instance, the Nyxt browser, implemented in Common Lisp, demonstrates Lisp's role in system-level software by providing a programmable, extensible environment that integrates low-level browser operations with high-level scripting. In robotics, roslisp serves as a client library for the Robot Operating System (ROS), enabling nodes for control and perception in embedded robotic systems, such as those using ARM processors. Real-time capabilities have been enhanced in Lisp implementations for embedded use. Clojure compilers targeting real-time Java virtual machines compliant with the Real-Time Specification for Java (RTSJ) aim at deterministic execution for applications requiring predictable response times.
Similarly, Steel Bank Common Lisp (SBCL) allows garbage collection tuning via parameters like generation sizes and allocation limits to minimize pause times in real-time scenarios, leveraging its generational collector for systems with periodic GC invocations. Examples from the 1980s, such as the LMI-based systems, paved the way for contemporary efforts like Mezzano, a 64-bit Common Lisp OS designed for modern hardware. Lisp dialects for the 2020s, such as uLisp and MakerLisp, target microcontrollers and embedded devices on platforms like AVR and ARM, offering compact implementations for resource-limited environments. These enable rapid development of firmware for sensors and edge devices, with uLisp supporting platforms like the ESP32 for wireless applications. A key advantage of Lisp in operating systems and embedded applications is its dynamic typing, which facilitates rapid prototyping and adaptability in memory-constrained settings, allowing developers to modify code and data structures incrementally without recompilation. This trait, combined with code-as-data principles, supports extensible kernels and runtime tuning, as seen in SBCL's configurable garbage collector for low-latency embedded tasks.

Influence on Modern Software and Languages

Lisp's introduction of automatic garbage collection, devised by John McCarthy around 1959, revolutionized memory management, eliminating the need for the manual allocation and deallocation that plagued earlier languages. This innovation, first implemented in Lisp to handle dynamic list structures, directly influenced modern languages such as Java and Python, where garbage collection became a core feature for safe and efficient runtime environments. By automating the reclamation of unused memory, Lisp's approach reduced common errors like memory leaks and dangling pointers, enabling developers to focus on program logic rather than low-level details. Lisp's emphasis on functional paradigms, including higher-order functions and immutable data, has shaped features in contemporary languages like JavaScript and Rust. JavaScript's support for functional constructs, such as first-class functions and closures, draws from Lisp's foundational role in promoting pure functions and recursion over imperative loops. Similarly, Rust incorporates functional elements like closures and iterators, inspired by Lisp's treatment of functions as first-class citizens, which enhances code safety and composability in systems programming. These influences underscore Lisp's contribution to blending functional purity with practical performance needs in hybrid languages. Specific dialects and concepts from Lisp have permeated specialized domains. Emacs Lisp, a dialect tailored for extensibility, powers the Emacs text editor, allowing users to customize and extend its functionality through programmable macros and scripts, a model that has influenced interactive development environments. Julia's metaprogramming capabilities, including macros that manipulate abstract syntax trees, explicitly inherit Lisp's homoiconic design where code is treated as data, enabling domain-specific language creation without external tools. The read-eval-print loop (REPL), a hallmark of Lisp for interactive development, inspired the interactive computing paradigm in Jupyter notebooks, facilitating exploratory programming and prototyping in languages like Python.
Rust's procedural macros, which allow arbitrary code generation at compile time, are inspired by Lisp-family languages like Scheme, providing hygienic metaprogramming to extend syntax safely while avoiding common pitfalls like variable capture. In 2025, Lisp's stability and expressiveness continue to influence AI tools and frameworks; for instance, its historical dominance in symbolic AI informs chain-of-thought reasoning in libraries like LangChain, where dynamic code generation mirrors Lisp's code-as-data philosophy for building adaptive agents. Lisp-family languages have influenced the design of nearly every major modern programming language through concepts like dynamic typing and recursion.

Examples

Basic Syntax Examples

Lisp's core syntax revolves around S-expressions, which are either atomic elements like numbers or symbols, or parenthesized lists that represent function calls or data structures. A fundamental aspect is the prefix notation for expressions, where the operator precedes its arguments. For instance, the expression (+ 1 2 3) evaluates to 6 in both Common Lisp and Scheme, demonstrating arithmetic operations as the first element of the list followed by operands. Variable binding is typically achieved using the let special form, which introduces local variables within its body. In Common Lisp, (let ((x 10) (y 20)) (+ x y)) binds x to 10 and y to 20, then evaluates to 30. Scheme uses a similar construct, such as (let ((x 10) (y 20)) (+ x y)), yielding the same result, though Scheme requires all bindings to be specified before the body. List manipulation forms the backbone of Lisp data structures. The cons function constructs lists by prepending an element to an existing list; for example, (cons 'a '(b c)) produces the list (a b c). Accessors like car and cdr retrieve the first element and the rest of the list, respectively: (car '(a b c)) returns a, while (cdr '(a b c)) returns (b c). These operations are identical in both Common Lisp and Scheme. Quoting preserves expressions as literal data rather than evaluating them. The expression '(+ 1 2) yields the list (+ 1 2) as a data object, which can later be evaluated using the eval function: (eval '(+ 1 2)) returns 3. This code-as-data principle is central to Lisp and works consistently across dialects. A Read-Eval-Print Loop (REPL) interactively processes input and displays results. For example, entering (+ 1 2) at the prompt outputs 3, while (cons 'a '(b c)) displays (A B C) in Common Lisp (which uppercases symbols by default) but (a b c) in Scheme (which preserves case). This difference in printing conventions highlights minor dialect variations while keeping core evaluation uniform.
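The syntax elements above, collected as evaluable Common Lisp forms:

```lisp
(+ 1 2 3)                      ; prefix notation          => 6
(let ((x 10) (y 20)) (+ x y))  ; local bindings           => 30
(cons 'a '(b c))               ; list construction        => (A B C)
(car '(a b c))                 ; first element            => A
(cdr '(a b c))                 ; rest of the list         => (B C)
(eval '(+ 1 2))                ; quoted code, evaluated   => 3
```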

Functional Programming Example

Lisp supports functional programming paradigms through its emphasis on first-class functions, recursion, and immutable data structures, enabling the composition of programs as transformations on data without side effects. A classic demonstration is the recursive computation of the factorial function, which avoids iterative loops by breaking down the problem into smaller subproblems. In Common Lisp, the factorial of a non-negative integer n can be defined recursively as follows:
lisp
(defun fact (n)
  (if (<= n 1)
      1
      (* n (fact (- n 1)))))
This function returns 1 for the base case where n ≤ 1, and otherwise multiplies n by the factorial of n - 1. For instance, (fact 5) evaluates to 120 by unfolding the recursion: 5 × (4 × (3 × (2 × 1))). Higher-order functions like mapcar apply a given function to each element of a list, producing a new list of results and preserving immutability. The mapcar function takes a function and one or more lists, applying the function to the corresponding elements. For example:
lisp
(mapcar #'sqrt '(1 4 9))
This yields (1 2 3), as sqrt is applied element-wise to the input list without modifying the original. Scheme, a dialect of Lisp, extends functional techniques with continuations via call-with-current-continuation (abbreviated call/cc), allowing non-local exits for control flow. This enables structured escapes from computations, such as early termination in searches. A representative example finds the first negative number in a list and exits immediately:
scheme
(call-with-current-continuation
 (lambda (exit)
   (for-each (lambda (x)
               (if (negative? x)
                   (exit x)))
             '(54 0 37 -3 245 19))))
This evaluates to -3, abandoning the rest of the loop upon encountering the negative value. To maintain immutability, Lisp provides functions like copy-list, which creates a shallow copy of a list's structure while sharing elements. For a list lst bound to (1 (2 3)), (setq clst (copy-list lst)) produces a new list clst that is equal to lst but distinct under eq, ensuring modifications to one do not affect the other. This supports pure functional styles by avoiding unintended mutations. Many Lisp implementations optimize tail-recursive functions, where the recursive call is the last operation, by reusing the current stack frame instead of allocating a new one—effectively turning recursion into iteration. In LispWorks, for example, self-tail-recursive functions like a tail-optimized factorial are compiled as efficiently as loops, preventing stack overflow for deep recursions.
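A tail-recursive variant of factorial illustrates the accumulator pattern that makes such frame reuse possible; fact-tail is an illustrative name, and note that the ANSI Common Lisp standard does not require tail-call optimization, so the behavior depends on the implementation:

```lisp
;; Tail-recursive factorial: the recursive call is the final
;; operation, so implementations that optimize tail calls can
;; reuse the current stack frame, making this equivalent to a loop.
(defun fact-tail (n &optional (acc 1))
  (if (<= n 1)
      acc
      (fact-tail (- n 1) (* n acc))))

(fact-tail 5)   ; => 120
```

The partial product is threaded through the acc parameter, so nothing remains to be done after the recursive call returns, which is exactly the property tail-call optimization requires.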

Macro Usage Example

One common practical use of macros in Lisp is to define conditional execution constructs that simplify code readability without the overhead of runtime checks. For instance, the when macro provides a concise way to execute a sequence of forms only if a condition is true, avoiding the need to explicitly write an if with a nil else branch. This macro is defined as follows:
lisp
(defmacro when (condition &rest body)
  `(if ,condition (progn ,@body)))
Here, the macro takes a condition and a variable number of body forms, expanding them into an if form where the body is wrapped in progn if the condition holds. To illustrate the expansion process, consider the usage (when (> x 10) (print 'big-value) (incf x)). During macro expansion, which occurs at compile time or load time, this form is transformed into (if (> x 10) (progn (print 'big-value) (incf x))). The expansion trace can be observed using the macroexpand-1 function, which applies a single level of expansion: (macroexpand-1 '(when (> x 10) (print 'big-value) (incf x))) yields the if form shown, confirming that the macro generates efficient, direct code without introducing unnecessary runtime evaluation of the body when the condition is false. This expansion happens before the code is compiled or interpreted, ensuring the generated if is treated as ordinary Lisp code. Quasiquotation, or backquote, plays a central role in such definitions by allowing selective unquoting of parts of the template. In the definition above, the backquote in `(if ,condition (progn ,@body)) creates a quoted structure for the if form, where ,condition splices in the condition form and ,@body splices the body forms as a list into the progn. This enables dynamic insertion while preserving the literal structure of the template, as seen in expansions like `(list 1 2 ,(+ 1 3) ,@'(4 5)), which becomes (list 1 2 4 4 5). Backquote is detailed further in the control structures section. For more complex iteration, macros can define custom loop variants tailored to specific needs, such as iterating over prime numbers. A simple for-each-like macro for this purpose, do-primes, might be defined to iterate over primes in a range:
lisp
(defun is-prime (n)
  (loop for i from 2 to (isqrt n)
        when (zerop (mod n i)) return nil
        finally (return t)))

(defun primep (n)
  (and (>= n 2) (is-prime n)))

(defmacro do-primes ((var start end) &body body)
  (let ((n (gensym))
        (endv (gensym)))
    `(do ((,n ,start (1+ ,n))
          (,endv ,end))
         ((> ,n ,endv))
       (when (primep ,n)
         (let ((,var ,n))
           ,@body)))))
This macro expands a form like (do-primes (p 0 10) (format t "~d " p)) into a do loop that increments a counter, checks for primality with primep, and executes the body only for primes, effectively providing a for-each iteration over primes without evaluating the body for non-primes. The use of gensym ensures hygienic expansion by generating unique symbols (e.g., #:G1234) for loop variables like n and endv, preventing unintended variable capture if the macro is used within a lexical scope where similarly named variables exist. For example, expanding the do-primes form introduces fresh symbols that do not conflict with outer bindings, maintaining lexical hygiene by convention, since Common Lisp's macro system does not enforce it automatically. Macros differ fundamentally from functions in their evaluation timing: macros execute at macro-expansion time to generate code, whereas functions receive already-evaluated data at run time. In the when example, the macro body runs during expansion to produce the if form, allowing compile-time decisions that optimize the final code, such as avoiding any evaluation of the absent else branch. This compile-time evaluation contrasts with a hypothetical when function, which would have all its arguments evaluated at call time, potentially executing the body forms even if the condition is false before discarding the result—leading to inefficiencies or unintended side effects. Such timing enables macros to perform static analysis or code transformation that functions cannot, as verified through expansion traces showing the generated code's structure before execution.
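The timing difference can be demonstrated directly; my-when-fn and *side-effects* are hypothetical names for a function analogue of the when macro:

```lisp
;; A record of evaluations we can inspect.
(defvar *side-effects* '())

;; A function version of WHEN. Its "body" arguments arrive already
;; evaluated, because functions receive values, not forms.
(defun my-when-fn (condition &rest body-values)
  (when condition (car (last body-values))))

;; The push runs at call time even though the condition is NIL:
(my-when-fn nil (push 'ran *side-effects*))
;; *side-effects* is now (RAN).

;; The macro, in contrast, never evaluates its body here:
(setf *side-effects* '())
(when nil (push 'ran *side-effects*))
;; *side-effects* is still NIL.
```

This is precisely why short-circuiting control constructs must be macros or special forms in Lisp: a function cannot prevent its arguments from being evaluated.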

References

  1. [1]
    [PDF] History of Lisp - John McCarthy
    Feb 12, 1979 · As a programming language, LISP is characterized by the following ideas: computing with symbolic expressions rather than numbers, representation ...
  2. [2]
    1.1.2 History
    Lisp is a family of languages with a long history. Early key ideas in Lisp were developed by John McCarthy during the 1956 Dartmouth Summer Research Project ...
  3. [3]
    History of LISP - Software Preservation Group
    LISP was one of the earliest high-level programming languages and introduced many ideas such as garbage collection, recursive functions, symbolic expressions, ...Missing: features | Show results with:features
  4. [4]
    [PDF] Recursive Functions of Symbolic Expressions and Their ...
    John McCarthy, Massachusetts Institute of Technology, Cambridge, Mass. ∗. April 1960. 1 Introduction. A programming system called LISP (for LISt Processor) ...
  5. [5]
    The evolution of Lisp - ACM Digital Library
    We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special ...<|separator|>
  6. [6]
    [PDF] The Evolution of Lisp - UNM CS
    The rst real standard Lisps were MacLisp and Interlisp; as such they deserve some attention. 2.2 MacLisp. MacLisp was the primary Lisp dialect at the MIT AI Lab ...
  7. [7]
    Early Artificial Intelligence Projects - MIT CSAIL
    LISP: The language that made AI possible. John McCarthy introduced LISP in 1958, heralded as the language that made AI programming possible. LISP is special ...
  8. [8]
  9. [9]
    Interlisp Timeline
    In 1967, BBN purchased an SDS 940 computer from Scientific Data Systems and began work building a time-sharing system on it. The SDS 940 had a larger address ...
  10. [10]
    [PDF] Interlisp Reference Manual - Software Preservation Group
    Interlisp began with an implementation of the Lisp programming language for the PDP-l at Bolt. Beranek and Newman in 1966. It was followed in 1967 by 940 ...
  11. [11]
    [PDF] The Evolution of Lisp - Dreamsongs
    Ephemeral garbage collection was subsequently adapted for use on stock hardware.) The early MIT Lisp-Machine Lisp dialect [Weinreb, November 1978] was very ...
  12. [12]
    [PDF] Milestones from The Pure Lisp Theorem Prover to ACL2
    Mar 5, 2019 · Abstract We discuss the evolutionary path from the Edinburgh Pure Lisp. Theorem Prover of the early 1970s to today's ACL2.
  13. [13]
    CLHS: About the Common Lisp HyperSpec (TM)
    In 1981, representatives of several major dialects began to pool their efforts to design Common Lisp, an `industrial strength' dialect of Lisp that would ...
  14. [14]
    [PDF] The History of Lisp Standardization during 1984 – 1990 Masayuki Ida
    This paper describes what the author observed from his work as a member of the X3J13 Common Lisp. Standardization committee in the period between 1984 and ...
  15. [15]
    [PDF] Revised7Report on the Algorithmic Language Scheme
    Feb 13, 2021 · Work in the spring of 1988 resulted in R4RS [10], which became the basis for the IEEE Standard for the Scheme. Programming Language in 1991 ...
  16. [16]
    The market for specialised AI hardware collapsed in 1987 | aiws.net
    Oct 22, 2021 · The specialized AI hardware market collapsed in 1987 because Apple and IBM computers became more powerful and cheaper, leading to the Second AI ...
  17. [17]
    Announcement | Higher-Order and Symbolic Computation
    Cite this article. Announcement. Lisp and Symbolic Computation 1, i–ii (1988). https://doi.org/10.1007/BF01806179. Download citation. Issue date: June 1988.
  18. [18]
    Project history - CMUCL
    Apr 13, 2023 · We started calling it CMU Common Lisp when the Spice project ended (around 1985). The code base has gown tremendously and mutated greatly ...Missing: 1987 | Show results with:1987
  19. [19]
    History and Copyright - Steel Bank Common Lisp
    SBCL derives most of its code from CMU CL, created at Carnegie Mellon University. Radical changes have been made to some parts of the system.Missing: 1987 1990s
  20. [20]
    Quicklisp beta
    Quicklisp is a library manager for Common Lisp. It works with your existing Common Lisp implementation to download, install, and load any of over 1,500 ...Quicklisp news · Quicklisp beta releases · Quicklisp beta FAQ
  21. [21]
    From PLT Scheme to Racket
    PLT Scheme is a cover for a gang of academic hackers who want to fuse cutting-edge programming-language research with everyday programming.Missing: 2010s | Show results with:2010s
  22. [22]
    Deep Neural Nets: 33 years ago and 33 years from now
    Mar 14, 2022 · The original network was implemented in Lisp using the Bottou and LeCun 1988 backpropagation simulator SN (later named Lush). The paper is in ...
  23. [23]
    [PDF] 2025.pdf - European Lisp Symposium
    May 20, 2025 · Aside from data volume, another big challenge is flexibility and the ... Programming with Useful Quantifiers. ELS'25, May 19–20 2025, Zurich.
  24. [24]
    BACK TO THE FUTURE: LISP IN THE NEW AGE OF AI - YouTube
    May 21, 2025 · This is a talk given by Anurag Mendhekar at the European Lisp Symposium 2025 about Common Lisp's place in computing in the age of LLMs.
  25. [25]
    Conferences - Software Preservation Group
    International Lisp Conference, San Francisco, October 2002. ILC 2005 by Paul ...
  26. [26]
    ELS 2025, Zürich - European Lisp Symposium
    European Lisp Symposium, May 19th - May 20th 2025, Zürich (and online). The conference is over! Invited Speakers.
  27. [27]
    Using Common Lisp from inside the Browser - TurtleWare
    Aug 21, 2025 · Because the runtime is included as a script, the browser will usually cache the ~10MB WebAssembly module. JS-FFI – low level interface. The ...
  28. [28]
    open-goal/jak-project: Reviving the language that brought ... - GitHub
    This project is to port the original Jak and Daxter and Jak II to PC. Over 98% of the games are written in GOAL, a custom Lisp language developed by Naughty Dog ...
  29. [29]
    MakerLisp Boasts Lisp and CP/M for Makers and Embedded ...
    Jun 14, 2019 · The MakerLisp Embedded Lisp Machine is targeted at makers and developers of embedded systems for use at the edge of the IoT.
  30. [30]
    Embedded Lisps - Software Preservation Group
    Mar 25, 2025 · Tinylisp is a language intended for 'programming-in-the-small' in SRC's Modula-2+ environment. It is a lexically scoped Lisp implemented as a package that can ...
  31. [31]
    An overview of COMMON LISP - ACM Digital Library
    We first give an extensive history of LISP, particularly of the MACLISP branch, in order to explain in context the motivation for COMMON LISP.
  32. [32]
  33. [33]
  34. [34]
    T: a dialect of Lisp or LAMBDA: The ultimate software tool
    The T project is an experiment in language design and implementation. Its purpose is to test the thesis developed by Steele and Sussman in their series of ...
  35. [35]
  36. [36]
    Common Lisp Documentation - LispWorks
    The document ANSI INCITS 226-1994 (formerly ANSI X3.226:1994) American National Standard for Programming Language Common LISP is the official standard, ...
  37. [37]
    Common Lisp Language Overview - LispWorks
    This standard includes the Common Lisp Object System (CLOS); features like multimethods and dynamic class redefinition make CLOS among the most advanced object ...
  38. [38]
    rt - CLiki
    rt is a 1990 regression testing library that is used by some older software. Paul Dietz has added some features to rt for his GCL ANSI Test Suite.
  39. [39]
    The GCL ANSI Common Lisp Test Suite - Semantic Scholar
    The conformance test suite for ANSI Common Lisp distributed as part of GNU Common Lisp (GCL) includes more than 20000 individual tests, as well as random ...
  40. [40]
    CLISP - an ANSI Common Lisp Implementation
    Jul 7, 2010 · How CLISP implements and extends the ANSI standard INCITS 226-1994 (R1999) "Information Technology - Programming Language - Common Lisp", ...
  41. [41]
    [PDF] Revised5Report on the Algorithmic Language Scheme
    Feb 20, 1998 · SUMMARY. The report gives a defining description of the programming language Scheme. Scheme is a statically scoped and ...
  42. [42]
    [PDF] Revised6Report on the Algorithmic Language Scheme - R6RS
    Sep 26, 2007 · Scheme is a statically scoped and properly tail-recursive dialect of the Lisp programming language invented by Guy Lewis Steele Jr. and Gerald ...
  43. [43]
    Guile and Scheme (Guile Reference Manual) - GNU.org
    In 2007, the Scheme community agreed upon and published R6RS, a significant installment in the RnRS series. R6RS expands the core Scheme language, and ...
  44. [44]
  45. [45]
    Clojure
    Clojure is a robust, practical, and fast programming language with a set of useful features that together form a simple, coherent, and powerful tool.
  46. [46]
    A history of Clojure | Proceedings of the ACM on Programming ...
    Jun 12, 2020 · Initially designed in 2005 and released in 2007, Clojure is a dialect of Lisp, but is not a direct descendant of any prior Lisp.
  47. [47]
    Racket
    Racket is the first language to support higher-order software contracts and safe gradual typing. Programmers can easily deploy these tools to harden their ...
  48. [48]
    SBCL 2.5.10 User Manual
    SBCL is essentially a compiler-only implementation of Common Lisp. That is, for all but a few special cases, eval creates a lambda expression, calls compile on ...
  49. [49]
    [PDF] Rebuilding Racket on Chez Scheme (Experience Report)
    We rebuilt Racket on Chez Scheme, and it works well, as long as we're allowed a few patches to Chez Scheme. DrRacket runs, the Racket distribution can build ...
  50. [50]
    SRFI 119: wisp: simpler indentation-sensitive scheme
    Jun 25, 2015 · The syntax shown here is the minimal syntax required for the goal of wisp: indentation-based, general lisp with a simple preprocessor, and code ...
  51. [51]
    uLisp port to C and WebAssembly - Platforms
    Aug 23, 2025 · Hello fellow Lispers, I'm pleased to share some progress made on porting uLisp to C and WebAssembly.
  52. [52]
  53. [53]
    [PDF] Recursive Functions of Symbolic Expressions and Their ...
    John McCarthy, Massachusetts Institute of Technology, Cambridge, Mass. April 1960. 1 Introduction. A programming system called LISP (for LISt Processor) ...
  54. [54]
    [PDF] Quasiquotation in Lisp
    By the mid-1970s many Lisp programmers were using their own personal versions of quasiquote; it wasn't yet built in to any Lisp dialect. My personal knowledge ...
  55. [55]
    [PDF] r5rs.pdf - Scheme Conservatory
    Feb 20, 1998 · SUMMARY. The report gives a defining description of the programming language Scheme. Scheme is a statically scoped and ...
  56. [56]
    CLHS: Macro DEFUN - LispWorks
    defun implicitly puts a block named block-name around the body forms (but not the forms in the lambda-list) of the function defined. Documentation is attached ...
  57. [57]
    CLHS: Special Operator FUNCTION
    If name is a lambda expression, then a lexical closure is returned. In situations where a closure over the same set of bindings might be produced more than once ...
  58. [58]
    CLHS: Section 1.1.2 - LispWorks
    The major contributions of Scheme were lexical scoping, lexical closures ... In 1986 X3J13 was formed as a technical working group to produce a draft for an ANSI ...
  59. [59]
    CLHS: Function MAPC, MAPCAR, MAPCAN, MAPL... - LispWorks
    mapcar operates on successive elements of the lists. function is applied to the first element of each list, then to the second element of each list, and so on.
  60. [60]
    CLHS: Macro COND - Common Lisp HyperSpec (TM)
    cond allows the execution of forms to be dependent on test-form. Test ...
  61. [61]
    CLHS: Macro CASE, CCASE, ECASE - Common Lisp HyperSpec (TM)
    Syntax: case keyform {normal-clause}* [otherwise-clause] => result*.
  62. [62]
    CLHS: Macro DO, DO* - LispWorks
    Syntax: do ({var | (var [init-form [step-form]])}*) (end-test-form result-form ...
  63. [63]
    2.4.6 Backquote - LispWorks
    2.4.6 Backquote. The backquote introduces a template of a data structure to be built. For example, writing `(cond ((numberp ,x) ,@y) (t (print ,x) ,@y))
  64. [64]
    Common Lisp vs. Scheme macros - Eli Bendersky's website
    Sep 16, 2007 · It appears that Scheme's macros can be used to implement (non-hygienic) CL macros, but not the other way around.
  65. [65]
    History - Franz Inc.
    Jul 25, 2022 · These systems influenced the design of the Common Lisp Object System (CLOS). CLOS was developed specifically for this standardization effort, ...
  66. [66]
    [PDF] The Common Lisp Object System: An Overview - Dreamsongs
    The Common Lisp Object System is an object-oriented system that is based on the concepts of generic functions, multiple inheritance, and method combination.
  67. [67]
    [PDF] CLOS: Integrating Object-Oriented and Functional Programming
    May 3, 2004 · CLOS supports the notion of classes separate from, but integrated with, Common Lisp types. There are at least four different meanings of ...
  68. [68]
    SRFI 9: Defining Record Types
    This SRFI describes syntax for creating new data types, called record types. A predicate, constructor, and field accessors and modifiers are defined for each ...
  69. [69]
    Functions and Function Definitions
    Summary of Lisp's representation of code as S-expressions, enabling code as data.
  70. [70]
    homoiconicity in nLab
    Mar 12, 2014 · Homoiconicity is where a program's source code is written as a basic data structure that the programming language knows how to access.
  71. [71]
    CLHS: Section 2.4
    Summary of reader macros in Common Lisp (from http://www.lispworks.com/documentation/HyperSpec/Body/02_d.htm).
  72. [72]
    [PDF] Read-Macros | Lisp - UMBC
    Read-macros in Lisp work at read-time, using special characters that are treated differently by the Lisp reader. They are defined using `set-macro-character`.
  73. [73]
    Macros (Common Lisp Extensions) - GNU
    This package also includes the Common Lisp define-compiler-macro facility, which allows you to define compile-time expansions and optimizations for your ...
  74. [74]
    3.2.2.1.1 Purpose of Compiler Macros
    The purpose of the compiler macro facility is to permit selective source code transformations as optimization advice to the compiler. When a compound form is ...
  75. [75]
    Documentation - Bordeaux-Threads
    Jan 7, 2022 · Bordeaux-Threads is a minimal library that aims to provide the basic concepts required for multi-threading programming, such as threads, mutexes ...
  76. [76]
    lmj/lparallel: Parallelism for Common Lisp - GitHub
    lparallel is a library for parallel programming in Common Lisp. See http://lparallel.org for documentation and examples.
  77. [77]
    Atoms - Clojure
    Atoms provide a way to manage shared, synchronous, independent state. They are a reference type like refs and vars. You create an atom with atom, and can ...
  78. [78]
    Agents and Asynchronous Actions - Clojure
    Where Refs support coordinated, synchronous change of multiple locations, Agents provide independent, asynchronous change of individual locations. Agents are ...
  79. [79]
    Clojure core.async Channels
    Jun 28, 2013 · core.async is a new contrib library for Clojure that adds support for asynchronous programming using channels.
  80. [80]
    SRFI 18: Multithreading support
    This SRFI defines the following multithreading datatypes for Scheme. It also defines a mechanism to handle exceptions and some multithreading exception ...
  81. [81]
    Gambit, a portable implementation of Scheme
    Threads. Gambit supports the execution of multiple Scheme threads. These threads are managed entirely by Gambit's runtime and are not related to the host ...
  82. [82]
    The Common Lisp Cookbook – Threads, concurrency, parallelism
    This page discusses the creation and management of threads and some aspects of interactions between them.
  83. [83]
    naveensundarg/Common-Lisp-Actors - GitHub
    This is a simple and easy to use Actor system in Common Lisp. Set-Up Requires Bordeaux threads. http://common-lisp.net/project/bordeaux-threads/
  84. [84]
    Concurrency in WebAssembly - ACM Queue
    Jul 3, 2025 · This article aims to describe how concurrent programs are compiled to Wasm today given the unique limitations that the Web operates under with respect to multi ...
  85. [85]
    (PDF) Newell and Simon's Logic Theorist: Historical Background ...
    Aug 10, 2025 · The Logic Theorist and other cognitive simulations developed by Newell and Simon in the late 1950s had a large impact on the newly developing ...
  86. [86]
    ELIZA Reinterpreted: The world's first chatbot was not intended as a ...
    Jun 25, 2024 · ... Lisp was rapidly becoming the go-to language of AI. Once Lisp came onto the scene no one thought much again about SLIP, or, for that matter ...
  87. [87]
    OPS5 User's Manual - DTIC
    OPS5 is used primarily for applications in the areas of artificial intelligence, cognitive psychology, and expert systems. OPS5 interpreters have been ...
  88. [88]
    [PDF] Knowledge Engineering Environment - KEE - Stacks
    The expert system developer can use KEE to develop either an expert, stand ... All KEE development systems allow access to the underlying LISP.
  89. [89]
    Expert System Building Tools - ScienceDirect.com
    KEE was the first in the field and is therefore the most used of the three. All three are based on LISP so that the designer is free at any time to program in ...
  90. [90]
    [PDF] THE LISP70 PATTERN MATCHING SYSTEM* - IJCAI
    The aim of LISP70 is to provide a flexible and parsimonious programming medium for symbolic processing and an efficient implementation for that medium on ...
  91. [91]
    norvig/paip-lisp: Lisp code for the textbook "Paradigms of ... - GitHub
    This is an open-source repository for the book Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp by Peter Norvig (1992), and the ...
  92. [92]
    From Tool Calling to Symbolic Thinking: LLMs in a Persistent Lisp ...
    Jun 8, 2025 · We propose a novel architecture for integrating large language models (LLMs) with a persistent, interactive Lisp environment.
  93. [93]
    mmaul/clml: Common Lisp Machine Learning Library - GitHub
    CL Machine-Learning is a high performance and large scale statistical machine learning package written in Common Lisp developed at MSI.
  94. [94]
    Genera (operating system) - IT History Society
    It is essentially a fork of an earlier operating system originating on the MIT AI Lab's Lisp machines which Symbolics had used in common with LMI and Texas ...
  95. [95]
    [PDF] Genera Concepts Genera The Best Software Environment Available
    Most new users of Symbolics machines have worked on traditional time-sharing systems, such as VAX/VMS, or on microcomputers or conventional UNIX workstations ...
  96. [96]
    Nyxt browser: The hacker's browser
    Nyxt is a browser for hackers that allows for keyboard-only navigation, link hinting, and built-in programmability, enabling quick analysis and information ...
  97. [97]
    roslisp - ROS Wiki
    Roslisp is a Lisp client library for ROS, used for writing ROS nodes in Common Lisp, supporting ease of use and quick scripting.
  98. [98]
    mll/clojure-rt - GitHub
    Clojure Real Time (clojure-rt) is a compiler of the Clojure programming language. It is being developed to allow deterministic and fast execution that could ...
  99. [99]
  100. [100]
    uLisp
    uLisp (pronounced "You-Lisp") is a version of the Lisp programming language specifically designed to run on microcontrollers with a limited amount of RAM.
  101. [101]
    [PDF] IoT devices and embedded systems with uLisp
    May 16, 2022 · The language is generally a subset of Common Lisp, that is: uLisp programs should also run under Common Lisp. A major difference: it's a Lisp-1.
  102. [102]
    History of LISP - ACM Digital Library
    This paper concentrates on the development of the basic ideas and distinguishes two periods - Summer 1956 through Summer 1958, when most of the key ideas ...
  103. [103]
    [PDF] COMMON LISP: A Gentle Introduction to Symbolic Computation
    The factorial function recursively computes factorial of N-1, a slightly simpler problem than ... Most recursive Lisp functions fall into a few standard forms.
  104. [104]
    Simplified Common Lisp reference - mapcar
    MAPCAR applies function FN to the elements of the lists with the same index. Each application result is put into the resulting list. The length of the resulting list is the length ...
  105. [105]
    Revised(5) Report on the Algorithmic Language Scheme
    A common use of call-with-current-continuation is for structured, non-local exits from loops or procedure bodies, but in fact call-with-current-continuation is ...
  106. [106]
    CLHS: Function COPY-LIST
    Returns a copy of list. If list is a dotted list, the resulting list will also be a dotted list. Only the list structure of list is copied.
  107. [107]
    3.4.1 Tail call optimization - LispWorks
    3.4.1 Tail call optimization. Unless directed otherwise, the Compiler optimizes self-recursive function calls, tail calls, and self-tail calls. In particular, ...
  108. [108]
    8. Macros: Defining Your Own - gigamonkeys
    Common Lisp doesn't support macros just so every Lisp programmer can create their ... Use GENSYM at macro expansion time to create variable names used in the expansion ...
  109. [109]