
AI winter

An AI winter denotes a phase of diminished funding, enthusiasm, and progress in artificial intelligence research, typically ensuing from cycles of inflated expectations and subsequent disillusionment when promised breakthroughs prove elusive due to fundamental technical constraints and overoptimistic projections. The phenomenon underscores the field's vulnerability to boom-and-bust dynamics, where rapid surges in investment during perceived advances—such as symbolic AI in the 1960s or expert systems in the 1980s—give way to sharp contractions when empirical results lag behind rhetoric, leading to reevaluation of priorities and redirection of resources toward narrower, more feasible subfields. The first such winter, spanning roughly 1974 to 1980, stemmed from high-profile critiques highlighting the gap between ambitious goals and actual capabilities, including the 1973 Lighthill Report in the United Kingdom, which prompted the Science Research Council to slash AI grants by over 90 percent, and parallel funding reductions by the U.S. Defense Advanced Research Projects Agency amid demands for tangible deliverables that early systems could not meet. This era saw AI programs curtailed at major institutions, though some dispute its severity, arguing that core research communities expanded despite fiscal pressures. The second winter, from the late 1980s to the mid-1990s, arose from the commercial implosion of specialized hardware like Lisp machines, the brittleness of rule-based expert systems unable to scale beyond narrow domains, and the abandonment of Japan's Fifth Generation Computer Systems initiative after it failed to yield general-purpose intelligent computing. These episodes, while halting grandiose pursuits, inadvertently fostered pragmatic shifts, enabling later resurgence through data-driven approaches that prioritized measurable outcomes over speculative generality.

Conceptual Foundations

Definition and Identifying Features

An AI winter refers to a period of significant decline in funding, research activity, and enthusiasm for artificial intelligence, often triggered by the exposure of technical limitations and the failure to realize overly ambitious promises made during preceding phases of hype. The term, coined in 1984 during a debate at the American Association for Artificial Intelligence (AAAI) annual meeting, analogizes these downturns to a harsh, stagnant "winter" following optimistic "summers" of investment and progress, where expectations outpace demonstrable capabilities. Unlike routine fluctuations in technological development, AI winters are marked by systemic retrenchment, including government policy shifts that slash budgets—such as U.S. federal AI funding dropping to near-zero levels in certain programs by the late 1970s—and a broader loss of confidence among investors and policymakers. Distinguishing features include a rapid contraction in project viability, where high-profile initiatives collapse due to insurmountable computational or algorithmic barriers, leading to canceled contracts and institutional disbandments. For instance, metrics of decline encompass publication rates in AI conferences falling by orders of magnitude, investment inflows plummeting (e.g., from peaks exceeding hundreds of millions in inflation-adjusted terms during booms to minimal sustenance levels), and expert forecasts shifting from near-term human-level intelligence to skepticism about feasibility within decades. These periods also feature critical assessments from bodies like national academies or funding agencies, highlighting overpromising by proponents, which erodes credibility and prompts reallocations to more tangible domains such as basic computing infrastructure. Empirically, AI winters contrast with normal innovation plateaus by their depth and duration, often spanning 5–15 years, during which foundational advances stagnate despite incremental work in subfields, as measured by citation impacts and patent filings. Recovery typically requires paradigm shifts, like the pivot to statistical methods in the 1990s, underscoring that these are not mere corrections but profound resets driven by causal mismatches between rhetoric and results.

Distinction from Temporary Setbacks or Normal Innovation Cycles

AI winters represent periods of systemic stagnation in research, characterized by sharp declines in public and private funding—often exceeding 90% in key programs—coupled with widespread skepticism that permeates academic institutions, government agencies, and industry, leading to the near-halt of broad initiatives for 5 to 15 years. This contrasts with temporary setbacks, which are typically confined to specific projects or technologies, such as a single failed program or a short-term market correction, allowing parallel efforts in adjacent areas to persist without broader retrenchment. For instance, the first AI winter, triggered by critiques like the 1966 ALPAC report, resulted in U.S. machine translation funding dropping from approximately $20 million annually to negligible levels by 1969, diverting resources entirely from symbolic paradigms rather than merely pausing them. In normal innovation cycles, as depicted in models like the Gartner hype cycle, fields experience a "trough of disillusionment" followed by a gradual "slope of enlightenment" with incremental advancements sustained by ongoing, albeit reduced, investment in viable subcomponents. AI winters diverge by inducing a deeper "freezing" of support, where unmet expectations from overambitious claims—such as achieving human-level general intelligence—foster a perception of failure, prompting funders to reallocate to less speculative domains like basic computing hardware. Historical analyses indicate that these winters involved not just funding cuts but the dissolution of dedicated labs and programs, as seen in the U.K.'s post-Lighthill report (1973) elimination of nearly all AI grants, whereas regular tech downturns, such as those in other computing fields during the 1970s, maintained core R&D trajectories without field-wide contraction. The causal depth further separates AI winters from routine plateaus: while normal cycles often stem from executable scaling issues resolvable through engineering refinements, AI downturns arise from fundamental mismatches between promised capabilities (e.g., robust natural language understanding) and intrinsic computational or representational limits, amplified by hype cycles that inflate expectations beyond empirical feasibility. This leads to a feedback loop of talent exodus and risk aversion, with recovery requiring paradigm shifts like the move to statistical methods in the 1990s, rather than mere optimization. In contrast, temporary setbacks in other fields facing regulatory hurdles, such as early biotechnology trials, rebound via targeted fixes without engendering decade-long doubt about the entire discipline's viability.

Causal Mechanisms

Hype-Driven Expectation Gaps and Overpromising

The phenomenon of hype-driven expectation gaps in AI research manifests as cycles of exuberant projections from researchers and promoters, which attract substantial public and private investment, only for subsequent shortfalls to provoke severe retrenchment. Developers and advocates often portray AI as poised for rapid, transformative achievements, such as replicating human cognition or automating complex tasks within short timelines, thereby inflating investor and governmental commitments beyond demonstrable technical feasibility. This overpromising creates a mismatch between anticipated deliverables and actual progress, eroding confidence when empirical results—constrained by algorithmic limitations, data scarcity, or computational demands—fail to materialize, as evidenced in historical precipices where billions in investments evaporated amid unmet milestones. A foundational instance occurred following the 1956 Dartmouth Summer Research Project proposal, which asserted that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it," encompassing language use, abstraction formation, problem-solving in human domains, and self-improvement through programming alone. Proponents anticipated significant advances from a mere two-month, ten-person effort, framing AI as an imminent engineering challenge rather than a protracted scientific endeavor. Such declarations, echoed by figures like Marvin Minsky—who in 1967 predicted that "within a generation... the problem of creating 'artificial intelligence' will substantially be solved"—sustained elevated expectations through the 1960s, drawing initial U.S. government allocations exceeding $100 million annually by the mid-1960s before disillusionment set in. In the 1980s resurgence, expert systems epitomized renewed overpromising, with claims that rule-based knowledge encoding could replicate specialized human expertise across scalable domains, spurring corporate and governmental investments totaling hundreds of millions, including Japan's Fifth Generation Computer Systems project launched in 1982 with a budget of approximately 50 billion yen (around $350 million USD at the time). These systems were hyped as harbingers of a logic-programming era enabling inference akin to human reasoning, yet they faltered on brittleness outside narrow scopes, knowledge acquisition bottlenecks, and maintenance costs, amplifying the expectation-reality chasm. The resultant backlash, including market crashes for specialized hardware and slashed R&D budgets, underscored how such gaps not only deplete resources but also stigmatize the field, deterring sustained support until later demonstrable advances restored credibility.

Intrinsic Technical Barriers and Computational Constraints

Early AI research encountered severe computational constraints that restricted systems to rudimentary tasks, as hardware in the 1950s and 1960s offered limited processing speeds and memory; for instance, typical machines of the era could perform around 40,000 additions per second but struggled with the data volumes required for complex reasoning or simulation. These limitations meant that even optimistic projects, such as early symbolic reasoning efforts following the Dartmouth Conference (1956), could not scale beyond toy problems without prohibitive runtime, exacerbating failures when expectations outpaced feasible execution. A core intrinsic barrier was the combinatorial explosion in search-based algorithms, where problem spaces expanded exponentially with variables—e.g., chess position evaluations grew factorially, rendering brute-force methods intractable on era hardware, as critiqued in James Lighthill's 1973 report on AI's scalability issues. This fundamental challenge in symbolic AI, rooted in the intractability of NP-hard problems without effective heuristics, persisted across domains like planning and theorem proving, where state-space exploration demanded resources beyond available means, contributing to disillusionment by the mid-1970s. Neural network approaches faced mathematical impossibilities in single-layer perceptrons, as demonstrated by Marvin Minsky and Seymour Papert in their 1969 book Perceptrons, which proved these models could not compute non-linearly separable functions like XOR due to their inability to handle exclusive-or logic without multi-layer architectures. This analysis, while not dismissing deeper networks outright, highlighted the representational poverty of shallow models prevalent at the time, leading researchers to abandon connectionist methods in favor of symbolic paradigms and correlating with funding retrenchment in neural research through the 1970s. In the second AI winter, expert systems revealed intrinsic brittleness from the knowledge acquisition bottleneck, where encoding domain expertise into exhaustive rule sets proved labor-intensive and incomplete; systems like MYCIN (1970s) required hundreds of rules for narrow medical diagnostics but failed to generalize due to unmodeled exceptions and the frame problem—the difficulty of specifying what remains unchanged in state updates. This scalability limit, compounded by maintenance overhead for large knowledge bases, underscored symbolic AI's causal shortfall in mimicking human expertise without ad hoc expansions, hastening the 1987–1993 downturn as deployments stagnated.
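
The scale of the combinatorial explosion can be made concrete with a back-of-the-envelope calculation. The sketch below is illustrative only: the branching factor of roughly 35 legal moves per chess position and the assumed one million node evaluations per second are rough figures chosen to show the shape of the problem, not values from the sources above.

```python
# Illustrative sketch: exhaustive game-tree search outgrows any fixed hardware
# budget. Branching factor and evaluation rate are rough assumptions.

BRANCHING_FACTOR = 35          # approximate legal moves per chess position
NODES_PER_SECOND = 1_000_000   # generous assumption, far beyond 1970s hardware

for depth in range(2, 12, 2):
    nodes = BRANCHING_FACTOR ** depth              # positions examined to this depth
    years = nodes / NODES_PER_SECOND / (3600 * 24 * 365)
    print(f"depth {depth:2d}: {nodes:.2e} nodes, about {years:.2e} years of search")
```

Even at an evaluation rate far beyond what 1960s-1970s machines could sustain, exhaustively searching ten plies would take decades of compute, and each additional pair of plies multiplies the cost by more than a thousand, which is why effective heuristics rather than faster hardware alone were the missing ingredient.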

Extrinsic Influences: Policy, Funding, and Market Dynamics

Government policy critiques of AI research in the mid-1960s and 1970s, exemplified by the Automatic Language Processing Advisory Committee (ALPAC) report released on November 1, 1966, highlighted the high costs and limited progress in machine translation, deeming machine translation systems uneconomical at $9 to $66 per 1,000 words compared to human translators, which prompted U.S. federal agencies to slash funding for such projects. Similarly, the Lighthill Report of 1973 in the UK dismissed AI's potential for advanced automation and robotics, arguing that combinatorial explosion and knowledge representation issues rendered the field unproductive, influencing the Science Research Council to terminate most university AI grants. These policy assessments, driven by demands for immediate practical outcomes amid fiscal pressures, shifted official stances toward skepticism, curtailing institutional support and signaling to broader stakeholders that AI promises were overstated. Funding reductions followed directly from these policy pivots, with the U.S. Defense Advanced Research Projects Agency (DARPA) exemplifying the trend by dropping annual allocations from approximately $30 million in the early 1970s to near zero by 1974, as agency leaders prioritized verifiable milestones over exploratory work after projects like speech understanding failed to deliver. In the UK, post-Lighthill, budgets evaporated, confining research to a handful of centers while most universities disbanded programs. During the second winter, DARPA's Strategic Computing Initiative, launched in 1983 with over $1 billion invested by 1993 across 92 projects, faced scaling back in the late 1980s due to unmet goals in autonomous systems, compounded by broader Reagan-era defense reallocations that diminished AI-specific outlays. Japan's Fifth Generation Computer Systems project (1982–1992), backed by the Ministry of International Trade and Industry with industry contributions, similarly faltered as technical shortfalls in parallel inference hardware led to program curtailment without commercial viability, eroding confidence in state-led investments. Market dynamics amplified these pressures through commercial underperformance, particularly in the late 1980s when the Lisp machine sector—specialized hardware optimized for symbolic processing—collapsed as general-purpose workstations from Sun Microsystems and others undercut prices, rendering bespoke AI machines obsolete without sufficient sales volume to offset development costs. Expert systems, hyped for rule-based decision-making in specialized domains, stagnated commercially due to brittleness in handling novel scenarios and the "knowledge acquisition bottleneck," where encoding domain expertise proved labor-intensive and non-scalable, leading firms to abandon deployments after initial pilots yielded marginal returns. These failures eroded investor appetite, as venture capital and corporate R&D shifted away from AI amid evidence that specialized tools lacked adaptability to real-world variability, reinforcing funding droughts by demonstrating insufficient returns despite prior hype.

First AI Winter (Mid-1960s to Late 1970s)

Early Overoptimism in Machine Translation and the ALPAC Report (1966)

In the early 1950s, amid Cold War imperatives to decipher Soviet documents, U.S. researchers pursued machine translation (MT) as a promising application of early computing. A pivotal demonstration occurred on January 7, 1954, when Georgetown University and IBM showcased a system translating 60 simple Russian sentences—restricted to a 250-word vocabulary in the domain of organic chemistry—into English using rule-based algorithms on an IBM 701 computer. This limited experiment, involving direct word-for-word substitution with basic syntactic rules and no handling of ambiguity or context, generated widespread media acclaim and overoptimistic forecasts, such as predictions from participants that fully automatic high-quality MT could be achieved within three to five years. The 1954 demonstration fueled a surge in federal funding, with the U.S. government allocating over $20 million to MT projects by the mid-1960s through agencies such as the National Science Foundation and the Department of Defense, expecting rapid operational systems for intelligence purposes. However, progress faltered as researchers confronted inherent linguistic challenges, including semantic ambiguity, idiomatic expressions, and the need for deep semantic understanding, which rule-based methods—lacking robust syntactic analysis or contextual knowledge—could not adequately address despite computational advances. By the early 1960s, systems remained brittle, producing literal but often nonsensical outputs, revealing the overreliance on simplistic grammars and the underestimation of natural language's combinatorial complexity. In response to these discrepancies between hype and results, the U.S. National Research Council convened the Automatic Language Processing Advisory Committee (ALPAC) in 1964 to evaluate MT's viability. The committee's report, Languages and Machines: Computers in Translation and Linguistics, released on November 1, 1966, assessed that after a decade of effort, MT had not yielded usable systems for unrestricted text, deeming full automation uneconomical and technically distant due to unresolved issues in syntax, semantics, and context. ALPAC recommended curtailing large-scale development funding—projected at up to $30 million annually—and redirecting resources to fundamental research in computational linguistics and computation, critiquing the field's isolation from broader theoretical advances. The report's publication precipitated a near-total collapse in MT funding, dropping U.S. support from several million dollars yearly to under $2 million annually within a few years, eroding researcher morale and signaling the onset of the first AI winter by exposing how domain-specific optimism had masked broader limitations. This retrenchment underscored a recurring dynamic in which initial successes in narrow tasks engendered exaggerated timelines, detached from the causal barriers of generalizing across unstructured language data.

Limitations of Perceptrons and Single-Layer Neural Networks (1969)

In 1969, Marvin Minsky and Seymour Papert published Perceptrons: An Introduction to Computational Geometry, a mathematical analysis that exposed the fundamental shortcomings of single-layer perceptrons, the prevailing neural network architecture at the time. Single-layer perceptrons, pioneered by Frank Rosenblatt in 1957–1958, operated as linear threshold units capable of classifying inputs only if they were linearly separable in the input space, limiting their utility to simple decision boundaries. Minsky and Papert employed geometric and algebraic proofs to demonstrate that these networks could not represent or learn certain basic Boolean functions, such as the XOR (exclusive-or) operation, which requires a nonlinear separation of inputs—outputting true only when the inputs differ (true for (0,1) and (1,0), false otherwise). The authors further proved broader constraints, including the inability of perceptrons to perform parity checks on more than a fixed number of inputs or to detect connectedness in patterns without exhaustive scaling of units, which rendered them computationally inefficient for complex pattern recognition. These limitations stemmed from the perceptron's reliance on a single layer of weighted sums followed by a threshold, precluding the hierarchical feature extraction possible in deeper architectures. Although Minsky and Papert acknowledged that multilayer networks might in principle circumvent some issues, they argued that training such systems lacked viable algorithms given the era's computational constraints, emphasizing instead the need for symbolic, rule-based approaches to AI. The book's rigorous demonstrations shattered earlier hype around perceptrons as a path to general machine intelligence, as promoted by Rosenblatt's claims of scalable learning machines. This led to a rapid contraction in neural network research funding, particularly from U.S. agencies like the Advanced Research Projects Agency (ARPA), redirecting resources toward symbolic AI paradigms and contributing to the onset of the first AI winter by the early 1970s. Despite isolated continuations in connectionist work, the perceived theoretical dead end stifled innovation in biologically inspired networks for over a decade, until advances in backpropagation and hardware in the 1980s revived interest.
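
The linear-separability limit at the center of Minsky and Papert's analysis can be demonstrated empirically. The sketch below is illustrative rather than a reproduction of their proofs: it runs a Rosenblatt-style perceptron learning rule on two Boolean tasks and shows that the identical procedure which quickly fits AND never settles on XOR, because no single weighted threshold separates XOR's positive and negative cases.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic single-layer perceptron rule; returns whether it converged."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)
            w, b = w + update * xi, b + update
            errors += int(update != 0)
        if errors == 0:            # a separating hyperplane was found
            return True
    return False                   # no convergence within the epoch budget

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print("AND converged:", train_perceptron(X, np.array([0, 0, 0, 1])))  # True
print("XOR converged:", train_perceptron(X, np.array([0, 1, 1, 0])))  # False
```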

Lighthill Report Critique and UK Funding Collapse (1973–1974)

In 1973, the Science Research Council (SRC) commissioned Sir James Lighthill, a professor of applied mathematics at Cambridge and an outsider to artificial intelligence, to evaluate the state of AI research in the United Kingdom. His report, titled "Artificial Intelligence: A General Survey," classified AI efforts into three categories: Category A (advanced automation), Category B (bridging activities such as building robots), and Category C (computer-based studies of the central nervous system). Lighthill critiqued Category A for failing to achieve general applicability due to combinatorial explosions in problems such as translation, game playing, and theorem proving, where computational demands outstripped available resources despite early optimism. Category B was dismissed as lacking coherence and practical achievement, rooted in unrealistic science-fiction-inspired expectations rather than rigorous engineering, while Category C showed limited generalization beyond specialized psychological insights, with overstated claims about replicating brain functions. Lighthill's analysis emphasized that AI's foundational approaches, reliant on heuristic search and symbolic manipulation, encountered intrinsic barriers like the intractability of searching vast state spaces, which invalidated promises of human-level intelligence in the near term. He recommended selective support for promising subareas in Categories A and C, such as targeted automation and neuroscientific modeling, while cautioning against broad investment in Category B and advocating for infrastructure like PDP-10 computers to nurture talent over the next 5–7 years, but framed these within a broader skepticism toward expansive AI ambitions. The report's publication in Artificial Intelligence: A Paper Symposium provoked immediate controversy, including a televised debate on 14 October 1973 featuring Lighthill against AI proponents like Donald Michie, who accused the assessment of being hasty and disconnected from ongoing empirical work in pattern recognition and machine learning. The SRC largely endorsed Lighthill's pessimistic conclusions, viewing them as evidence of overpromising relative to delivered results, and responded by drastically redirecting resources away from AI. By 1974, this led to the collapse of most public funding for AI research, with the government dismantling key programs and laboratories, including significant cuts to institutions like the University of Edinburgh's AI department under Michie, where projects such as the Freddy robot assembly system lost support. Funding for AI-specific initiatives evaporated, shifting priorities toward more immediately applicable computing fields, which contributed to a decade-long setback in British AI development and marked the onset of the first AI winter in the United Kingdom. This retrenchment reflected causal realism in policy: empirical underdelivery amid hype justified reallocating scarce resources, though critics like Michie argued it stifled incremental progress in specialized domains.

US DARPA Cuts, SUR Project Failure, and Broader Retrenchment

The Speech Understanding Research (SUR) program, initiated by the U.S. Defense Advanced Research Projects Agency (DARPA) in 1971, allocated approximately $3 million per year over five years to advance continuous speech recognition technology toward systems capable of handling a minimum vocabulary of 1,000 words in constrained domains, with goals including speaker-independent processing and natural dialogue interaction. Major contractors such as Carnegie Mellon University (developer of the Harpy system, which achieved recognition of 1,011 words at around 90% accuracy in limited, grammar-constrained scenarios), Bolt Beranek and Newman, and Stanford Research Institute pursued approaches like beam search, acoustic-phonetic decoding, and blackboard architectures for hypothesis resolution. Despite these technical outputs, the systems demonstrated persistent limitations, including heavy reliance on predefined grammars, poor generalization to unrestricted speech variability, and inadequate handling of semantic context or accents, falling short of the program's ambitious benchmarks for practical, deployable performance. DARPA research managers and participants ultimately regarded SUR as a failure for not delivering transformative capabilities in speech understanding, leading to the program's termination in 1976. This outcome crystallized broader disillusionment with AI's near-term prospects, particularly amid concurrent critiques like the 1969 perceptron limitations and mounting evidence of computational and representational hurdles in symbolic AI paradigms. In response, DARPA enacted sharp funding reductions for AI initiatives starting in 1974, slashing budgets from early-1970s peaks (when the agency supported millions in annual AI grants) to trough levels by 1975, as new director George Heilmeier prioritized measurable, engineering-oriented projects over exploratory research. These cuts dismantled ongoing efforts in machine intelligence, with federal AI support contracting agency-wide and cascading to academic and industrial partners dependent on grants. The retrenchment extended beyond SUR, triggering a systemic pullback in U.S. AI investment that idled researchers, closed specialized labs, and deterred new entrants to the field, as principal investigators struggled to secure alternative funding amid skepticism from policymakers and funders. By the late 1970s, DARPA's pivot—coupled with fiscal pressures and demands for demonstrable returns—had reduced overall research momentum, contributing decisively to the first winter's stagnation through the early 1980s, though isolated narrow-domain applications persisted. This episode underscored DARPA's pivotal role as a funder, where high-profile shortfalls amplified extrinsic pressures on the discipline.

Second AI Winter (Late 1980s to Mid-1990s)

Lisp Machine Market Crash and Hardware Overspecialization

The Lisp machine, a specialized workstation optimized for executing Lisp code with hardware support for features like garbage collection, tagged memory, and list manipulation, emerged in the late 1970s from MIT's AI Lab and became central to symbolic AI development in the 1980s. Companies such as Symbolics (founded 1980) and Lisp Machines Inc. (LMI, founded 1979) produced these systems, which powered expert systems and AI research by offering superior performance for AI workloads compared to general-purpose hardware of the era. Texas Instruments also entered with its Explorer line, targeting AI applications in defense and industry. By the mid-1980s, the Lisp machine market had grown amid AI optimism, with Symbolics achieving peak revenue of $101.6 million in 1986, driven by demand from AI labs, corporations, and government-funded projects like the Strategic Computing Initiative. However, the market collapsed abruptly in 1987, as an industry valued at approximately half a billion dollars evaporated within a year, replaced by cheaper general-purpose workstations. Symbolics' revenue fell to $82.1 million in 1987 and $55.6 million in 1988, reflecting a sharp decline in sales amid broader AI retrenchment. LMI filed for bankruptcy in 1987, and other vendors faltered soon afterward as they failed to pivot. This crash stemmed primarily from hardware overspecialization, which locked the machines into a narrow niche while rendering them uncompetitive against rapidly advancing general-purpose systems. Lisp machines, priced from $36,000 to $125,000 for models like the Symbolics 3600, incorporated custom microcode and processors tailored to Lisp primitives, but lacked versatility for non-AI tasks and compatibility with standard networking or peripherals. In contrast, general-purpose workstations from Sun Microsystems (e.g., the Sun-3 series at around $14,000) and others like Apollo, equipped with efficient Lisp compilers, delivered comparable AI performance at a fraction of the cost by the late 1980s, as rapid gains in commodity processors amplified general CPU speeds. The absence of broadly applicable "killer applications" beyond AI prototyping—coupled with waning defense funding amid shifting priorities—exposed the fragility of this specialized ecosystem, as customers shifted to multipurpose hardware supporting diverse workloads. The fallout accelerated the second AI winter by eroding investor confidence in AI-specific hardware, prompting a pivot to software-only approaches on commodity machines and highlighting the risks of domain-specific specialization without scalable demand. This event underscored how technical advantages in niche domains can evaporate when general hardware trajectories outpace specialized innovations, a pattern observed in subsequent AI ventures.

Expert Systems Deployment Stagnation and Knowledge Acquisition Bottlenecks

During the 1980s, expert systems—rule-based programs designed to emulate human decision-making in specialized domains—faced severe limitations in scaling beyond prototype stages, primarily due to the knowledge acquisition bottleneck, where eliciting, structuring, and encoding tacit expertise from domain specialists proved extraordinarily labor-intensive and error-prone. Pioneering AI researcher Edward Feigenbaum highlighted this issue as early as 1983, noting that while initial systems like DENDRAL (developed in the late 1960s) succeeded with hundreds of rules, expanding to thousands required disproportionate effort from knowledge engineers, often taking years per project and yielding incomplete or inconsistent rule bases. This bottleneck arose because human experts struggled to articulate intuitive judgments explicitly, leading to knowledge gaps that rendered systems brittle outside narrow, predefined scenarios. Deployment stagnation ensued as these systems failed to transition reliably from academic or pilot environments to widespread commercial or industrial use, with high development costs—often exceeding millions of dollars—and maintenance demands deterring adoption. For instance, the medical diagnostic system MYCIN, which demonstrated 69% accuracy in tests by 1979, was never deployed in clinical practice due to physicians' reluctance to trust opaque rule chains and the impracticality of updating its 450+ rules amid evolving medical knowledge. Similarly, while Digital Equipment Corporation's XCON (R1) configured VAX computer systems profitably from 1980 to 1986, saving an estimated $40 million annually, its expansion stalled as rule proliferation (reaching over 10,000 rules by the mid-1980s) amplified verification challenges and integration issues with dynamic real-world variables. By the late 1980s, surveys indicated that over 80% of expert system initiatives across industries were abandoned post-prototype, exacerbating investor disillusionment as promised productivity gains evaporated. These intertwined challenges—knowledge elicitation inefficiencies and deployment hurdles—culminated in a broader retrenchment, with expert system shell vendors collapsing and funding for symbolic AI drying up by 1987–1988, as enterprises shifted toward cheaper, more flexible alternatives like conventional software. The inability to automate knowledge acquisition itself, despite attempts via tools like ETS in the early 1980s, underscored fundamental limits, where rule-based architectures demanded manual intervention that outpaced gains in hardware. This stagnation not only idled billions in sunk investments but also eroded confidence in knowledge-intensive AI paradigms, paving the way for probabilistic methods in the 1990s.
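
To make the notion of a rule-based architecture concrete, the toy forward-chaining engine below mimics the inference loop an expert system shell provides. The three rules are invented placeholders rather than rules from MYCIN or XCON; the structural point is that every situation the system should cover needs an explicit, hand-written rule, which is why verification and maintenance costs grew with the rule count.

```python
# Toy forward-chaining rule engine; the rules are invented for illustration.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"}, "recommend_xray"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                        # fire rules until a fixed point is reached
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # rule fires and asserts its conclusion
                changed = True
    return facts

print(forward_chain({"fever", "cough", "chest_pain"}, RULES))
# Inputs the rule author never anticipated simply fall through: the engine has
# no graceful degradation outside its encoded rules, i.e. it is brittle.
```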

Fifth Generation Computer Systems Project Demise (Japan, 1982–1992)

The Fifth Generation Computer Systems (FGCS) project, launched in April 1982 under Japan's Ministry of International Trade and Industry (MITI), represented a ¥50 billion (approximately $400 million USD at contemporary exchange rates) decade-long effort to pioneer computers leveraging massive parallel inference machines and logic programming paradigms, primarily Prolog-based, for advanced knowledge processing and reasoning tasks. Coordinated through the newly established Institute for New Generation Computer Technology (ICOT), the initiative involved major Japanese computer firms such as Fujitsu, Hitachi, NEC, and Mitsubishi Electric, aiming to deliver by 1992 prototypes with capabilities such as 1 giga logical inferences per second (GIPS), handling of large knowledge bases, and natural language interfaces surpassing fourth-generation systems. Early phases yielded foundational outputs, including the Personal Sequential Inference (PSI) machine in 1984 and the Kernel Language KL0 prototype by 1986, which explored concurrent logic programming to enable parallelism. However, scaling these to the targeted multi-processor systems proved intractable due to inherent inefficiencies in logic programming resolution—such as backtracking overhead and unification bottlenecks—that hindered efficient parallel execution on thousands of processors, as backtracking did not distribute readily across nodes without prohibitive communication costs. Midway assessments around 1987 highlighted these shortfalls, with inference speeds lagging targets by orders of magnitude and software architectures struggling to abstract hardware complexities without sacrificing performance. By the late 1980s, external factors compounded internal hurdles: Japan's bubble economy began deflating in 1989, straining public R&D commitments, while global computing shifted toward cost-effective RISC architectures and CISC optimizations that prioritized general-purpose scalability over specialized AI hardware. The project's bet on symbolic, deductive AI overlooked emerging evidence of knowledge acquisition bottlenecks and the brittleness of rule-based systems in real-world variability, failing to produce deployable applications beyond niche prototypes like the EDR electronic dictionary for natural language processing. Funding pressures led to scope reductions, with parallel machine goals deferred in favor of software refinements, such as the KL1 language finalized in 1991. The project formally concluded in February 1992 without achieving its core vision of fifth-generation machines dominating commercial markets, as no systems matched the promised inference throughput or integrated AI functionalities at viable costs, rendering them uncompetitive against U.S.-led advances in workstations and early neural approaches. Post-mortem analyses attributed the demise to overreliance on unproven paradigms, underestimation of the persistence of knowledge acquisition bottlenecks, and the qualitative challenges of knowledge processing—such as the absence of scalable algorithms—rather than mere quantitative hardware deficits. While spin-offs influenced later concurrent programming research, the FGCS's inability to yield economic returns—despite full budget expenditure—exemplified the risks of centralized, hype-driven mega-projects, eroding international confidence in symbolic AI and precipitating funding retrenchments in the second AI winter.

Strategic Computing Initiative Reductions (US, 1983–1993)

The Strategic Computing Initiative (SCI), initiated by DARPA in 1983, sought to integrate advanced hardware architectures, software, and artificial intelligence technologies to enable machine intelligence for military purposes, such as autonomous land vehicles, pilot's associates, and battle management systems. The program targeted breakthroughs in computer vision, natural language understanding, speech understanding, and expert systems, with an overall budget exceeding $1 billion allocated over its decade-long span through 1993. Initial optimism stemmed from competitive pressures, including Japan's Fifth Generation project, positioning SCI as a U.S. response to advance computing speeds and AI scalability for defense applications. Reductions commenced in 1985 amid fiscal pressures, including a $47.5 million funding cut and broader reductions under the Gramm-Rudman-Hollings Act, which mandated $11.7 billion in fiscal year 1986 trims, with the Department of Defense absorbing half. A leadership shift that year to Saul Amarel prompted a refocus, as he expressed skepticism about achieving generic AI capabilities applicable across domains, emphasizing instead incremental advancements in specialized systems. These adjustments reflected early recognition of technical hurdles, such as difficulties in scaling beyond controlled environments, though the program continued with redirected priorities toward nearer-term elements. By 1987, flagship efforts like the autonomous land vehicle project were abandoned due to persistent shortfalls in integrating vision, navigation, and decision-making under real-world conditions, compounded by Reagan administration budget constraints. New spending on AI under the initiative was canceled in 1988, marking a pivotal contraction as leadership determined that rapid breakthroughs in human-level machine intelligence were unattainable within the timeframe. The initiative formally concluded in 1993 without delivering its core vision of autonomous military systems, contributing to disillusionment in AI research funding and exacerbating the second AI winter through demonstrated gaps between ambitious goals and empirical progress. Despite setbacks, the program yielded ancillary advances in areas such as parallel computing and autonomous navigation, though these were overshadowed by its unfulfilled promises.

Inter-Winter Developments and Paradigm Shifts

Transition to Statistical and Probabilistic Approaches

The limitations of symbolic AI, particularly the brittleness of rule-based systems and the insurmountable knowledge acquisition bottlenecks exposed during the second AI winter, prompted a shift toward statistical and probabilistic methods in the late 1980s and early 1990s. These approaches prioritized empirical learning from data over hand-engineered logic, enabling systems to handle real-world uncertainty through probability distributions rather than deterministic rules. This transition was driven by recognition that symbolic methods failed to scale with complex, noisy inputs, whereas statistical techniques could infer patterns probabilistically, leveraging growing computational power and datasets. A foundational advance was Judea Pearl's development of Bayesian networks in the mid-1980s, formalized in his 1988 book Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. These graphical models represented joint probability distributions efficiently, allowing AI systems to update beliefs based on evidence via algorithms like belief propagation, which addressed the computational intractability of full probabilistic inference in symbolic frameworks. Pearl's work shifted AI toward viewing probabilities as degrees of belief, enabling applications in diagnostics and decision-making under uncertainty, and marked a departure from the crisp logic of earlier eras. In speech recognition, the adoption of hidden Markov models (HMMs)—statistical tools for sequential data modeling—gained traction from the 1970s but achieved breakthrough success in the late 1980s, reviving interest at the decade's turn. DARPA-funded benchmark programs in the late 1980s and early 1990s demonstrated rapid error rate reductions, with statistical methods outperforming rule-based alternatives by integrating acoustic probabilities and language models, leading to deployable systems achieving under 10% word error rates on controlled vocabularies by the mid-1990s. Similarly, machine translation pivoted to statistics with IBM researchers' 1990 paper introducing noisy channel models, which treated translation as probabilistic alignment of source and target texts using parallel corpora, yielding viable French-to-English systems without exhaustive rule sets. This approach, refined through IBM Models 1–5 by 1993, demonstrated that data-driven fertility and alignment probabilities could approximate translation quality, contrasting with the post-ALPAC rule-based stagnation. By 1992, statistical methods had become mainstream in natural language processing, fueled by large corpora (e.g., IBM's analysis of corpora exceeding 100 million words) and declining compute costs, enabling probabilistic context-free grammars and maximum entropy models for parsing and disambiguation. These developments underscored the causal realism of empirical validation: statistical systems succeeded where symbolic ones faltered because they adapted to data distributions rather than assuming perfect knowledge encoding. The shift restored modest funding and confidence, paving the way for sustained narrow AI progress without the hype of prior eras, as evidenced by practical deployments in filtering and prediction tasks by the late 1990s.
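
A minimal sketch of the hidden Markov model machinery behind the statistical turn in speech recognition is shown below. It implements the standard forward algorithm, which computes the likelihood of an observation sequence by summing over all hidden-state paths in time proportional to the sequence length rather than exponential in it; the two-state model and its probabilities are invented for illustration.

```python
import numpy as np

pi = np.array([0.6, 0.4])               # initial state distribution (invented)
A  = np.array([[0.7, 0.3],              # A[i, j] = P(next state j | current state i)
               [0.4, 0.6]])
B  = np.array([[0.5, 0.4, 0.1],         # B[i, k] = P(observation k | state i)
               [0.1, 0.3, 0.6]])

def forward(observations):
    """Forward algorithm: total probability of an observation sequence."""
    alpha = pi * B[:, observations[0]]  # alpha[i] = P(first observation, state i)
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs] # propagate one step, weight by emission
    return alpha.sum()

print(forward([0, 1, 2, 1]))            # likelihood of a toy observation sequence
```

In a speech recognizer the same recursion is run for candidate word or phone sequences, and the resulting acoustic likelihoods are combined with a language model to select the most probable transcription.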

Sustained but Understated Progress in Narrow AI Applications

Despite the disillusionment following the second AI winter, targeted advancements in machine learning techniques enabled practical deployments in specialized domains, emphasizing statistical models over symbolic reasoning. These efforts, often conducted in industry rather than academia, focused on probabilistic methods like hidden Markov models (HMMs) and neural networks for pattern recognition, yielding reliable tools without the overpromising of prior eras. Such progress was incremental and application-specific, prioritizing measurable performance on real-world data over general intelligence. In speech recognition, HMMs facilitated the transition from rule-based to statistical approaches, supporting commercial viability by the early 1990s. Dragon Dictate, released in 1990, marked the first general-purpose consumer dictation software, recognizing discrete, pause-separated speech after basic vocabulary training; fully continuous dictation at up to 100 words per minute followed with Dragon NaturallySpeaking in 1997. This built on DARPA-funded research from the 1970s and 1980s but achieved practical utility through data-driven refinements, enabling applications in transcription and accessibility tools without fanfare. Financial services saw understated integration of neural networks for fraud detection and risk modeling. FICO's Falcon system, launched in 1992, employed neural networks to analyze transaction patterns in real time, reducing false positives in credit card fraud by adapting to evolving schemes. Similarly, banks like Security Pacific National adopted AI-driven anomaly detection for debit card misuse as early as 1990, processing volumes unattainable by manual methods. In quantitative finance, machine learning techniques for predictive modeling and credit scoring emerged in the 1990s, leveraging available transaction data for portfolio optimization, though initial enthusiasm was tempered by validation challenges. Document processing advanced via optical character recognition (OCR), where accuracy gains from pattern-matching algorithms supported widespread digitization. The 1990s proliferation of personal computers drove OCR adoption for scanning printed text into editable formats, with systems like those tested by NIST achieving character error rates below 1% on clean documents by the mid-decade. Commercial tools, refined through annual accuracy evaluations, exceeded early expectations, facilitating archival projects and automated data entry in business workflows. By the early 2000s, Bayesian classifiers extended this pragmatism to email spam filtering, with Paul Graham's 2003 refinements enabling naive Bayes methods to classify spam with over 99% accuracy on personal datasets, influencing enterprise tools without broad hype. These narrow successes contrasted with stalled expert systems by relying on scalable data rather than exhaustive knowledge bases, laying groundwork for later expansions while evading the funding droughts of broader AI pursuits. Industry adoption prioritized deployable metrics, such as error rates under operational constraints, fostering quiet innovation amid skepticism.
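
The spirit of those early Bayesian spam filters can be conveyed with a toy naive Bayes classifier over word counts, sketched below. The six training "messages" are invented stand-ins for a real corpus, and production filters added refinements such as token weighting and decision thresholds.

```python
import math
from collections import Counter

spam_docs = ["win cash now", "cheap loans win big", "claim your prize now"]
ham_docs  = ["meeting moved to noon", "lunch with the team", "draft report attached"]

def word_counts(docs):
    return Counter(word for doc in docs for word in doc.split())

spam_counts, ham_counts = word_counts(spam_docs), word_counts(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_score(message, counts, prior):
    total = sum(counts.values())
    score = math.log(prior)
    for word in message.split():
        # Laplace smoothing so unseen words do not zero out the probability
        score += math.log((counts[word] + 1) / (total + len(vocab)))
    return score

def classify(message, p_spam=0.5):
    spam_score = log_score(message, spam_counts, p_spam)
    ham_score  = log_score(message, ham_counts, 1 - p_spam)
    return "spam" if spam_score > ham_score else "ham"

print(classify("win a cash prize now"))   # -> spam
print(classify("team meeting at noon"))   # -> ham
```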

Recent Boom and Emerging Risks (2010s–2025)

Deep Learning Surge and Private Sector Investment Explosion

The resurgence of deep learning began with the AlexNet model's victory in the ImageNet Large Scale Visual Recognition Challenge on September 30, 2012, where it reduced the top-5 error rate to 15.3%—a 10.8 percentage-point improvement over the runner-up—demonstrating the power of large-scale convolutional neural networks trained via backpropagation on graphics processing units (GPUs). This achievement, led by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton from the University of Toronto, overcame prior computational and vanishing gradient challenges, validating deep architectures for complex pattern recognition tasks like image classification and sparking a shift away from shallow models and hand-engineered features. The model's end-to-end supervised training, combined with ReLU activations and dropout regularization, enabled training on datasets exceeding 1 million images, influencing subsequent architectures in computer vision and beyond. Subsequent milestones amplified the surge, including the 2014 development of generative adversarial networks (GANs) by Ian Goodfellow and colleagues, which enabled realistic data synthesis, and the 2017 introduction of the transformer architecture by Vaswani et al. at Google, revolutionizing sequence modeling for natural language processing through self-attention mechanisms that scaled efficiently with compute and data. These advances, fueled by exponential growth in compute power—doubling roughly every 3.4 months from 2012 to 2018 per OpenAI estimates—and vast datasets from the internet, extended deep learning's dominance to speech recognition, machine translation, and autonomous systems. By the late 2010s, deep learning underpinned breakthroughs like AlphaGo's 2016 victory over human Go champions, showcasing end-to-end learning without domain-specific heuristics. Private sector investment in AI exploded alongside these technical gains, shifting funding dominance from government programs that faltered in prior winters to venture capital and corporate R&D, with annual private investments rising from under $1 billion in 2010 to approximately $100 billion globally by 2021, per aggregated venture data. Tech giants like Google, Microsoft, and Amazon poured billions into in-house labs—e.g., Google's DeepMind acquisition in 2014 for $500 million—while startups attracted record backing, such as Microsoft's $1 billion investment in OpenAI in 2019, later expanded to a reported $10 billion in 2023. The November 30, 2022, public release of ChatGPT, based on GPT-3.5 large language models, catalyzed a further surge in generative AI funding, which reached $33.9 billion in private investment for that sector alone in 2024—an 18.7% increase from 2023 and over eightfold from 2022 levels—driven by applications in content creation, coding assistance, and enterprise automation. This influx, part of annual global startup funding exceeding $300 billion by 2024 across heavily overlapping tech sectors, reflected expectations of trillion-dollar market opportunities but also concentrated risks in scaling infrastructure like data centers. Venture capital funding for AI startups similarly escalated, with generative AI attracting approximately $45 billion in 2024, nearly doubling the $24 billion recorded in 2023; in the first half of 2025 alone, generative AI funding surpassed that full-year total, comprising over 50% of global venture allocations and reflecting sustained investor enthusiasm amid competitive pressures. Big technology firms amplified this trend through unprecedented capital expenditures on AI infrastructure: Meta, Alphabet, Amazon, and Microsoft collectively planned $320 billion to $400 billion in 2025 spending, primarily for data centers, chips, and related hardware to support model training and deployment.
This surge, equivalent to a significant portion of national defense budgets in some contexts, underscored the oligopolistic concentration of AI advancement in a handful of entities, with Amazon projecting $100 billion and Microsoft $80 billion individually. Despite the influx, economic pressures mounted, including elevated borrowing costs from sustained high interest rates, which strained financing for AI-dependent expansions. Annual depreciation on new AI facilities reaching $40 billion by late 2025 highlighted the capital intensity of the buildout, as facilities required massive upfront outlays with delayed revenue realization. Venture investors grew more selective, prioritizing proven revenue models over speculative ventures, while reports indicated 95% of enterprise AI pilot projects failed to deliver meaningful returns despite billions invested. Concerns over an AI investment bubble intensified, with analysts warning of overinvestment risks akin to historical manias, potentially amplified by geopolitical imperatives driving inefficient capital allocation. Prominent industry executives cautioned that excessive inflows could lead to losses for overzealous backers, echoing patterns where hype outpaces commercial viability. These dynamics propped up broader economic growth temporarily—AI-related spending offsetting consumer slowdowns—but raised questions about long-term sustainability absent transformative productivity gains.
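
To make the self-attention mechanism credited above concrete, the sketch below implements single-head scaled dot-product attention in NumPy, omitting the learned query/key/value projections and multi-head machinery of a real transformer; the token embeddings are random placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)     # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # A real transformer learns separate Q, K, V projections of X; using X
    # directly keeps the sketch short while preserving the core computation.
    Q = K = V = X
    scores = Q @ K.T / np.sqrt(X.shape[-1])     # pairwise token-to-token affinities
    weights = softmax(scores, axis=-1)          # each row is a distribution over tokens
    return weights @ V                          # every output mixes all value vectors

tokens = np.random.default_rng(0).normal(size=(4, 8))   # 4 tokens, 8-dim embeddings
print(self_attention(tokens).shape)                      # (4, 8)
```

Because every token attends to every other token through a single matrix multiplication, the computation parallelizes well on GPUs, which is a large part of why the architecture scaled with the compute buildout described in this section.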

Signs of Diminishing Returns: Data, Energy, and Scalability Limits

As large language models and other AI systems have scaled to trillions of parameters, empirical observations indicate diminishing marginal returns in performance gains relative to increased computational resources, data volumes, and energy inputs, challenging the sustainability of pure scaling approaches. Scaling laws, initially formulated by researchers at OpenAI, predicted predictable improvements in model loss and capabilities with exponential increases in compute and data, but recent analyses show these laws bending toward saturation, with post-training performance plateaus on benchmarks despite massive investments. For instance, from 2023 to 2025, frontier models from leading labs achieved incremental benchmark improvements—such as 5-10% gains on tasks like MMLU—while requiring 10x or more compute compared to predecessors, signaling inefficiencies. Data availability poses a primary bottleneck, as the stock of high-quality, human-generated text for training is nearing exhaustion under current trends. Epoch AI estimates that language models will consume the entirety of publicly available human-generated text data—approximately 100-300 trillion tokens—between 2026 and 2032, assuming historical growth rates of roughly 4x annual increases in training dataset sizes continue. This projection aligns with observations that state-of-the-art models already utilize datasets exceeding 10 trillion tokens, with synthetic data generation offering partial mitigation but introducing risks of model collapse from amplified errors. High-quality sources, such as filtered web crawls, are depleting faster, forcing reliance on lower-quality or proprietary data, which yields suboptimal scaling. Energy demands exacerbate scalability constraints, with training a single frontier model consuming electricity equivalent to thousands of households over weeks or months. Estimates for GPT-4-scale training in 2023 ranged from 1-10 GWh, scaling to projected 100+ GWh for 2025-era models due to larger parameter counts and longer training runs, equivalent to the annual output of small power plants. Inference phases, often overlooked, amplify this: deploying models like GPT-4o across millions of queries daily requires data centers drawing continuous power in the megawatt-to-gigawatt range, contributing to grid strains and carbon emissions exceeding those of some countries. Efficiency optimizations, such as sparse training or quantization, recover only 20-30% of wasted power, insufficient to offset exponential growth in model sizes. Broader scalability limits manifest in compute inefficiencies and hardware bottlenecks, where additional resources yield progressively smaller capability uplifts. Chinchilla-optimal scaling suggested balanced data-compute ratios, yet post-2022 models deviate, with compute-optimal training for dense architectures hitting walls around 10^28-10^29 FLOP due to chip fabrication constraints and communication overheads in distributed systems. By mid-2025, leading labs reported that brute-force parameter scaling alone fails to deliver proportional gains, prompting shifts toward architectural innovations like mixture-of-experts or test-time compute, as marginal returns on raw compute investments diminish to near zero on saturated benchmarks. Energy and hardware constraints further cap feasible run sizes, with projections indicating that 10^30 FLOP trainings—envisaged for AGI-level systems—may remain infeasible before 2030 without breakthroughs in non-silicon substrates.
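
The diminishing-returns argument is often framed through the compute-optimal ("Chinchilla") parameterization of the neural scaling law. The functional form below is the commonly cited one; E, A, B, α, and β are empirically fitted constants whose exact values are not asserted here.

```latex
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Here L is the training loss, N the parameter count, D the number of training tokens, and E an irreducible loss floor. Because the reducible terms decay as power laws with small exponents, each successive doubling of parameters or data removes a smaller absolute amount of loss, so the curve flattens toward E, which is the formal counterpart of the benchmark saturation described above.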

Controversies and Alternative Interpretations

Debates on Overstatement: Hype Correction vs. Genuine Stagnation

Critics of the recent boom argue that diminished returns in large language models (LLMs) signal genuine stagnation rather than mere hype correction, pointing to evidence of plateauing performance despite exponential increases in compute and data. For instance, scaling laws, which predicted consistent gains from larger models, have shown signs of breakdown, with recent LLMs exhibiting only marginal improvements in benchmarks like reasoning tasks even as training costs soared into billions of dollars by 2024. AI researcher Gary Marcus has highlighted this trend, noting in October 2024 that generative AI usage may be declining amid failures to deliver on promises of robust reasoning, such as persistent hallucinations and unreliability in novel scenarios, suggesting fundamental architectural limits in transformer-based systems rather than solvable scaling issues. Proponents of the hype correction view counter that current challenges reflect overhyped expectations from media and industry rather than inherent stagnation, emphasizing sustained progress in specialized applications and the potential for paradigm shifts beyond pure scaling. Meta's Chief AI Scientist Yann LeCun has critiqued the "religion of scaling" as insufficient for true intelligence, arguing in May 2025 that AI requires objective-driven architectures for planning and causal understanding, but he maintains that incremental advances will continue without a full winter, drawing parallels to historical recoveries after disillusionment phases. Similarly, roboticist Rodney Brooks, reflecting on AI's cyclic history in his January 2025 predictions scorecard, attributes slowdown fears to exaggerated timelines for artificial general intelligence—such as claims of superintelligence by 2030—but observes steady embedding of AI in hybrid human-machine systems, like industrial automation, where practical gains persist despite unmet grand promises. The debate hinges on interpreting metrics like enterprise ROI and energy efficiency: stagnation advocates cite 2024-2025 reports of underwhelming productivity boosts from generative AI adoption, with U.S. data showing no aggregate labor displacement or output surge attributable to generative tools, implying a bubble nearing its burst. In contrast, correction proponents reference venture funding stability—over $100 billion in AI investments in 2024 alone—and niche successes, such as AlphaFold's protein-structure breakthroughs, as evidence that winters arise from broad hype mismatches, not core technological arrest, allowing for quieter, data-driven evolution post-2025. This tension underscores source credibility issues, as optimistic narratives often stem from industry insiders with equity stakes, while skeptics like Marcus emphasize peer-reviewed critiques of deep learning's brittleness, urging hybrid neuro-symbolic approaches over unexamined faith in scaling.

Fundamental Limits vs. Solvable Engineering Problems

The distinction between fundamental limits and solvable engineering problems has been central to understanding AI winters, where periods of stagnation often stemmed from overreliance on paradigms that hit inherent barriers, yet subsequent revivals demonstrated that many obstacles were addressable through methodological shifts and resource scaling. For instance, the first AI winter in the 1970s followed critiques like the Lighthill Report (1973), which highlighted the combinatorial explosion in symbolic AI systems unable to scale to real-world complexity without exponential resource demands—a challenge rooted in the frame and qualification problems, where systems failed to handle unforeseen scenarios without exhaustive rule specification. These issues appeared fundamental to rule-based approaches but were largely circumvented by the pivot to statistical and probabilistic methods in the 1990s, which traded exhaustive logic for data-driven approximations, enabling progress in speech recognition and machine translation despite noisy inputs. In the second winter of the late 1980s to early 1990s, expert systems collapsed under the knowledge acquisition bottleneck and maintenance costs, as encoding domain expertise proved labor-intensive and brittle outside narrow scopes, leading to funding cuts after DARPA's Strategic Computing Initiative scaled back in 1987 due to unmet milestones. Critics at the time argued these failures reflected deeper limits in representation and reasoning, yet engineering innovations—such as backpropagation for multi-layer neural networks, introduced effectively in 1986—overcame earlier limitations exposed in 1969, allowing hidden layers to approximate nonlinear functions and revive interest by the late 1980s. This pattern underscores that perceived fundamentals, like single-layer perceptrons' inability to solve XOR problems, were paradigm-specific rather than universal, resolvable via architectural and algorithmic refinements rather than impossible in principle. Contemporary debates frame current risks of stagnation not as insurmountable walls but as engineering hurdles amid scaling laws' diminishing returns, where performance gains per additional compute or data logarithmically taper, as observed in frontier-scale models by 2023. Proponents of fundamental limits, including skeptics like Gary Marcus, contend that transformer-based systems inherently struggle with systematic generalization, causal reasoning, and robustness to distribution shifts due to reliance on pattern matching over compositional understanding—evidenced by persistent failures in novel tasks despite trillion-parameter models. However, empirical evidence favors solvability: techniques like synthetic data generation, mixture-of-experts architectures, and inference-time compute (e.g., chain-of-thought prompting) have extended capabilities, with studies showing that long-horizon task automation yields outsized economic value even as benchmark losses plateau, suggesting adaptation through efficiency gains rather than abandonment. Energy and data scarcity, projected to constrain training by 2030 without nuclear-scale infrastructure, remain addressable via algorithmic compression and multimodal pretraining, mirroring how GPU parallelization resolved earlier compute bottlenecks. This engineering optimism is tempered by meta-awareness of hype cycles: past winters correlated with vendor overpromising (e.g., fifth-generation computer projects failing brittleness tests), while today's private-sector incentives prioritize measurable narrow AI gains over risky general intelligence pursuits, potentially averting deep freezes through diversified applications like protein structure prediction (AlphaFold, 2020) that validate incremental scaling.
Ultimately, while theorems such as the no-free-lunch results imply that no universal learner exists without domain priors, historical pivots, from logic to statistics to deep learning, indicate that AI trajectories hinge more on iterative problem-solving than on fixed impossibilities, with credibility favoring data-backed advances over speculative doomsaying from biased academic narratives.
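For reference, the no-free-lunch claim invoked above can be stated compactly in Wolpert and Macready's 1997 optimization form; the notation below follows their formulation.

```latex
% No-free-lunch theorem (Wolpert & Macready, 1997), optimization form:
% for any pair of algorithms a_1, a_2 and any number of evaluations m,
\sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_1\right)
  \;=\; \sum_{f} P\!\left(d^{y}_{m} \mid f, m, a_2\right),
% where the sum runs over all objective functions f on a finite domain and
% d^{y}_{m} is the sequence of cost values observed after m distinct evaluations.
```

Averaged over every conceivable problem, no algorithm outperforms any other; practical advantage therefore always rests on priors about the problem distribution, which is the sense in which the theorem is cited here.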

Implications for Future AI Trajectories and Policy Lessons

The historical AI winters of the 1970s and the late 1980s to early 1990s illustrate that hype-driven expectations, when unmet by practical outcomes, lead to sharp funding declines, such as DARPA's reduction of U.S. AI program budgets by over 90% in the years following the 1969 Perceptrons critique and Japan's winding down of its Fifth Generation Computer project by 1992, prompting a reevaluation of dominant paradigms like symbolic AI. These periods, rather than halting progress entirely, enabled quieter advances in probabilistic models and data-driven techniques, which laid the groundwork for the resurgence after 2000, as reduced pressure allowed pluralism in research approaches. Consequently, future trajectories may follow a non-linear path, where current deep learning scaling, exemplified by frontier language models approaching trillion-parameter scale, encounters diminishing marginal returns amid data exhaustion (public text data projected by some estimates to be fully utilized as early as 2026 at current rates) and energy constraints (training a single large model can consume electricity equivalent to the annual use of thousands of households), potentially ushering in a selective winter that favors efficient, specialized systems over broad generality. Such cycles underscore the risk of overreliance on exponential compute growth under Moore's Law and its variants, which historically helped shorten AI winters but cannot indefinitely compensate for unsolved challenges like robust reasoning or generalization beyond training distributions; empirical evidence from benchmark plateaus, such as limited gains on abstract reasoning tasks despite parameter increases, suggests that without paradigm shifts (possibly toward hybrid neuro-symbolic methods or neuromorphic hardware) trajectories could stagnate in high-cost, low-generalization regimes by the late 2020s. This realism tempers optimism around near-term artificial general intelligence, prioritizing verifiable utility in domains like protein structure prediction (AlphaFold's 2020 impact) over unsubstantiated promises, thereby sustaining incremental gains amid volatility.

Policy lessons from these winters emphasize diversified, long-term funding mechanisms to buffer against boom-bust dynamics; for example, the U.S. National Science Foundation's steady support for basic research during the 1990s contrasted with politically influenced cuts such as the UK's post-Lighthill Report defunding in 1973, highlighting the pitfalls of centralized, expectation-tied allocations. Governments should thus advocate for public-private partnerships focused on shared infrastructure, such as subsidized energy-efficient compute clusters, and for empirical ROI metrics, avoiding mandates for unproven technologies that echo the expert-systems overinvestment of the 1980s, which saw commercial demand evaporate by 1990. International competition, as evidenced by sustained East Asian investments after the 1990s winter, can mitigate national risks, but policies must build skepticism toward progress claims into funding decisions to counter institutional biases toward overstated capabilities in academic and media narratives. Ultimately, fostering regulatory frameworks that reward narrow, deployable applications, such as optimization systems yielding 10-20% efficiency gains, over speculative ventures promotes stability, ensuring trajectories align with causal realities rather than promotional cycles.
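As a rough sanity check on the household-equivalence claim above, the back-of-envelope sketch below pairs an assumed training-energy figure for a single frontier-scale run (the 50 GWh value is an illustrative assumption, not a reported measurement) with the approximate average annual electricity use of a U.S. household.

```python
# Back-of-envelope check of "training a single large model consumes electricity
# equivalent to the annual use of thousands of households".
assumed_training_energy_mwh = 50_000   # ASSUMPTION: ~50 GWh for one frontier-scale training run
us_household_annual_mwh = 10.5         # approximate average annual U.S. household electricity use

household_years = assumed_training_energy_mwh / us_household_annual_mwh
print(f"~{household_years:,.0f} household-years of electricity")  # about 4,760
```

Published estimates for GPT-3-class training (on the order of 1,300 MWh) would land closer to roughly a hundred household-years, so the "thousands" framing presumes a more recent, larger training run.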
