
Polanyi's paradox

Polanyi's paradox refers to the epistemological observation, formulated by the Hungarian-British philosopher and physical chemist Michael Polanyi, that humans possess extensive knowledge enabling skilled performance, yet this knowledge resists complete explicit articulation or formalization. Expressed succinctly as "we know more than we can tell," the paradox highlights the inherent limitations of propositional language and rule-based systems in capturing the subsidiary particulars integrated into focal awareness during acts like recognizing a face or balancing on a bicycle. Introduced in Polanyi's 1966 monograph The Tacit Dimension, the concept critiques objectivist epistemologies that prioritize explicit, codifiable knowledge, arguing instead for a framework where knowing relies on personal commitment and indwelling of tools or traditions. This distinction has profound implications across disciplines: in economics, it explains resistance to task automation beyond routine work, owing to unformalizable perceptual-motor skills; in artificial intelligence, it underscores ongoing challenges for systems to replicate human intuition and adaptation without genuine comprehension; and in philosophy of science, it affirms the irreducibly subjective elements in empirical discovery and validation. The paradox remains a cornerstone for understanding why empirical progress in knowledge-intensive fields often depends on apprenticeship and embodied practice rather than disembodied algorithms.

Philosophical and Historical Origins

Michael Polanyi's Intellectual Background

Michael Polanyi was born in 1891 in Budapest, Hungary, into a secular Jewish family of intellectuals and entrepreneurs. His early education at the Minta gymnasium exposed him to a rigorous classical curriculum, fostering interests in science, literature, and philosophy. He studied medicine at the University of Budapest, earning his M.D. in 1913, followed by a Ph.D. in 1917 for research on adsorption theory. During World War I, he served as a medical officer in the Austro-Hungarian army, an experience that deepened his engagement with scientific inquiry amid practical constraints. Post-war, Polanyi shifted to physical chemistry, joining the Kaiser Wilhelm Institute for Fiber Chemistry in Berlin in 1920, then moving to the Institute of Physical Chemistry and Electrochemistry in 1923, where he collaborated with figures like Fritz Haber and Albert Einstein. His research focused on reaction kinetics, X-ray diffraction, and the structure of crystals, yielding contributions such as his potential theory of adsorption. In 1933, amid rising Nazism and given his Jewish heritage, Polanyi emigrated to Britain, accepting the chair of physical chemistry at the University of Manchester, where he continued experimental work on reaction mechanisms until 1948. These years in empirical science highlighted the intuitive, subsidiary processes underlying discovery, planting seeds for his later philosophical critiques of strict objectivism. By the mid-1940s, Polanyi's focus pivoted toward the philosophy of science, influenced by his firsthand observation of scientific practice's reliance on unarticulated commitments rather than purely explicit rules. He critiqued positivism and the central planning of science for neglecting personal judgment in knowledge validation, drawing from Gestalt psychology's emphasis on holistic perception. In 1948, he transitioned to a chair in social studies at Manchester, publishing works like Science, Faith and Society (1946) that argued for fiduciary, tradition-based frameworks in inquiry.
This evolution culminated in his appointment as senior research fellow in 1958, where he formalized ideas on personal knowledge, underscoring how scientists indwell traditions and rely on tacit integration of clues. Polanyi's intellectual arc—from clinician to chemist to philosopher—reflected a commitment to reconciling subjective elements with objective aspirations in knowledge pursuit.

Formulation in "The Tacit Dimension" (1966)

In The Tacit Dimension, published in 1966 by Doubleday & Company, Polanyi opens with the foundational statement of the paradox: "we can know more than we can tell." This assertion, drawn from his earlier ideas in Personal Knowledge (1958), is refined here to emphasize that much of human cognition operates through an inarticulable tacit component, challenging objectivist ideals of fully explicit knowledge. Polanyi positions this as a critique of formalism in science and philosophy, where attempts to specify all elements of knowing fail because comprehension inherently relies on unstated integrations. Polanyi structures tacit knowing around a "from-to" relation, involving subsidiary awareness of particulars that are integrated to achieve focal awareness of a coherent whole. For example, in recognizing a familiar face, one attends from subtle clues—such as the set of the eyes or the contours, held subsidiarily—to the unified appearance of the face, without explicitly enumerating each feature. Similarly, balancing on a bicycle demands proximal sensations from the body and machine as subsidiary cues, enabling focal control of motion; verbal rules alone cannot transmit this skill, as novices fail despite instructions. These illustrations underscore the paradox: skills and perceptions depend on tacit coefficients that evade complete verbalization, yet are indispensable for performance. The book delineates how tacit knowing underpins tradition, craftsmanship, and scientific discovery, descending into cultural practices and ascending to intellectual commitments. Polanyi contends that explicit systems, like formal logic or algorithmic specifications, presuppose tacit understanding; for instance, interpreting a map requires unarticulated skills to integrate its symbols into spatial orientation. Efforts to eliminate this tacit dimension, as in positivist demands for verifiable propositions, distort knowing by ignoring its personal, integrative nature—leading to the paradox that "an explicit statement abolishes the reliance on which it seeks to evoke."
Thus, Polanyi's formulation reveals knowing as irreducibly personal, rooted in the indwelling of particulars beyond explicit control.

Predecessors in Philosophy of Knowledge

The concept of knowledge that resists full articulation has antecedents in ancient Greek philosophy. In Plato's Meno (circa 380 BCE), the titular paradox questions how inquiry is possible: one cannot seek what one already knows, nor what one does not know, implying an innate or pre-explicit grasp enabling recognition of truth. Michael Polanyi later invoked this paradox to argue that tacit integration of particulars subsidiarily supports explicit discovery, resolving the impasse through unverbalized foresight. Aristotle, in Nicomachean Ethics (circa 350 BCE), distinguished episteme (theoretical knowledge) from phronesis (practical wisdom) and techne (craft skill), the latter two involving habitual discernment acquired via experience rather than deduction alone. Phronesis, oriented toward ethical action, relies on perceptual judgment (aisthesis) that defies complete propositional encoding, as it emerges from repeated engagement with particulars. This experiential basis anticipates tacit knowing's reliance on subsidiary awareness, where skills like balancing or moral deliberation integrate clues without focal articulation. In the twentieth century, Gilbert Ryle's The Concept of Mind (1949) formalized the divide between "knowing that" (propositional facts) and "knowing how" (dispositional abilities), critiquing intellectualist legends that reduce skills to inner formulations. Ryle emphasized that intelligent practice, such as riding a bicycle, involves capacities not exhausted by rules or descriptions, paralleling Polanyi's later insistence on the unspecifiability of tacit coefficients in performance. Michael Oakeshott, in essays like "Rationalism in Politics" (first published 1947), contrasted technical knowledge (rule-bound, abstract) with practical knowledge (traditional, concrete, and largely inarticulate), arguing the latter underpins rational conduct without reducible maxims.
This unformulable traditionality echoes Polanyi's subsidiary reliance, where equilibrated habits sustain endeavors beyond explicit control, influencing critiques of rationalist overreach in both politics and epistemology.

Core Concepts and Statement

Definition of Tacit vs. Explicit Knowledge

Michael Polanyi articulated the distinction between tacit and explicit knowledge in his 1966 book The Tacit Dimension, emphasizing that human understanding fundamentally involves elements that transcend precise verbal description. Explicit knowledge refers to information that can be codified, formalized, and communicated through symbols, rules, or propositions, such as mathematical equations or procedural manuals, allowing for straightforward transmission without reliance on personal context. In contrast, tacit knowledge comprises intuitive, skill-based insights acquired through experience, which integrate subsidiary particulars—such as sensory cues or bodily adjustments—into a focal awareness, yet resist full articulation because they depend on an unstated background of personal commitment and perception. Polanyi's framework posits that explicit knowledge is subordinate to the tacit dimension, as even the most formalized expressions presuppose an underlying tacit grasp for comprehension and application; for instance, reading a chess manual requires an implicit feel for the game's dynamics that cannot be derived solely from the text. He famously summarized this asymmetry with the statement "we can know more than we can tell," underscoring that attempts to exhaustively specify tacit processes, like balancing on a bicycle, inevitably fail because such knowing involves dynamic, context-embedded integrations beyond propositional capture. This distinction arises from Polanyi's phenomenological analysis of perception and skill, where knowledge operates through a "from-to" structure: subsidiary clues (the "from") are relied upon to attend to a distal object (the "to"), rendering the former ineffable when focalized.
Scholarly interpretations affirm that Polanyi's tacit knowledge is not merely a gap in articulation but a foundational mode of human cognition, irreducible to explicit rules without loss of meaning, as evidenced in fields like craftsmanship and scientific practice where performance degrades under hyper-explicitation. While later thinkers like Nonaka and Takeuchi adapted the binary for knowledge management, Polanyi rejected a strict dichotomy, viewing all knowledge as tacitly rooted, with explicit forms deriving meaning only through fiduciary reliance on unarticulated commitments. Empirical support from cognitive psychology aligns with this, showing that expertise often manifests in pattern recognition and heuristics defying complete algorithmic decomposition.

Articulation of the Paradox

Michael Polanyi articulated the paradox in the opening of his 1966 book The Tacit Dimension, stating: "I shall reconsider human knowledge by starting from the fact that we can know more than we can tell." This declaration highlights the existence of tacit knowledge—personal, context-bound understanding that resists complete formalization into explicit rules or verbal descriptions—underlying all human cognition. Polanyi emphasized its paradoxical character, noting that while explicit knowledge appears self-sufficient, it invariably relies on unarticulated subsidiary awareness, such as the intuitive integration of sensory cues in tasks like face recognition or maintaining balance while riding a bicycle. The paradox arises because efforts to fully explicate tacit elements often distort or destroy the functional whole; for example, a cyclist focusing on explicit adjustments to balance—such as pedal pressure or handlebar tilt—loses the seamless subsidiary-focal integration required for the skill. Polanyi argued this structure pervades scientific inquiry and craftsmanship, where practitioners "indwell" tools and particulars to achieve focal awareness, defying reduction to algorithmic prescriptions. Thus, the paradox challenges positivist views of knowledge as wholly articulable, asserting instead that human knowing inherently transcends what can be told.

Illustrative Examples

One prominent illustration of Polanyi's paradox is the skill of riding a bicycle. Proficient cyclists maintain balance through an intricate, subconscious integration of visual, vestibular, and proprioceptive cues, yet attempts to verbalize the exact process—such as the precise timing of weight shifts or handlebar adjustments—inevitably fall short, as the skill relies on unarticulated coordination rather than explicit rules. This example underscores how tacit knowing operates below conscious formulation, enabling performance that defies complete codification. Another classic case involves recognizing a familiar face in a crowd. Observers achieve near-instantaneous identification by subsidiarily apprehending features like contours, expressions, and contextual hints, but they cannot exhaustively describe the composite particulars that converge into the focal recognition, revealing the limits of explicit description. Polanyi emphasized this in his analysis of perceptual integration, where the whole is known through indwelling its particulars, which evade verbal isolation. Everyday manual skills further exemplify the paradox, such as tying a shoelace or catching a ball. These actions draw on embodied, context-sensitive judgments honed through practice, which the performer knows intuitively but struggles to convey in step-by-step instructions sufficient for replication without demonstration, as the tacit dimension encompasses fluid adaptations beyond rule-based description. In professional domains, this manifests in craftsmanship, like a surgeon's intuitive incision or a mechanic's diagnostic feel, where expertise transcends algorithmic breakdown, highlighting the paradox's persistence across routine and complex tasks.

Implications for Human Cognition and Science

Limits on Verbalizing Skills and Intuitions

Polanyi's paradox underscores the inherent difficulty in fully articulating human skills and intuitions, as much of this knowledge operates through subsidiary awareness that integrates particulars into a functional whole without explicit rules. In skills such as bicycle riding, performers rely on unverbalized adjustments to maintain balance, where conscious attempts to specify every bodily movement—such as counter-swaying in response to perturbations—disrupt the fluid execution, revealing that the skill exceeds propositional description. Similarly, hammer use exemplifies focal awareness on the task (driving a nail) supported by subsidiary clues from grip and swing feel, which resist exhaustive verbalization because articulating them shifts attention from the integrated act to isolated components, impairing performance. This limitation arises from the structure of tacit knowing, where knowledge is personal and context-embedded, defying transfer via mere instructions or algorithms without embodied practice. Intuitions further illustrate these verbalization constraints, particularly in domains requiring judgment drawn from accumulated experience. For example, expert physicians or scientists often diagnose or hypothesize based on perceptions that draw on unarticulated precedents, as seen in Polanyi's appeal to subception experiments, in which subjects detect thresholds below conscious reportability, knowing outcomes they cannot fully explain. Such intuitions function as "premises of science," fundamental guesses about reality that guide inquiry but evade complete formalization, since verbal efforts to codify them lose the nuanced, holistic integration. Cognitive studies corroborate this, showing that expertise develops through apprenticeship-like practice rather than rule enumeration, with tacit elements enabling adaptation to novel variations that explicit knowledge alone cannot encompass.
These limits imply that verbalization, while useful for subsidiary aspects, cannot capture the full fidelity of skilled performance or intuitive judgment, as the act of telling subordinates the knowing process to focal awareness, fragmenting the holistic grasp Polanyi described. Apprenticeship thus remains essential, transmitting skills through imitation and feedback rather than disembodied instruction, a process evident across crafts, arts, and sciences, where innovators like Polanyi himself relied on tacit faculties to advance understanding beyond prior articulations.

Role in Scientific and Creative Processes

Polanyi's paradox highlights the indispensable role of tacit knowledge in scientific discovery, where researchers integrate particulars—such as experimental data, observed patterns, and intuitions—into a coherent focal understanding that defies complete verbalization. Polanyi contended that the initial phases of scientific insight, including hypothesis formation and the appraisal of evidence, depend on personal commitments and judgments rather than mechanical rule-following, as explicit instructions cannot capture the subsidiary awareness guiding these processes. For instance, a scientist recognizing a novel pattern in data relies on tacit pattern recognition honed through practice, which Polanyi described as an integrative act irreducible to articulated algorithms. This tacit dimension extends to the validation of discoveries, where scientists exercise a form of connoisseurship to discern promising theories amid uncertainty, akin to an art critic's intuitive grasp of aesthetic value. Polanyi argued that such judgments stem from subsidiary skills acquired through prolonged immersion in a research tradition, enabling the scientist to "dwell in" the particulars while attending to the emerging whole, a process that formal methodologies alone cannot replicate. Empirical illustrations include the intuitive leaps in fields like physics, where figures such as Einstein integrated disparate observations through tacit foresight before formalizing his theories, underscoring that scientific progress hinges on unarticulated knowing. In creative processes, the paradox manifests as the reliance on tacit inference for invention and artistic production, where creators draw on unspecifiable bodily and mental skills to achieve novel integrations. Polanyi viewed creativity as an extension of tacit knowing, wherein subsidiary clues—such as sensory experiences or fragmented ideas—are fused into original wholes through intuitive processes that evade exhaustive description.
For example, a composer's capacity to improvise harmonies or an inventor's flash of insight emerges from internalized heuristics and personal vision, not reducible to explicit rules, as these acts involve committing to particulars in pursuit of an indeterminate focal outcome. This framework explains why creative training emphasizes demonstration and practice over verbal transmission, preserving the paradox's tension between ineffable skill and explicit output.

Empirical Evidence from Cognitive Studies

Cognitive studies on implicit learning have demonstrated the acquisition of knowledge that guides performance without conscious access or verbal articulation, aligning with Polanyi's claim that much knowing resists explicit formulation. In a seminal experiment, Berry and Broadbent (1984) tasked participants with controlling a simulated sugar production system, where performance required adjusting variables to maintain output levels. Subjects improved significantly through trial-and-error practice, yet when probed, they generated few verbal rules and showed a negative correlation between performance and the ability to state explicit strategies, indicating reliance on non-articulable patterns. Similar findings emerged in person-interaction tasks within the same study, where implicit learning outperformed explicit rule-based approaches, underscoring a dissociation between procedural skill and declarative explanation. Reber's artificial grammar learning paradigm further illustrates this phenomenon. Participants in Reber (1967) memorized letter strings generated by an unfamiliar finite-state grammar, then classified novel strings as grammatical or not. Classification accuracy exceeded chance levels, even for items never seen, but subjects could not accurately describe the underlying rules, performing at near-chance when attempting explicit rule induction. Follow-up studies confirmed that this tacit structural knowledge persists across formats (e.g., auditory or visual) and resists verbal transfer, as learners fail to teach the grammar effectively to others without re-exposure. Reber interpreted these results as evidence of a cognitive unconscious, where abstract representations form implicitly and evade focal awareness, directly supporting the paradox's claim of knowing more than can be told. In expertise domains, Wagner and Sternberg (1985) conducted three experiments assessing tacit knowledge—defined as action-guiding insights from experience not formally taught—in real-world domains like business management.
High-performing participants outperformed others on situational judgment tests measuring tacit managerial acumen, yet this ability correlated weakly with standard IQ measures and resisted straightforward verbalization, as experts relied on intuitive judgment over rule recitation. These findings extend to broader practical intelligence, where tacit elements predict success in adaptive behaviors, such as navigating social dynamics, beyond what explicit instructions convey. Neuroimaging and lesion studies reinforce this, showing procedural memory systems (e.g., the basal ganglia) support skill execution independently of declarative systems (e.g., the hippocampus), as seen in amnesic patients acquiring motor skills without episodic recall. While some researchers argue portions of tacit knowledge can be partially explicated through structured elicitation or verbal protocols, core empirical patterns persist: performance gains from implicit processes often surpass those from explicit ones, and articulation attempts frequently distort or leave incomplete the underlying mechanisms. This body of evidence from controlled psychological experiments validates Polanyi's paradox in human cognition, highlighting limits on formalizing intuitive competencies.
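Reber's paradigm can be sketched computationally. The toy learner below is a hypothetical illustration, not any published model: it acquires nothing but bigram familiarity from strings generated by a Reber-style finite-state grammar, then endorses or rejects novel strings without ever representing the grammar's rules explicitly, loosely mirroring the "chunk familiarity" mechanism proposed for implicit grammar learning.

```python
import random

# A Reber-style finite-state grammar: state -> [(emitted letter, next state)].
# State 5 is terminal. The transition table is illustrative of the paradigm.
TRANSITIONS = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}

def generate(rng, max_len=12):
    """Random walk through the grammar, retrying if a string runs too long."""
    while True:
        state, out = 0, []
        while state != 5 and len(out) <= max_len:
            letter, state = rng.choice(TRANSITIONS[state])
            out.append(letter)
        if state == 5:
            return "".join(out)

def bigrams(s):
    """Letter pairs in s, with ^ and $ marking the string's edges."""
    s = "^" + s + "$"
    return {s[i:i + 2] for i in range(len(s) - 1)}

rng = random.Random(0)
train_strings = [generate(rng) for _ in range(500)]
# "Implicit" knowledge: just the set of letter pairs seen during exposure.
known = set().union(*map(bigrams, train_strings))

def looks_grammatical(s):
    # Endorse a string iff every bigram in it was encountered in training.
    return bigrams(s) <= known
```

The learner endorses its training strings and rejects strings that open or chain letters illegally (no grammatical string begins with X, and T is never followed by P), yet nothing inside it can be read off as the grammar's rules.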

Technological and AI Applications

Historical Challenges in Rule-Based Automation

Rule-based automation, predominant in artificial intelligence and industrial applications from the 1950s through the 1980s, relied on explicitly programming logical rules to replicate human expertise, but encountered fundamental obstacles rooted in the inability to fully articulate tacit knowledge. Early systems, such as the Logic Theorist developed in 1956 by Allen Newell and Herbert Simon, demonstrated success in narrow domains like theorem proving by encoding formal logic, yet struggled to generalize beyond predefined scenarios due to the combinatorial explosion of rules required for real-world complexity. This limitation became evident in expert systems of the 1970s and 1980s, where developers aimed to capture domain-specific knowledge—such as medical diagnosis in MYCIN (completed in 1976)—but found that experts could not verbalize all decision heuristics, leading to incomplete rule sets. A primary barrier was the knowledge acquisition bottleneck, identified as the core constraint in expert-system development by the late 1980s, wherein extracting, structuring, and validating rules from human specialists proved labor-intensive and prone to omissions because much expertise resides in unarticulated intuitions and pattern recognitions. For instance, in manufacturing automation, early robotic systems deployed in the 1960s, like the Unimate arm introduced by Unimation in 1961, excelled at repetitive welding or assembly under fixed conditions but failed to adapt to material variations or environmental perturbations, necessitating human intervention for tacit adjustments like force sensing or visual inspection. This issue contributed to the first AI winter around 1974, as funding dried up amid unmet promises of scalable intelligence, with systems unable to handle the "common sense" inferences humans perform subconsciously. Brittleness further undermined rule-based approaches, as systems performed reliably only within their explicitly programmed scope but degraded sharply on edge cases or noisy inputs, lacking the flexible adaptation enabled by tacit integration of sensory and experiential cues.
In industrial contexts, such as automated quality inspection on assembly lines during the 1980s, rule-driven systems required exhaustive rule enumeration for defect detection, yet overlooked subtle anomalies that skilled workers intuitively identified, resulting in high false-negative rates and maintenance costs. By the second AI winter in 1987, these shortcomings—exacerbated by the maintenance overhead of sprawling rule bases, which could exceed thousands of if-then statements—led to project abandonments, as seen in the winding down of Japan's Fifth Generation Computer Systems initiative, which invested heavily from 1982 to 1992 in logic programming but yielded no broad breakthroughs. These historical impediments persisted into the early 2000s, stalling rule-based automation in sectors demanding perceptual-motor skills, such as driving or craftsmanship, where explicit rules could not encode the subsidiary awareness of bodily cues or contextual nuances that Polanyi's formulation deems inarticulable. Empirical assessments, including knowledge-engineering studies, reported that even after decades, rule formalization captured at most 20-30% of expert performance in adaptive tasks, underscoring the paradox's role in confining automation to routine, explicit processes while preserving human advantages in judgment-intensive domains.
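The brittleness described above can be made concrete with a miniature, purely illustrative expert system; the feature names and thresholds below are invented for this sketch, not drawn from any deployed inspector. Inputs covered by the if-then rules are handled confidently, while any defect outside the rules' vocabulary simply falls through.

```python
def classify_widget(features):
    """Toy rule-based inspector. `features` maps measured attributes to values."""
    # Rule 1: long scratches are always defects.
    if features.get("scratch_len_mm", 0) > 5:
        return "reject"
    hole = features.get("hole_diam_mm")
    # Rule 2: holes must be within +/- 0.5 mm of the 10 mm spec.
    if hole is not None and abs(hole - 10.0) > 0.5:
        return "reject"
    # Rule 3: accept only when the known measurements check out.
    if hole is not None:
        return "accept"
    # Brittleness: a case the rules never anticipated is not decidable.
    return "unknown"

print(classify_widget({"scratch_len_mm": 7.0, "hole_diam_mm": 10.0}))  # reject
print(classify_widget({"scratch_len_mm": 1.0, "hole_diam_mm": 10.2}))  # accept
print(classify_widget({"discoloration": "slight"}))                    # unknown
```

A human inspector would flag the discolored part by tacit pattern recognition; the rule base can only return "unknown" until someone articulates yet another rule, which is exactly the maintenance treadmill the text describes.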

Machine Learning as a Response

Machine learning addresses Polanyi's paradox by shifting from explicit rule-based programming to data-driven inference, allowing systems to replicate human-like performance on tacit tasks without requiring full articulation of underlying knowledge. Traditional AI, reliant on symbolic representations and hand-coded heuristics, struggled with tasks involving ineffable skills, as engineers could not adequately formalize the intuitive judgments humans employ. In response, machine learning algorithms, particularly supervised and reinforcement variants, learn approximate mappings from input examples to outputs, effectively distilling tacit patterns observed in human demonstrations or data traces. This approach emulates apprenticeship, where novices acquire skills through observation and practice rather than verbal instruction, enabling machines to perform functions like object recognition or sequential decision-making that resist explicit encoding. A key mechanism involves training models on large datasets to minimize prediction errors, thereby converging on implicit representations of tacit knowledge. For example, neural networks process subsidiary particulars—such as pixel distributions in images—through layered transformations, yielding focal identification of wholes akin to human cognition, without needing programmers to specify intermediate rules. David Autor highlights this as a deliberate effort in machine learning to infer rules from human examples, potentially resolving the paradox for perceptual and manipulative tasks where verbalization fails. Empirical validation appears in domains like speech recognition, where hidden Markov models in the 1990s, followed by deep neural networks, reduced word error rates from over 20% in rule-based systems to below 10% by 2010 through statistical learning from audio corpora, outperforming methods dependent on handcrafted phonetic rules. Reinforcement learning further extends this response by enabling autonomous skill acquisition via trial-and-error interaction with environments, bypassing the need for comprehensive human guidance.
DeepMind's AlphaGo, which defeated world champion Lee Sedol 4-1 in a 2016 Go match, exemplifies this: policy and value networks, trained initially on roughly 30 million positions from human games and refined through extensive self-play, mastered strategic intuitions—such as evaluating board positions amid roughly 10^170 possible configurations—that grandmasters articulate only partially. The system's move 37 in game 2, praised by experts for its creativity, emerged from learned heuristics rather than predefined tactics, demonstrating machine learning's capacity to internalize tacit expertise at superhuman levels in combinatorial domains. While not eliminating the paradox entirely—models remain "black boxes" without inherent comprehension—this paradigm has scaled to automate tasks once deemed inherently human, from driving simulations to complex function approximation.
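The contrast with rule-writing can be shown with the simplest possible learner, a nearest-centroid classifier. This is a minimal sketch with made-up data, far removed from the neural networks discussed above, but it captures the essential move: no classification rule is ever written down; a decision boundary is induced from labeled examples.

```python
from math import dist

def fit(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for x, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def predict(centroids, x):
    """Assign x to the label whose centroid is nearest."""
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Labeled demonstrations stand in for "human examples"; no rule is coded.
examples = [
    ((0.0, 0.0), "small"), ((0.0, 1.0), "small"), ((1.0, 0.0), "small"),
    ((5.0, 5.0), "large"), ((5.0, 6.0), "large"), ((6.0, 5.0), "large"),
]
centroids = fit(examples)
print(predict(centroids, (1.0, 1.0)))  # small
print(predict(centroids, (5.0, 4.0)))  # large
```

The learned centroids play the role of the "implicit representation": the system classifies correctly, but the knowledge lives in numbers distilled from examples rather than in articulated rules.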

Contemporary Advances and Potential Resolutions (2010s–2025)

The advent of deep neural networks in the early 2010s marked a significant shift in addressing Polanyi's paradox by enabling machines to learn complex, intuitive skills from data rather than explicit instructions. In 2012, the AlexNet architecture achieved breakthrough performance on the ImageNet dataset, surpassing traditional hand-engineered methods in visual recognition tasks that rely on tacit perceptual knowledge, such as identifying objects amid clutter—skills humans execute effortlessly but cannot fully articulate through rules. This data-driven approach, scaling with computational power and datasets, allowed AI to approximate tacit integration of sensory cues without programmed heuristics. Reinforcement learning further advanced this in strategic domains, exemplified by DeepMind's AlphaGo in 2016, which defeated world champion Lee Sedol by self-discovering intuitive board evaluation and move prediction through millions of simulated games, circumventing the paradox's barrier to codifying expert play. Subsequent systems like AlphaZero (2017) generalized this to chess and shogi, learning from scratch via self-play and policy-value functions, achieving superhuman performance on tasks deemed inarticulable. These developments suggested that neural networks could internalize tacit patterns empirically, resolving challenges in perceptual-motor and game-theoretic contexts where explicit programming had faltered. The 2020s brought generative models, particularly transformer-based large language models (LLMs), which exhibit emergent abilities in handling tacit linguistic and reasoning tasks. OpenAI's GPT-3, released in 2020, demonstrated few-shot learning for nuanced language understanding, generating coherent responses to prompts requiring implicit context integration, as evaluated in benchmarks where it outperformed prior explicit models.
A 2023 NBER working paper argues that such generative AI extends automation beyond Polanyi's limits by producing creative and intuitive outputs, such as prose composition and hypothesis formulation, trained on vast corpora capturing human tacit expressions. However, empirical scrutiny reveals persistent gaps: LLMs often confabulate explanations post-hoc without genuine causal comprehension, as shown in adversarial tests where performance drops on novel tacit integrations like physical reasoning tasks. Efforts toward explainable AI (XAI) since 2015 aim to articulate neural networks' tacit knowledge, with techniques like LIME (2016) and SHAP (2017) approximating feature attributions in black-box models, potentially bridging the "tell" aspect of the paradox. Yet a 2025 INFORMS study notes that while explainable AI narrows the human-AI knowledge gap in collaborative settings, full resolution remains elusive due to AI's reliance on statistical correlations over the causal mechanisms inherent in human tacit knowing. Human-aware AI frameworks, proposed in 2025 research, integrate hybrid systems where AI augments rather than supplants human intuition in scientific discovery, leveraging tacit human oversight to validate machine outputs. These advances indicate partial circumvention of the paradox through scalable learning but underscore ongoing limits in verifiable, generalizable tacit mastery.
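The XAI techniques mentioned above share a perturbation idea that can be shown in miniature. The sketch below is a toy occlusion-style attribution over an invented linear "black box", not an implementation of LIME or SHAP: each input feature is scored by how much the model's output changes when that feature is replaced with a baseline value, one simple way of making a model "tell" part of what it has learned.

```python
def black_box(x):
    """Stand-in model whose internals we pretend not to know."""
    return 3.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def occlusion_attribution(model, x, baseline=0.0):
    """Per-feature importance: output drop when the feature is occluded."""
    reference = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline   # replace one feature with the baseline
        scores.append(reference - model(occluded))
    return scores

print(occlusion_attribution(black_box, [1.0, 1.0, 1.0]))  # [3.0, 1.0, 0.0]
```

For this linear toy the attributions recover the weights exactly; for real networks, perturbation methods yield only local approximations, which is why the text describes XAI as bridging the "tell" aspect partially rather than resolving it.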

Economic and Societal Impacts

Effects on Employment and Skill Polarization

Polanyi's paradox underscores the challenges in automating tasks reliant on tacit knowledge, thereby shaping employment patterns by preserving demand for human skills in non-routine occupations while enabling displacement in routine ones. In labor economics, this manifests as skill polarization, where middle-skill routine jobs—such as clerical work and repetitive production—decline due to their susceptibility to codification and computerization, whereas high-skill non-routine cognitive tasks (e.g., managerial and professional work) and low-skill non-routine manual tasks (e.g., personal care and food service) expand. Economist David Autor argues that the paradox limits automation of adaptable, judgment-based activities, explaining why computerization complements human labor in abstract and service roles rather than fully supplanting it. U.S. employment data from 1979 to 2012 exemplify this dynamic: middle-skill occupations, comprising routine cognitive and manual tasks, fell from 60% to 46% of total employment, coinciding with a long-run decline in computing costs—estimated by some at 1.7 trillion-fold—that facilitated routine task automation. Conversely, low-skill service occupations doubled in share since 1980, driven by tacit demands for perceptual-motor dexterity and interpersonal interaction in roles like caregiving and food preparation, which resist formal rule-based programming. High-skill jobs grew steadily, with the share of hours worked by college-educated workers rising from 15% in 1963 to 65% by 2012 among young adults, as computers augment non-routine cognitive abilities requiring unarticulated expertise. This polarization contributes to stagnant wages in low-skill sectors due to elastic labor supplies, while boosting earnings premiums in high-skill domains, as information-processing equipment rose from 8% to over 30% of nonresidential business investment by 2012. Autor's analysis, building on earlier work showing routine task decline from the 1980s onward, attributes these shifts partly to the paradox's constraint on automation progress in tacit-heavy tasks until machine learning advancements in the 2010s.
The result is a labor market where growth concentrates at the skill extremes, challenging narratives of uniform technological displacement.
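The share figures quoted above imply an average annual rate of decline for routine occupations, which can be checked with a few lines of arithmetic; the constant-rate assumption here is ours, made purely for illustration.

```python
import math

# Middle-skill routine occupations fell from 60% to 46% of U.S.
# employment between 1979 and 2012 (figures from the text above).
start_share, end_share = 0.60, 0.46
years = 2012 - 1979  # 33 years

# Constant annual rate r satisfying start * (1 + r)**years == end.
annual_rate = (end_share / start_share) ** (1 / years) - 1
print(f"annualized change in routine share: {annual_rate:.2%}")
```

The implied decline of roughly 0.8% per year in employment share is a different quantity from the net employment growth rates Autor reports for occupation groups, so the two sets of numbers are not directly comparable.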

Insights from Autor's 2014 Analysis

In his 2014 analysis, David Autor applies Polanyi's paradox to interpret patterns of U.S. employment growth from 1980 to 2012, arguing that the inherent difficulty of codifying tacit knowledge explains why automation has disproportionately displaced middle-skill, routine occupations while fostering expansion in high-skill analytic roles and low-skill service jobs. Autor posits that computers excel at substituting for tasks amenable to explicit rules—such as routine cognitive (e.g., bookkeeping) and routine manual (e.g., repetitive assembly) activities—but struggle with non-routine tasks requiring unarticulated intuition, pattern recognition, or adaptability, such as managerial decision-making or interpersonal caregiving. This limitation, rooted in Polanyi's observation that "we can know more than we can tell," constrains automation's reach into domains like novel problem-solving or physical dexterity under variable conditions, thereby reshaping labor demand toward polarization. Autor's empirical examination of U.S. occupational data reveals stark employment shifts: middle-skill routine occupations, comprising about 60% of U.S. jobs in 1980, experienced net employment declines averaging -0.6% annually through 2012, while high-skill non-routine cognitive jobs grew at 2.1% annually and low-skill non-routine manual and service roles at 0.5% annually. He attributes this "employment polarization" primarily to technological advances in information processing, which automate codifiable routines but complement human skills in abstract reasoning and social interaction; for instance, software tools augment lawyers' efficiency without replacing judgment-laden advocacy. Offshoring reinforces these trends by relocating routine tasks abroad, yet Autor emphasizes automation's dominant role, as evidenced by correlations between routine task intensity and occupational decline exceeding those for tradability. 
Looking forward from 2014, Autor cautions that while machine learning shows promise in eroding Polanyi's paradox for perceptual-motor tasks—such as image recognition or robotic manipulation—the core challenge of generalizing tacit knowledge to unstructured, context-dependent problems persists, likely sustaining demand for human flexibility in non-routine domains. This framework challenges overly pessimistic forecasts by highlighting how task complementarities generate new human roles, though it underscores risks of wage inequality if skill upgrading lags behind technological change. Autor's task-based model, grounded in disaggregated occupational data, thus provides a causal lens for understanding labor market resilience amid technological disruption, prioritizing empirical task exposure over aggregate skill-bias narratives.
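The "routine task intensity" mentioned above is commonly operationalized in the task literature (following Autor and Dorn) as the log of routine task importance minus the logs of abstract and manual task importance. The sketch below uses hypothetical task scores, not Autor's actual data, to show how the index ranks occupations.

```python
import math

def routine_task_intensity(routine, abstract, manual):
    """Routine-task-intensity index as commonly defined in the task
    literature: RTI = ln(routine) - ln(abstract) - ln(manual).
    Inputs are positive task-importance scores; occupations with
    higher RTI are more exposed to rule-based automation."""
    return math.log(routine) - math.log(abstract) - math.log(manual)

# Hypothetical task scores (illustrative only, not from any dataset).
occupations = {
    "bookkeeping clerk": (9.0, 2.0, 2.0),  # routine-cognitive heavy
    "manager":           (3.0, 8.0, 2.0),  # abstract heavy
    "home health aide":  (2.0, 2.0, 8.0),  # non-routine manual heavy
}
for name, (r, a, m) in occupations.items():
    print(f"{name:>18}: RTI = {routine_task_intensity(r, a, m):+.2f}")
```

Under this index, only the bookkeeping clerk scores positive, matching the text's claim that routine-intensive middle-skill occupations are the ones most exposed to computerization.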

Broader Policy and Organizational Ramifications

Polanyi's paradox underscores the inherent difficulties organizations face in codifying and transferring tacit knowledge, necessitating organizational designs that prioritize human interaction over purely documentation-based systems. Knowledge management strategies often falter when attempting to make intuitive skills explicit, leading to inefficiencies in training and innovation diffusion, as tacit elements resist formal articulation and require contextual embedding through mentorship or apprenticeships. Firms that succeed in leveraging tacit knowledge typically foster environments with low barriers to sharing, such as collaborative teams or rotational assignments, but persistent challenges like cultural silos and employee turnover result in knowledge loss estimated at up to 20-30% in high-expertise sectors like engineering. In organizational adaptation to automation, the paradox implies a shift toward hybrid models where software handles explicit routines but humans retain oversight for tacit judgments, as seen in machine learning systems that infer rules from examples yet struggle with novel contexts without human intuition. This has ramifications for hierarchical structures, favoring decentralized decision-making to harness distributed tacit expertise over centralized rule-based controls, though empirical studies show that rigid bureaucracies amplify codification failures, reducing adaptability by 15-25% in dynamic industries. Public policy formulation encounters similar hurdles, as evidence-based approaches overemphasize explicit data while undervaluing policymakers' tacit insights from experience, leading to suboptimal regulations in areas like healthcare, where intuitive clinical judgments defy full protocolization. For instance, tacit knowledge informs adaptive responses in crisis management, yet policies mandating exhaustive documentation can stifle professional discretion, contributing to delays observed in public health implementations. 
Labor market policies must account for the paradox's role in employment polarization, where automation displaces routine tasks but sustains demand for non-routine cognitive and manual roles reliant on tacit skills, as evidenced by U.S. data from 1979-2012 showing middle-skill job declines offset by growth in low-skill services such as personal care (up roughly 40%). Recommendations include targeted investments in vocational training for interpersonal and perceptual-motor abilities, rather than universal upskilling assumptions, to mitigate wage stagnation in low-tacit sectors. AI governance policies risk inefficiency by prioritizing data-driven infrastructures that only indirectly recapture tacit knowledge, mirroring organizational pitfalls, with proposals such as a 2021 analysis urging balanced integration of explicit rules to avoid redundant learning costs. Broader ramifications extend to social safety nets, where the resilience of tacit tasks suggests supplementing automation gains through mechanisms like enhanced retraining (e.g., the community college expansions proposed in 1964 responses to technological displacement) or fiscal redistribution, as unchecked cognitive automation could exacerbate inequality without addressing non-substitutable human elements.

Criticisms, Debates, and Empirical Scrutiny

Claims of Overcoming the Paradox via AI

In the mid-2010s, advocates of deep learning pointed to systems like AlphaGo, developed by DeepMind and victorious over world Go champion Lee Sedol in March 2016, as evidence of overcoming Polanyi's paradox through intuitive play derived from self-play simulations rather than exhaustive rule articulation. AlphaGo's neural networks inferred strategic heuristics—such as balancing territorial control and capturing opportunities—implicit in millions of game positions, enabling superhuman performance without programmers explicitly encoding every heuristic humans employ subconsciously. Subsequent claims extended to machine learning's broader capacity to infer unarticulated rules from human examples, as articulated in economic analyses positing that data-driven algorithms bypass the need for explicit formalization of tacit skills in domains like image and speech recognition. For instance, convolutional neural networks trained on labeled datasets achieve human-level or superior accuracy in visual tasks by implicitly learning perceptual invariances—such as textures and object hierarchies—that evade verbal description, suggesting a partial resolution for sensory-motor tacit knowledge. By the early 2020s, proponents argued that large language models (LLMs) further erode the paradox by distilling tacit cultural and contextual knowledge from vast text corpora, generating coherent responses to queries requiring unspoken assumptions, such as pragmatic inference in dialogue or domain-specific intuitions in specialized fields. These models, exemplified by GPT-3's 2020 release with 175 billion parameters, purportedly capture "undocumented" expertise—e.g., stylistic nuances in writing or causal heuristics in reasoning—through statistical approximations of human-generated data, enabling applications in knowledge work previously deemed automation-resistant. Optimists contend this end-to-end learning paradigm scales to approximate any skill, articulable or not, given sufficient data and compute, as evidenced by LLMs' proficiency in zero-shot tasks mimicking expert tacit judgment. 
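The claim that learners "infer unarticulated rules from examples" can be shown in miniature with a perceptron: the separating rule is never written down anywhere in the program, only induced from labeled points. The two-cluster data, learning rate, and all constants below are illustrative choices of ours, not from any cited system.

```python
import random

random.seed(0)

# Labeled examples of an unarticulated concept: points drawn from two
# clusters.  No rule for "class 1 vs class 0" is ever stated; the
# classifier must infer a boundary from the examples alone.
def sample(label, n=100):
    cx, cy = (1.0, 1.0) if label == 1 else (-1.0, -1.0)
    return [((cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5)), label)
            for _ in range(n)]

data = sample(1) + sample(0)
random.shuffle(data)

# Perceptron: weights start at zero and are nudged only by mistakes.
w = [0.0, 0.0]
b = 0.0
for _ in range(10):                      # training epochs
    for (x, y), label in data:
        pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
        err = label - pred
        w[0] += 0.1 * err * x
        w[1] += 0.1 * err * y
        b += 0.1 * err

accuracy = sum((1 if w[0]*x + w[1]*y + b > 0 else 0) == label
               for (x, y), label in data) / len(data)
print(f"training accuracy: {accuracy:.0%}")
```

The learned weights encode the boundary statistically, which is precisely the sense in which such systems "know" a rule no one articulated — and also why their knowledge is hard to inspect or explain.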
Empirical demonstrations include AI-assisted coaching systems that, per 2024 field experiments, improved call center resolution rates by 14% by inferring and applying unverbalized best practices from agent interactions, implying machines can now operationalize tacit operational knowledge at scale. In scientific contexts, "human-aware" AI frameworks claim to integrate explicit models with learned tacit priors, facilitating discoveries like the novel protein structures predicted by AlphaFold in 2020, where structural intuitions beyond codified chemistry rules were approximated through evolutionary data patterns. These advancements, while not universally accepted as full resolutions, underpin assertions that iterative training on proxy tasks effectively codifies what humans know but cannot fully tell.

Persistent Limits of AI on Tacit Tasks

Despite advances in deep learning, AI systems continue to demonstrate fundamental limitations on tasks reliant on tacit knowledge, such as intuitive abstraction, generalization from sparse examples, and cognitive priors that humans acquire implicitly through embodied experience. These shortcomings persist because current architectures, predominantly statistical pattern matchers trained on explicit datasets, cannot replicate the tacit integration central to Polanyi's account—in which focal awareness integrates unarticulated subsidiary particulars. Empirical benchmarks reveal this gap: for instance, the Abstraction and Reasoning Corpus (ARC), designed to test efficient skill acquisition via few-shot solving of novel visual puzzles, yields human performance around 85%, while frontier large language models like OpenAI's o1-pro score only 1-1.3% on the harder ARC-AGI-2 evaluation set released in 2025. Even specialized efforts, such as OpenAI's o3 model trained on public ARC data, achieve at most 75.7% on semi-private subsets but falter on unseen private tasks, indicating reliance on memorization rather than genuine tacit abstraction. In domains requiring physical intuition or dexterous manipulation—archetypal tacit tasks—AI exhibits brittleness in novel environments, failing to adapt without exhaustive retraining or the kind of explicit programming that Polanyi's paradox deems infeasible. Robotic systems, for example, struggle with unstructured object handling, such as grasping irregular tools or improvising repairs, where humans draw on unverbalized sensorimotor schemas; state-of-the-art manipulation models achieve success rates below 50% in zero-shot transfer to varied real-world clutter, per evaluations in the embodied-AI literature up to 2025. This reflects a causal disconnect: AI lacks innate priors for objectness, numerosity, or goal-directed agency, leading to errors in reasoning chains that humans navigate tacitly. 
Studies on large language models corroborate this, showing they simulate tacit outputs via correlation but collapse under adversarial perturbations or out-of-distribution scenarios, as in theory-based critiques highlighting absent internal models for hypothesis generation beyond training distributions. These limits underscore that scaling data and compute has not resolved Polanyi's challenge, as evidenced by stagnant progress on benchmarks prioritizing sample efficiency over brute-force prediction. Claims of overcoming the paradox through generative AI often overlook such empirical counters, where models generate plausible but causally invalid responses—e.g., hallucinating physical impossibilities in simulation tasks—due to the absence of embodied, subsidiary cues. Ongoing scrutiny, including ARC Prize competitions through 2025, confirms that no system has approached human-level tacit proficiency, suggesting that architectural innovations beyond current paradigms are required for tasks demanding intuition or creative synthesis in ambiguous contexts.
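ARC's format — induce a transformation from one or two example grids, apply it to a test grid — can be mimicked with a toy solver whose hypothesis class is a single trick (cell-wise color relabeling). The grids and rule below are invented for illustration; the point is that any such enumerated hypothesis class is exactly what real ARC tasks defeat, since they draw on unenumerated priors like objectness, symmetry, and counting.

```python
# Toy ARC-style task: induce a transformation from a single input/output
# example, then apply it to a new grid.

def induce_color_map(inp, out):
    """Return a color->color map consistent with the example, or None
    if the example is not a cell-wise relabeling."""
    mapping = {}
    for row_in, row_out in zip(inp, out):
        for a, b in zip(row_in, row_out):
            if mapping.setdefault(a, b) != b:
                return None
    return mapping

def apply_map(mapping, grid):
    return [[mapping.get(c, c) for c in row] for row in grid]

train_in  = [[1, 1, 0],
             [0, 2, 2]]
train_out = [[3, 3, 0],
             [0, 4, 4]]           # hidden rule: 1->3, 2->4, 0 unchanged

rule = induce_color_map(train_in, train_out)
test_in = [[2, 1], [1, 0]]
print(apply_map(rule, test_in))   # [[4, 3], [3, 0]]
```

A solver like this succeeds only when the task happens to fall inside its tiny pre-coded hypothesis space; human solvers, by contrast, improvise new hypotheses on the fly, which is the tacit capacity the benchmark isolates.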

Verifiable Counterexamples and Tests

Empirical assessments of Polanyi's paradox frequently examine labor market outcomes to test its implications for automation, revealing that tasks reliant on tacit knowledge—such as those demanding perceptual-motor skills, adaptability to novel contexts, or interpersonal interaction—have resisted displacement more than routine cognitive tasks. David Autor's analysis of U.S. occupational data from 1979 to 2012 demonstrates job polarization, with middle-skill routine employment (e.g., clerical and production work) declining from 60% to 46% of total employment, while non-routine high-skill professional roles and low-skill manual service occupations expanded. This pattern aligns with the paradox, as computers substitute for codifiable routines but struggle with tacit elements like nuanced judgment in patient care or flexible response in manual trades, verified through task-based decompositions of occupational requirements. Cross-national data reinforce this test: in 16 European Union countries from 1993 to 2010, middle-wage occupations declined by approximately 5 percentage points of employment share, with growth concentrated in high- and low-wage non-routine roles involving tacit skills, such as management or cleaning, which evade rule-based automation. Autor attributes the persistence to Polanyi's insight that machines can infer rules from examples via machine learning (e.g., in speech recognition or image classification) but falter on unstructured variability, as seen in limited IT substitution for high-skill tasks post-2000, when IT investment growth slowed despite productivity gains elsewhere. Verifiable counterexamples to claims of overcoming the paradox include AI limitations in non-routine domains. For instance, early diagnostic systems exhibited errors in hypothesis generation for medical diagnostics, misclassifying ambiguous cases due to insufficient handling of tacit contextual inference, despite training on vast explicit data. 
In engineering applications, state-of-the-art models as of 2025 fail basic physical reasoning tests, such as evaluating fracture risks in slender components (length-to-diameter ratios exceeding 10:1), where human tacit judgment succeeds without formal rules, highlighting persistent gaps in real-world deployment beyond simulated environments. These instances underscore that while data-driven learning circumvents explicit codification, it does not replicate the subsidiary awareness central to tacit knowing.
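The "slender component" judgment mentioned above has a classical explicit counterpart: Euler column buckling, where a sufficiently slender rod fails by buckling rather than crushing. A machinist judges this by eye; writing it out requires geometry and material constants. The rod dimensions and boundary conditions below are illustrative assumptions of ours, not taken from the cited tests.

```python
import math

# Euler buckling load for a slender circular rod (pinned-pinned ends):
#   P_cr = pi^2 * E * I / (K * L)^2
# Illustrative values: a 10 mm steel rod, 150 mm long (L/d = 15 > 10).
E = 200e9                     # Young's modulus of steel, Pa
d, L = 0.010, 0.150           # diameter and length, m
I = math.pi * d**4 / 64       # second moment of area of a circular rod
K = 1.0                       # effective-length factor, pinned-pinned

critical_load = math.pi**2 * E * I / (K * L) ** 2
print(f"Euler buckling load: {critical_load / 1000:.1f} kN")
```

The contrast is the paragraph's point in miniature: the explicit rule demands four named quantities and a formula, while the tacit version of the same judgment runs on unverbalized perceptual cues.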

Moravec's Paradox

Moravec's paradox denotes the observation in artificial intelligence and robotics that high-level cognitive tasks requiring abstract reasoning, such as playing chess or solving mathematical problems, are relatively straightforward to implement in computational systems, whereas low-level sensorimotor activities—like walking on uneven terrain, grasping objects of varying shapes, or visually perceiving everyday environments—prove extraordinarily difficult despite their ease for humans. This disparity arises because human proficiency in such perceptual and motor skills stems from billions of years of evolutionary refinement in biological neural architectures, which operate with massive parallelism, low-precision analog processing, and immense efficiency on limited power, making replication in digital hardware computationally prohibitive. Hans Moravec, a roboticist at Carnegie Mellon University, first articulated the paradox in his 1988 book Mind Children: The Future of Robot and Human Intelligence, stating that it is "comparatively easy to make computers exhibit adult level performance on intelligence tests, or playing chess, or doing calculations, and hard or impossible to give them the skills of a one-year-old baby when it comes to perception and mobility." The paradox underscores a core challenge in AI development: formal symbolic methods excel at explicit, rule-based domains but falter in the tacit, context-dependent integration of sensory data and motor control that humans perform subconsciously after minimal training. Moravec attributed this to the phylogenetic recency of advanced reasoning capabilities in humans—developed over mere millennia—versus the deep entrenchment of perceptual-motor primitives shaped over eons, rendering the latter resistant to efficient algorithmic encoding without vast data and compute resources. 
Empirical evidence persists in robotics, where early mobile manipulators in the 1980s-1990s required specialized hardware and exhaustive programming for basic locomotion, while chess engines like Deep Blue achieved superhuman play by 1997 through brute-force search and evaluation functions. In relation to Polanyi's paradox, Moravec's formulation provides a complementary computational lens, explaining why tacit knowledge in intuitive, embodied tasks resists articulation and automation: these skills are not merely ineffable but demand approximating the brain's evolved, subsymbolic efficiency, which digital systems replicate poorly without mimicking biological scale. Economist David Autor, in his 2014 analysis of automation's labor impacts, explicitly connected the two, noting that Polanyi's emphasis on the limits of formalizing human knowledge aligns with Moravec's insight into the evolutionary roots of computational tractability, whereby "high-level reasoning is straightforward to computerize" but perceptual-motor competence remains elusive, sustaining human advantages in flexible, real-world tasks. This linkage highlights ongoing barriers to general AI: advances in deep learning have narrowed gaps in perception (e.g., via convolutional neural networks for image recognition since the early 2010s) but still lag in seamless, energy-efficient sensorimotor control compared to human baselines.
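The "easy" half of Moravec's paradox can be shown in miniature: exhaustive game-tree search over a fully explicit rule set is a few lines of code. The example below solves the subtraction game Nim (take 1-3 stones; taking the last stone wins) by brute-force minimax-style recursion — while no comparably short program grasps a coffee cup.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position.
    A position is winning iff some legal move leaves the opponent in
    a losing position -- pure explicit-rule search, no intuition."""
    return any(not wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# Losing positions for the player to move: exactly the multiples of 4.
print([n for n in range(1, 13) if not wins(n)])  # [4, 8, 12]
```

Chess engines like Deep Blue are, at heart, this same exhaustive search scaled up with pruning and evaluation heuristics; the sensorimotor side of the paradox has no analogous compact formulation, which is Moravec's (and Polanyi's) point.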

Learning Paradoxes (Meno, Poverty of the Stimulus)

Meno's paradox, articulated in Plato's dialogue Meno (c. 380 BCE), questions the possibility of genuine learning or inquiry: if a person already possesses knowledge of a subject, there is no need to learn it; if they lack it entirely, they cannot identify or pursue it effectively during inquiry. This dilemma underscores a fundamental tension in epistemology, suggesting that learning requires some pre-existing, inarticulable grasp of the domain to guide recognition and assimilation of new information. Polanyi invoked this paradox in Personal Knowledge (1958) to illustrate the indispensable role of tacit knowledge, arguing that explicit articulation alone cannot initiate or sustain inquiry. Polanyi resolved Meno's paradox by positing that tacit knowing enables subsidiary awareness of unarticulated particulars—such as intuitive hunches or pattern intimations—which are focally integrated into coherent wholes without full verbalization. This process allows learners to anticipate and recognize solutions amid uncertainty, bridging the gap between ignorance and discovery; for instance, a problem-solver tacitly relies on background skills to formulate questions, even if unable to specify the evidential basis explicitly. Polanyi's framework thus reframes learning not as a mechanical deduction from explicit premises but as a dynamic, personal commitment grounded in tacit coefficients, which evade complete formalization yet underpin all explicit knowledge. Critics, such as Michael Bradie (1982), have contended that this appeal to tacit knowledge merely relocates the paradox rather than dissolving it, as the origins of such subsidiary awareness remain underspecified. Nonetheless, Polanyi's analysis highlights how tacit integration circumvents the stasis of Meno's dilemma by enabling provisional guidance in exploration. 
The poverty of the stimulus (POS) argument, formalized by Noam Chomsky in works such as Syntactic Structures (1957) and Aspects of the Theory of Syntax (1965), extends similar reasoning to language acquisition: children encounter limited, often erroneous, and finite linguistic input yet reliably attain mastery of infinite, structure-dependent grammars, including rare phenomena like auxiliary inversion in questions (e.g., "Is the man who is tall happy?"). This disparity implies that explicit environmental data alone cannot suffice; instead, innate, domain-specific principles—manifesting as tacit cognitive priors—constrain hypothesis formation and enable generalization beyond observed examples. Empirical support includes children's avoidance of unattested structures, such as moving auxiliaries over long distances in questions, despite no direct negative evidence in the input. In relation to Polanyi's paradox, POS exemplifies how tacit knowledge operates as an unarticulated scaffold for learning tasks resistant to pure stimulus-driven induction: just as Polanyi described skills like face recognition or balance, linguistic competence relies on subsidiary integration of phonological, syntactic, and semantic cues into focal understanding, defying complete algorithmic encoding. While Chomsky emphasized modular innateness, the parallel lies in the insufficiency of explicit rules or data to account for observed proficiency, reinforcing Polanyi's claim that "we know more than we can tell." Empiricist alternatives to POS, such as Bayesian models, attempt to derive similar outcomes from general learning biases and richer implicit data, but these still presuppose unverbalized priors akin to tacit coefficients. Both paradoxes thus converge on the point that human cognition harnesses ineffable, integrative processes to transcend the limitations of articulable inputs.
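The POS argument's core — that ambient data underdetermine the grammar, so a prior bias must decide — can be rendered as a toy Bayesian comparison. All numbers below are illustrative, not drawn from any corpus, and the two hypotheses are caricatures of the standard auxiliary-inversion example.

```python
# Toy Bayesian rendering of the poverty-of-the-stimulus argument.
# Two hypotheses about English question formation:
#   H_lin:    front the linearly first auxiliary
#   H_struct: front the main-clause auxiliary (structure-dependent)
# Simple input like "the man is tall" -> "is the man tall?" is
# consistent with BOTH, so each hypothesis assigns it the same
# likelihood and the posterior just reproduces the prior.

prior = {"H_lin": 0.5, "H_struct": 0.5}
likelihood = {"H_lin": 1.0, "H_struct": 1.0}  # every observed sentence
                                              # is ambiguous between them

def posterior(prior, likelihood, n_obs):
    unnorm = {h: prior[h] * likelihood[h] ** n_obs for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

post = posterior(prior, likelihood, n_obs=1000)
print(post)   # unchanged after 1000 sentences: the data cannot choose

# Only an unequal prior -- a tacit, unverbalized bias -- breaks the tie:
biased = posterior({"H_lin": 0.1, "H_struct": 0.9}, likelihood, 1000)
print(biased)
```

This is also why the paragraph notes that empiricist Bayesian models do not escape the point: wherever the tie-breaking prior comes from, it functions as exactly the kind of unarticulated coefficient Polanyi described.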

Connections to Embodied Cognition

Polanyi's paradox underscores the prevalence of tacit knowledge—intuitive abilities that humans perform effortlessly but cannot fully explicate in rules or instructions—which frequently manifests through embodied skills such as maintaining balance while cycling or intuitively grasping an object's affordances during manipulation. This resonates with embodied cognition theory, which argues that cognitive processes are inextricably linked to sensorimotor experiences and bodily interactions with the physical world, rather than being disembodied symbolic manipulations. Proponents of embodied cognition, drawing on phenomenological traditions, contend that much human understanding emerges from "enactive" engagement, where perception, action, and environment co-constitute knowledge, mirroring Polanyi's notion of subsidiary awareness through bodily tools and extensions. Scholars have explicitly bridged these concepts by extending Polanyi's framework with Maurice Merleau-Ponty's phenomenology, positing that embodiment enables tacit knowledge acquisition via perceptual-motor loops and habitual bodily practices, such as apprenticeships where novices attune to skilled demonstrations through mimetic practice and environmental immersion. For instance, in organizational settings, tacit expertise in skilled practices like surgery or craftsmanship relies on pre-reflective bodily know-how, where explicit instructions alone fail to transmit the full subsidiary particulars that the body integrates intuitively. This connection highlights why formal systems, including computational models, struggle to replicate such knowledge: they lack the organism's embedded, dynamic interface with its milieu, which embodied cognition identifies as foundational to tacit competencies. Empirical support for this interplay appears in studies of skilled performance, where neuroimaging reveals that tacit tasks activate distributed brain-body networks tuned by repeated embodied practice, defying reduction to abstract algorithms. 
Critics of purely representational theories invoke Polanyi's paradox to argue that disembodied approximations overlook these corporeal substrates, potentially underestimating the causal role of physicality in generating intuitive understanding. Thus, the paradox reinforces embodied cognition's challenge to objectivist epistemologies, emphasizing that human knowing is irreducibly personal, proximal, and somatic.

References

  1. [1]
    [PDF] Polanyi's Revenge and AI's New Romance with Tacit Knowledge
    The polymath Polanyi bemoaned the paradoxical fact that human civili- zation focuses on acquiring and codify- ing “explicit” knowledge, even though a.
  2. [2]
    [PDF] Polanyi's Paradox and the Shape of Employment Growth
    I refer to this constraint as Polanyi's paradox, following Michael Polanyi's (1966) observation that, “We know more than we can tell.” When we break an egg ...
  3. [3]
    A New interpretation of Michael Polanyi's theory of tacit knowing
    The models are progressively refined forms of his first conception of tacit knowing: 'we know more than we can tell'. The three models are: the Gestalt ...
  4. [4]
    Guide to the Michael Polanyi Papers 1900-1975 - UChicago Library
    Michael Polanyi, chemist and philosopher, was born, Budapest, Hungary, 1891. He received his M.D. (1913) and Ph.D (1917) from the University of Budapest. He ...
  5. [5]
    Michael Polanyi—pupils and crossroads—on the 125th anniversary ...
    Aug 17, 2016 · Michael Polanyi (1891–1976) was a Hungarian-born British physician turned physical chemist turned philosopher.
  6. [6]
    Michael Polanyi
    Brief Biography of Michael Polanyi. Michael Polanyi (1891-1976), was born and raised in Budapest; he trained and served as a physician during World War I.Missing: intellectual background education
  7. [7]
    Michael Polanyi (1891-1976): Introduction to an Unfinished Revolution
    Oct 8, 2014 · and, more broadly, truly responsible, creative, and rational thinking. His years of experience in actually doing science, not simply theorizing
  8. [8]
    Michael Polanyi and the History of Science
    This essay is a study of Polanyi's career as scientist and philosopher from the point of view of the history of science, starting with the first step in his ...
  9. [9]
    Michael Polanyi and tacit knowledge - infed.org
    Michael Polanyi (1891-1976) made a profound contribution both to the philosophy of science and social science. Born in Budapest into a upper class Jewish family ...Missing: influences | Show results with:influences
  10. [10]
    [PDF] the tacit dimension peter smith publisher, inc. gloucester, ma 01930
    Jul 13, 2010 · Following the example set by Lazarus and Mc-. Cleary in 1949, psychologists call the exercise of this faculty a process of "subception." These ...
  11. [11]
    The Tacit Dimension, Polanyi, Sen - The University of Chicago Press
    The Tacit Dimension argues that tacit knowledge—tradition, inherited practices, implied values, and prejudgments—is a crucial part of scientific knowledge.
  12. [12]
    The Tacit Dimension [Excerpt] – TACK
    Here we have the basic definition of the logical relation between the first and second term of a tacit knowledge. It combines two kinds of knowing. We know the ...
  13. [13]
    [PDF] M.Polanyi , The Tacit Dimension: Chapter 1. (1966)1
    M.Polanyi , The Tacit Dimension: Chapter 1. (1966)1. The Tacit Dimension ... Beginning with his assertion that “we know more than we can tell”; in the first.
  14. [14]
    [PDF] TACIT KNOWLEDGE IN PLATO Aryeb Botwinick Temple University
    By the semantic aspect of tacit knowing, Polanyi is referring to the fact that "all meaning tends to be displaced away from ourselves" (1967, p. 13). The point ...
  15. [15]
    Tacit Knowing: What it is and Why it Matters | Episteme
    Oct 14, 2021 · Polanyi ( 1966: 20) stressed the pervasiveness of tacit knowing: “But suppose that tacit thought forms an indispensable part of all knowledge”, ...Missing: predecessors | Show results with:predecessors
  16. [16]
    [PDF] Knowledge and Learning Title: Aristotle Meets Polanyi: Exploring ...
    This paper looks to explore the tacit dimension of practical wisdom or phronesis. It shows that recently the tacit dimension has been increasingly ...
  17. [17]
    Aristotle meets Polanyi: exploring the tacit dimension of practical ...
    Sep 5, 2016 · Through a comparison of the work of Aristotle and Polanyi it shows that tacit knowledge is not a form of practical judgement, a process guided ...
  18. [18]
    [PDF] TACIT KNOWLEDGE - LSE
    The term “tacit knowledge” comes to us courtesy of Michael. Polyani, a chemical engineer turned philosopher of science. This biographical detail is not.Missing: influences | Show results with:influences
  19. [19]
    Oakeshott's Affinity for the Polanyian Vision of Human Activity
    Soon after its publication in 1958, Michael Oakeshott wrote a thoughtful review of Michael Polanyi's Personal Knowledge. Immediately capturing the reader's ...
  20. [20]
    [PDF] Tacit Knowledge Revisited -- We Can Still Learn from Polanyi
    The Tacit-Explicit dimension of knowledge is one of the most widely discussed topics in knowledge management. The pivotal work might be seen as. Nonaka and ...
  21. [21]
    [PDF] Rethinking Polanyi's Concept of Tacit Knowledge
    Half a century after Michael Polanyi conceptualised 'the tacit component' in personal knowing, management studies has reinvented 'tacit knowledge'—albeit in.<|control11|><|separator|>
  22. [22]
    The use of tacit and explicit knowledge in public health: a qualitative ...
    Mar 20, 2012 · According to Polanyi's concept, tacit knowledge is related to individual skills while embedded in context. Further, he saw tacit knowledge as ...
  23. [23]
    Steven Shapin · An Example of the Good Life: Michael Polanyi
    Dec 15, 2011 · 'We know more than we can tell' was Polanyi's dictum. We know how to ride a bicycle, but we can't write down how to do it, at least not in a ...
  24. [24]
    20th WCP: The Role of Tacit Knowledge in Religion
    (4) In The Tacit Dimension Polanyi takes as a starting point the fact that we know more than we can tell. For example, we recognize the face of an ...
  25. [25]
    Introducing Michael Polanyi to a post-truth world - Fulcrum Anglican
    Mar 15, 2018 · We are tacitly aware in our bodies of a knowledge that we cannot fully articulate. We know more than we can tell. But these subsidiary clues ...
  26. [26]
    [PDF] Tacit Dimension Michael Polanyi
    His concept of tacit knowledge emphasizes the importance of the unspoken, intuitive, and experiential aspects of knowing, which contrasts sharply with explicit ...
  27. [27]
    (PDF) Tacit knowledge, tacit knowing, or behaving? - ResearchGate
    Polanyi in fact wrote of 'tacit knowing', a process, and so may have been misinterpreted. His emphasis on process may prove fruitful as a perspective on knowing ...
  28. [28]
    [PDF] the roots of tacit knowledge: intuitive and personal judgment in ...
    Polanyi says that the concept of tacit knowledge is “necessarily fraught with the roots that it embodies” (TD, xviii). This paper demonstrates that these ...
  29. [29]
    Polanyi's Paradox - Encyclopedia.pub
    Nov 2, 2022 · Polanyi's paradox is mainly to explain the cognitive phenomenon that there exist many tasks which we, human beings, understand intuitively how to perform.
  30. [30]
    [PDF] Michael Polanyi on Creativity in Science - HAL-SHS
    Jun 13, 2020 · First, Polanyi's approach to tacit knowledge offers an interesting answer to the so-called Meno's paradox9 as acknowledged in the Foreword ...<|separator|>
  31. [31]
    [PDF] POLANYI'S CRITICISM OF POSITIVISM (1946-1952)
    According to Polanyi, the crucial steps leading to scientific discovery are a matter of intuition, creativity, instinct, and personal commitment. For these ...
  32. [32]
    Michael Polanyi on creativity - OpenEdition Journals
    More generally, his perspective on tacit knowledge as a process of tacit inference is strongly related to creativity. For Polanyi, tacit knowledge cannot be ...
  33. [33]
    Tacit Knowledge - an overview | ScienceDirect Topics
    The same ideas are used by Polanyi to explain why science is inherently creative ... we know more than we can tell. Instead, Polanyi stresses the importance of ...
  34. [34]
    [PDF] Personal Knowledge and Human Creativity - The Polanyi Society
    The keystone of Polanyi's epistemology is his idea that tacit knowing integrates subsidiary knowledge and creates personal meaning. However, Polanyi's ...
  35. [35]
    The combination of explicit and implicit learning processes in task ...
    Two experiments look at the combination of explicit and implicit learning processes on a single task. Subjects are required to control the rate of sugar output.
  36. [36]
    The combination of explicit and implicit learning processes in task ...
    Jun 1, 1987 · Two experiments look at the combination of explicit and implicit learning processes on a single task required to control the rate of sugar ...Missing: factory | Show results with:factory
  37. [37]
    Implicit learning of artificial grammars - ScienceDirect.com
    December 1967, ... Implicit Learning and Tacit Knowledge: An Essay on the Cognitive Unconscious. 2008, Implicit Learning and Tacit Knowledge an Essay on the ...
  38. [38]
    Implicit learning and tacit knowledge. - APA PsycNet
    Unpublished doctoral dissertation, Brown University. Reber, A. S. (1967). Implicit learning of artificial grammars. Journal of Verbal Learning & Verbal Behavior ...
  39. [39]
    Practical intelligence in real-world pursuits: The role of tacit ...
    Examined the role of tacit knowledge (knowledge that usually is not openly expressed or taught) in intellectual competence in real-world pursuits.
  40. [40]
    Tacit Knowledge, Practical Intelligence, and Expertise. - APA PsycNet
    In this chapter, the authors discuss a psychological approach to exploring expertise that is based on the theory of practical intelligence and tacit ...
  41. [41]
    (PDF) Implicit Learning and Tacit Knowledge - ResearchGate
    Implicit learning produces a tacit knowledge base that is abstract and representative of the structure of the environment.
  42. [42]
    (PDF) The Evolution of AI: From Rule-Based Systems to Data-Driven ...
    Jan 15, 2025 · The evolution of artificial intelligence (AI) reflects a transformative journey from rudimentary rule-based systems to sophisticated, ...
  43. [43]
    Understanding the Knowledge Acquisition Bottleneck in Artificial ...
    Apr 30, 2025 · This bottleneck refers to the inherent difficulty, cost, and time involved in extracting knowledge from human experts or other sources (like ...
  44. [44]
    The Knowledge Acquisition Bottleneck: Time for Reassessment?
    Abstract: Knowledge acquisition has long been considered to be the major constraint in the development of expert systems. Conventional wisdom also maintains ...
  45. [45]
    The Rise and Fall of Symbolic AI. Philosophical presuppositions of AI
    Sep 13, 2019 · One difficult problem encountered by symbolic AI pioneers came to be known as the common sense knowledge problem. In addition, areas that rely ...
  46. [46]
    What is the brittleness problem in AI reasoning? - Zilliz
    The brittleness problem in AI reasoning refers to the tendency of artificial intelligence systems, particularly rule-based or logic-driven systems, to fail
  47. [47]
    The Evolution of Symbolic AI: From Early Concepts to Modern ...
    However, by the end of the decade, a second AI winter set in as these systems proved too brittle and complex to maintain.
  48. [48]
    Polanyi's Paradox and the Shape of Employment Growth | NBER
    Sep 11, 2014 · This paper offers a conceptual and empirical overview of this evolution. I begin by sketching the historical thinking about machine displacement of human labor.
  49. [49]
    Polanyi's Revenge and AI's New Romance with Tacit Knowledge
    Feb 1, 2021 · The polymath Polanyi bemoaned the paradoxical fact that human civilization focuses on acquiring and codifying “explicit” knowledge, even though ...
  50. [50]
    [PDF] Artificial Intelligence and work: a critical review of recent research ...
    Apr 3, 2022 · "What is skill?" Work and occupations 17(4): 422-448. • Autor, D. (2014). Polanyi's paradox and the shape of employment growth, National ... for ...
  51. [51]
    The Moral Imperative of Artificial Intelligence
    May 1, 2016 · By relying on learned "intuition," AlphaGo is able to overcome the so-called Polanyi's Paradox. The philosopher Michael Polanyi observed in 1966 ...
  52. [52]
    Why AI might finally break Polanyi's Paradox - Exponential View
    Nov 27, 2024 · Polanyi's catchphrase explanation of tacit knowledge is: We know more than we can tell. Polanyi in 1931. Archiv der Max-Planck-Gesellschaft ...
  53. [53]
    Roles of Artificial Intelligence in Collaboration with Humans
    Oct 15, 2025 · Although machine learning is described by some as a way to resolve Polanyi's paradox and to close the knowledge gap between humans and AI ...
  54. [54]
    Human-Aware AI Tackles Polanyi Paradox in Scientific Discovery
    May 8, 2025 · According to the Polanyi Paradox, AI models that only learn from explicit knowledge will lack the tacit or intuitive knowledge of humans and may generate ...
  55. [55]
  56. [56]
    Difficulties in Diffusion of Tacit Knowledge in Organizations
    Aug 7, 2025 · Recent studies highlight the challenge of distinguishing general knowledge from TK and the difficulty of articulating TK, which adds to its ...
  57. [57]
    (PDF) Tacit Knowledge for the Development of Organizations
    Apr 19, 2017 · Knowledge is mainly divided into two types: tacit and explicit. The purpose of this study is to examine the concept of tacit knowledge ...
  58. [58]
    Importance of Organizational Tacit Knowledge: Barriers to ...
    This chapter incorporates the relevance of tacit knowledge and highlights some major barriers to knowledge sharing.
  59. [59]
    The use of tacit and explicit knowledge in public health: a qualitative ...
    Mar 20, 2012 · The findings of this study demonstrate that tacit knowledge is drawn upon, and embedded within, various stages of the process of program planning in public ...
  60. [60]
    Local knowledge, formal evidence, and policy decisions
    Tacit knowledge can be defined as knowledge held by an individual that is hard to formalize. It may be based on personal experiences and intuitions (Polanyi, ...
  61. [61]
    Exploring the impact of language models on cognitive automation ...
    Mar 6, 2023 · I call this Polanyi's Paradox: many of the things that we do, we don ... What we at Brookings are most interested in are the policy implications ...
  62. [62]
    ARC Prize
    The foundation is the steward of the ARC-AGI benchmark, which measures general intelligence through skill acquisition efficiency.
  63. [63]
    A new, challenging AGI test stumps most AI models | TechCrunch
    Mar 24, 2025 · The new test, called ARC-AGI-2, has stumped most models. “Reasoning” AI models like OpenAI's o1-pro and DeepSeek's R1 score between 1% and 1.3% on ARC-AGI-2.
  64. [64]
    OpenAI o3 Breakthrough High Score on ARC-AGI-Pub
    Dec 20, 2024 · OpenAI's new o3 system - trained on the ARC-AGI-1 Public Training set - has scored a breakthrough 75.7% on the Semi-Private Evaluation set at our stated public ...
  65. [65]
    Theory Is All You Need: AI, Human Cognition, and Causal Reasoning
    Dec 3, 2024 · An LLM does not reason. And an LLM has no way of postulating beyond what it has encountered in its training data. Next, we extend this problem ...
  66. [66]
    Frontier AI Models Still Fail at Basic Physical Tasks - LessWrong
    Apr 14, 2025 · Failure to Identify Risk: Gemini 2.5 consistently fails to recognize that the part, with a length-to-diameter ratio clearly exceeding 10:1 ( ...
  67. [67]
    Mind Children - Harvard University Press
    Jan 2, 1990 · Hans Moravec convincingly argues that we are approaching a watershed in the history of life—a time when the boundaries between biological and ...
  68. [68]
    [PDF] Polanyi's Paradox and the Shape of Employment Growth
    I refer to this constraint as Polanyi's paradox, following Michael Polanyi's (1966) observation that, “We know more than we can tell.” When we break an egg ...
  69. [69]
    Jay Richards: AI, Robots, and Moravec's Paradox | Mind Matters
    Sep 5, 2025 · The paradox was developed by Hans Moravec, professor of robotics at Carnegie Mellon University and author of Mind Children: The Future of Robot ...
  70. [70]
    A Theory-Based AI Automation Exposure Index - arXiv
    The study assesses the "Suitability for Machine Learning ... Autor (2014) calls this Polanyi's Paradox and identifies it as a main driver of labor market outcomes ...
  71. [71]
    [PDF] tacit knowing - The Polanyi Society
    This kind of knowing solves the paradox of the Meno by making it possi- ble for us to know something so indeterminate as a problem or a hunch, but when the use ...
  72. [72]
    Polanyi on the Meno Paradox | Philosophy of Science
    Mar 14, 2022 · Polanyi argues that a paradox discussed in the Meno cannot be solved without appeal to this notion of tacit knowledge. Here I want to argue ...
  73. [73]
    [PDF] The Poverty of the Stimulus Argument* - PhilArchive
    Abstract. Noam Chomsky's Poverty of the Stimulus Argument is one of the most famous and controversial arguments in the study of language and the mind.
  74. [74]
    [PDF] Poverty of the Stimulus? A Rational Approach
    The Poverty of the Stimulus (PoS) argument holds that children do not receive enough evidence to infer the exis- tence of core aspects of language, ...
  75. [75]
    Causal Reasoning and Meno's Paradox | AI & SOCIETY
    Aug 16, 2020 · What Polanyi's resolution of Meno's Paradox suggests is that inquiry into the nature of both causation and our knowledge about causation is ...
  76. [76]
    How Does Embodiment Enable the Acquisition of Tacit Knowledge ...
    Jan 17, 2024 · We provide a process account of how embodiment enables tacit knowledge acquisition, by developing further Polanyi's insights through Merleau- ...
  77. [77]
    Beyond skills: reflections on the tacit knowledge-brain-cognition ...
    Jul 1, 2024 · Tacit knowledge is commonly judged inferior to explicit knowledge in many professional settings [4, 71], and very often its implications and the ...
  78. [78]
    (PDF) How Does Embodiment Enable the Acquisition of Tacit ...
    We provide a process account of how embodiment enables TK acquisition, by developing further Polanyi's insights through Merleau-Ponty's phenomenology and ...
  79. [79]
    Tacit Knowledge of Caring and Embodied Selfhood - PMC
    Knowledge is said to be tacit when it cannot be explicitly articulated (Polanyi 1966) and when the body knows what to do without deliberation or forethought ( ...
  80. [80]
    (PDF) An embodied cognition framework for interactive experience
    Aug 9, 2025 · 15). Following Polanyi, we aim to understand the nature of the interactive art experience by. adopting an embodied cognition perspective. When ...