Joshua Tenenbaum

Joshua Brett Tenenbaum is an American cognitive scientist specializing in computational models of human learning, reasoning, and perception. He is a professor of computational cognitive science in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology (MIT), where he is also a principal investigator at the Center for Brains, Minds, and Machines (CBMM), an investigator at the Computer Science and Artificial Intelligence Laboratory (CSAIL), and Director of Science at the MIT Quest for Intelligence. Tenenbaum's research employs probabilistic and Bayesian statistical models to explain how humans efficiently acquire knowledge from limited data, bridging cognitive science, machine learning, and artificial intelligence. Tenenbaum earned a B.S. magna cum laude in physics from Yale University in 1993 and a Ph.D. in brain and cognitive sciences from MIT in 1999, under advisor Whitman Richards. Following his doctorate, he conducted postdoctoral research at the MIT AI Laboratory before joining Stanford University as an assistant professor in psychology and computer science from 1999 to 2002. He then returned to MIT, advancing from assistant professor (2002–2007) to associate professor (2007–2011), and has held the position of full professor since 2011. Tenenbaum's contributions include developing Bayesian frameworks for tasks such as concept learning, causal induction, word learning, and intuitive physics, validated through behavioral experiments with children and adults. His work has advanced systems that mimic human-like learning from sparse data and has been published in leading journals such as Science and Trends in Cognitive Sciences. Among his honors are the 2019 MacArthur Fellowship for his innovative integration of computational modeling and behavioral studies, election to the American Academy of Arts and Sciences in 2020, the 2011 Troland Research Award from the National Academy of Sciences, and the 2023 Schmidt Futures AI2050 Senior Fellowship.

Early Life and Education

Early Life

Joshua Tenenbaum was born on August 21, 1972, in Stanford, California. He grew up in California in a family deeply engaged with artificial intelligence and education research, which fostered his early curiosity about the mind and intelligence. His father, Jay Martin "Marty" Tenenbaum, was a prominent figure in artificial intelligence, having led computer vision research at SRI International's AI Center in the 1970s and later pioneering electronic commerce technologies. His mother, Bonnie Tenenbaum, was an educator who earned a Ph.D. in education, and her work emphasized how children learn and develop understanding. This household environment, blending AI innovation and cognitive studies, provided constant exposure to discussions on human and machine intelligence from a young age. Tenenbaum's interests in science and the workings of the mind emerged early, supported enthusiastically by his parents. For example, as a schoolchild he undertook a project on optical illusions, guided by his father. The intellectual atmosphere at home, marked by his parents' professional pursuits, laid the groundwork for his lifelong focus on the computational study of the mind. This foundation propelled him toward formal studies, leading to his enrollment at Yale University for undergraduate work.

Academic Training

Joshua Tenenbaum earned a degree in physics, magna cum laude, from Yale University in 1993. During his undergraduate studies, he engaged in coursework in physics while encountering courses on the mind and brain taught by psychologists and cognitive scientists, which began shifting his interests toward cognitive modeling. A pivotal early research experience came from a summer collaboration with a cognitive psychologist, in which Tenenbaum explored mathematical and computational approaches to studying mental processes, fostering his focus on probabilistic models of cognition. Tenenbaum pursued graduate studies at the Massachusetts Institute of Technology (MIT), receiving a Ph.D. in Brain and Cognitive Sciences in 1999. His dissertation, titled "A Bayesian Framework for Concept Learning," examined probabilistic methods for human concept acquisition under the advisement of Whitman Richards, a pioneer in computational approaches to vision and perception. This training in Bayesian inference and cognitive modeling built directly on his undergraduate explorations, solidifying his interdisciplinary approach to understanding the mind through computational lenses.

Professional Career

Early Positions

Following his Ph.D. in Brain and Cognitive Sciences from MIT in 1999, where he developed foundational work in Bayesian learning, Joshua Tenenbaum served as a Postdoctoral Associate at the MIT Artificial Intelligence Laboratory for a brief period that year. He then joined Stanford University as Assistant Professor of Psychology from 1999 to 2002. In 2000, he also received a courtesy appointment in the Department of Computer Science, reflecting the interdisciplinary nature of his expertise in psychology and computation. At Stanford, Tenenbaum's primary responsibilities included teaching undergraduate and graduate courses central to cognitive science and computational modeling. He taught Psychology 205: Foundations of Cognition in the fall semesters of 1999, 2000, and 2001, providing students with an introduction to core principles of human cognition. Additionally, he offered Psychology 224: Learning and Inference in Humans and Machines in the spring semesters of 2000 and 2001, exploring computational approaches to learning processes across biological and artificial systems. These courses underscored his role in bridging psychological theory with computational methodologies during his early academic career. Tenenbaum began establishing his research presence at Stanford through early supervision of students and collaborators, laying the groundwork for computational modeling in cognitive science. He advised undergraduate researchers such as Mark Pearson and Neville Sanjana from 2000 to 2001, worked with postdoctoral researcher Mark Steyvers from 2000 to 2002, and initiated supervision of Ph.D. student Thomas L. Griffiths, who completed his degree in 2004. These efforts marked the formation of his initial research group at the institution, focused on interdisciplinary projects at the intersection of psychology and computer science.

MIT Professorship

Joshua Tenenbaum joined the faculty of the Massachusetts Institute of Technology (MIT) in 2002 as an Assistant Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences, following his early faculty years at Stanford University. His career at MIT progressed rapidly, with promotion to Associate Professor in 2007, coinciding with the granting of tenure by the MIT Corporation. During this period, he was appointed to the Paul E. Newton Career Development Chair in 2004. He advanced to full Professor of Computational Cognitive Science in 2011, a position he continues to hold. At MIT, Tenenbaum serves as the principal investigator and leader of the Computational Cognitive Science group, fostering interdisciplinary research at the intersection of cognitive science and computation. He is also a principal investigator in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), where he has contributed to advancing computational approaches to intelligence since 2003. Tenenbaum plays a prominent role in MIT's Quest for Intelligence initiative, launched in 2018 to explore the mechanisms of natural and artificial intelligence through multidisciplinary efforts; as of 2025, he serves as the Director of Science for the program. In addition to his leadership, Tenenbaum has made significant contributions to education at MIT, developing and teaching core courses such as 9.66/6.804 Computational Cognitive Science, which introduces probabilistic reasoning frameworks for modeling human cognition, and the graduate course 9.012, offered regularly since 2002.

Research Contributions

Core Theories

Joshua Tenenbaum's core theoretical framework posits cognition as a form of probabilistic inference within structured generative models, where the mind generates hypotheses about the world and updates beliefs based on observed data using Bayesian principles. This approach views human learning and reasoning as inverting generative processes that describe how data arise from underlying causes, allowing for efficient generalization from limited evidence. Structured generative models incorporate hierarchical and relational representations to capture the compositional nature of knowledge, enabling the mind to simulate potential outcomes and infer latent structures. In his seminal 2006 work, Tenenbaum, along with Thomas L. Griffiths and Charles Kemp, developed theory-based Bayesian models of inductive learning and reasoning, which integrate domain-specific intuitive theories with statistical inference to explain how humans generalize from sparse data. These models treat inductive processes as Bayesian updates over structured knowledge representations, where priors derived from theories constrain possible hypotheses, facilitating robust inferences about categories, properties, and causal relations. By embedding theories within probabilistic frameworks, the approach reconciles rule-based and probabilistic accounts of cognition, emphasizing how structured priors guide learning beyond mere similarity or frequency-based statistics. Tenenbaum further advanced this framework in "How to Grow a Mind: Statistics, Structure, and Abstraction," co-authored with Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman, which integrates statistics, structured representations, and abstraction as complementary mechanisms for learning. Statistics provide the tools for handling uncertainty and updating beliefs from data, while structure imposes hierarchical and relational organization on knowledge to enable compositionality and transfer across domains.
Abstraction allows for the extraction of high-level principles that guide efficient reasoning, forming a unified theory where these elements interact within Bayesian generative models to support rapid, flexible learning akin to human cognition. A central tenet of Tenenbaum's theories is that humans employ approximate inference algorithms to perform these computations efficiently, as exact inference in complex structured models is often computationally intractable. These approximations, such as sampling-based methods or variational techniques, enable the mind to balance accuracy and speed, mirroring how humans handle real-world ambiguity and scale to rich, multifaceted data. This idea underscores the practicality of probabilistic cognition, allowing for rapid inference without exhaustive computation.
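As a minimal, hypothetical sketch of the sampling-based approximations described above (the model and numbers are invented for illustration, not taken from Tenenbaum's work), the snippet below approximates a Bayesian posterior by importance sampling: draws from a uniform prior over a coin's bias are reweighted by the likelihood of the observed flips.

```python
import random

def approx_posterior_mean(data, n_samples=50000, seed=0):
    """Self-normalized importance sampling: draw theta from the
    Uniform(0, 1) prior and weight each draw by the Bernoulli
    likelihood of the observed flips."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_samples):
        theta = rng.random()                    # sample from the prior
        w = 1.0
        for x in data:                          # likelihood of the data
            w *= theta if x else 1.0 - theta
        num += w * theta
        den += w
    return num / den

# 8 heads in 10 flips; the exact posterior is Beta(9, 3) with mean 0.75
est = approx_posterior_mean([1] * 8 + [0] * 2)
```

Because the weights concentrate on plausible hypotheses, the estimate approaches the exact Beta-posterior mean of 0.75 without enumerating or integrating over the full hypothesis space.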

Key Models and Applications

Tenenbaum's Bayesian framework for concept learning, introduced in his dissertation, models human generalization from sparse positive examples by treating concept acquisition as probabilistic inference over a structured hypothesis space. The framework defines a space of candidate hypotheses representing possible concept extensions, such as mathematical rules (e.g., powers of 2), spatial regions (e.g., axis-parallel rectangles parameterized by location and size), or hierarchical categories derived from similarity judgments. Priors over hypotheses encode background knowledge or inductive biases, such as Erlang distributions favoring compact or natural concepts, while likelihoods follow the "size principle," assigning higher probability to observations under smaller hypotheses as the number of examples n increases: p(X|h) = (1/|h|)^n if the examples are consistent with h (where |h| is the size of the hypothesis's extension), and 0 otherwise. The posterior p(h|X) ∝ p(X|h) p(h) is then used to compute generalization probabilities by marginalizing over hypotheses: p(y ∈ C | X) = Σ_h p(y ∈ C | h) p(h|X), enabling a shift from broad, similarity-based predictions with few examples to precise, rule-based ones with more data, as validated in experiments on numerical, spatial, and lexical concepts. Building on this foundation, Tenenbaum developed probabilistic models for causal induction and word learning that leverage hierarchical Bayesian structures to integrate theory-based priors with data-driven updates. In causal induction, the approach posits a hypothesis space of possible causal graphs or structures, with priors favoring sparse, domain-appropriate relations (e.g., linear or cyclic dependencies in intuitive physics), and inference via Bayesian structure learning to explain observed covariations, as in the theory-based causal induction model that predicts human judgments on interventions and counterfactuals by sampling from posterior distributions over causal models.
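The size principle can be seen in a toy version of the "number game" analyzed in the dissertation; the particular four-hypothesis space below is invented for illustration. After observing {16, 8, 2, 64}, the smallest consistent hypothesis (powers of two) dominates, so generalization to 32 is judged far more probable than to 10.

```python
# Illustrative hypothesis space over the integers 1..100 (made up for this sketch)
hypotheses = {
    "powers of two":   {2 ** k for k in range(1, 7)},         # {2, 4, ..., 64}
    "even numbers":    set(range(2, 101, 2)),
    "multiples of 10": set(range(10, 101, 10)),
    "all numbers":     set(range(1, 101)),
}
prior = {name: 1.0 / len(hypotheses) for name in hypotheses}  # uniform prior

def posterior(examples):
    """Bayesian update with the size principle: p(X|h) = (1/|h|)^n
    if every example is consistent with h, and 0 otherwise."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            scores[name] = prior[name] * (1.0 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

def p_in_concept(y, examples):
    """Generalization probability by marginalizing over hypotheses."""
    return sum(p for name, p in posterior(examples).items()
               if y in hypotheses[name])

post = posterior([16, 8, 2, 64])
```

With fewer examples the posterior is more spread across the consistent hypotheses, reproducing the broad-to-sharp shift from similarity-like to rule-like generalization described above.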
For word learning, the model treats meanings as nested taxonomic hypotheses (e.g., subordinate to superordinate levels like "collie" to "animal"), using size-principled likelihoods p(X|h) ∝ 1/(height(h) + ε)^n and priors biased toward distinctive basic-level categories, allowing learners to infer referents from just a few labeled examples and produce graded generalization, with strong fits to adult (r = .99) and child data (r = .91) across experiments involving novel labels for objects. These models extend the Bayesian framework to handle probabilistic causal structures, enabling efficient learning in ambiguous environments. Tenenbaum's applications to intuitive physics involve probabilistic simulation engines that model object motion and interactions by sampling trajectories from generative physical processes, approximating Newtonian mechanics with uncertainty in initial states and forces to predict outcomes like stability or collision paths. In one such model, an intuitive physics engine uses sampling (6 trials per prediction) over scene representations with geometric noise (σ = 0.2) and external forces (φ = 0.01), achieving high correlation (ρ = 0.92) with human judgments on block-stacking stability and dynamic events like bumping, thus capturing everyday physical reasoning biases and illusions. For intuitive psychology, particularly theory of mind, Tenenbaum contributed to rational Bayesian models that infer others' mental states (beliefs, goals) from observed actions via probabilistic inference over goal-directed plans, as in the false-belief task where the model computes posteriors over belief representations to predict performance transitions in children, integrating evidential support for mistaken beliefs through hierarchical sampling of rational action principles.
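The noisy-simulation idea behind these intuitive physics models can be caricatured in a few lines; the geometry and numbers here are invented and far simpler than the 3D scenes used in the actual studies. Stability of a two-block tower is judged by resampling noisy percepts of the top block's position and counting how often its center of mass would fall beyond the supporting block's edge.

```python
import random

def p_falls(offset, half_width=1.0, sigma=0.2, n_samples=10000, seed=1):
    """Monte Carlo stability judgment: sample Gaussian perceptual noise on
    the top block's center and apply a toy tipping criterion (center of
    mass beyond the edge of the supporting block)."""
    rng = random.Random(seed)
    falls = 0
    for _ in range(n_samples):
        perceived = offset + rng.gauss(0.0, sigma)  # noisy percept of position
        if abs(perceived) > half_width:             # unsupported -> tips over
            falls += 1
    return falls / n_samples

stable = p_falls(offset=0.0)      # top block centered on its support
precarious = p_falls(offset=0.9)  # top block hanging near the edge
```

The graded output, a probability of falling rather than a binary verdict, is what lets such models fit graded human stability judgments.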
To facilitate these simulations, Tenenbaum co-developed the Church probabilistic programming language, a Lisp-based universal system for specifying stochastic generative models of cognitive processes, enabling modular composition of priors, likelihoods, and inference routines. Church supports exact and approximate inference via methods such as rejection sampling and Markov chain Monte Carlo, with features such as stochastic memoization for non-parametric models (e.g., infinite Gaussian mixtures or hierarchical Dirichlet processes) and query functions for conditioning on observations, allowing researchers to prototype and test Bayesian models of learning, reasoning, and categorization (such as inferring diagnostic causes from symptoms or clustering data into cognitive categories) directly as executable code.
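Church itself is Scheme-based, but its conditioning idiom can be mimicked in Python. The sketch below is a loose, hypothetical analogue of a Church-style rejection query, with invented probabilities: run the generative model forward, discard runs inconsistent with the observation, and read the posterior off the survivors.

```python
import random

rng = random.Random(0)

def model():
    """Forward generative model: two independent possible causes of
    sneezing (the 0.2 and 0.3 priors are made up for this example)."""
    cold = rng.random() < 0.2
    allergy = rng.random() < 0.3
    sneezing = cold or allergy      # either cause produces the symptom
    return cold, sneezing

def rejection_query(n_samples=20000):
    """Condition on the observation by rejection: keep only samples in
    which sneezing occurred, then estimate P(cold | sneezing)."""
    kept = [cold for cold, sneezing in (model() for _ in range(n_samples))
            if sneezing]
    return sum(kept) / len(kept)

posterior_cold = rejection_query()  # exact answer: 0.2 / 0.44 ≈ 0.455
```

Observing the symptom raises the probability of a cold from its 0.2 prior to roughly 0.455, the kind of diagnostic inference described above, expressed directly as executable code.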

Impact on AI and Cognitive Science

Tenenbaum's research has significantly bridged human cognition and machine learning by emphasizing one-shot learning and reverse engineering intelligence, enabling AI systems to mimic the efficiency of human generalization from limited data. His work demonstrates how probabilistic models can invert compositional causal processes to achieve rapid learning of visual concepts, contrasting with data-intensive deep learning approaches that require thousands of examples. This paradigm shift has influenced AI development toward more robust, human-like inference, as evidenced by his advocacy for integrating cognitive principles to overcome limitations in current machine learning architectures. Through advancements in probabilistic programming, Tenenbaum has contributed foundational tools for constructing human-like systems, notably via the development of languages like Church that facilitate Bayesian inference over generative models of cognition. These frameworks allow machines to perform approximate inference on complex probabilistic programs, supporting tasks such as intuitive physics simulation and theory-of-mind reasoning, which align more closely with human thought processes. His efforts have popularized probabilistic programming as a unifying approach for modeling cognition in both humans and machines, fostering interpretable and flexible architectures. Tenenbaum's studies on child learning have profoundly shaped AI research by drawing parallels between infant cognition and machine robustness, arguing that exploring how children acquire concepts from sparse interactions can inform more adaptive systems. For instance, his lab's behavioral experiments with young learners reveal mechanisms of rapid concept formation that inspire models capable of few-shot learning in dynamic environments. This interdisciplinary lens has spurred initiatives to replicate developmental trajectories in machines, enhancing generalization beyond supervised datasets.
His collaborations with leading AI labs, including DeepMind and MIT's CSAIL through projects like the Curious Minded Machine, have amplified his influence, integrating cognitive models into practical AI applications such as physics-aware simulations. As of November 2025, Tenenbaum's publications have garnered over 137,000 citations on Google Scholar, underscoring their widespread adoption in AI and cognitive science research. Recent extensions of his work include developments in world models for bilevel planning and domain-specific probabilistic programming languages for Bayesian mechanics, advancing scalable systems that incorporate structured priors for complex reasoning and physical simulation. As a scientific director of MIT's Quest for Intelligence, Tenenbaum has driven efforts to cultivate collective human and machine intelligence, promoting initiatives that combine computational modeling with empirical studies to scale toward human-level understanding. This role has facilitated cross-disciplinary advancements, emphasizing ethical and foundational progress in AI.

Awards and Honors

Prestigious Fellowships

In 2019, Joshua Tenenbaum was selected as a MacArthur Fellow by the John D. and Catherine T. MacArthur Foundation, receiving an unrestricted grant of $625,000 disbursed over five years without any reporting requirements. The fellowship recognized his pioneering work in computational cognitive science, particularly for developing innovative probabilistic modeling approaches that integrate computational models with behavioral experiments to illuminate human learning, reasoning, and perception. Selection for the MacArthur Fellowship occurs through an anonymous nomination process, with candidates identified and vetted by a diverse committee of experts who prioritize exceptional creativity and potential for future impact across fields. In 2023, Tenenbaum was named one of seven AI2050 Senior Fellows by Schmidt Sciences, an initiative aimed at harnessing artificial intelligence for long-term societal benefits. This fellowship provides up to $1 million in funding over three years to support ambitious, interdisciplinary projects addressing AI's risks and opportunities. Tenenbaum's project focuses on scaling probabilistic models of cognition by drawing insights from human learning to create more robust and human-like AI systems, with the goal of reverse-engineering intelligence to advance AI for societal good. Senior Fellows are chosen through a competitive, multi-stage review by expert panels emphasizing high-risk, high-reward research that tackles "hard problems" in AI, such as ensuring equitable and beneficial outcomes by 2050. These awards underscore Tenenbaum's influence in cognitive science and its applications to AI development.

Scientific Awards

In 2008, Joshua Tenenbaum received the Distinguished Scientific Award for an Early Career Contribution to Psychology from the American Psychological Association, recognizing his innovative work in computational models of human cognition and learning. Tenenbaum was awarded the Troland Research Award from the National Academy of Sciences in 2011 for developing a groundbreaking Bayesian framework that advances understanding of cognitive processes, including perception, reasoning, and learning. In 2016, he earned the Howard Crosby Warren Medal from the Society of Experimental Psychologists, honoring his pioneering contributions to experimental psychology through computational approaches to inductive inference and intuitive physics. Tenenbaum was named R&D Magazine's Innovator of the Year in 2018 for his integrative research bridging cognitive science and artificial intelligence, particularly in building machine systems that mimic human-like reasoning and intelligence. In 2020, Tenenbaum was elected to the American Academy of Arts and Sciences, acknowledging his influential role in shaping computational cognitive science.

Selected Publications

Seminal Papers

Joshua Tenenbaum's seminal papers span machine learning, cognitive modeling, and artificial intelligence, establishing key frameworks for understanding human-like learning and computation. These works emphasize probabilistic approaches to learning, structure in cognition, and scalable systems, influencing both theoretical and applied research. One of his earliest and most cited contributions is the 2000 paper A global geometric framework for nonlinear dimensionality reduction, published in Science, with over 18,000 citations as of 2025. Co-authored with Vin de Silva and John C. Langford, it introduces the Isomap algorithm, which unfolds nonlinear manifolds in high-dimensional data by preserving geodesic distances, enabling effective visualization and analysis of complex datasets like facial images and handwritten digits. In cognitive science, the 2006 paper Theory-based Bayesian models of inductive learning and reasoning, appearing in Trends in Cognitive Sciences, has exceeded 1,200 citations. Written with Thomas L. Griffiths and Charles Kemp, it develops a Bayesian framework where inductive generalizations draw on structured theories as priors, combined with statistical evidence, to explain phenomena like property induction and category-based reasoning in human cognition. The 2011 review How to grow a mind: Statistics, structure, and abstraction, published in Science, has garnered more than 2,400 citations. Co-authored with Charles Kemp, Thomas L. Griffiths, and Noah D. Goodman, it synthesizes evidence that children's rapid learning arises from integrating probabilistic inference over causal structures with abstract compositional representations, proposing this triad as essential for building intuitive theories of the world. A highly influential 2017 target article, Building machines that learn and think like people, in Behavioral and Brain Sciences, has over 4,100 citations. With Brenden M. Lake, Tomer D. Ullman, and Samuel J. Gershman, it argues for core cognitive systems (like intuitive physics, intuitive psychology, and causal learning) modeled via probabilistic programs as necessary to surpass deep learning's limitations in one-shot generalization and systematic reasoning. Reflecting Tenenbaum's recent work on language models, the 2023 paper Planning with Large Language Models for Code Generation, presented at ICLR, has nearly 200 citations. Co-authored with Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, and Chuang Gan, it shows how LLMs generate hierarchical plans to guide code synthesis, boosting accuracy on tasks requiring multi-step reasoning, such as algorithmic problem-solving.
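The Isomap procedure summarized above (build a neighborhood graph, compute geodesic distances along it, then apply classical multidimensional scaling) can be sketched in a self-contained form; this is an illustrative reimplementation with arbitrary parameter choices, not the authors' released code.

```python
import numpy as np

def isomap(X, n_neighbors=5, n_components=1):
    """Minimal Isomap: k-nearest-neighbor graph, geodesic distances via
    Floyd-Warshall, then classical MDS on the geodesic distance matrix."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = np.full((n, n), np.inf)
    for i in range(n):
        idx = np.argsort(D[i])[: n_neighbors + 1]  # self plus k neighbors
        G[i, idx] = D[i, idx]
    G = np.minimum(G, G.T)                         # symmetrize the graph
    for k in range(n):                             # all-pairs shortest paths
        G = np.minimum(G, G[:, [k]] + G[[k], :])
    H = np.eye(n) - np.ones((n, n)) / n            # double-centering matrix
    B = -0.5 * H @ (G ** 2) @ H
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:n_components]       # largest eigenvalues
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))

# Points along a half-circle: the 1D embedding should track arc position
t = np.linspace(0, np.pi, 40)
X = np.column_stack([np.cos(t), np.sin(t)])
embedding = isomap(X)[:, 0]
```

Because shortest paths through the neighborhood graph approximate distances along the curve rather than straight-line chords, the recovered coordinate is nearly linear in the arc-length parameter t.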

Books and Reviews

Tenenbaum has co-authored several influential books that synthesize Bayesian approaches to modeling human cognition, emphasizing probabilistic inference as a core mechanism for learning and reasoning. In Bayesian Models of Cognition: Reverse Engineering the Mind (MIT Press, 2024), co-authored with Thomas L. Griffiths and Nick Chater, he outlines a unified framework for reverse-engineering cognitive processes through generative models and approximate inference techniques, drawing on examples across perceptual, linguistic, and reasoning domains to illustrate how probabilistic methods capture the efficiency and flexibility of human thought. This builds on earlier work to provide both theoretical foundations and practical tools for applying Bayesian inference in computational models. Another key contribution is Probabilistic Models of Cognition (2nd edition, 2016), co-authored with Noah D. Goodman and a team of contributors including Daphna Buchsbaum and Joshua Hartshorne, which uses probabilistic programming languages like WebPPL to explore cognitive phenomena such as intuitive physics, concept learning, and social inference. The book demonstrates how structured probabilistic models can simulate human-like abstraction and generalization, serving as an accessible resource for integrating probabilistic programming with cognitive theory. Tenenbaum's chapters in edited volumes further extend these ideas, often serving as synthetic reviews of Bayesian methods in specific domains. In The Probabilistic Mind: Prospects for Bayesian Cognitive Science (Oxford University Press, 2008), edited by Nick Chater and Mike Oaksford, his chapter "Compositionality in rational analysis: Grammar-based induction for concept learning," co-authored with Noah D. Goodman, Thomas L. Griffiths, and Jacob Feldman, examines how recursive grammatical structures facilitate inductive learning of complex concepts from sparse data. This work highlights the role of compositional priors in enabling scalable Bayesian inference for hierarchical knowledge representation.
In the Cambridge Handbook of Computational Psychology (Cambridge University Press, 2008), edited by Ron Sun, Tenenbaum's chapter "Bayesian models of cognition," co-authored with Griffiths and Charles Kemp, reviews foundational principles of Bayesian modeling, including hierarchical priors and approximate inference, as applied to problems in inductive learning and causal discovery. Similarly, in the Oxford Handbook of Causal Reasoning (Oxford University Press, 2017), edited by Michael R. Waldmann, his chapter "Intuitive theories," co-authored with Tobias Gerstenberg, synthesizes evidence for how innate theory-like structures guide probabilistic causal reasoning across development and adulthood. These contributions underscore Tenenbaum's emphasis on integrating structure and flexibility in probabilistic frameworks to explain core aspects of human cognition.

References

  1. [1]
    Joshua Tenenbaum - MacArthur Foundation
    Sep 25, 2019 · Joshua Tenenbaum received a BS (1993) from Yale University and a PhD (1999) from the Massachusetts Institute of Technology. He taught at ...Missing: background | Show results with:background
  2. [2]
    Joshua B Tenenbaum | Brain and Cognitive Sciences
    Josh Tenenbaum is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, a principal investigator at MIT's ...
  3. [3]
    Joshua B. Tenenbaum | American Academy of Arts and Sciences
    Sep 23, 2025 · Joshua B. Tenenbaum. Massachusetts Institute of Technology. Area. Social and Behavioral Sciences. Specialty. Psychological ...Missing: background | Show results with:background
  4. [4]
    [PDF] JOSHUA BRETT TENENBAUM Curriculum Vitae Brain and ... - MIT
    Professor of Computational Cognitive Science, MIT, 2011-present. Associate Professor of Computational Cognitive Science, MIT, 2007-2011.
  5. [5]
    Joshua Tenenbaum - Simons Foundation
    He received his Ph.D. from MIT in 1999, and after a brief postdoc with the MIT AI lab, he joined the Stanford University faculty as assistant professor of ...Missing: background | Show results with:background
  6. [6]
    Understanding Intelligence | MIT for a Better World
    Prof. Joshua Tenenbaum is helping to launch the MIT Intelligence Initiative, an unparalleled, multidisciplinary quest to reveal just how intelligence works.<|control11|><|separator|>
  7. [7]
    [PDF] AI Meets Web 2.0: Building the Web of Tomorrow, Today
    Jay M. “Marty” Tenenbaum spent the. 1970s at SRI's AI Center leading vision research, the 1980s at Schlumberger managing AI Labs, and the 1990s founding a ...
  8. [8]
    [PDF] The MIT Department of Brain and Cognitive Sciences
    His mother is a lawyer, his father is an industrial chemist, and his sister is a law professor. After attending medical school in Valencia, Spain, where he was ...
  9. [9]
    AI, Cognitive Science Researcher Josh Tenenbaum Named R&D ...
    Dec 18, 2018 · Growing up, his father worked as an engineer and AI researcher in California. His mother was a teacher who went on to get a PhD in education to ...Missing: family background
  10. [10]
    A Bayesian framework for concept learning - DSpace@MIT
    This thesis proposes a new computational framework for understanding how people learn concepts from examples, based on the principles of Bayesian inference. By ...
  11. [11]
  12. [12]
    MIT Corporation grants tenure to 50 faculty
    Nov 14, 2007 · Joshua B. Tenenbaum. (from assistant professor) Brain and Cognitive Sciences Education: B.S. 1993 (Yale University), Ph.D. 1999 (MIT)
  13. [13]
    Joshua B. Tenenbaum: Award for Distinguished Scientific Early ...
    Tenenbaum was born in Stanford, California, in 1972. His interests in science and the mind were kindled early on and supported by his parents for as long as he ...Missing: family background
  14. [14]
    Josh Tenenbaum - Computational Cognitive Science - MIT
    Josh Tenenbaum. Professor Department of Brain and Cognitive Sciences · Massachusetts Institute of Technology · Home Page. Email: jbt AT mit DOT edu
  15. [15]
    Joshua Tenenbaum | MIT CSAIL
    Josh Tenenbaum, a professor of brain and cognitive sciences at MIT, directs research on the development of intelligence at the Center for Brains, Minds, and ...
  16. [16]
    Joshua Tenenbaum | The MIT Quest for Intelligence
    Joshua Tenenbaum is a professor of computational cognitive science in MIT's Department of Brain and Cognitive Sciences, and a scientific director with MIT ...
  17. [17]
    Researchers | The MIT Quest for Intelligence
    Joshua Tenenbaum. Director of Science, MIT Quest for Intelligence. Professor, Department of Brain and Cognitive Sciences. Computer Science ...
  18. [18]
    Josh Tenenbaum, PhD - The Adaptive Mind
    His current research focuses on the development of common sense in children and machines, the neural basis of common sense, and models of learning as Bayesian ...Missing: childhood interests
  19. [19]
    ‪Joshua B. Tenenbaum‬ - ‪Google Scholar‬
    Joshua B. Tenenbaum. MIT. Verified email at mit.edu - Homepage · Cognitive scienceartificial intelligencemachine learningcomputational ...Missing: father Jay
  20. [20]
    Theory-based Bayesian models of inductive learning and reasoning
    Special Issue: Probabilistic models of cognition. Theory-based Bayesian models of inductive learning and reasoning.
  21. [21]
    Theory-based Bayesian models of inductive learning and reasoning
    Theory-based Bayesian models of inductive learning and reasoning. Trends Cogn Sci. 2006 Jul;10(7):309-18. ... Joshua B Tenenbaum , Thomas L Griffiths ...
  22. [22]
    Theory-based Bayesian models of inductive learning and reasoning
    Theory-based Bayesian models of inductive learning and reasoning. Joshua B ... Tenenbaum, J.B. ∙ Griffiths, T.L.. Generalization, similarity, and ...
  23. [23]
    How to Grow a Mind: Statistics, Structure, and Abstraction - Science
    Mar 11, 2011 · How to Grow a Mind: Statistics, Structure, and Abstraction. Joshua B. Tenenbaum, Charles Kemp, [...] , Thomas L. Griffiths, and Noah D ...
  24. [24]
    How to grow a mind: statistics, structure, and abstraction - PubMed
    Mar 11, 2011 · How to grow a mind: statistics, structure, and abstraction. Science ... Joshua B Tenenbaum , Charles Kemp, Thomas L Griffiths, Noah D ...
  25. [25]
    How to Grow a Mind: Statistics, Structure, and Abstraction
    Aug 6, 2025 · How to Grow a Mind: Statistics, Structure, and Abstraction ; Joshua B Tenenbaum ; Charles Kemp ; Thomas L Griffiths ; Noah D Goodman.<|control11|><|separator|>
  26. [26]
    [PDF] How to Grow a Mind: Statistics, Structure, and Abstraction REVIEW
    Oct 6, 2015 · This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more ...
  27. [27]
    [PDF] A tutorial introduction to Bayesian models of cognitive development
    Tenenbaum, Thomas L. Griffiths, and Fei Xu. “A Tutorial. Introduction to Bayesian Models of Cognitive Development.” Cognition 120, no. 3 (September.<|control11|><|separator|>
  28. [28]
    [PDF] A Bayesian Framework for Concept Learning - DSpace@MIT
    Feb 15, 1999 · To my parents, Marty and Bonnie Tenenbaum, without whom there wouldn't even be a hypothesis space, and to Mira, who believed in the Hazaka ...
  29. [29]
  30. [30]
    [1206.3255] Church: a language for generative models - arXiv
    Jun 13, 2012 · We introduce Church, a universal language for describing stochastic generative processes. Church is based on the Lisp model of lambda calculus.Missing: probabilistic | Show results with:probabilistic
  31. [31]
    One-shot learning by inverting a compositional causal process
    Here we present a Hierarchical Bayesian model based on compositionality and causality that can learn a wide range of natural (although simple) visual concepts.
  32. [32]
    [1604.00289] Building Machines That Learn and Think Like People
    Apr 1, 2016 · We review progress in cognitive science suggesting that truly human-like learning and thinking machines will have to reach beyond current engineering trends.
  33. [33]
    Probabilistic Models of Cognition - 2nd Edition
    This book explores the probabilistic approach to cognitive science, which models learning and reasoning as inference in complex probabilistic models.
  34. [34]
    Josh Tenenbaum - AI2050 - Schmidt Sciences
    Josh Tenenbaum is Professor of Computational Cognitive Science at the Massachusetts Institute of Technology in the Department of Brain and Cognitive Sciences.
  35. [35]
    A plan to advance AI by exploring the minds of children
    Sep 12, 2018 · So says Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and is the head of a major new AI project called the MIT Quest ...
  36. [36]
    The Development of Intelligent Minds
    During Advances in the quest to understand intelligence, held at MIT on Nov. 4, 2022, Professors Laura Schulz, Josh Tenenbaum, and Rebecca Saxe introduced the ...
  37. [37]
    MIT's Josh Tenenbaum on Intuitive Physics & Psychology in AI
    Apr 17, 2018 · Tenenbaum is widely respected for his interdisciplinary research in cognitive science and AI. Breakthroughs in AI and deep learning have prompted ...
  38. [38]
    Josh Tenenbaum - Cognitive and computational foundations for ...
    Feb 15, 2022 · Josh Tenenbaum - Cognitive and computational foundations for collective human intelligence ... Yale University ...
  39. [39]
    Josh Tenenbaum receives 2019 MacArthur Fellowship | MIT News
    Sep 25, 2019 · Josh Tenenbaum, a professor in MIT's Department of Brain and Cognitive Sciences who studies human cognition, has been named a recipient of a 2019 MacArthur ...
  40. [40]
    Second cohort of AI2050 Senior Fellows named | UCT News
    Oct 20, 2023 · The 2023 Senior Fellows will join the existing community of AI2050 Fellows ... Josh Tenenbaum, Massachusetts Institute of Technology; Stephanie ...
  41. [41]
    Community Perspective - Josh Tenenbaum - AI2050
    Tenenbaum's research combines computational modeling with behavioral experiments in adults and children to “reverse engineer” human intelligence and ...
  42. [42]
    BCS professor Josh Tenenbaum named Schmidt Futures AI2050 ...
    Oct 18, 2023 · BCS professor Josh Tenenbaum is among this year's Schmidt Futures AI2050 Senior Fellows, who will pursue interdisciplinary research in artificial intelligence ...
  43. [43]
    Tenenbaum wins Troland Award | MIT News
    Apr 4, 2011 · Tenenbaum, also a member of MIT's Computer Science and Artificial Intelligence Laboratory, is one of two scientists to receive the 2011 Troland ...
  44. [44]
    Prof. Joshua Tenenbaum awarded the Howard Crosby Warren ...
    May 9, 2016 · Josh Tenenbaum has been awarded the Howard Crosby Warren Medal, from the Society of Experimental Psychologists (SEP), for his pioneering and ...
  45. [45]
    Josh Tenenbaum named Innovator of the Year by R&D Magazine
    Dec 19, 2018 · R&D Magazine has named Josh Tenenbaum the 2018 Innovator of the Year. Tenenbaum, a professor of computational cognitive science in the Department of Brain and ...
  46. [46]
    Six from MIT elected to American Academy of Arts and Sciences for ...
    Apr 24, 2020 · MIT professors Robert Armstrong, Dave Donaldson, Catherine Drennan, Ronitt Rubinfeld, Joshua Tenenbaum, and Craig Wilder have been elected ...
  47. [47]
    Planning with Large Language Models for Code Generation - arXiv
    Mar 9, 2023 · Tenenbaum, Chuang Gan. View a PDF of the paper titled Planning with Large Language Models for Code Generation, by Shun Zhang and 5 other authors.
  48. [48]
    Bayesian Models of Cognition - MIT Press
    This textbook offers an authoritative introduction to Bayesian cognitive science and a unifying theoretical perspective on how the mind works.
  49. [49]
    The Probabilistic Mind - Nick Chater; Mike Oaksford
    The Probabilistic Mind is a follow-up to the influential and highly cited 'Rational Models of Cognition' (OUP, 1998). It brings together developments in ...
  50. [50]
    Bayesian Models of Cognition (Chapter 3)
    This chapter discusses the basic principles that underlie Bayesian models of cognition and several advanced techniques for probabilistic modeling and inference.