Eric Horvitz


Eric Horvitz is an American computer scientist and physician serving as Microsoft's Chief Scientific Officer, where he oversees company-wide scientific initiatives, particularly in artificial intelligence. He holds Ph.D. and M.D. degrees from Stanford University and has advanced AI through foundational work on probabilistic reasoning, decision-making under uncertainty, and human-AI interaction.
Horvitz's research has influenced practical systems in domains including healthcare and transportation. He has received prestigious awards such as the Feigenbaum Prize and the ACM-AAAI Allen Newell Award for contributions bridging artificial intelligence, human-computer interaction, and decision sciences. A fellow of the ACM and the Association for the Advancement of Artificial Intelligence, he previously directed Microsoft Research labs globally. Horvitz chairs Microsoft's Aether Committee on AI effects and ethics, co-founded the Partnership on AI, and established the One Hundred Year Study on Artificial Intelligence at Stanford University to track long-term societal impacts. He has served on the National Security Commission on Artificial Intelligence, advocating for balanced approaches to AI governance that prioritize empirical assessment over unsubstantiated alarmism.

Early Life and Education

Family Background and Upbringing

Eric Horvitz was raised in Merrick, New York, the son of two teachers who emphasized intellectual curiosity and access to knowledge. His father taught at the high school level, and his mother was also an educator. The family maintained a home library stocked with books, which Horvitz frequently explored, and he spent considerable time at the local Merrick Library delving into science and related fields. From early childhood, Horvitz exhibited intense inquisitiveness, posing questions about fundamental phenomena such as time and existence. Following kindergarten, he disassembled an electronic device to investigate its internal circuitry, initiating a pattern of empirical tinkering. Exposure to mid-1960s animated television, including depictions of robots, sparked his fascination with intelligent machines; by second or third grade, he endeavored to assemble an "electronic brain" from salvaged components. Horvitz attended Birch Elementary School, where teachers played a pivotal role in cultivating his scientific inclinations, among them Mrs. Frank, Mrs. O'Hara, and Mr. Wilmott. In third grade, he was elected chairperson of the science club, overseeing experiments for his classmates. These experiences solidified his ambition to become a scientist.

Academic Training and Degrees

Horvitz received his undergraduate degree from the State University of New York at Binghamton, now known as Binghamton University. He pursued graduate studies at Stanford University, earning a Ph.D. in 1990 for his dissertation, Computation and Action Under Bounded Resources, which explored rational decision-making, metareasoning, and flexible computation under resource constraints, with applications to artificial intelligence and expert systems. The work was supervised by Ronald A. Howard, a pioneer in decision analysis. Horvitz's doctoral research laid foundational contributions to decision-theoretic reasoning in AI, addressing metareasoning and value-of-information computations. He also completed a Doctor of Medicine (M.D.) degree at Stanford University School of Medicine in 1994, reflecting his interdisciplinary interests at the intersection of computing, decision science, and medicine. This dual training equipped him to tackle problems in probabilistic reasoning and applications to healthcare.

Professional Career

Initial Positions and Research Roles

Horvitz completed his dissertation, Computation and Action Under Bounded Resources, at Stanford in December 1990, focusing on decision-theoretic frameworks for systems operating with limited computational resources. Following the completion of his graduate degrees at Stanford, he joined Microsoft Research in 1993 as a principal researcher based in Redmond, Washington. In this initial role, Horvitz led investigations into probabilistic reasoning, time-critical action, and decision-making under uncertainty, extending principles of bounded optimality to practical software applications. These efforts included developing models for value-sensitive reasoning and anytime algorithms, which allowed systems to provide useful approximations under constraints. His early work emphasized empirical validation through prototypes and collaborations, laying groundwork for subsequent advancements in machine learning and human-AI collaboration.
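The anytime algorithms mentioned above can be illustrated with a small sketch (a hypothetical example, not code from Horvitz's systems): a Monte Carlo estimator whose running answer is usable at every iteration, so a caller facing a tighter deadline simply stops earlier and accepts a coarser approximation.

```python
import random

def anytime_pi_estimate(sample_budget):
    """Monte Carlo estimate of pi that can be interrupted at any point:
    each additional sample refines the running estimate, so the caller
    trades computation time for accuracy."""
    inside = 0
    estimate = 0.0
    for n in range(1, sample_budget + 1):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
        estimate = 4.0 * inside / n  # a usable answer exists at every step
    return estimate

random.seed(0)
rough = anytime_pi_estimate(100)        # coarse answer under a tight budget
refined = anytime_pi_estimate(100_000)  # tighter answer with more resources
```

The key property is graceful degradation: utility of the answer grows smoothly with allocated computation, rather than arriving all at once at the end.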

Advancement at Microsoft Research

Horvitz joined Microsoft Research in 1993 as a principal researcher, where he conducted foundational work in decision-making under uncertainty and probabilistic reasoning. Over the subsequent two decades, his contributions to AI systems—spanning machine learning, decision support, and human-AI collaboration—earned him promotion to Technical Fellow, a distinguished rank recognizing exceptional technical leadership and impact. In May 2017, Horvitz advanced to director of Microsoft Research Labs, overseeing operations across multiple global sites, including Redmond, Washington; Cambridge, Massachusetts; New York City; Cambridge, UK; Bangalore; and Beijing. In this capacity, he directed efforts that deployed innovations in domains such as healthcare, transportation, e-commerce, and operating systems, emphasizing real-world applications. On March 11, 2020, amid a reorganization of Microsoft's research structure, Horvitz was elevated to the newly created position of Chief Scientific Officer, the company's first, transitioning leadership of the core research labs to Peter Lee while retaining influence over strategic scientific initiatives. As Chief Scientific Officer, he spearheads company-wide efforts at the confluence of AI, biosciences, healthcare, and societal implications, and he founded Microsoft's Aether Committee on responsible AI. His progression underscores a trajectory from individual technical contributions to executive oversight of interdisciplinary research scaling AI's practical deployment.

Executive Leadership Positions

Eric Horvitz joined Microsoft Research in 1993 as a principal researcher and advanced through research leadership roles, culminating in executive positions focused on AI and broader scientific strategy. In May 2017, Horvitz was appointed to lead Microsoft's research labs, overseeing operations across sites in Redmond; Cambridge, UK; and other locations, with a focus on advancing AI and related technologies. As director of Microsoft Research Labs, he managed a portfolio that included core AI research, decision-theoretic systems, and interdisciplinary projects until early 2020. In March 2020, Microsoft elevated Horvitz to its inaugural Chief Scientific Officer role, a position created to provide company-wide guidance on scientific trends, technological frontiers, and their societal implications. In this role, he spearheads initiatives bridging research, engineering, and policy, including oversight of AI ethics through the Aether Committee, which he founded and chairs to address potential risks and effects of AI systems. Horvitz retains his status as a Microsoft Technical Fellow, recognized for sustained contributions to the company's technical strategy.

Core Research Areas

Decision-Making Under Uncertainty

Horvitz's research on decision-making under uncertainty emphasizes the integration of probabilistic reasoning and expected-utility maximization to enable rational choices in systems facing incomplete information and computational constraints. He has advocated for decision theory as a foundational framework for AI, providing principles for inference and action amid probabilistic ambiguities, as detailed in his early surveys of expert systems, where decision-theoretic methods address limitations of rule-based approaches by incorporating expected-utility calculations. This work builds on Bayesian networks and influence diagrams to model dependencies and evaluate options, allowing systems to quantify risks and benefits explicitly rather than relying on heuristic approximations. A core innovation in Horvitz's contributions involves bounded optimality, where agents adapt inference depth dynamically based on available resources and decision stakes, as explored in his dissertation on computation under bounded resources. Techniques like bounded conditioning decompose complex probabilistic queries into tractable subproblems, modulating inference completeness to balance accuracy against time pressures—critical for real-time applications such as medical diagnostics or autonomous systems. For instance, in reformulating diagnostic tools like the Quick Medical Reference (QMR) system, Horvitz applied probabilistic decision models to weigh evidence probabilities against treatment utilities, enhancing reliability in high-uncertainty scenarios. Horvitz extended these principles to interactive domains, treating communication and action as sequential decisions under uncertainty. In models of conversation, he formalized grounding processes—verifying mutual understanding—as utility-driven inferences over possible misunderstandings, using multilevel representations to prioritize clarifications based on expected utility. Similarly, in web-based services, probabilistic cost-benefit analyses guide evidence gathering, optimizing query resolution by estimating the marginal utility of additional computations.
These approaches underscore Horvitz's emphasis on scalable, evidence-based decision-making, influencing systems that must operate in open-world settings with evolving uncertainties.
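The value-of-information reasoning described in this section can be sketched numerically. All probabilities and utilities below are invented for illustration (this is not the QMR model): an agent compares the expected utility of acting on its current belief with the preposterior expected utility of first observing a diagnostic test.

```python
# Illustrative value-of-information analysis in the spirit of
# decision-theoretic diagnosis; every number here is hypothetical.

def best_action_utility(p, U):
    """Expected utility of the best action given P(disease) = p.
    U[action] = (utility if diseased, utility if healthy)."""
    return max(p * u_sick + (1 - p) * u_well for u_sick, u_well in U.values())

U = {"treat": (0.9, 0.7), "wait": (0.2, 1.0)}   # hypothetical utilities
prior = 0.3                                      # P(disease) before testing

baseline = best_action_utility(prior, U)         # act on current belief

# Preposterior analysis for a test with hypothetical accuracy:
sens, spec = 0.95, 0.90
p_pos = sens * prior + (1 - spec) * (1 - prior)  # P(test positive)
post_pos = sens * prior / p_pos                  # P(disease | positive)
post_neg = (1 - sens) * prior / (1 - p_pos)      # P(disease | negative)

with_test = (p_pos * best_action_utility(post_pos, U)
             + (1 - p_pos) * best_action_utility(post_neg, U))

evi = with_test - baseline  # expected value of (imperfect) information
```

A positive `evi` means gathering the evidence is worth more than acting immediately; subtracting a cost of testing from `evi` yields the net value used to decide whether observation is rational.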

Machine Learning and Probabilistic Inference

Horvitz's research in machine learning emphasizes probabilistic models to address uncertainty in reasoning and prediction. He co-developed bounded conditioning, an inference algorithm that refines probability bounds monotonically for efficient reasoning in complex domains, avoiding exhaustive computation in belief networks. This approach enables scalable approximations in scenarios where exact inference is intractable, such as probabilistic forecasting. His work integrates learning with graphical models, including dynamic Bayesian networks (DBNs), which he compared favorably to hidden Markov models (HMMs) for layered sensing architectures, demonstrating DBNs' superior handling of temporal dependencies and multivariate observations. In applications, Horvitz applied supervised learning to tasks like spam filtering, where a probabilistic model trained on labeled e-mail features achieved effective classification by estimating posterior probabilities of junk status. He extended these methods to predict computational runtime in hard inference problems, using machine learning to learn distributions over execution times from sampled instances, thereby guiding algorithm control in exponential search spaces. Such techniques fuse empirical data with causal probabilistic structures, prioritizing evidence-based updates over ad hoc approximations. Horvitz's contributions also include selective reformulation of belief networks to mitigate inference bottlenecks, employing stochastic simulation for targeted conditioning on problematic subgraphs. In crowdsourcing, he harnessed probabilistic models to combine human annotations with machine predictions, learning worker reliability to enhance large-scale labeling accuracy. These advancements underscore a commitment to model-based machine learning, where probabilistic inference provides interpretable foundations for learning from sparse or noisy data, influencing practical systems in perception and decision support.
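As an illustration of the kind of probabilistic spam filtering described above, here is a minimal naive Bayes sketch over bag-of-words features with Laplace smoothing. The data and model are toys; the actual research systems used richer features and calibration.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (tokens, is_spam) pairs. Returns word counts,
    per-class token totals, document counts, and the vocabulary."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    docs = {True: 0, False: 0}
    for tokens, is_spam in messages:
        docs[is_spam] += 1
        counts[is_spam].update(tokens)
        totals[is_spam] += len(tokens)
    vocab = set(counts[True]) | set(counts[False])
    return counts, totals, docs, vocab

def p_spam(tokens, model):
    """Posterior probability of junk status via naive Bayes with
    add-one (Laplace) smoothing, computed in log-odds space."""
    counts, totals, docs, vocab = model
    log_odds = math.log(docs[True] / docs[False])  # log prior odds
    for w in tokens:
        p_w_spam = (counts[True][w] + 1) / (totals[True] + len(vocab))
        p_w_ham = (counts[False][w] + 1) / (totals[False] + len(vocab))
        log_odds += math.log(p_w_spam / p_w_ham)
    return 1.0 / (1.0 + math.exp(-log_odds))

model = train([
    ("win cash now".split(), True),
    ("cheap cash offer".split(), True),
    ("meeting agenda now".split(), False),
    ("project status report".split(), False),
])
spam_score = p_spam("win cash".split(), model)        # well above 0.5
ham_score = p_spam("project meeting".split(), model)  # well below 0.5
```

Working in log-odds keeps the product of many small word likelihoods numerically stable, and the posterior probability (rather than a hard label) is what lets a decision-theoretic filter weigh the asymmetric costs of deleting legitimate mail versus passing junk.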

Human-AI Interaction and Interfaces

Horvitz's research in human-AI interaction emphasizes collaborative systems in which AI anticipates needs while respecting human agency and attention. Early work focused on intelligent interfaces that balance automated assistance with direct manipulation, incorporating probabilistic reasoning to infer intentions and minimize disruptions. He advocated for interfaces that leverage AI judiciously, avoiding over-automation that could erode control or trust. A cornerstone of his contributions is the development of principles for mixed-initiative user interfaces, introduced in a 1999 CHI conference paper. These principles promote fluid collaboration by having AI systems model user goals, attention, and costs of interruption, deciding when to act proactively or defer to human input. For instance, systems should perform value-sensitive analyses weighing the utility of interventions against potential distractions, and exploit opportunities for initiative when the cost of interruption is low. Horvitz applied these ideas in projects like the Lumiere system, which used Bayesian user modeling to predict needs in software environments, such as suggesting actions based on goals inferred from partial inputs. In more recent efforts, Horvitz co-led the derivation of 18 empirically validated guidelines for human-AI interaction design, published in 2019. These guidelines, informed by iterative expert reviews and case studies across products, address core challenges such as ensuring AI augments human capabilities without errors from over-reliance, providing appropriate visibility into AI decisions, and handling errors gracefully to maintain user trust. Examples include recommendations that AI systems should err on the side of under-assistance to avoid incorrect assumptions and continuously learn from user feedback to refine interactions. This framework has influenced interface design in productivity tools, prioritizing human-AI symbiosis over replacement.
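The interruption trade-off at the heart of these mixed-initiative principles can be reduced to a one-line expected-utility test. The numbers below are hypothetical; a deployed system would infer the probability and costs from a user model such as Lumiere's.

```python
# Sketch of a value-sensitive interruption analysis: an agent offers
# help only when the expected benefit of acting exceeds the expected
# cost of disrupting the user. All values are made-up placeholders.

def should_offer_help(p_user_wants_help, benefit, interruption_cost):
    """Act proactively iff the expected utility of acting beats deferring.
    EU(act)   = p * benefit - (1 - p) * interruption_cost
    EU(defer) = 0 (baseline: the user proceeds undisturbed)."""
    eu_act = (p_user_wants_help * benefit
              - (1 - p_user_wants_help) * interruption_cost)
    return eu_act > 0.0

# With high inferred need, the agent steps in...
engage = should_offer_help(0.8, benefit=10.0, interruption_cost=4.0)
# ...but defers when the user is likely focused elsewhere.
hold_back = should_offer_help(0.2, benefit=10.0, interruption_cost=4.0)
```

The design point is that the same intervention is appropriate or disruptive depending on the inferred state of the user, so the threshold must be computed per context rather than hard-coded.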

Applications in Healthcare and Biosecurity

Horvitz has contributed to AI-driven diagnostic and predictive modeling in healthcare, leveraging machine learning to analyze patient data for improved clinical decision-making. In 2010, he co-developed a medical Bayesian kiosk, a system employing probabilistic inference to assist in medical assessment by processing symptoms and generating likelihoods of conditions under uncertainty. His research emphasizes designing AI tools that enhance rather than replace human clinicians, such as AI-enabled collaboration in tumor boards, where models integrate multimodal data—including imaging and clinical notes—to support oncologists in treatment planning. Horvitz advocates priorities for AI deployment within healthcare, including ensuring safe and trustworthy systems through rigorous validation and fostering an AI-competent workforce via training programs. He has highlighted applications in diagnostics, clinical decision support, and automated summarization of patient visits, while stressing the need for empirical evaluation to mitigate biases and errors in real-world settings. In 2025, he discussed AI's role in advancing biomedical research for precision health, underscoring measurable improvements in outcomes like early disease detection. In biosecurity, Horvitz led the Paraphrase project at Microsoft, initiated in 2025, which demonstrated how generative models can redesign proteins of concern—such as toxins—to evade existing DNA synthesis screening protocols used globally by commercial providers. The study tested open-source tools to generate variants of high-risk proteins, revealing that reformulated designs bypassed function-based and sequence-matching filters in a large share of simulated cases across multiple screening systems, potentially enabling synthesis of hazardous biologics without detection. This work, published in Science on October 2, 2025, proposes enhancements like AI-augmented screening that incorporates structural predictions and resistance to semantic paraphrasing to close these gaps.
Horvitz's biosecurity efforts extend to viewing AI as a dual-use tool for both threat amplification and defense, recommending function-oriented global standards for DNA synthesis orders and proactive integration of AI into biosurveillance to counter risks. He has emphasized empirical testing over speculative fears, noting that while AI accelerates the design of beneficial proteins and therapeutics, undetected adversarial uses could pose pandemic-scale threats if unaddressed.

AI Governance and Ethical Frameworks

Establishment of Microsoft's Aether Committee

In 2016, Eric Horvitz, then a managing director at Microsoft Research, co-founded the Aether Committee alongside Brad Smith, Microsoft's president at the time, to address emerging challenges in AI ethics, effects, and responsible engineering practices. The committee, whose name is an acronym for AI and Ethics in Engineering and Research, was established as a cross-disciplinary body comprising experts in technology, law, ethics, and policy from across Microsoft, aimed at identifying risks, studying societal impacts, and recommending policies, procedures, and best practices for AI development and deployment. Horvitz assumed the role of chair, leveraging the committee's authority to influence company-wide decisions, including the imposition of limitations on sensitive AI applications. The committee's foundational work focused on proactive governance, convening working groups to evaluate sensitive uses and formulate guiding principles that informed Microsoft's broader responsible AI strategy. Early efforts included the development of Microsoft's AI principles and the Sensitive Uses program, which scrutinized potential deployments to mitigate harms such as misuse of facial recognition or biased outputs. By providing strategic guidance to senior leadership, the committee played a pivotal role in declining significant sales opportunities where technologies posed ethical risks, demonstrating its operational impact from inception. This establishment predated and contributed to the creation of Microsoft's Office of Responsible AI in 2019, which expanded on Aether's recommendations under dedicated leadership. Ongoing activities under Horvitz's chairmanship have included studies of nascent technologies and projects like the media provenance effort launched in 2019, aimed at enhancing content authenticity amid deepfake concerns.
The committee's structure, with co-chaired working groups of top internal talent, ensures multidisciplinary input to balance innovation with accountability, reflecting Microsoft's empirical approach to AI risks grounded in specific use-case analyses rather than speculative scenarios.

Asilomar Study on Long-Term AI Futures

In 2008, as president of the Association for the Advancement of Artificial Intelligence (AAAI), Eric Horvitz commissioned a presidential panel to examine the potential long-term societal influences of advances in AI, co-chairing the effort with Bart Selman. The panel, comprising 23 AI researchers and experts, conducted multi-month deliberations via email, teleconferences, and in-person meetings to assess the nature, timing, and implications of AI successes, including challenges such as economic disruption, privacy erosion, weaponization, and the emergence of systems exceeding human intelligence across domains. The panel's work culminated in a conference at the Asilomar Conference Grounds in Pacific Grove, California, on February 27–28, 2009, deliberately echoing the 1975 Asilomar meeting on recombinant DNA to foster proactive dialogue among scientists on AI futures. Horvitz emphasized the need for empirical assessment over speculation, directing the group to prioritize verifiable trends in AI capabilities while addressing risks through enhanced research on safety, robustness, and alignment with human values. The interim report from the panel chairs, released in July 2009, outlined findings that short-term AI impacts—such as automation displacing routine jobs and raising ethical concerns in applications like autonomous weapons—warranted immediate attention via policy and technical safeguards, but rejected calls to curtail AI research. On longer horizons, it acknowledged speculative scenarios like superintelligent systems but stressed uncertainties in timelines and advocated for proactive investments in verification, control mechanisms, and interdisciplinary studies rather than restrictions. This balanced approach influenced subsequent frameworks, including the One Hundred Year Study on Artificial Intelligence (AI100), which Horvitz helped initiate.

Involvement in Partnership on AI

Eric Horvitz co-founded the Partnership on AI (PAI) in September 2016, bringing together major technology companies including Amazon, Facebook, Google, IBM, and Microsoft to study and formulate best practices for AI technologies aimed at benefiting people and society. As founding board chair, Horvitz led early efforts to promote collaboration among industry, academia, and civil society on responsible AI development, emphasizing empirical assessment of AI's societal impacts over speculative concerns. Under his leadership, PAI initiated programs focused on safety, fairness, and transparency in AI systems, including working groups that produced reports on topics such as AI's role in media integrity and misinformation mitigation. Horvitz advocated PAI's approach of prioritizing data-driven insights and cross-sector partnerships to address AI challenges, contrasting with more restrictive regulatory proposals by highlighting the need for innovation-friendly guidelines. In December 2024, Microsoft established the SAIGE Fund in honor of Horvitz's PAI contributions, supporting new expert groups on AI governance and societal integration, with PAI CEO Rebecca Finlay crediting his vision for enabling such initiatives. Horvitz continued engaging with PAI through keynotes, such as at the 2024 Partner Forum, where he discussed collaborative strategies for managing AI's evolution. In July 2024, PAI transitioned board leadership to Jerremy Holland, acknowledging Horvitz's foundational role in sustaining the organization's focus on practical, evidence-based AI stewardship.

Perspectives on AI Risks and Societal Impact

Assessments of AI Capabilities and Limitations

Eric Horvitz has characterized AI as achieving notable progress in specialized domains, particularly through advances in machine learning and deep neural networks, enabling high performance in tasks such as image recognition, speech-to-text transcription, and machine translation. For instance, systems like IBM's Deep Blue demonstrated superhuman capability in chess, while automated handwriting recognition processes billions of postal items annually with substantial cost savings. More recently, large language models such as GPT-4 exhibit "sparks of more general intelligence," including emergent abilities in multimodal tasks without dedicated training data, achieving scores comparable to human experts on benchmarks, such as 85.5% on an MMLU subset. Despite these capabilities, Horvitz emphasizes fundamental limitations in AI systems, including brittleness in open-world environments, deficiencies in common-sense reasoning, and challenges with robust planning. He notes that AI often confabulates or fails at basic arithmetic despite excelling in advanced mathematics, describing models as "fabulously brilliant and embarrassingly stupid" in different contexts. Historical expectations for rapid progress toward general intelligence have not materialized, with persistent gaps in replicating broader human cognition, such as real-time learning without extensive data. Horvitz attributes much of AI's progress to scaling laws, where performance improves predictably with increased compute and data, yet warns that emergent behaviors remain poorly understood and could lead to unreliable outputs in high-stakes applications like healthcare or cybersecurity. He distinguishes AI from human intelligence by highlighting the absence of genuine originality—viewing AI outputs as syntheses of human-generated data rather than original insight—and the "deeply mysterious" qualities of human minds that enable intuitive leaps beyond pattern matching. Biases inherited from training data further limit fairness in deployed systems, as seen in classifiers used in healthcare and other domains.
To address these limitations, Horvitz advocates prioritizing scientific research into AI's inner workings, including mechanisms behind sudden capability jumps and methods for human oversight, as outlined in his co-authored priorities for AI research and development. This empirical focus aims to mitigate risks from over-reliance on opaque systems while harnessing verified strengths, such as diagnostic imaging via deep learning, which has achieved near-human accuracy on empirical benchmarks.

Advocacy for Balanced Regulation and Innovation

Eric Horvitz has argued that thoughtfully designed regulation can enhance rather than obstruct technological progress, emphasizing the need for measures that incorporate reliability and controls to accelerate innovation. In June 2025, amid discussions of proposals to prohibit U.S. states from enacting AI regulations for a decade, Horvitz cautioned against simplistic anti-regulation rhetoric, stating, "We need to be very cautious about jargon and terms, or bumper stickers that say no regulation because it's going to slow us down. It can speed us up done properly." He argued that such "smart regulation" counters unsubstantiated fears of hindrance by fostering safeguards that enable faster, more trustworthy development. Horvitz has positioned scientists as key communicators to policymakers, urging education on how guidance and regulatory frameworks integrate with technical advancement. He asserted, "It is up to us as scientists to communicate to government agencies... Guidance, … reliability, controls, are part of advancing the field, making the field go faster, in many ways." This perspective aligns with his broader involvement in responsible AI initiatives at Microsoft, where he promotes governance that balances risk mitigation with empirical progress, avoiding overly restrictive approaches that could cede ground to less-regulated competitors. Earlier contributions, such as his role in the 2016 One Hundred Year Study on Artificial Intelligence report, similarly called for collaborative efforts among AI researchers, social scientists, and policymakers to pair technical innovations with mechanisms ensuring societal benefits without undue constraints.

Empirical Evidence on AI Benefits Versus Speculative Harms

Horvitz has highlighted empirical gains from AI deployment in healthcare, where predictive models have demonstrated reductions in hospital readmissions—occurring at rates of 20% within 30 days and 35% within 90 days, with associated costs exceeding $17.4 billion annually—and in curbing hospital-acquired infections, which impose roughly $7 billion in yearly U.S. expenses. These models also mitigate preventable medical errors, estimated to cause up to 400,000 deaths per year in the United States. In transportation, AI-driven collision avoidance systems offer the potential to avert 1.2 million global road deaths annually and 30,000 U.S. fatalities, drawing on data from real-world safety enhancements. Such benefits contrast with more speculative concerns over AI harms, including long-term ethical lapses in autonomous systems operating in unstructured environments and the potential for unintended actions by learning algorithms, which Horvitz notes demand "careful, methodical studies" rather than presumptive restrictions. He has urged investments in interdisciplinary research at the nexus of AI, economics, and the social sciences to quantify societal impacts empirically, rather than yielding to unverified projections of disruption. Tangible risks, such as job displacement from automation—projected by McKinsey to accompany a $2.2 trillion GDP uplift by 2025—warrant targeted mitigation, but Horvitz cautions against conflating these with hypothetical existential threats lacking comparable data. In the biosciences, Horvitz points to verifiable AI accelerations, such as advances in protein design and drug discovery, which have empirically sped therapeutic development without corresponding evidence of outsized harms. He stresses monitoring "deep currents" of AI-society interactions—encompassing fairness, transparency, and accountability—for both beneficial and disruptive potentials, grounded in ongoing scientific validation over alarmist forecasts.
This approach aligns with his broader call for frameworks enabling data access and transparency to substantiate claims, prioritizing realized utility in domains like healthcare and scientific discovery against risks that remain largely prospective.

Criticisms from Alarmist and Restrictive Viewpoints

Critics advocating stringent restrictions on AI development, such as proponents of temporary halts in training advanced systems, have faulted Horvitz for opposing measures they deem essential to mitigating catastrophic risks. In March 2023, over 1,000 experts, including AI safety researchers such as Yoshua Bengio and Stuart Russell, signed an open letter from the Future of Life Institute calling for a six-month pause on giant AI experiments more powerful than GPT-4 to allow time for safety protocols. Horvitz publicly rejected this proposal, describing it as an "ill-defined request" and arguing that continued empirical research and governance, rather than cessation, better address uncertainties. Alarmist perspectives emphasizing existential threats from superintelligent AI have cast Horvitz's earlier dismissals of doomsday scenarios as underestimating misalignment risks. In 2015, Horvitz stated that artificial intelligence "will not end the human race," directly countering warnings from figures like Elon Musk, Stephen Hawking, and Bill Gates about potential extinction-level dangers from uncontrolled AI advancement. AI safety communities, such as those on LessWrong, have characterized Horvitz as a leading skeptic of high-stakes x-risk narratives, implying his industry-aligned optimism—rooted in Microsoft's commercial imperatives—prioritizes deployment over precautionary alignment research. Restrictive viewpoints have also scrutinized Horvitz's role in initiatives like the Partnership on AI, co-founded in 2016 by tech giants including Microsoft, as a mechanism for self-regulation that dilutes independent oversight. Detractors in AI ethics circles argue such forums enable industry to frame safety discussions on terms favoring rapid scaling, sidelining calls for binding treaties or compute governance to curb recursive self-improvement risks. Horvitz's advocacy for "guiding" progress without pauses aligns with this critique, seen by some as evading the urgency of verifiable containment strategies amid accelerating capabilities.

Long-Term AI Studies and Policy Engagement

One Hundred Year Study on Artificial Intelligence

The One Hundred Year Study on Artificial Intelligence (AI100), initiated in 2014, is a longitudinal effort to examine the development, capabilities, and societal impacts of AI over a century, with periodic reports issued by rotating panels of experts. Eric Horvitz conceived the project during his tenure as a director at Microsoft Research, drawing from an earlier one-year internal study on long-term AI futures commissioned in 2009–2010. He and his wife, Mary Horvitz, provided initial funding exceeding $5 million, establishing the study at Stanford University, where Horvitz had earned his Ph.D. and M.D. degrees. The framework involves a standing committee selecting study panels every 2–5 years to produce self-standing reports on AI's trajectory, grounded in empirical trends rather than speculation. The inaugural panel released its report in September 2016, titled Artificial Intelligence and Life in 2030, focusing on realistic projections of AI integration into daily life, such as autonomous vehicles and personalized assistants, while emphasizing verifiable progress in narrow AI domains over general intelligence. Subsequent panels have continued this empirical approach; for instance, the 2021 report Gathering Strength, Gathering Storms assessed advances in language processing and computer vision alongside risks like bias amplification, advocating for data-driven governance rather than premature restrictions. Horvitz has described the study's intent as providing "datapoints" for future analysts, enabling later panels to discern patterns in AI's evolution and societal effects, such as productivity gains in sectors like healthcare and transportation. Horvitz's involvement underscores a commitment to balanced, evidence-based foresight, contrasting with more speculative narratives on AI existential risks; he has presented updates on AI100's progress, highlighting its role in informing policy through rigorous, interdisciplinary review rather than advocacy for halting research.
As of 2021, the study had produced two major reports, with plans for ongoing cycles to track metrics like AI adoption rates and ethical deployment challenges, ensuring continuity beyond individual funders or panels. This structure prioritizes empirical tracking of AI's actual influences, informed by Horvitz's probabilistic expertise, over unsubstantiated projections.

Recent Initiatives in AI-Biosecurity Integration

In October 2025, Eric Horvitz led a Microsoft initiative known as the Paraphrase Project, which investigated vulnerabilities in global screening systems for synthetic DNA orders amid advances in AI-assisted protein design. Researchers, under Horvitz's direction, used open-source AI models to generate reformulated genetic sequences encoding known protein toxins that evaded detection by standard screening tools employed by commercial providers. These "paraphrased" sequences maintained functional protein output while altering nucleotide patterns to bypass homology-based filters and classifiers, demonstrating how AI could enable "zero-day" biothreats by creating novel variants undetectable by existing protocols. The study, published in Science on October 2, 2025, tested over 1,000 such AI-generated designs against real-world screening pipelines, finding that a substantial fraction slipped through initial checks in some cases. Horvitz's team collaborated with DNA synthesis companies and biosecurity experts to develop and deploy mitigations, including enhanced AI-resilient screening algorithms that incorporate adversarial training to detect paraphrased threats. These updates, rolled out by providers handling millions of annual synthesis orders, improved detection rates for AI-obfuscated sequences by integrating multi-modal analysis of sequence intent and protein function predictions. Horvitz emphasized the dual-use nature of AI in biology, noting in a Microsoft Research blog post on October 6, 2025, that while such tools accelerate therapeutic protein engineering—potentially yielding new vaccines and enzymes—they necessitate proactive safeguards to prevent misuse without stifling innovation. The initiative underscored empirical gaps in current systems, originally designed for known threats rather than generative AI outputs, and advocated for international standards harmonizing screening with advancing computational biology.
Building on these findings, Horvitz announced a multi-year biosecurity study on October 9, 2025, aimed at balancing open model sharing with biosecurity safeguards. The effort focuses on longitudinal assessments of AI's role in preparedness, including simulations of biothreat evolution and integration with surveillance networks, while prioritizing verifiable, data-driven protocols over speculative restrictions. The study draws on prior collaborations, such as Horvitz's contributions to AI-enabled clinical tools, to explore causal pathways from AI-generated designs to real-world outcomes, with initial phases emphasizing scalable, transparent defenses deployable by October 2026.

Publications and Recognition

Major Books and Articles

Horvitz has authored no major standalone books, though he has contributed forewords, chapters, and editorial content to volumes on AI innovation and ethics. His scholarly output consists primarily of over 900 peer-reviewed articles, conference papers, and technical reports, amassing more than 114,000 citations and an h-index exceeding 130 as of 2024. These works span probabilistic reasoning, decision-making under uncertainty, human-AI collaboration, and AI governance, often emphasizing empirical integration of probabilistic models with real-world applications in healthcare, transportation, and policy. Foundational contributions include the 1988 paper "Decision Theory in Expert Systems and Artificial Intelligence," co-authored with John S. Breese and Max Henrion, which formalized value-of-information computations for handling uncertainty in expert systems, influencing subsequent developments in Bayesian decision networks. In "Rise of Concerns about AI: Reflections and Directions" (2015), co-written with Thomas Dietterich, Horvitz examined the surge in AI risk discussions, advocating for balanced empirical studies over speculative scenarios and highlighting verifiable safeguards in deployment. On AI policy and risks, Horvitz's 2021 collaborative report "Key Considerations for the Responsible Development and Fielding of Artificial Intelligence" provides guidelines for mitigating biases, ensuring robustness, and aligning systems with human values through iterative testing and transparency mechanisms, drawing from real-world deployments. More recently, in "Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice" (2024), he outlines priorities such as advancing verifiable metrics, fostering interdisciplinary integration, and prioritizing near-term empirical benefits like AI-assisted diagnostics over indefinite existential risk postponement.
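The value-of-information idea formalized in the 1988 paper can be sketched with a toy two-action, two-hypothesis decision problem. The numbers below are invented for illustration; the point is that the expected value of perfect information (EVPI) is the gain in expected utility from observing the uncertain variable before acting:

```python
# Hedged sketch of expected value of perfect information (EVPI) with
# invented numbers; illustrates the decision-theoretic quantity, not
# code from the 1988 paper.
p_h = 0.3  # prior P(hypothesis true), e.g. disease present

# Utilities U[action][hypothesis]: payoffs for each action in each state.
U = {
    "treat":    {"yes": 0.9, "no": 0.4},
    "no_treat": {"yes": 0.1, "no": 1.0},
}

def expected_utility(action: str, p: float) -> float:
    """Expected utility of an action given belief P(hypothesis) = p."""
    return p * U[action]["yes"] + (1 - p) * U[action]["no"]

def best_utility(p: float) -> float:
    """Expected utility of the best action at belief p."""
    return max(expected_utility(a, p) for a in U)

# With a perfect observation we act knowing the hypothesis; without it we
# act on the prior alone. EVPI is the difference in expected utility.
evpi = (p_h * best_utility(1.0)
        + (1 - p_h) * best_utility(0.0)
        - best_utility(p_h))
# Here evpi works out to about 0.24, so an observation costing less than
# 0.24 units of utility is worth gathering before choosing an action.
```

Value-of-information computations like this guided evidence gathering in early expert systems: rather than asking every question, the system asks only those whose information value exceeds their cost.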
Other influential works address human-AI dynamics, such as "Artificial Intelligence in the Open World: Representations and Policies for Guided Exploration" (presentation and related publications circa 2020), which explores adaptive policies for unknown environments, and contributions to bounded optimality models that quantify trade-offs in resource-constrained reasoning. These publications underscore Horvitz's emphasis on causal, data-driven approaches to AI advancement, critiquing overreliance on unverified worst-case assumptions in favor of measurable progress indicators.

Notable Awards and Honors

Horvitz received the AAAI Feigenbaum Prize in 2015 for sustained and high-impact contributions to artificial intelligence, including the development of computational models of reasoning and action under uncertainty and principles of human-aware AI systems. He was awarded the ACM-AAAI Allen Newell Award in 2015 for groundbreaking contributions spanning artificial intelligence, human-computer interaction, and decision sciences. Horvitz has been elected to prestigious fellowships and academies recognizing his advancements in artificial intelligence and related fields. He is a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), honored for significant contributions to principles and applications of probability and utility in computation, including reasoning and decision-making under uncertainty. He holds fellowship in the Association for Computing Machinery (ACM). Horvitz was inducted into the CHI Academy for contributions at the intersection of artificial intelligence and human-computer interaction. He is a member of the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences. In 2021, he was appointed to the President's Council of Advisors on Science and Technology (PCAST).
