
Computational economics

Computational economics is a methodology for solving economic problems through the application of computing machinery, providing value-added approaches applicable to any branch of economics. It is inherently interdisciplinary, drawing from computer science, mathematics, and statistics to enhance economic analysis. This field integrates computational techniques with economic theory to address issues that are often too complex for purely analytical or deductive methods, such as dynamic optimization, equilibrium computation, and stochastic simulation.

The discipline gained significant momentum in the 1990s, driven by rapid advances in hardware, software, and numerical algorithms, which enabled economists to explore realistic models beyond simplified assumptions. It was formalized through initiatives like the establishment of the Society for Computational Economics in 1994 and the journal Computational Economics, reflecting its growing recognition as an independent area of research. Early contributions, such as the Handbook of Computational Economics (Volume 1, 1996), surveyed foundational numerical methods for solving economic models, including perturbation techniques and value function iteration.

Key methods in computational economics include Monte Carlo simulations for uncertainty analysis, dynamic programming for sequential decision-making, and agent-based modeling for studying emergent behaviors in decentralized systems. Agent-based computational economics (ACE), a prominent subfield, models economies as evolving systems of autonomous, interacting agents—ranging from individuals to institutions—to investigate phenomena like market formation, policy impacts, and financial crises without relying on representative-agent assumptions. Applications span macroeconomics (e.g., policy analysis), finance (e.g., option pricing), and market design (e.g., auction design), often leveraging tools like MATLAB, Python, or Julia for implementation.

The field's importance lies in its ability to bridge theoretical economics with empirical data, revealing quantitative insights into model behaviors and testing hypotheses in high-dimensional settings. As computational power continues to grow, recent developments incorporate machine learning for pattern recognition in large datasets and real-time simulations for policy evaluation, positioning computational economics as essential for understanding complex, data-rich modern economies.

Introduction

Definition and Scope

Computational economics is defined as the application of computational methods—such as simulations, algorithms, and numerical techniques—to solve economic problems that prove analytically intractable through traditional deductive approaches. This field employs computing machinery to model and analyze complex economic phenomena where closed-form solutions are infeasible, enabling quantitative exploration of theoretical implications. The scope of computational economics spans theoretical modeling to derive insights from abstract economic structures, empirical analysis to process and interpret large-scale datasets, and policy evaluation to assess potential outcomes under various scenarios. By harnessing computational power, it addresses challenges involving high dimensionality, nonlinearity, uncertainty, and vast datasets that overwhelm manual or analytical methods.

Computational economics is distinct from econometrics, which centers on statistical estimation and hypothesis testing from historical data, whereas computational approaches prioritize the numerical solution and simulation of forward-looking models. It also differs from experimental economics, which examines human behavior in controlled laboratory settings, by instead utilizing algorithmic agents to replicate and study emergent economic behaviors in virtual environments.

At its core, computational economics involves discretizing continuous economic models into computable grids or states, applying iterative algorithms to converge on solutions, and rigorously validating results against empirical data or stylized facts to confirm their economic validity. These elements ensure that approximations remain accurate despite inherent computational trade-offs like error bounds and speed.

Importance and Interdisciplinary Nature

Computational economics has gained prominence by enabling economists to tackle complex problems that defy analytical solutions, such as high-dimensional dynamic models involving heterogeneity and strategic interactions. Traditional economic theory often relies on simplifying assumptions to achieve tractability, but computational methods allow for the exploration of realistic scenarios, including non-equilibrium dynamics and emergent phenomena like market crashes or innovation diffusion. For instance, these approaches facilitate quantitative assessments of policy impacts, revealing insights that qualitative reasoning cannot provide, such as the welfare effects of regulatory changes in oligopolistic markets.

The field's importance extends to bridging theoretical predictions with empirical data through calibration and validation, enhancing the robustness of economic models. By simulating economies as evolving systems, computational economics supports counterfactual analysis and stress-testing of theories, as seen in studies that reversed earlier analytical findings through numerical exploration. This has proven vital in areas like business-cycle research and monetary economics, where computational tools quantify the scale of economic fluctuations or the efficiency of monetary policies, providing policymakers with evidence-based guidance.

Interdisciplinarily, computational economics integrates with computer science, biology, and the natural sciences to model adaptive systems. Drawing from agent-oriented programming in computer science, it constructs autonomous agents capable of learning and interaction, while incorporating evolutionary principles from biology and complex adaptive systems from physics to capture real-world heterogeneity and adaptation. This fusion, exemplified in agent-based models, fosters collaboration across disciplines, allowing economists to leverage numerical algorithms for solving optimization problems and simulating social structures that traditional methods overlook.

Historical Development

Early Foundations (Pre-1980s)

The foundations of computational economics trace back to the 1930s and 1940s, when operations research emerged as a discipline to address challenges in military and industrial contexts. Leonid Kantorovich, a Soviet mathematician, laid early groundwork in 1939 with his development of a linear programming method, outlined in Mathematical Methods in the Organization and Planning of Production, which optimized production processes under resource constraints using resolving multipliers akin to Lagrange methods. This approach formalized the efficient allocation of scarce resources, earning Kantorovich the 1975 Nobel Memorial Prize in Economic Sciences for contributions to the theory of optimal resource allocation. Similarly, Tjalling Koopmans advanced these ideas during World War II as a statistician for the British Merchant Shipping Mission, where he solved the Hitchcock transportation problem to minimize shipping costs from supply origins to demand destinations, introducing activity analysis as a precursor to linear programming. Koopmans's work emphasized interpreting input-output relationships in production for efficiency, and was also recognized with the 1975 prize, shared with Kantorovich.

In the 1950s and 1960s, the advent of computers enabled the practical implementation of these theoretical models, particularly in input-output analysis and econometric simulations. Wassily Leontief's input-output framework, initially conceptualized in the 1930s to model intersectoral dependencies in a national economy, relied on solving large systems of linear equations; by 1949, Leontief utilized Harvard's 25-ton computer to process extensive data on the U.S. economy, marking one of the earliest applications of computing to economic modeling. This computational effort produced the first comprehensive U.S. input-output table, facilitating quantitative analysis of economic interdependencies and earning Leontief the 1973 Nobel Memorial Prize for developing the method. Concurrently, econometricians began leveraging computers for simulations, with pioneering efforts at the University of Michigan in the early 1950s using early computers to estimate parameters in large-scale models and forecast economic variables. These simulations addressed complex systems intractable by hand, such as multivariate regressions, and by the 1960s, programming computers became essential for generating econometric knowledge through iterative numerical solutions.

The 1970s saw the maturation of computable general equilibrium (CGE) models, which integrated Walrasian general equilibrium theory with empirical data to simulate economy-wide policy impacts. Herbert Scarf's 1967 algorithm for computing fixed points in general equilibrium systems provided the computational backbone, enabling the solution of nonlinear equations representing supply and demand across sectors. Building on this, economists in the 1970s developed CGE frameworks incorporating production functions, consumer preferences, and trade linkages, often using early numerical methods like Scarf's simplicial approximation to approximate equilibria. These models allowed for policy evaluations, such as tariff and tax effects, by balancing supply and demand under resource constraints.

Key figures like John von Neumann further influenced these foundations through conceptual innovations in automata theory. Von Neumann's 1951 work on cellular automata, detailed in "The General and Logical Theory of Automata," modeled self-reproducing systems via simple local rules on a grid, providing an early paradigm for studying emergent behaviors in complex, decentralized systems—ideas that prefigured computational approaches to economic interactions. Complementing this, early numerical methods for solving economic equations, such as George Dantzig's 1947 simplex method for linear programming, enabled iterative optimization of allocation problems by traversing feasible regions in high-dimensional spaces. These techniques, along with iterative solvers like Gauss-Seidel for linear systems in input-output models, formed the computational toolkit that bridged theoretical economics with practical computation before the 1980s.

Key Milestones (1980s–Present)

The 1980s marked a pivotal era in computational economics, driven by the proliferation of microcomputers that democratized access to advanced techniques. These affordable personal computing devices enabled economists to perform extensive Monte Carlo simulations, which involve generating random samples to approximate solutions for complex stochastic models in areas like risk assessment and policy evaluation. Concurrently, the establishment of the Santa Fe Institute in 1984 fostered groundbreaking work on early agent-based models, exploring complex adaptive systems and emergent economic behaviors through computational experiments that simulated interactions among heterogeneous agents.

In the 1990s, computational economics advanced significantly with the integration of parallel computing architectures, which accelerated the solution of large-scale dynamic stochastic general equilibrium (DSGE) models by distributing computational tasks across multiple processors. This era also saw influential contributions from Finn Kydland and Edward Prescott, whose real business cycle models relied heavily on numerical methods and computer simulations to analyze macroeconomic fluctuations, earning them the 2004 Nobel Prize in Economic Sciences for pioneering computational approaches in dynamic macroeconomics. A key institutional milestone was the founding of the Society for Computational Economics in 1994, which promoted the adoption of computational methods through conferences, journals, and collaborative networks among researchers.

The 2000s and 2010s ushered in the big data revolution, where the explosion of digital data sources—such as transaction records and online behaviors—spurred the adoption of machine learning techniques in economic research for tasks like causal inference and prediction. Seminal works, including those by Susan Athey and Guido Imbens, demonstrated how machine learning algorithms could enhance econometric analysis by handling high-dimensional data without strong parametric assumptions, influencing fields from labor economics to policy design. Open-source and commercial tools like MATLAB further facilitated economic simulations; its toolboxes for econometrics and optimization became staples for implementing DSGE models and agent-based simulations, enabling reproducible research and broader accessibility.

Entering the 2020s, computational economics has increasingly incorporated artificial intelligence (AI) to develop adaptive, data-driven models that capture nonlinear dynamics in economic systems, such as generative AI for forecasting market trends and simulating policy impacts amid uncertainty. As of September 2025, projections indicate that generative AI could increase productivity and GDP levels by 1.5% by 2035 in advanced economies through enhanced decision-making tools, with a contribution of 0.01 percentage points to total factor productivity growth in 2025 itself. Parallel explorations in quantum computing have begun targeting economic optimization problems, with early applications in portfolio management and risk modeling leveraging quantum algorithms for faster solutions to combinatorial challenges intractable for classical computers.

Core Concepts and Methods

Computational Modeling Fundamentals

Computational economic modeling relies on transforming theoretical economic frameworks into numerically tractable forms, primarily through discretization techniques that approximate continuous variables. Continuous-time models, which describe economic processes evolving smoothly over time, and continuous-state models, featuring infinite possible values for variables like capital or consumption, are often intractable for direct computation due to their mathematical complexity. Discretization addresses this by converting time into discrete periods (e.g., quarters or years) and state spaces into finite grids, enabling the use of algorithms like projection methods or value function iteration to approximate solutions. For instance, in dynamic models of economic growth, a continuous production function might be evaluated on a grid of capital levels, balancing accuracy against computational cost.

Economic models are broadly categorized by their treatment of uncertainty and temporal structure. Deterministic models assume fixed relationships between inputs and outputs, yielding unique, predictable trajectories without random shocks, which simplifies analysis but overlooks real-world variability such as shocks to productivity. In contrast, stochastic models incorporate probabilistic elements, like random disturbances, to capture uncertainty, often using techniques like Monte Carlo simulation for solution. Models also differ in temporal structure: equilibrium models focus on steady-state conditions where variables balance unchangingly, while dynamic models trace paths over time, accounting for transitions and adjustments. These distinctions guide model selection, with dynamic approaches prevalent in macroeconomics for realism.

Ensuring model reliability requires rigorous validation, starting with calibration to align parameters with observed data, such as matching moments like average output growth from historical records. Sensitivity analysis then tests how outputs vary with parameter perturbations, identifying robust findings or fragile assumptions. Out-of-sample testing evaluates predictive accuracy on unseen data, mitigating overfitting and confirming generalizability. These steps, rooted in empirical discipline, enhance credibility, as seen in real business cycle models calibrated to U.S. postwar data.

A cornerstone of dynamic optimization in computational economics is the Bellman equation, which formalizes recursive decision-making. For an agent maximizing utility over capital k, it states:

V(k) = \max_{c} \left\{ u(c) + \beta V(k') \right\}, \quad \text{subject to } k' = f(k) - c,

where u(c) is the utility from consumption c, \beta is the discount factor, and f(k) is the production function determining next-period capital. This equation equates the value of the current state to the optimal choice of immediate reward plus discounted future value. Solutions employ value function iteration: begin with an initial guess V^0(k), compute policy functions to update V^{n+1}(k) iteratively until convergence to the fixed point V(k), often requiring hundreds of iterations for precision in economic applications like growth models.
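The sketch below implements value function iteration for the growth model above on a discretized capital grid. It is a minimal illustration, assuming log utility u(c) = ln c and production f(k) = k^α; the grid bounds and parameter values are hypothetical, not drawn from any particular study.

```python
import numpy as np

# Illustrative parameters (hypothetical values)
alpha, beta = 0.3, 0.95
grid = np.linspace(0.05, 0.5, 200)           # discretized capital grid

# Consumption for every (k, k') pair, from the constraint c = f(k) - k'
c = grid[:, None] ** alpha - grid[None, :]
utility = np.where(c > 0, np.log(np.where(c > 0, c, 1.0)), -np.inf)

V = np.zeros(len(grid))                      # initial guess V^0(k)
for _ in range(1000):
    V_new = (utility + beta * V[None, :]).max(axis=1)   # Bellman update
    if np.abs(V_new - V).max() < 1e-8:                  # sup-norm convergence check
        V = V_new
        break
    V = V_new

policy = grid[(utility + beta * V[None, :]).argmax(axis=1)]  # policy function k'(k)
print("converged value at lowest grid point:", V[0])
```

Each pass applies the Bellman update to every grid point at once; the sup-norm stopping rule mirrors the convergence criterion described above, and with β = 0.95 the contraction typically converges in a few hundred iterations.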

Algorithmic and Simulation Techniques

Computational economics relies on a variety of algorithmic techniques to solve optimization problems and partial differential equations (PDEs) that arise in economic models. Gradient descent is a fundamental optimization algorithm used to minimize objective functions in these contexts, iteratively updating parameters in the direction of the negative gradient to converge to a local minimum. This method is particularly valuable for handling high-dimensional parameter spaces in dynamic economic models, where traditional analytical solutions are infeasible. For instance, in solving optimization problems derived from economic dynamics, gradient descent enables efficient approximation of value functions or policy rules.

Finite difference methods provide numerical solutions to PDEs commonly encountered in continuous-time growth models, approximating derivatives by differences on a grid to transform the continuous problem into a solvable algebraic system. These methods are essential for modeling heterogeneous-agent economies or wealth dynamics, where PDEs describe the evolution of wealth or income distributions over time. By discretizing the state space and applying explicit or implicit schemes, researchers can simulate long-run growth paths and assess policy impacts with high accuracy.

Simulation techniques play a central role in addressing uncertainty and computing expectations in economic models. Monte Carlo methods estimate expectations by generating random draws from probability distributions and averaging function evaluations, providing robust approximations for integrals that lack closed-form solutions. The core idea is to approximate the expectation E[g(X)] as \frac{1}{N} \sum_{i=1}^N g(x_i), where x_i are independent random samples from the distribution of X, and convergence is checked by monitoring the variance of the estimates or using confidence intervals as N increases. This approach is widely applied in computational economics to quantify uncertainty in stochastic growth or asset pricing models, allowing for the incorporation of complex shock distributions. Bootstrapping complements Monte Carlo methods by resampling empirical data to estimate the distribution of statistics, enabling inference in models with unknown underlying distributions without parametric assumptions. In econometric applications, it resamples observations with replacement to construct bias-corrected estimators or confidence bands for coefficients.

To handle the computational demands of large-scale simulations, parallel computing frameworks leverage graphics processing units (GPUs) for massive parallelism, distributing workloads across thousands of cores to accelerate matrix operations and iterative solvers. In economic simulations, GPUs enable rapid evaluation of expectations or policy functions over expansive state spaces, reducing solution times from days to hours for models with millions of grid points. This is achieved through libraries like CUDA, which facilitate the parallel execution of Monte Carlo draws or value-iteration steps.
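As a minimal illustration of the Monte Carlo estimator E[g(X)] ≈ (1/N) Σ g(x_i) with a variance-based convergence check, the sketch below uses a hypothetical integrand and a lognormal shock distribution chosen purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Illustrative integrand: utility of a lognormal consumption shock (hypothetical)
    return np.log(1.0 + x)

N = 100_000
x = rng.lognormal(mean=0.0, sigma=0.5, size=N)   # independent draws from X's distribution
vals = g(x)

estimate = vals.mean()                            # (1/N) * sum of g(x_i)
std_error = vals.std(ddof=1) / np.sqrt(N)         # Monte Carlo standard error
ci = (estimate - 1.96 * std_error, estimate + 1.96 * std_error)
print(f"E[g(X)] ~ {estimate:.4f}, 95% CI ({ci[0]:.4f}, {ci[1]:.4f})")
```

Doubling N shrinks the standard error by a factor of √2, which is the practical convergence diagnostic described above.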

Specific Modeling Approaches

Agent-Based Modeling

Agent-based modeling (ABM) in computational economics represents a bottom-up approach to simulating economic systems as decentralized networks of heterogeneous agents that interact within an artificial environment, following simple behavioral rules to generate complex, emergent outcomes. This methodology, often termed agent-based computational economics (ACE), treats economic processes as open-ended dynamic systems where individual agents—such as consumers, firms, or traders—make decisions based on local information, leading to aggregate phenomena without relying on centralized assumptions. Unlike traditional models that aggregate behaviors, ABM emphasizes the role of agent heterogeneity, stochasticity, and adaptive interactions in driving economic dynamics.

Key components of ABM include the agents themselves, their decision rules, the shared environment, and the emergent properties arising from interactions. Agents operate with bounded rationality, incorporating learning and adaptation mechanisms, such as updating strategies based on past experiences or evolutionary selection processes. For instance, agent rules might involve heuristic search or imitation to adjust behaviors in response to environmental feedback, while the environment provides the spatial or network structure for interactions like trading or bargaining. Emergence occurs when these micro-level interactions produce macro-level patterns, such as market crashes triggered by cascading individual selling or the spontaneous formation of markets from localized trades.

Representative examples illustrate ABM's application to specific economic questions. In simulations of wealth inequality, agents accumulate resources through exchange and savings rules, revealing how initial conditions and network topologies can lead to persistent disparities, as seen in models where trade networks amplify wealth concentration among a small group of agents. Similarly, for innovation diffusion, agents adopt new technologies via social learning or imitation, demonstrating how network effects and herd behaviors spread innovations unevenly across populations, mirroring real-world patterns of technological adoption in markets.

Validation of ABMs typically involves empirical calibration and comparison to observed stylized facts, ensuring that simulated outcomes align with key economic regularities. For example, models are tested against power-law distributions in firm sizes, where ABMs reproduce the empirical observation that a few large firms dominate while many small ones exist, validating the approach through goodness-of-fit metrics like Kolmogorov-Smirnov tests on generated distributions. However, ABM faces criticisms from mainstream economics regarding difficulties in interpreting simulation dynamics, generalizing results beyond specific setups, and limited acceptance in top journals due to perceived lack of theoretical rigor compared to equilibrium-based models.

A common mechanism for agent adaptation in such validations is replicator dynamics, which governs the evolution of strategy proportions in populations of interacting agents:

\dot{x}_i = x_i (f_i - \bar{f})

Here, x_i denotes the proportion of agents using strategy i, f_i is the fitness (e.g., payoff) of that strategy, and \bar{f} is the average fitness across all strategies, allowing successful behaviors to proliferate over time in economic simulations. As of 2025, recent developments in ABM include integration with large language models to create more adaptive and realistic agent behaviors, enhancing simulations of complex interactions, and growing adoption by central banks for policy analysis and stress testing.
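A minimal sketch of the replicator dynamics above, discretized with an Euler step: the 2x2 payoff matrix and step size are hypothetical, chosen only to show how strategy shares evolve toward an interior mixed equilibrium.

```python
import numpy as np

# Hypothetical 2-strategy payoff matrix: each strategy does better against the other
A = np.array([[1.0, 3.0],
              [2.0, 1.5]])

x = np.array([0.9, 0.1])           # initial strategy shares
dt = 0.01                          # Euler step size

for _ in range(10_000):
    f = A @ x                      # fitness f_i of each strategy given the population mix
    f_bar = x @ f                  # average fitness across strategies
    x = x + dt * x * (f - f_bar)   # Euler step of the replicator equation
    x = x / x.sum()                # guard against numerical drift away from sum = 1

print("long-run strategy shares:", np.round(x, 3))   # converges to (0.6, 0.4) here
```

For this payoff matrix each strategy earns more against the other than against itself, so the dynamics settle at the mixed equilibrium where both fitnesses equalize, illustrating how successful behaviors proliferate until payoff advantages vanish.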

Dynamic Stochastic General Equilibrium (DSGE) Models

Dynamic stochastic general equilibrium (DSGE) models represent a class of macroeconomic frameworks that combine microeconomic foundations with stochastic disturbances to simulate and forecast economic fluctuations. These models assume that economic agents, typically represented by a single representative household and firm, optimize their behavior under rational expectations, leading to a general equilibrium where markets clear dynamically over time. Shocks, such as exogenous changes in technology or interest rates, drive deviations from the steady state, allowing the models to replicate business-cycle patterns observed in aggregate data.

The core structure of DSGE models builds on representative agent optimization, where the household maximizes lifetime utility subject to a budget constraint, and the firm maximizes profits given a production technology, often featuring nominal rigidities in the New Keynesian variant. Rational expectations imply that agents form forecasts based on all available information, ensuring consistency between individual decisions and aggregate outcomes. Key shocks include productivity disturbances, which affect supply-side dynamics, and monetary policy innovations, which influence demand through interest rate adjustments. This setup enables the analysis of how policy interventions propagate through the economy.

To solve these nonlinear systems, DSGE models are typically log-linearized around the non-stochastic steady state, transforming the equations into a linear system that preserves local dynamics while enhancing computational tractability. This approximation facilitates the derivation of impulse response functions and variance decompositions attributing fluctuations to shocks. The resulting linear system is then solved using methods like the Blanchard-Kahn approach, which decomposes the model into stable and unstable components to ensure a unique bounded solution exists; specifically, the number of unstable eigenvalues must equal the number of non-predetermined (jump) variables for saddle-point stability.

A foundational equation in the basic New Keynesian DSGE model is the Euler equation for consumption, which links current and expected future consumption to the real interest rate:

c_t^{-\sigma} = \beta E_t \left[ c_{t+1}^{-\sigma} (1 + r_{t+1}) \right]

Here, c_t denotes consumption at time t, \sigma > 0 is the coefficient of relative risk aversion (the inverse of the intertemporal elasticity of substitution), \beta \in (0,1) is the subjective discount factor, E_t is the expectation operator conditional on time-t information, and r_{t+1} is the real interest rate between periods t and t+1. This equation reflects the representative household's first-order condition for optimal consumption and saving, balancing marginal utility today against expected discounted marginal utility tomorrow adjusted for the return on saving. The Blanchard-Kahn technique solves the full model by stacking such Euler equations with others (e.g., for labor supply and price setting) into a state-space system, iterating forward for jump variables like prices while iterating backward for predetermined states like capital.

Despite their prominence, DSGE models face criticism for assuming a representative agent, which abstracts from heterogeneity in preferences, endowments, and behaviors across households and firms, potentially understating inequality and distributional effects in policy analysis. The 2008 global financial crisis exposed limitations in capturing financial amplification mechanisms, prompting refinements that integrate banking sectors through models of financial intermediaries subject to balance sheet constraints and leverage limits. These extensions, such as incorporating credit spreads and bank capital requirements, allow DSGE models to better simulate how financial shocks propagate to the real economy, improving their relevance for monetary and macroprudential policy evaluation. Post-2020 updates have further addressed challenges from the COVID-19 pandemic by incorporating unusual shocks like health-related wedges and news about recovery paths, while advancements in fat-tailed distributions better capture extreme events and behavioral expectations replace strict rational expectations to model boundedly rational agents.
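To make the Euler equation concrete, the sketch below computes a perfect-foresight consumption path using the deterministic special case implied by the equation above, c_{t+1} = c_t (β(1 + r_{t+1}))^{1/σ}; the parameter values and the interest rate path (a stylized temporary tightening) are hypothetical.

```python
import numpy as np

# Hypothetical parameters
sigma, beta = 2.0, 0.99
T = 20
r = np.full(T, 0.015)        # r[t] is the real rate between periods t and t+1
r[5:8] = 0.03                # stylized temporary interest rate rise

c = np.empty(T + 1)
c[0] = 1.0                   # normalize initial consumption
for t in range(T):
    # Deterministic Euler equation: c_{t+1} = c_t * (beta * (1 + r))**(1/sigma)
    c[t + 1] = c[t] * (beta * (1.0 + r[t])) ** (1.0 / sigma)

print(np.round(c, 4))        # consumption growth accelerates while rates are high
```

The path shows the intertemporal substitution channel at the heart of DSGE transmission: a higher real rate raises the return to saving, tilting consumption growth upward during the high-rate window.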

Machine Learning in Economics

Machine learning has emerged as a powerful tool in economics for handling high-dimensional data, improving predictive accuracy, and enabling causal inference in complex environments where traditional econometric methods may falter due to assumptions like linearity or low dimensionality. By leveraging algorithms that learn patterns from data without relying on predefined functional forms, economists can analyze vast datasets from sources such as transaction records, text corpora, or satellite imagery to forecast economic outcomes or identify behavioral regularities. This integration addresses key challenges in economic research, such as overfitting in predictive models and confounding bias in causal estimation, while complementing classical approaches through flexible, data-driven techniques.

Supervised learning techniques, such as regression trees, have been applied to predict demand in economic settings by partitioning data based on features like price, income, and seasonality to estimate heterogeneous responses. For instance, random forests, an ensemble of regression trees, enhance prediction stability and have been used to model consumer demand curves from scanner data, outperforming linear models in capturing nonlinearities. Unsupervised learning methods, including clustering algorithms like k-means, facilitate market segmentation by grouping consumers or firms into homogeneous clusters based on unobserved patterns in spending or production data, aiding in targeted policy design or pricing strategies. Reinforcement learning, which optimizes sequential decisions through trial-and-error interactions, supports policy evaluation by simulating agent behaviors in dynamic environments, such as optimizing policy rules under uncertainty.

A key adaptation in economics is Double Machine Learning (Double ML), which combines machine learning with econometric principles to estimate causal effects while mitigating regularization bias and overfitting from nuisance parameter estimation. In Double ML, two stages of machine learning prediction—first for nuisance functions like propensity scores, then for the outcome model—enable robust inference on treatment effects in high-dimensional settings, as formalized in the partially linear model where the causal parameter \theta_0 is identified via orthogonalization. This approach has been pivotal in applications requiring valid inference, such as evaluating policy interventions. Lasso regularization exemplifies such adaptations for variable selection in high-dimensional regressions:

\hat{\beta} = \arg\min_{\beta} \| y - X\beta \|^2 + \lambda \| \beta \|_1

Here, the L1 penalty \lambda \| \beta \|_1 shrinks irrelevant coefficients to zero, selecting sparse, interpretable models of economic relationships, such as identifying key predictors of firm productivity from thousands of covariates.

Practical examples illustrate these techniques' impact. Machine learning models, including natural language processing for text data, have predicted U.S. recessions by extracting sentiment from news articles and financial reports, achieving higher accuracy than traditional indicators like yield spreads. In auction design, reinforcement learning algorithms optimize bidding strategies in spectrum or ad auctions, approximating equilibrium outcomes and improving revenue efficiency over heuristic rules. These applications underscore machine learning's role in enhancing economic foresight and mechanism design. As of 2025, generative AI and large language models are increasingly applied in economics for simulating policy scenarios and assessing environmental, social, and governance factors, with projections indicating AI could affect 40% of global jobs and boost GDP by $7 trillion.
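The following sketch applies scikit-learn's Lasso to synthetic data to show the sparsity induced by the L1 penalty; the data-generating process, penalty level, and framing as firm productivity prediction are illustrative assumptions, not an empirical result.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Synthetic high-dimensional data: 100 firms, 50 covariates, only 3 truly relevant
n, p = 100, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [1.5, -2.0, 0.8]
y = X @ beta_true + rng.normal(scale=0.5, size=n)   # e.g., firm productivity

# The L1 penalty shrinks irrelevant coefficients exactly to zero
model = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("selected covariates:", selected)             # typically recovers [0, 1, 2]
```

Raising the penalty parameter (alpha in scikit-learn, λ in the formula above) yields sparser selections, which is the trade-off between fit and interpretability that motivates Lasso in economic applications.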

Applications

Macroeconomic Analysis

Computational economics employs dynamic stochastic general equilibrium (DSGE) models and computable general equilibrium (CGE) models to simulate aggregate economic phenomena, such as business cycles, long-term growth, and responses to policy interventions, enabling economists to evaluate the economy-wide effects of shocks and reforms. These methods integrate microeconomic foundations with stochastic processes to capture how macroeconomic variables like output, inflation, and employment evolve over time under uncertainty. By solving systems of nonlinear equations numerically, computational macroeconomics provides a framework for testing theoretical predictions against historical data and forecasting policy outcomes.

In monetary policy analysis, DSGE models are extensively used by central banks, including the Federal Reserve, to assess the transmission of interest rate changes and quantify their impacts on inflation and output gaps. For instance, the Smets-Wouters DSGE model, estimated on U.S. data, incorporates nominal rigidities and real frictions to evaluate how monetary shocks propagate through the economy, informing decisions on interest rate paths. Similarly, CGE models simulate trade shocks by tracing adjustments in production, consumption, and trade balances across sectors, as seen in analyses of tariff changes or regional trade agreements that reveal welfare effects and resource reallocations.

Case studies highlight the practical utility of these approaches; during the COVID-19 pandemic, DSGE models were adapted to incorporate health-related shocks and fiscal responses. For climate change, integrated CGE frameworks project that a 2°C warming could reduce global GDP by approximately 1-2% by 2100, primarily via losses in agricultural productivity and labor supply disruptions in vulnerable regions. Computational challenges arise in handling nonlinearities, particularly in fiscal multipliers, where standard linear DSGE approximations fail to capture state-dependent effects like zero-lower-bound constraints, requiring global solution methods that increase dimensionality and estimation complexity.

Key metrics in these analyses include impulse response functions (IRFs), which trace the dynamic paths of variables like GDP and inflation following a unit shock, such as a monetary policy disturbance, revealing peak impacts and persistence—for example, a monetary tightening might contract output by 0.5% at its trough before gradual recovery.
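A stylized sketch of how an IRF is traced: here the output response to a unit monetary tightening follows a simple AR(1) decay, with persistence and impact coefficients that are hypothetical stand-ins for the values a solved DSGE model would deliver.

```python
import numpy as np

# Hypothetical IRF parameters (illustration only, not from an estimated model)
rho = 0.8          # shock persistence per period
impact = -0.5      # output contraction (%) on impact of a unit tightening

horizons = np.arange(21)
irf = impact * rho ** horizons   # response h periods after a unit shock

for h, v in zip(horizons[:6], irf[:6]):
    print(f"h={h}: output response {v:+.3f}%")   # peak effect at h=0, gradual recovery
```

Plotting `irf` against `horizons` reproduces the familiar hump-free decay pattern: the peak impact is read off at the trough and persistence off the decay rate, exactly the two summary statistics emphasized in the text.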

Microeconomic and Behavioral Studies

Computational economics employs agent-based modeling (ABM) to simulate microeconomic phenomena such as labor markets and bargaining processes, where individual agents interact to reveal emergent market dynamics. In labor market simulations, agents representing workers and firms engage in search, matching, and wage negotiations, often incorporating realistic frictions like information asymmetries and social networks. For instance, the WorkSim model calibrates agent interactions to replicate gross flows between employment, unemployment, and inactivity in the French labor market. Similarly, endogenous preferential matching models show that boundedly rational agents, using simple decision rules rather than full optimization, lead to persistent heterogeneity in job outcomes due to coordination failures and shirking behaviors.

Bargaining in these ABM frameworks extends to computational game theory applications, particularly in auctions, where agents bid strategically under incomplete information. Seminal work using zero-intelligence (ZI) agents in double auctions reveals that market efficiency arises primarily from institutional rules rather than agent sophistication, as even non-strategic bidders achieve near-optimal allocations. In Treasury auction simulations, reinforcement learning agents adapt bidding strategies, illustrating how discriminatory versus uniform pricing rules influence revenue and bidder participation, with outcomes sensitive to learning speeds and market thickness. These models highlight microeconomic inefficiencies, such as the winner's curse, where overbidding due to optimistic biases reduces participant welfare.

Behavioral integration in computational economics incorporates bounded rationality through heuristics, deviating from classical assumptions of perfect foresight. Agents employ simple rules of thumb or social mimicry, as in evolutionary programming approaches that evolve decision strategies over time to approximate real-world decision-making. For example, simulating herd behavior in consumer choices, agents copy popular options based on observed behaviors, leading to path-dependent market shares and reduced variety in product adoption. Network effects in simulations further amplify this, where agents in interconnected graphs propagate preferences, resulting in clustered adoption patterns that favor incumbents and hinder entrants.

Such models uncover emergent inefficiencies from cognitive biases, including the formation of speculative bubbles in micro settings like consumer durables markets. When agents exhibit overconfidence or anchoring heuristics, simulated interactions generate price deviations from fundamentals, culminating in booms and crashes driven by collective irrationality rather than external shocks. These outcomes underscore how bounded rationality fosters market fragility, emphasizing the need for policy interventions like information disclosure to mitigate biases.
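A minimal sketch of herd behavior in product adoption, assuming a simple urn-style rule in which each new consumer picks a product with probability equal to its current market share; repeated runs illustrate the path dependence described above.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_adoption(n_agents):
    # Two competing products start with one adopter each; every subsequent
    # consumer copies the crowd, choosing in proportion to current shares.
    counts = np.array([1, 1])
    for _ in range(n_agents):
        shares = counts / counts.sum()
        choice = rng.choice(2, p=shares)    # social mimicry rule
        counts[choice] += 1
    return counts / counts.sum()

# Identical rules and initial conditions, yet final shares differ widely by run
finals = [simulate_adoption(2_000)[0] for _ in range(5)]
print("final share of product A across runs:", np.round(finals, 2))
```

Because early random choices get amplified by imitation, each run locks in a different long-run market share, a compact demonstration of how network effects can favor incumbents independently of product quality.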

Financial and Policy Simulations

Computational economics plays a pivotal role in financial simulations by employing Monte Carlo methods to price options, which involve generating numerous random paths for underlying asset prices to estimate expected payoffs under risk-neutral measures. This approach, pioneered by Boyle in 1977, enables the valuation of complex derivatives where closed-form solutions are unavailable, such as path-dependent options, by averaging discounted payoffs across simulated scenarios. In banking, stress-testing models capture systemic risk through agent-based or network simulations to assess how shocks propagate across institutions, capturing interdependencies like credit exposures and liquidity spillovers. For instance, frameworks developed by the Federal Reserve use integrated micro-macro models to evaluate the resilience of major U.S. banks under adverse economic conditions, revealing potential capital shortfalls during crises. A key metric in these financial applications is Value-at-Risk (VaR), computed via Monte Carlo simulations to quantify potential portfolio losses at a given confidence level over a specified horizon, often by sampling from multivariate distributions of risk factors. Efficient implementations, such as those using importance sampling, reduce computational demands while maintaining accuracy for large portfolios with nonlinear instruments.

In policy simulations, agent-based models serve as virtual laboratories to trial interventions like universal basic income (UBI), where heterogeneous agents interact in labor and consumption markets to reveal distributional effects and macroeconomic feedbacks. Simulations of UBI scenarios, for example, demonstrate reduced poverty but potential labor supply distortions, as explored in models linking agent decisions to income transfers. Machine learning enhances policy impact evaluation by refining causal inference in simulated environments, such as double machine learning for estimating treatment effects from observational data. Specific examples include agent-based simulations of cryptocurrency markets, which model trader behaviors and network effects to forecast extreme price swings under regulatory changes up to 2025, aiding risk management for regulators. Similarly, policy labs simulate carbon tax effects using integrated assessment models, where agents adapt production and consumption to carbon pricing, projecting emission reductions alongside economic costs under frameworks like the EU's 2025-2030 targets. These simulations highlight trade-offs, such as emission reductions balanced against modest GDP impacts.
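The sketch below prices a European call by Monte Carlo under geometric Brownian motion with a risk-neutral drift, in the spirit of the Boyle approach described above; the contract and market parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical contract and market parameters
S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.2, 1.0
N = 1_000_000

# Simulate terminal prices under risk-neutral GBM: S_T = S0 * exp((r - sigma^2/2)T + sigma*sqrt(T)*Z)
Z = rng.standard_normal(N)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)

payoff = np.maximum(ST - K, 0.0)                    # European call payoff
price = np.exp(-r * T) * payoff.mean()              # discounted average payoff
se = np.exp(-r * T) * payoff.std(ddof=1) / np.sqrt(N)
print(f"call price ~ {price:.3f} +/- {1.96 * se:.3f}")
```

The same simulated terminal distribution can be reused to read off a portfolio loss quantile, which is how Monte Carlo VaR estimates mentioned above are typically produced.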

Tools and Implementation

Programming Languages and Environments

Computational economics relies on programming languages that balance computational efficiency, ease of use, and integration with specialized tools for economic modeling. Key languages include Python, Julia, MATLAB, and R, each offering distinct advantages for tasks such as simulations, econometric analysis, and optimization. Python stands out for its versatility in machine learning applications and economic simulations, supported by libraries like NumPy for array operations and SciPy for scientific computing, which enable efficient handling of large datasets and numerical methods common in economic models. Julia excels in high-performance computing for solving dynamic stochastic general equilibrium (DSGE) models, providing speeds comparable to C while maintaining the syntax simplicity of higher-level languages, as demonstrated in implementations by the Federal Reserve Bank of New York. MATLAB is widely used for matrix-based operations in econometrics, offering built-in toolboxes for time series analysis, regression, and forecasting that streamline econometric workflows. R serves as a primary environment for statistical computing in economics, with robust capabilities for data manipulation, visualization, and inferential statistics tailored to empirical economic research.

Selection of these languages in computational economics often hinges on criteria such as execution speed, ease of parallelization for handling complex simulations, and availability of domain-specific libraries that reduce development time for economic algorithms. For instance, Julia's built-in support for multithreading facilitates parallelization on multicore systems, ideal for computationally intensive tasks like agent-based models, while Python's ecosystem supports parallel and GPU computing through extensions.

Interactive environments enhance reproducibility and exploration in economic modeling. Jupyter notebooks provide a flexible platform for iterative development, allowing economists to combine code, visualizations, and explanatory text in a single document, which is particularly useful for prototyping simulations and sharing reproducible research. To illustrate, the following snippet simulates a basic Solow growth model, where capital evolves according to k_{t+1} = s y_t + (1 - \delta) k_t with output y_t = k_t^\alpha, using NumPy for array operations over time periods.
```python
import numpy as np
import matplotlib.pyplot as plt

# Parameters
alpha = 0.3  # Capital share
s = 0.2      # Savings rate
delta = 0.1  # Depreciation rate
k0 = 0.1     # Initial capital
T = 100      # Time periods

# Simulation
k = np.zeros(T + 1)
k[0] = k0
for t in range(T):
    y = k[t]**alpha
    k[t+1] = s * y + (1 - delta) * k[t]

# Plot
plt.plot(k)
plt.xlabel('Time')
plt.ylabel('Capital per worker')
plt.title('Solow Growth Model Simulation')
plt.show()
```
This structure highlights Python's role in straightforward economic simulations, computing steady-state convergence without advanced optimization.

Software Libraries and Frameworks

Dynare is a widely used platform designed specifically for solving, simulating, and estimating dynamic stochastic general equilibrium (DSGE) models in macroeconomics. It supports Bayesian estimation techniques for DSGE, vector autoregression (VAR), and hybrid DSGE-VAR models, enabling economists to handle complex macroeconomic models with features like optimal policy computation and forecasting. Developed initially at CEPREMAP by Michel Juillard in 1994, Dynare integrates seamlessly with MATLAB, Octave, or Julia, facilitating model specification through a domain-specific modeling language that abstracts away numerical solver details.

NetLogo serves as a programmable modeling environment tailored for agent-based modeling (ABM) in computational economics, allowing users to simulate interactions among heterogeneous agents to study emergent economic phenomena such as market dynamics or policy impacts. Its multi-agent programming paradigm supports rapid prototyping of models where agents follow simple rules, making it accessible for exploring micro-to-macro transitions in economic systems. NetLogo's built-in library includes extensions for economic applications, and its open-source nature has led to community-contributed models for topics like wealth distribution and innovation diffusion.

In the realm of machine learning applications for economics, scikit-learn provides a robust library for implementing supervised and unsupervised algorithms, such as regression, clustering, and dimensionality reduction, for tasks like forecasting demand or detecting anomalies in financial data. Economists leverage scikit-learn for tasks including causal inference and high-dimensional prediction, where it integrates with economic datasets to improve prediction accuracy over traditional methods in areas like labor economics. Its consistent API enables reproducible workflows for economic pipelines, from preprocessing to model evaluation.

QuantEcon offers open-source libraries in both Python and Julia, focused on quantitative economic modeling for educational and research purposes, including tools for solving dynamic programming problems and simulations central to computational economics. The Python version supports lectures on topics like linear algebra applications in economics and Markov processes, while the Julia counterpart emphasizes high performance for advanced models such as real business cycle analysis. These libraries promote pedagogical clarity by providing pre-built solvers and visualization tools, aiding in the dissemination of computational methods.

The General Algebraic Modeling System (GAMS) is a high-level modeling framework for formulating and solving large-scale optimization problems in economics, particularly for linear, nonlinear, and mixed-integer programming in planning and policy analysis. In economic contexts, GAMS excels at multi-sectoral models like computable general equilibrium (CGE) simulations, where it interfaces with solvers to optimize welfare or trade policies under constraints. Its algebraic syntax allows economists to express models declaratively, separating formulation from solution algorithms.

Integration of libraries enhances computational economics workflows; for instance, combining pandas for data manipulation with scikit-learn enables efficient economic data pipelines, from cleaning time-series datasets to feeding them into models for tasks like econometric forecasting. This synergy supports scalable analysis, such as processing raw data before applying machine learning techniques, reducing preprocessing overhead in research. Open-source trends in computational economics software have accelerated accessibility, with platforms like QuantEcon and Dynare fostering collaborative development and free distribution, thereby lowering barriers for global researchers to adopt advanced tools without proprietary costs. Cloud-based options, such as Google Colab, further democratize implementation by providing browser-based Jupyter environments for running economic models, including QuantEcon lectures and simulations, without local installations. This shift supports remote collaboration and rapid experimentation in fields like policy simulation.
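As an illustration of the pandas and scikit-learn synergy described above, this sketch builds a small pipeline on synthetic macroeconomic data; the variable names, relationships, and ridge penalty are hypothetical, chosen only to show the data-to-model workflow.

```python
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge

# Hypothetical quarterly indicators predicting GDP growth (synthetic data)
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "unemployment": rng.normal(5, 1, 200),
    "inflation": rng.normal(2, 0.5, 200),
    "spread": rng.normal(1, 0.3, 200),
})
df["gdp_growth"] = 3 - 0.4 * df["unemployment"] + rng.normal(0, 0.2, 200)

# pandas handles the tabular cleaning step; scikit-learn handles scaling + estimation
X, y = df.drop(columns="gdp_growth"), df["gdp_growth"]
model = make_pipeline(StandardScaler(), Ridge(alpha=1.0)).fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```

Wrapping the scaler and estimator in a single pipeline keeps preprocessing and fitting reproducible, which is the workflow benefit the text attributes to this library combination.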

Publications and Community

Major Journals

The field of computational economics has several prominent peer-reviewed journals that specialize in or significantly feature research at the intersection of computational methods and economic theory. These outlets emphasize rigorous, reproducible computational approaches, distinguishing them from traditional journals that prioritize analytical derivations over simulation and algorithmic contributions.

One of the leading journals is the Journal of Economic Dynamics and Control (JEDC), published by Elsevier since its inception in 1979. It focuses on theoretical and empirical studies of economic dynamics, control theory, and computational techniques for modeling complex systems, including agent-based simulations and numerical solutions to dynamic optimization problems. With an h-index of 106 and an impact factor of 2.3 as of 2025, JEDC has played a pivotal role in advancing simulation-based research since the 1980s, when computational power began enabling more sophisticated dynamic models.

Another key publication is Computational Economics, the official journal of the Society for Computational Economics, published by Springer since 1988. This multidisciplinary outlet integrates economics with computer science, covering topics such as algorithmic modeling, optimization, and empirical simulations across micro- and macroeconomics. It boasts an h-index of 51 and an impact factor of 2.2 in 2025, reflecting its growth alongside the field's expansion in the late 1990s, particularly with the rise of accessible computing tools.

The International Journal of Computational Economics and Econometrics, launched by Inderscience in 2009, complements these by emphasizing computational and econometric methods for modeling, estimation, and data-driven analysis. It prioritizes innovative applications of algorithms in econometrics, with a scope that includes high-dimensional estimation and simulation-based inference, contributing to the field's archival literature on practical computational tools.

These journals have evolved since the 1990s to support the shift from purely theoretical work toward computationally intensive research, with metrics indicating sustained influence—JEDC's h-index of 106 underscoring its foundational status. Their peer-review processes ensure emphasis on verifiable code, data, and results, setting them apart from theory-centric venues like the American Economic Review. By 2025, these publications show increasing integration of machine learning and artificial intelligence techniques, with growing numbers of articles on AI-driven economic modeling and large-scale datasets, reflecting broader trends in handling nonlinear dynamics and real-time policy simulations. For instance, recent issues feature applications of neural networks for econometric prediction and big data analytics for macroeconomic forecasting.

Conferences and Professional Organizations

The Society for Computational Economics (SCE), founded in 1994, serves as the primary professional organization dedicated to advancing computational methods in economics and finance. It promotes research encompassing numerical methods, econometric and statistical techniques, large-scale computing, programming languages, software libraries, and economic databases, fostering collaboration among economists, computer scientists, and related fields. Membership in the SCE provides access to resources, networking opportunities, and participation in events that support research and funding pursuits in computational economics.

The SCE organizes the annual International Conference on Computing in Economics and Finance (CEF), a key event that attracts 300-400 participants and covers computational aspects of economics, finance, econometrics, and interdisciplinary applications. This conference features presentations on topics such as macroeconomics, regional science, finance, econometrics, and statistics, with program structures designed to reflect members' research interests. Complementing the CEF, the Computational Economics Workshop series, including events like the International Workshop of Computational Economics and Econometrics (IWCEE), focuses on specialized sessions for discussing innovative data sources, computational advances, and interdisciplinary tools. These gatherings emphasize workshops on practical tools, such as software implementation and estimation methods, alongside networking sessions that connect researchers with funding opportunities from academic institutions and grants. The SCE's activities also include prizes and contests to recognize outstanding contributions, enhancing community engagement.

With a global footprint, SCE conferences rotate locations across hemispheres, including recent events in 2024, 2023, and 2022, a virtual conference in 2021, and earlier meetings in 2019 and 2020 (the latter cancelled). Post-2020, the organization adapted to the COVID-19 pandemic by shifting to virtual formats, such as the fully online 27th CEF in 2021, enabling broader international participation from Europe, the Americas, Asia, and beyond.

Challenges and Future Directions

Current Limitations and Criticisms

Computational economics faces significant limitations in handling high-dimensional models, where the curse of dimensionality leads to exponentially increasing computational costs, often requiring vast resources that constrain research and applications. For instance, sparse high-dimensional models in econometrics demand advanced optimization techniques to manage parameter proliferation, yet even these can become prohibitive without substantial computing infrastructure. This issue is exacerbated in agent-based models, which simulate heterogeneous agents and interactions, necessitating intensive processing power that limits accessibility for researchers without access to such resources.

A related challenge is the black-box nature of complex simulations, particularly in dynamic stochastic general equilibrium (DSGE) models, where intricate numerical solutions obscure underlying mechanisms and hinder interpretability. Heavy computational demands in these models, such as those incorporating household heterogeneity and nonlinearities, contribute to this opacity, making it difficult for policymakers and academics to trace how inputs translate to outputs. Such limitations reduce trust in results and complicate classroom or policy adoption.

Critics argue that computational economics over-relies on restrictive assumptions, as seen in DSGE frameworks that assume rational expectations and representative agents, failing to incorporate nonlinear dynamics or financial frictions adequately. Reproducibility remains a persistent issue, with inadequate documentation, non-standardized code practices, and environment dependencies leading to failures in replicating computational results, despite calls for integrated tools like containerized environments. Ethical concerns arise from biases in machine learning applications for economic predictions, where dataset selection biases—such as noise from crowdsourced data—distort models of economic behavior, potentially perpetuating inequalities in forecasts. Additionally, unequal access to advanced tools raises equity issues, as resource-intensive models disadvantage researchers in underfunded institutions or developing regions.

These limitations were starkly evident in the 2008 global financial crisis, where DSGE models failed to anticipate the event due to their neglect of shadow banking growth, excessive leverage, and rollover risks, relying instead on data from eras with minimal financial disturbances. This predictive shortfall underscored broader criticisms of the field's inability to model rare, systemic shocks effectively.

Emerging Trends

One prominent trend in computational economics involves the integration of agent-based modeling (ABM) with machine learning (ML) techniques to create hybrid models that enhance predictive accuracy and adaptability in simulating complex economic behaviors. These hybrid approaches leverage ABM's ability to represent heterogeneous agents and emergent phenomena while using ML to infer agent rules from data or optimize model parameters, addressing limitations in traditional econometric methods. For instance, ML-assisted ABM has been applied to study macroeconomic and market dynamics, enabling more realistic simulations of policy impacts without assuming equilibrium.

Another emerging trend is the use of blockchain technology to facilitate decentralized simulations, allowing secure, distributed computation of economic models across networks without a central authority. Blockchain enables tamper-proof ledgers for tracking agent interactions in simulations, particularly useful for modeling decentralized economies or supply chains, where trust and verifiability are critical. Research models blockchain-enabled economic networks as dynamical systems, incorporating elements like token economies to analyze growth and incentives.

Innovations in quantum algorithms are showing promise for solving large-scale optimization problems in finance, with theoretical models from the early 2020s demonstrating potential economic advantages over classical methods in scenarios like portfolio selection. Quantum approaches, such as variational quantum eigensolvers, have been proposed for portfolio optimization, offering speedups in combinatorial complexity where classical computers struggle. Concurrently, explainable artificial intelligence (XAI) is advancing transparency by providing interpretable insights into computational economic models, allowing policymakers to understand decision drivers in simulations of fiscal or monetary policies. Frameworks for XAI in economic modeling emphasize post-hoc explanations and feature attribution to demystify black-box predictions, fostering accountability in algorithmic policymaking.

Sustainability efforts in computational economics focus on green computing practices to minimize the environmental footprint of resource-intensive simulations, such as those involving massive datasets for macroeconomic forecasting. Principles like energy-aware design and efficient hardware utilization have been proposed to reduce carbon emissions from data centers running economic models, with studies indicating potential energy savings of 25% to 50% in computational workflows without sacrificing accuracy.

By 2025, advances in federated learning have enabled privacy-preserving analysis of economic data, where models are trained collaboratively across decentralized institutions—such as banks or governments—without sharing sensitive raw data, thus complying with regulations like GDPR while improving forecasts for credit risk or macroeconomic indicators. This approach has been particularly impactful in finance, where it safeguards personally identifiable information during collaborative risk assessments. As of 2025, frameworks for AI governance in economic modeling have emerged to address ethical AI use in simulations.

References

  1. [1]
    (PDF) What Is Computational Economics? - ResearchGate
    Aug 10, 2025 · Abstract. Computational Economics is rapidly evolving into an independent branche in economics. ... nomics. Jel codes: C63. ... 1. What is ...
  2. [2]
    [PDF] Computational Economics and Economic Theory: Substitutes or ...
    Abstract. This essay examines the idea and potential of a “compu- tational approach to theory,” discusses methodological issues raised by such computational ...
  3. [3]
    What is computational economics? - Boston College
    To quote Hans Amman's 1997 editorial in the journal Computational Economics, "Computational Economics is a new methodology for solving economic problems with ...
  4. [4]
    [PDF] Agent-Based Computational Economics: Overview and Brief History1
    This perspective provides an overview of ACE, a brief history of its development, and its role within a broader spectrum of experiment-based modeling methods. ...
  5. [5]
    Computational Economics - an overview | ScienceDirect Topics
    Abstract. This chapter of the Handbook of Computational Economics is mostly about research on active learning and is confined to discussion of learning in ...
  6. [6]
    Aims and scope | Computational Economics
    The journal Computational Economics publishes scientific research in a rapidly growing multidisciplinary field that uses advanced computing capabilities.
  7. [7]
    Computational Economics - General Announcements - Rice University
    This program bridges economic theory, econometrics, and high-performance computing, preparing graduates for careers in research, policy analysis, finance, and ...
  8. [8]
    How Economists Can Get Alife: Abbreviated Version
    Feb 12, 2025 · Roughly defined, Agent-based Computational Economics (ACE) is the computational study of economic processes (including whole economies) modeled ...
  9. [9]
    [PDF] An Investigation into Numerical Solutions to Discrete- & Continuous ...
    May 9, 2023 · Turning to discrete-time solution methods, Carroll (2006) introduces the endogenous grid method (EGM) and applies it to a neoclassical ...
  10. [10]
    [PDF] Iterative methods for solving economic models - DSpace@MIT
    Iterative methods form an important subset of the methods of successive relaxation, and because their mathematics is relatively straightforward, they serve as a ...Missing: discretization | Show results with:discretization
  11. [11]
    [PDF] agent-based computational economics: a constructive approach to ...
    Dec 18, 2005 · Tesfatsion and K. L. Judd (editors), Handbook of Computational Economics, Volume 2: Agent-Based Computational Economics, Handbooks in Economics ...
  12. [12]
    Kantorovich, Leonid V. - INFORMS.org
    In 1939, Kantorovich developed a linear, solution-deriving method (which he called “Lagrange resolving multipliers”) that is quite similar to today's linear ...Missing: 1940s | Show results with:1940s
  13. [13]
  14. [14]
    Koopmans, Tjalling C. - Informs.org
    As World War II spread to Western Europe in 1940, Koopmans migrated to the United State with aid from Princeton University's Sam Wilks. He joined the Allied war ...
  15. [15]
    Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1975
    ### Summary of Tjalling C. Koopmans's Contributions
  16. [16]
    A Brief History of Wassily Leontif and Input-Output Analysis
    Aug 12, 2025 · Late in the summer of 1949, Wassily Leontief fed the 25-ton computer the last of the stiff paper punched with precisely-placed holes.Missing: 1950s- 1960s
  17. [17]
    Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel 1973
    Summary of Leontief's input-output model and computational aspects.
  18. [18]
    [PDF] Econometric Software: The first Fifty Years in Perspective*
    Sep 15, 2003 · Somewhat earlier, the first use of the computer with econometric models apparently occurred at the University of Michigan in conjunction with ...
  19. [19]
    From Computors to Computers: The Early History of Econometric ...
    In the 1950s, programming digital computers became a crucial and demanding skill for econometricians in producing econometric knowledge.
  20. [20]
    Computable General Equilibrium - an overview | ScienceDirect Topics
    The computable general equilibrium (CGE) approach to modeling an economy was made possible by Herbert Scarf's algorithm for solving complete general equilibrium ...
  21. [21]
    Applied General Equilibrium Analysis: Birth, Growth, and Maturity
    Dec 1, 2017 · We examine the evolution of applied, computational general-equilibrium models (AGE), from their inception in the 1970s through the end of ...
  22. [22]
    [PDF] The emergence of agent-based modeling in economics: Individuals ...
    On the model of biological reproduction of cells, von Neumann conceived of cellular automata as abstract systems capable of self-reproduction. His 1951 ...
  23. [23]
    [PDF] Numerical Methods in Economics - Kenneth L. Judd
    Aug 19, 2020 · “Interior point” methods were first proposed by Ragnar Frisch ... There are basic methods capable of solving models with ...
  24. [24]
    [PDF] Finance and Monte Carlo Simulation | Semantic Scholar
    Journal of Financial Planning, November 2001 · ... 1980s and 1990s with the microcomputer, think again. The 1980s and 1990s are just history repeating itself.
  25. [25]
    Complexity Economics - W. Brian Arthur - Santa Fe Institute
    Complexity economics was pioneered in the 1980s and 1990s by a small team at the Santa Fe Institute led by W. Brian Arthur.
  26. [26]
    [PDF] Finn Kydland and Edward Prescott's Contribution to Dynamic ...
    Oct 11, 2004 · In their model solution, Kydland and Prescott relied on numerical solution and computer simulation to an extent not previously implemented in ...
  27. [27]
    Artificial Intelligence in Economics Research: What Have We ...
    Mar 24, 2025 · We investigate the role of AI in economics by reviewing the literature (2231 articles) during the last 34 years (1990 to November 2024).
  28. [28]
    Econometrics Toolbox - MATLAB - MathWorks
    Econometrics Toolbox enables you to estimate, simulate, and forecast economic systems using models, such as regression, ARIMA, state-space, GARCH, and more.
  29. [29]
    The Projected Impact of Generative AI on Future Productivity Growth
    Sep 8, 2025 · We estimate that AI will increase productivity and GDP by 1.5% by 2035, nearly 3% by 2055, and 3.7% by 2075. AI's boost to annual ...
  30. [30]
    [PDF] Chapter 9 Discrete Time Continuous State Dynamic Models: Methods
    This chapter discusses numerical methods for solving discrete time continuous state dynamic economic models, with particular emphasis on Markov decision.
  31. [31]
    [PDF] Discrete State Space Methods for Dynamic Economies - Duke People
    Discrete state space methods are computational techniques used to obtain the solutions of dynamic economic models. If the model in question assumes that the ...
  32. [32]
    [PDF] On Deterministic And Stochastic Structures. - Princeton University
    It is usual in econometrics to distinguish between deterministic and stochastic structures and also between deterministic and stochastic models. This ...
  33. [33]
    Calibration and validation of macroeconomic simulation models by ...
    In this paper, we propose a general procedure to both calibrate and (at a subsequent stage) validate macroeconomic models that are sufficiently complex that ...
  34. [34]
    Deep learning for solving dynamic economic models. - ScienceDirect
    We introduce a unified deep learning method that solves dynamic economic models by casting them into nonlinear regression equations.
  35. [35]
    [PDF] Deep learning for solving dynamic economic models.
    We introduce a unified deep learning method that solves dynamic economic models by casting them into nonlinear regression equations.
  36. [36]
    [PDF] PDE Models in Macroeconomics - Benjamin Moll
    The purpose of this article is to get mathematicians interested in studying a number of PDEs that naturally arise in macroeconomics. These PDEs come.
  37. [37]
    Bootstrap Methods in Econometrics - Annual Reviews
    Aug 2, 2019 · The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data or a model estimated ...
  38. [38]
    The Bootstrap in Econometrics - Project Euclid
    Abstract. This paper presents examples of problems in estimation and hypothesis testing that demonstrate the use and performance of the bootstrap.
  39. [39]
    [PDF] GPU Computing in Economics - eScholarship
    Oct 15, 2012 · The objective of this paper will be to demonstrate the applicability of massively parallel computing to economic problems and to highlight ...
  40. [40]
    Solving dynamic equilibrium models with graphics processors
    This paper shows how to build algorithms that use graphics processing units (GPUs) to solve dynamic equilibrium models in economics. In particular, we rely on ...
  41. [41]
    Agent-based computational economics - Scholarpedia
    Aug 21, 2009 · Agent-based computational economics (ACE) is the computational study of economic processes modeled as dynamic systems of interacting agents.
  42. [42]
    ACE: A Completely Agent-Based Modeling Approach (Tesfatsion)
    Jan 6, 2025 · Roughly defined, completely Agent-Based Modeling (c-ABM) is the computational modeling of processes as open-ended dynamic systems of interacting agents.
  43. [43]
    Agent-Based Modeling in Economics and Finance: Past, Present ...
    Mar 24, 2025 · We review ABM in economics and finance and highlight how it can be used to relax conventional assumptions in standard economic models. ABM has ...
  44. [44]
    [PDF] Agent-Based Modeling in Economics and Finance: Past, Present ...
    Jun 21, 2022 · Because ABMs are simulated at the micro level, there are typically no equations that explicitly relate aggregate states to agent-level states ...
  45. [45]
    Agent-based Computational Economic Models - Turing Finance
    Agent-based models simulate the emergent characteristics of complex adaptive systems. This is achieved by simulating the actions and interactions of agents ...
  46. [46]
    A Wealth Distribution Agent Model Based on a Few Universal ...
    Aug 19, 2023 · We propose a new agent-based model for studying wealth distribution. We show that a model that links wealth to information (interaction and trade among agents)
  47. [47]
    Calibrating Agent-Based Models of Innovation Diffusion with Gradients
    Jun 30, 2022 · The most common methods used for modeling the Diffusion of Innovations are Statistical Models, Equation-based Models, and Agent-based Models.
  48. [48]
    [PDF] Validation of agent-based models in economics and finance
    Sep 23, 2017 · The baseline evaluation process, focussing on the replication of stylized facts, has been naturally embedded in most agent-based models.
  49. [49]
    What Do Agent-Based and Equation-Based Modelling Tell Us About ...
    In section 4 we describe our model and an algorithm to implement the replicator dynamic onto it. Our model is based on the interaction of strategies of ...
  50. [50]
    [PDF] Solution and Estimation Methods for DSGE Models
    This paper provides an overview of solution and estimation techniques for dynamic stochastic general equilibrium (DSGE) models. We cover the foundations of ...
  51. [51]
    Where modern macroeconomics went wrong - Oxford Academic
    Jan 5, 2018 · Because the 2008 crisis was a financial crisis, the standard DSGE models are particularly poorly designed to analyse its origins and ...
  52. [52]
    [PDF] Demand Estimation with Machine Learning and Model Combination
    The idea is to sample the data with replacement B times, train a regression tree on each resampled set of data, and then predict the outcome at each x through ...
  53. [53]
    Approximating Auction Equilibria with Reinforcement Learning - arXiv
    Oct 17, 2024 · This paper introduces a self-play based reinforcement learning approach that employs advanced algorithms such as Proximal Policy Optimization ...
  54. [54]
    Policy Analysis Using DSGE Models: An Introduction
    This article introduces the basic structure, logic, and application of the DSGE framework to a broader public by providing an example of its use in monetary ...
  55. [55]
    Shocks and Frictions in US Business Cycles: A Bayesian DSGE ...
    Using a Bayesian likelihood approach, we estimate a dynamic stochastic general equilibrium model for the US economy using seven macroeconomic time series.
  56. [56]
    DSGE Models for Monetary Policy Analysis - ScienceDirect.com
    Monetary DSGE models are widely used because they fit the data well and they can be used to address important monetary policy questions.
  57. [57]
    [PDF] Demystifying Modelling Methods for Trade Policy
    This paper focuses on two classes of quantitative tools – computable general equilibrium (CGE) models and gravity models. These are perhaps the most ...
  58. [58]
    The Effects of Climate Change on GDP by Country and the Global ...
    Jul 13, 2018 · Computable general equilibrium (CGE) models are a standard tool for policy analysis and forecasts of economic growth.
  59. [59]
    [PDF] Fiscal Multipliers and the State of the Economy
    The challenge in computing IRFs in a nonlinear model is that they should allow not only the shock impact to depend on the regime itself, but also the regime to ...
  60. [60]
    Impulse response matching estimators for DSGE models
    Structural impulse responses play a central role in modern macroeconomics. It is common to estimate the structural parameters of a dynamic stochastic ...
  61. [61]
    WorkSim: An Agent-Based Model of Labor Markets - JASSS
    WorkSim is an agent-based model for studying labor markets, focusing on gross flows between employment, unemployment, and inactivity, using stock-flow ...
  62. [62]
  63. [63]
    Bounded rationality in agent‐based models: experiments with ...
    Abstract. This paper examines the use of evolutionary programming in agent‐based modelling to implement the theory of bounded rationality.
  64. [64]
  65. [65]
  66. [66]
    Multiasset financial bubbles in an agent-based model with noise ...
    We present an agent-based model (ABM) of a financial market with n > 1 risky assets, whose price dynamics result from the interaction between rational ...
  67. [67]
    [PDF] options: a monte carlo approach - Ressources actuarielles
    The purpose of the present paper is to show that Monte Carlo simulation provides a third method of obtaining numerical solutions to option valuation problems.
  68. [68]
    [PDF] A Framework for Assessing the Systemic Risk of Major Financial ...
    Our stress testing methodology, using an integrated micro-macro model, takes into account dynamic linkages between the health of major US banks and macro-.
  69. [69]
    [PDF] Efficient Monte Carlo methods for value-at-risk
    The calculation of value-at-risk (VAR) for large portfolios of complex derivative securities presents a tradeoff between speed and accuracy.
  70. [70]
    [PDF] Agent-Based Simulation of Community Currencies with Basic Income
    Jul 6, 2022 · An agent-based model was used to give individuals variations and random fluctuations. The simulator then ran through various scenarios for a ...
  71. [71]
  72. [72]
    Cryptocurrency Exchange Simulation | Computational Economics
    Jan 2, 2024 · In this paper, we consider the approach of applying state-of-the-art machine learning algorithms to simulate some financial markets.
  73. [73]
    Tackling emissions and inequality: policy insights from an agent ...
    We extend the DSK integrated-assessment agent-based model to combine an income class-based analysis of inequality with an improved accounting of emissions. We ...
  74. [74]
    Which programming language is best for economic research: Julia ...
    Aug 20, 2020 · The most widely used programming languages for economic research are Julia, Matlab, Python and R. This column uses three criteria to compare the languages.
  75. [75]
    [PDF] A Comparison of Programming Languages in Economics
    Most languages allow for the use of mixed programming. This is particularly useful in Matlab and R, where one can send computer-intensive parts of the code to ...
  76. [76]
    13. SciPy - Python Programming for Economics and Finance
    SciPy is a package that contains various tools that are built on top of NumPy, using its array data type and related functionality.
  77. [77]
    11. NumPy - Python Programming for Economics and Finance
    NumPy arrays power a very large proportion of the scientific Python ecosystem. To create a NumPy array containing only zeros we use np.zeros. a = np ...
  78. [78]
    The FRBNY DSGE Model Meets Julia - Liberty Street Economics
    Dec 3, 2015 · We have implemented the FRBNY DSGE model in a free and open-source language called Julia. The code is posted here on GitHub, a public repository hosting ...
  79. [79]
    Introduction to Econometrics with R
    Over the last few years, the statistical programming language R has become an integral part of the curricula of econometrics classes we teach at the University ...
  80. [80]
    25. The Solow-Swan Growth Model
    The model is used to study growth over the long run. Although the model is simple, it contains some interesting lessons. We will use the following imports ...
  81. [81]
    Dynare
    Use Dynare to solve and estimate your model, compute optimal policy, perform identification and sensitivity analysis, and more!
  82. [82]
    Estimation (Dynare Reference Manual)
    Using Bayesian methods, it is possible to estimate DSGE models, VAR models, or a combination of the two techniques called DSGE-VAR. Note that in order to avoid ...
  83. [83]
    [PDF] Dynare Reference Manual
    Nov 14, 2013 · Using Bayesian methods, it is possible to estimate DSGE models, VAR models, or a combination of the two techniques called DSGE-VAR. Note ...
  84. [84]
    NetLogo Models Library
    If you download NetLogo, all of the models in the models library are included. You may also run the models here, in your browser.
  85. [85]
    scikit-learn: machine learning in Python — scikit-learn 1.7.2 ...
    Applications: Spam detection, image recognition. Algorithms: Gradient boosting, nearest neighbors, random forest, logistic regression, and more.
  86. [86]
    Machine Learning in Economics - QuantEcon DataScience
    Machine learning is increasingly being utilized in economic research. Here, we discuss three main ways that economists are currently using machine learning ...
  87. [87]
    Introduction to Machine Learning and AI — Coding for Economists
    We'll mainly be using the scikit-learn package to do our machine learning ... Note that, for many applications in economics, you may not need deep ...
  88. [88]
    QuantEcon
    QuantEcon runs remote and in-person workshops and short courses on quantitative economics and high-performance computing using Python and Julia. Past locations ...
  89. [89]
    Intermediate Quantitative Economics with Python - QuantEcon
    This website presents a set of lectures on quantitative economic modeling. Tools and Techniques. 1. Modeling COVID 19 · 2. Linear Algebra · 3. QR Decomposition ...
  90. [90]
    Quantitative Economics with Julia - QuantEcon
    This website presents a set of lectures on quantitative economic modeling. Getting Started with Julia. 1. Setting up Your Julia Environment · 2. Introductory ...
  91. [91]
    GAMS - Cutting Edge Modeling
    The General Algebraic Modeling Language is the easiest way to formulate complex optimization problems.
  92. [92]
    [PDF] Economic Equilibrium Modeling with GAMS
    This document describes a mathematical programming system for general equilibrium analysis named MPSGE which operates as a subsystem to the mathematical ...
  93. [93]
    Introduction - GAMS
    In this chapter we will briefly describe some of the origins of GAMS and provide background information that shaped early design decisions.
  94. [94]
    14. Pandas - Python Programming for Economics and Finance
    Pandas is a package of fast, efficient data analysis tools for Python. Its popularity has surged in recent years, coincident with the rise of fields such as ...
  95. [95]
    Run QuantEcon Lectures with Google Colab!
    Apr 11, 2019 · You can now open a lecture as a Jupyter notebook in Google Colab, allowing code to be run and edited live in the cloud.
  96. [96]
    Computational economics workshop at the IMF, March 2024 - GitHub
    Mar 25, 2024 · Some work will be done remotely using Google Colab --- a Google account is required. Required Python libraries (much of which is found in ...
  97. [97]
    Journal of Economic Dynamics and Control - ScienceDirect.com
    The journal provides an outlet for publication of research concerning all theoretical and empirical aspects of economic dynamics and control.
  98. [98]
    Computational Economics
    Computational Economics is a multidisciplinary journal that integrates computational science with all branches in economics, to understand and solve complex ...
  99. [99]
    Journal of Economic Dynamics and Control - SCImago
    The journal provides an outlet for publication of research concerning all theoretical and empirical aspects of economic dynamics and control.
  100. [100]
    Journal of Economic Dynamics and Control - Impact Factor (IF ...
    The Impact IF 2024 of Journal of Economic Dynamics and Control is 2.40, which is computed in 2025 as per its definition. Journal of Economic Dynamics and ...
  101. [101]
    Computational Economics - SCImago
    The official journal of the Society for Computational Economics, presents new research in a rapidly growing multidisciplinary field that uses advanced ...
  102. [102]
    Computational Economics - Impact Factor (IF), Overall Ranking ...
    Computational Economics is a journal covering the technologies/fields/categories related to Computer Science Applications (Q2); Economics, Econometrics and ...
  103. [103]
    International Journal of Computational Economics and Econometrics
    Topics covered include: Computational techniques applied to economic problems and policies; Agent-based modelling; Control and game ...
  104. [104]
    Bayesian Networks and Machine Learning Approaches Applied to ...
    Oct 18, 2025 · This paper applies Bayesian and machine learning techniques to analyze Mexico's Social Backwardness Index data from 2000 to 2020.
  105. [105]
    Machine learning for economics research: when, what and how
    We aim to showcase the benefits of using ML in economics research and policy analysis and offer suggestions on where, when and how to effectively apply ML ...
  106. [106]
    About – Society for Computational Economics
    The society was founded in 1995 and is a 501(c)3 organization. The By-Laws formally describe the Society. ... 1.2: The purpose of the Society shall be ...
  107. [107]
    Call for papers - XIII International Workshop of Computational ...
    Sep 15, 2025 · The workshop aims to explore innovative data sources, computational advances, and interdisciplinary approaches as drivers of actionable ...
  108. [108]
    Society for Computational Economics
    The Society for Computational Economics (SCE) promotes computational methods in economics and finance. The SCE sponsors an annual scientific conference.
  109. [109]
    SCE Conferences - Society for Computational Economics
    SCE Conferences · 31st Conference, Santiago · 30th Conference, Singapore · 29th Conference, Nice · 28th Conference, Dallas · 27th Conference, Virtual · 26th ...
  110. [110]
    CEF 2021 - Society for Computational Economics
    The Society for Computational Economics · 27th International Conference · Computing in Economics and Finance · ONLINE · Wednesday, June 16 through Friday, June 18, ...
  111. [111]
    Sparse High Dimensional Models in Economics - PMC - NIH
    This paper reviews the literature on sparse high dimensional models and discusses some applications in economics and finance.
  112. [112]
    [PDF] Agent Based Computational Economics: A Review, Challenges And ...
    May 23, 2025 · In the final section, we will explore the future development of agent-based computational economics and its limitations and criticisms. 2.
  113. [113]
    [PDF] 8 Some scattered thoughts on DSGE models - CREI-UPF
    On the negative side, the heavy computational requirements associated with the analysis of those models and the consequent black-box nature of some of their ...
  114. [114]
    Reproducible research in computational economics: guidelines ...
    Mar 22, 2007 · In this paper, we propose some basic standards to improve the production and reporting of computational results in economics for the purpose of ...
  115. [115]
    Modelling dataset bias in machine-learned theories of economic ...
    Jan 12, 2024 · This so-called dataset bias, which includes the prominent selection bias, is pervasive in modern machine learning and has been described ...
  116. [116]
    [PDF] On DSGE Models - American Economic Association
    Dynamic stochastic general equilibrium (DSGE) models are the leading tool for ... In a seminal paper, Sims. (1986) argued that one should identify monetary ...
  117. [117]
    A survey on agent‐based modelling assisted by machine learning
    May 13, 2023 · This survey reviews significant developments in the integration of ML into the agent-based modelling and simulation process in recent years.
  118. [118]
    [PDF] Combining Machine Learning and Agent-Based Modeling to Study ...
    Jun 1, 2022 · This review aims to summarize how ABM and ML have been integrated in diverse contexts that span spatial scales, including multicellular, or ...
  119. [119]
    The development of a blockchain-based business model simulator ...
    Our research describes the development and application of a business model simulator, beginning with use case analysis, followed by the simulator's design.
  120. [120]
    On modeling blockchain-enabled economic networks as stochastic ...
    Mar 19, 2020 · We establish a formal mathematical framework, based on dynamical systems, to model the core concepts in blockchain-enabled economies.
  121. [121]
    Quantum Economic Advantage | Management Science - PubsOnLine
    Dec 2, 2022 · A quantum computer exhibits quantum advantage when it can perform a calculation that a classical computer is unable to complete.
  122. [122]
    [PDF] Empirical evaluation of a quantum accelerated approach for the ...
    Aug 28, 2023 · Quantum linear system algorithms (QLSAs) have been applied to optimization problems since the early 2020s. One such application is the ...
  123. [123]
    Explainable Artificial Intelligence (XAI) in Economic Modeling and ...
    XAI addresses the need for transparency in AI forecasts, opening the 'black box' to provide clarity and understanding of AI outputs.
  124. [124]
    Explainable and transparent artificial intelligence for public ...
    Feb 16, 2024 · This paper illustrates a collection of AI solutions that can empower data scientists and policymakers to use AI/ML for the development of explainable and ...
  125. [125]
    Computational Aspects of Sustainability | Computational Economics
    Jun 20, 2021 · The main purpose of the special issue is to feature the computational practice of green policy performance measurement.
  126. [126]
    [PDF] GREENER principles for environmentally sustainable computational ...
    Jun 26, 2023 · Dedicate research efforts to green computing to improve our understanding of power usage, support sustainable software engineering and.
  127. [127]
    Federated Learning: A Survey on Privacy-Preserving Collaborative ...
    Aug 12, 2025 · This decentralized approach addresses growing concerns around data privacy, security, and regulatory compliance, making it particularly ...
  128. [128]
    (PDF) Federated Learning for Privacy-Preserving: A Review of PII ...
    Aug 5, 2025 · This approach guarantees the confidentiality of personal identifiable information (PII) during data analysis and strengthens cybersecurity ...